Prompt Engineering Is Dead
I’ve been thinking a lot about that scene from I, Robot lately. You know the one where Will Smith asks the robot if it can write a symphony or turn a canvas into a masterpiece, and the robot just fires back: “Can you?”
That moment stuck with me for years. Here’s this machine, sounding almost human, almost sentient, but in this weirdly funny way. And the whole thing was brilliant because it was humans creating robot characters to play roles in movies that were also created by humans. The constant in all of it? Us. The human touch that makes these machines actually do anything meaningful.
Everyone’s freaking out about AI replacing jobs right now, and I get it. But here’s what’s actually happening that nobody seems to be talking about: AI engineering has become one of the fastest-growing, highest-paying fields in tech. The irony is almost painful.
If you’re in cloud or IT, you don’t need to suddenly make AI your whole personality or switch careers entirely. Think of it more like adding another tool to your kit. One that makes you pretty much invincible in today’s job market. Sure, becoming an AI engineer at AWS or Google takes years, but learning the basics? You can do that in a few months. And those basics are enough to start creating real value, whether that’s through freelancing, working at startups, or building your own AI-powered tools.
The thing about AI engineering is it’s really just the process of turning ideas into actual products using AI. Could be developing models, building systems, whatever. AI engineers know a little bit of everything: machine learning, cloud infrastructure, software engineering. They’re not just playing around with prompts or demos. They’re building end-to-end applications that people actually use.
One AI engineer might be creating a recommendation engine like the one that knows exactly which show you want to binge next on Netflix. Another might be integrating large language models into internal company tools to help employees generate reports or automate support tasks. Data scientists create the prototypes. AI engineers make them production ready. Every major AI feature you see online exists because an AI engineer made it happen.
For anyone with cloud experience, there’s this emerging role called the cloud AI engineer that combines AI engineering with cloud infrastructure. You’d be building and deploying AI systems on cloud platforms, handling security, scalability, and infrastructure optimisation. Even just being a cloud professional who specialises in AI opens up strong opportunities.
Remember last year when everyone was losing their minds over prompt engineering? Calling it the next big thing? In 2025 and beyond, writing effective prompts will be expected. It won’t be special anymore. That’s why companies shifted their focus to AI engineering. It goes way beyond using AI tools at a surface level.
When I first began exploring LLMs, I thought they were only for data scientists or software engineers. But once I built my own AI-powered tools, I realised you don’t need to know everything. Every model, every algorithm, all of it. These models can actually help you learn AI. Claude helped me tons when I was troubleshooting projects. Don’t let the technical side stop you. You just need to be curious enough to experiment.
If you want a realistic roadmap to get started, here’s what I’d follow.
First, learn Python and basic machine learning. If you’re already in cloud or IT, you probably know one programming language, but now it’s time to focus on Python. Learn libraries like NumPy and Pandas to understand how data flows through a model.
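To make that concrete, here's a tiny sketch of the kind of thing you'd write early on. The file name and column names are made up purely for illustration; the point is just loading data with Pandas, cleaning it, and handing it to NumPy in the shape most ML libraries expect.

```python
import numpy as np
import pandas as pd

# Load a small tabular dataset (hypothetical file and columns, for illustration only)
df = pd.read_csv("support_tickets.csv")

# Basic cleaning: drop rows with missing text, then standardise a numeric column
df = df.dropna(subset=["ticket_text"])
df["response_hours"] = (
    df["response_hours"] - df["response_hours"].mean()
) / df["response_hours"].std()

# Convert to a NumPy array, the format most ML libraries work with
features = df[["response_hours"]].to_numpy()
print(features.shape, features.dtype)
```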
Next, get comfortable with APIs and AI frameworks. Learn how to connect to and use APIs from platforms like OpenAI and Hugging Face, then integrate them into small projects. Build a chatbot or an AI-powered tool.
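A minimal "ask a question" script with the OpenAI Python SDK looks something like the sketch below. Treat it as a starting point, not a finished product: the model name is an assumption, and it expects an OPENAI_API_KEY set in your environment.

```python
from openai import OpenAI  # pip install openai

# The client reads OPENAI_API_KEY from the environment by default
client = OpenAI()

def ask(question: str) -> str:
    # Model name is an assumption; swap in whichever model your account supports
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You are a concise assistant for cloud engineers."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask("Explain what a container is in two sentences."))
```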
Then learn LLMOps and deployment. If you have cloud knowledge, this is where it pays off. Utilise services such as AWS Lambda, Google Cloud Run, or Azure ML to deploy your models. Learn containerisation with Docker and version control with Git.
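To give you a feel for what a deployable service looks like, here's a rough sketch of a FastAPI app wrapping an LLM call. You could run it locally with uvicorn, build it into a Docker image, and push that to something like Cloud Run. The endpoint name and model are my own placeholders, not a prescribed setup.

```python
# app.py -- a minimal FastAPI wrapper you could containerise and deploy (a sketch)
from fastapi import FastAPI
from pydantic import BaseModel
from openai import OpenAI

app = FastAPI()
client = OpenAI()  # expects OPENAI_API_KEY in the environment

class SummariseRequest(BaseModel):
    text: str

@app.post("/summarise")
def summarise(req: SummariseRequest) -> dict:
    # Model name is an assumption; use whatever your account supports
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": f"Summarise this:\n\n{req.text}"}],
    )
    return {"summary": response.choices[0].message.content}
```

Run it locally with uvicorn app:app, POST some text to the /summarise endpoint, and you already have the skeleton of something you can containerise and ship.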
After that, build projects. Create two or three small real-world projects that combine cloud and AI. The best project you can start is building something that makes your own life easier. Maybe an automated email responder using generative AI.
Finally, show your work. No matter how good your projects or skills are, nobody will know about them if you don’t put them out there. Document your journey on LinkedIn, push to GitHub, and build a portfolio of experience.
The timeline really depends on how much time you can dedicate each week. If you’re putting in around 10 to 15 hours a week, you can realistically go from a complete beginner to building your first AI application in about three to four months. But here’s the better news: you don’t have to wait months to see results. After just two to three weeks of learning Python basics and working with APIs, you can already start building simple but useful AI tools like an automated summarizer or a basic chatbot.
The key is consistency over intensity. Thirty minutes a day will get you further than cramming five hours on a weekend. With AI engineering, you need time to absorb concepts, experiment, break things, and debug.
Let me share the three biggest mistakes I see people make when getting started.
First is tutorial hell. You spend months watching courses and videos, but never actually build anything. AI engineering can’t be learned by just reading or watching. You need to open your terminal and sign up for tools. For every hour of learning, try to spend at least an hour and a half building.
The second mistake is chasing every new AI tool and trend. Every week, there’s a new model, a new framework, a new tool everyone’s talking about. While it’s tempting to try all of them, don’t let that distract you from building your foundations. You’ll end up with surface-level knowledge of everything and deep knowledge of nothing. Pick one stack and get really good at it. Master OpenAI’s API and combine it with LangChain, for example.
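If you do go that route, the core pattern is small: a prompt template piped into a chat model. Here's a hedged sketch using the langchain-openai package; the package layout and model name are assumptions based on recent LangChain releases, so check the current docs before copying it.

```python
# pip install langchain-openai langchain-core
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# Model name is an assumption; pick one your account supports
llm = ChatOpenAI(model="gpt-4o-mini")

prompt = ChatPromptTemplate.from_messages([
    ("system", "You answer questions about cloud infrastructure in plain English."),
    ("user", "{question}"),
])

# LangChain Expression Language: pipe prompt -> model -> plain string output
chain = prompt | llm | StrOutputParser()

print(chain.invoke({"question": "When would I choose Cloud Run over a VM?"}))
```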
The third mistake is copying and pasting code without understanding it. We all use tools like Claude Code and Cursor to help us write code. However, to truly learn, you need to understand why the code works. The goal isn’t to memorise everything, but to understand the logic.
The human touch remains constant. Innovation runs on previous knowledge. Great inventors synthesise what came before. And right now, you have the opportunity to be part of that synthesis, to understand these tools well enough to shape them instead of being shaped by them.


