For the past several months, whenever I scroll my social media feed, particularly on X, I keep coming across hysterical posts about AI-driven coding, aka vibe coding. These posts all follow one pattern:



SHOCKING, BREAKING, LITERALLY… If you believe these folks, with just a few hours and a handful of well-crafted prompts (which they’re kindly ready to share), you can build a service as good as Twitch, YouTube, or X.
You gotta start vibe-coding ASAP. It’s not just the future — it’s already here. And you seriously risk losing your job, your family, and your dog if you don’t hop on the vibe-coding train right now.
I’m especially obsessed with these kinds of hooks:

But some of these posts pull in millions of views, which is why I feel compelled to say a few words about it.

Most often, these posts show a really badly coded, unoptimized 1-2-page landing page as their grand example. I have no clue why, but every one of these videos zooms in on the pricing section 😂. Probably to show you how close you are to making cash! The money’s practically in your pocket already.
This nonsense narrative mimics the shady marketing tricks of cults and pseudosciences. The recipe is simple: hand out easy answers to every question, make people swallow them whole, and twist facts to fit the story.
But the truth is, there are no easy answers here. AI-driven coding has its pros and cons. Some cons might be solved as the tech gets better, but others won’t. You’ll need critical thinking and a no-BS mindset to deal with it.
I want to be clear: I definitely think AI is a game changer and a massive innovation for coding and building tech services. Now we can spend much more time organizing our codebase structure and platform architecture while barely typing the code itself. We can try different approaches and pick the optimal one really quickly. A skilled developer can do magic with AI.
But again, as I said, AI-driven coding comes with its own bottlenecks I want to highlight. And I’m certain some of them will stick around forever.
Problem #1: Large Projects, Context, and Long-Term Thinking
How does an LLM work? What’s its goal? How does it evaluate its own results? Does it think like a human? Knowing the answers makes everything else much easier to understand. Let’s start with the basics and go deep quickly.
So, when you send a prompt to an LLM (input layer), its goal is to give the best damn response (output layer). “Best” means the most satisfying response to you, not necessarily the most relevant or correct one. The model uses weights (hidden layer) to produce that response. The weights are the result of training; after that, the model just runs operations without changing anything on the fly.

Both the user and the operator (whoever provides access to the model) only influence the input layer. All those add-ons like DeepSearch, Thinking, or agent systems are just wrappers around the input layer. Sometimes they create a cyclical system: the setup grabs the output, recycles it as additional input, and keeps looping. But the core remains the same. Nothing changes in the weights.
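To make that loop concrete, here’s a toy sketch of the wrapper pattern in JavaScript. Note that `callModel` is a fake stand-in for a real LLM API call (real agent wrappers are far more elaborate), but the shape is the same: frozen weights behind the call, and the only lever is what goes into the input.

```javascript
// Toy sketch of the "cyclical" wrapper pattern described above.
// callModel stands in for a real LLM API call: the weights behind it
// are frozen, so the only lever the wrapper has is the input it builds.
function callModel(prompt) {
  // Fake model: pretends to answer, and is "done" only if the input
  // happens to contain the word "enough".
  return {
    text: `answer based on ${prompt.length} chars of input`,
    done: prompt.includes("enough"),
  };
}

function agentLoop(userPrompt, maxSteps = 3) {
  let context = userPrompt;
  for (let step = 0; step < maxSteps; step++) {
    const result = callModel(context);
    if (result.done) return result.text;
    // The whole "agent" trick: feed the previous output back in
    // as extra input and call the same frozen model again.
    context += "\n[previous attempt]: " + result.text;
  }
  return `gave up after ${maxSteps} steps`;
}
```

Every pass through the loop optimizes for the best answer right now; nothing is learned between calls.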

The key thing to understand here is that each cycle’s task is the same — to deliver the most relevant and satisfying result right now. And that’s the hidden problem with LLM-based AI. In this sense, LLM solutions are like an eager junior employee — so eager that its actions screw up more than they help.
In simple terms, LLM-based AI can’t think and plan long-term, it has no memory like ours, and it doesn’t update its weights while interacting with us. Its job is to give a relevant response, and that’s it. Every new session is a clean slate. The model only recalls what you fed into the current request, and its context window caps what it can handle. If you’re working on a big project with tons of files and connections between them, the AI will likely struggle to hold the full context. With input that large, it may start hallucinating or messing up.
Here’s a simple example: in the screenshot below, I asked an AI (the latest model) to make the button icons in my app color-friendly, i.e., adapt to the parent color. I wanted the icons to shift with the text for dark or light mode. I made the request, and within 20 seconds the icons on my localhost server were indeed color-friendly. Boom.
Now let’s look at the code: the model couldn’t do better than manually importing every icon and hardcoding them into the button.js component, right above the existing code.

Anyone who has made more than zero real projects knows that within a month, the number of unique icons in an app will be around 50–100, and this implementation in button.js is straight-up absurd. Yet, the latest AI model thinks it’s perfectly fine and even wrote me comments on how to work with it.
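For the record, the fix the model missed is a classic one: let SVG icons inherit the parent’s text color via `currentColor`, and keep a single icon registry instead of 50–100 manual imports in button.js. Here’s a minimal sketch of that idea in plain JavaScript (hypothetical names, not the project’s real code):

```javascript
// Sketch of the approach the model missed: instead of importing and
// hardcoding every icon into the button component, let SVG icons
// inherit the parent's CSS color via fill="currentColor".
// (Hypothetical names; a real component tree will differ.)
function iconSvg(pathData) {
  // currentColor makes the icon pick up whatever `color` the parent
  // element has -- no hardcoded dark/light variants needed.
  return `<svg viewBox="0 0 24 24" fill="currentColor"><path d="${pathData}"/></svg>`;
}

// A registry keeps the button component free of dozens of manual imports:
const icons = {
  check: iconSvg("M9 16.2 4.8 12l-1.4 1.4L9 19 21 7l-1.4-1.4z"),
  close: iconSvg("M19 6.4 17.6 5 12 10.6 6.4 5 5 6.4 10.6 12 5 17.6 6.4 19 12 13.4 17.6 19 19 17.6 13.4 12z"),
};

function renderButton(label, iconName) {
  const icon = icons[iconName] ?? ""; // unknown icon: render no icon
  return `<button>${icon}${label}</button>`;
}
```

With `currentColor`, one icon file follows whatever CSS color the parent button has, so dark and light mode come for free.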
So, from the LLM’s perspective, the task is done—the code works, and the icons match the parent color. But what’s wrong here? It’s about the same as this picture:

If you accept every AI change without careful review, by the end of the day your project will be a complete mess. Once you’ve got 30+ files in your project, working with it will be a total nightmare. And if you plan to build a real product with thousands of users, the number of files will only grow. To illustrate, let me give you an example from one of my projects, Raizer.
Raizer is a fundraising platform where founders like me can find relevant investors and connect with them to pitch their projects. The platform has been live for 3+ years and has more than 10,000 users, both founders and investors.

The backend in Raizer is a fully isolated system. It consists of 9 tables, 65 files, and 36 API endpoints. We won’t even touch that in this discussion; instead, I want to dive into the frontend.
The frontend is built with the most common technologies: NextJS, Tailwind. The platform has 21 unique pages (most of them auxiliary, like /auth, /billing, etc.). The entire frontend totals 178 unique files, from the sitemap to basic UI kit components like checkboxes, buttons, inputs, modals, etc.
I can’t imagine how you could create a project like this (200+ files) by relying purely on vibe-coding. Well, technically it’s possible, but it’d be a house of cards.
That said, I want to make it clear: I absolutely love and use AI every day. I interact with it all day long — clarifying concepts, asking for help with syntax, posing silly questions.
When I develop new features, I break the work into small components. I know exactly what I want as inputs for a given component and what I expect as output. I don’t waste time trying to explain to the AI what I’m doing overall, what larger component this file is part of, or why it’s needed.
I’ve found this programming method to be very effective. In it, I act as the system architect who understands how things connect and work. I know which files might be affected by changes AI suggests, and I understand whether the current solution works for me in the long term or not.
AI is incredibly useful when you know exactly what you want to do and how to do it. You just want to save time on typing code. This allows people who already understand development to move much faster.
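To make “known inputs and outputs” concrete, here’s the kind of isolated unit I’d hand to AI: a pure function with a precise contract and zero knowledge of the wider app. (A hypothetical example, not from the Raizer codebase.)

```javascript
// The kind of task that's safe to delegate to AI: a pure function
// with a precise contract, no context about the wider app required.
// Input: an integer amount in cents. Output: a "$1,234.56"-style string.
function formatUsd(cents) {
  const sign = cents < 0 ? "-" : "";
  const abs = Math.abs(cents);
  // toLocaleString("en-US") inserts the thousands separators.
  const dollars = Math.floor(abs / 100).toLocaleString("en-US");
  const remainder = String(abs % 100).padStart(2, "0");
  return `${sign}$${dollars}.${remainder}`;
}
```

The prompt for something like this is one sentence, and the review takes seconds, because the contract is trivially checkable.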
Problem #2: You Actually Don’t Own Your Code
The second problem with vibe-coding is far more serious, especially if you tie your success to it. And it can’t be solved even if LLM solutions somehow start thinking long-term, or if new AI is built on entirely different technical foundations.
Have you ever felt like you’re not in control of a situation? You don’t understand what’s going on around you or how to interact with it. You’re afraid of being left alone with it. The mere thought of it makes you anxious. Deep down, you know you’re not competent, and that feeling eats away at you.
That’s exactly what will happen if you try to build a full-fledged product using prompt engineering without having engineering skills or knowledge to read and work with your code independently.
You don’t own your code. You don’t understand how it works or why it works. You don’t know why it’s written one way and not another.
If tomorrow, for some reason, your access to AI gets cut off, you’ll be done. You literally won’t be able to change anything in your own code. It turns out the tool controls the situation, not you.
As a result, you’ll find yourself in this situation: You don’t own everything from A to Z. You don’t even own the path from A to B. And it’s not fun.
Confidence is a crucial psychological aspect for your success. Whether you control the situation or not — it shows. That feeling carries over to investors, clients, employees—everyone you interact with.
So, How Do You Vibe-Code?
Imagine you’re the owner and head of an art studio in Milan, Italy, and you want your studio to thrive. Here it is:

Nice one, huh? Agreed. So, one day, a young man named Adam Inch (let’s call him A.I.) comes to you, really wanting to work for you. He wants to be your assistant, and he’s ready to take on any work you assign. You’re impressed by his encyclopedic knowledge and decide to hire him.
On his first day, you task him with thinking about how to cut studio costs. But you don’t specify that he mustn’t harm the business, and you don’t explain the obvious constraints.
The next day, you come in and see that A.I. has fired 70% of the artists, abandoned the studio’s prime location in Milan Central (which, by the way, shaped your studio’s image as an art hub), and rented a barn 30 km outside the city with terrible road access.

You’re furious and can’t comprehend how someone could cause so much damage in one day. But from A.I.’s perspective, everything’s fine—he cut costs, just look at how much money he saved. He immediately offers to ‘fix’ things and waits for your next orders. Well, you get the picture…
A.I. can’t run your studio. It won’t paint a masterpiece for you. A.I. is your aide, your assistant. It can mix paints, buy a canvas, or post a job ad for new staff. It can handle all sorts of isolated tasks brilliantly. But it can’t replace you.
Owning the big picture, taking initiative, being thorough, and thinking long-term — these things would require A.I. to have AGI (artificial general intelligence), which we’re still very far from. Don’t try to make AI your director; you’ll just waste your time.
Thankfully, unlike the art studio owner, we’ve got CMD + Z. So we can just roll back the changes and return to our beloved studio in the heart of the city.
So, how do you use AI without falling into vibe-coding and unexpectedly ending up in a barn:
- AI is your assistant, not more. It can save time, but it won’t think for you.
- Break tasks into pieces. Let it write functions, templates, or fix syntax, but keep it isolated.
- Check and review everything. AI code must not only work but also fit your project, your coding style, and your long-term strategy.
- Learn from it, but don’t get lazy. Look at how it solves things, figure out why, but don’t expect it to do all the work for you.
- Keep leveling up your skills. AI won’t replace your brain (and shouldn’t). Read docs, talk to people, stay sharp. That’s the only way it’ll be a joy to work with, not a burden.