Vibe coding: Has the vibe ended?
If you clicked on this blog post, you’ve probably already come across the term Vibe Coding. In fact, it became so widespread that Collins Dictionary selected it as the Word of the Year for 2025. The idea is simple: instead of writing every line of code yourself, developers now “vibe” with an AI assistant, prompting and steering the agent towards a solution.
But there’s a catch 🎣, and it’s one many developers are now starting to notice.
Why the vibe caught on
I remember when GitHub Copilot was first introduced. It already felt like a big step forward, something that was noticeably smarter than simple IntelliSense. Developers were still typing in their IDEs, still focused on the problem at hand, just with a bit of extra help turning the ideas they already had into actual lines of code.
Then OpenAI released ChatGPT. At first, it felt like a cool gimmick. It was a more impressive version of earlier chatbots. Everyone was posting about it. But it didn’t take long before people started exploring real use cases. By the end of 2024, a wave of new AI tools appeared, many of them centered around a no-code or low-code promise. You would describe what you wanted, and the AI would generate the code for you. In theory, that meant almost anyone could write software.
What really changed, though, wasn’t just the tools themselves. Developers began asking AI for complete, fully formed solutions and copy-pasting them straight into their codebases. And the surprising part was that, in many cases, it just… worked. ⚙️
When prompting replaced understanding
But somewhere along the way, we stopped asking how things work and started asking for things to work. Without any additional context, such as project requirements, AI will just give you the kind of code that looks impressive at first glance: defensive checks everywhere, abstractions layered on abstractions, performance optimizations baked in “just in case”. You read it and think: “oh wow, this is solid code!”.
And that’s exactly the problem.
AI tends to generate picture-perfect solutions. Everything is guarded. Everything is configurable. Everything is optimized “just in case.” From a distance, that feels responsible. In reality, it often means we’re shipping far more logic than the problem actually requires. No extra value, just extra paths, extra branches, extra things to hold in our head later.
And sometimes, it’s easy to miss this. The code works. Tests pass. Performance metrics look great. But what we quietly accumulate is complexity debt. Not technical debt in the classic “we’ll clean this up later” sense, but logic we never needed in the first place, logic no one on the team fully understands because no one designed it. We approved it.
A year later, when something breaks, that’s when it bites. Every safeguard becomes a question mark. Every optimization becomes a suspicion. And suddenly, “AI wrote it” isn’t an answer anymore.
Here is what I mean. Let’s say you ask the AI: “Give me a robust C# method to calculate a discounted price.”
You might get something like this:
public decimal CalculateDiscountedPrice(
    decimal originalPrice,
    decimal discountPercentage,
    int precision = 2,
    MidpointRounding roundingMode = MidpointRounding.AwayFromZero)
{
    if (originalPrice < 0)
        throw new ArgumentOutOfRangeException(nameof(originalPrice));

    if (discountPercentage < 0 || discountPercentage > 100)
        throw new ArgumentOutOfRangeException(nameof(discountPercentage));

    var discountFactor = discountPercentage / 100m;
    var discountedPrice = originalPrice * (1 - discountFactor);

    if (discountedPrice < 0)
        discountedPrice = 0;

    return Math.Round(discountedPrice, precision, roundingMode);
}
Is this bad code? No. Is it impressive? Sure.
But the real question is: what does the use case actually need?
Imagine this method is used internally: values are already validated upstream, prices never go negative, and rounding rules are defined elsewhere. The real requirement might be this:
public decimal CalculateDiscountedPrice(decimal price, decimal discountPercentage)
{
    return price * (1 - discountPercentage / 100m);
}
That’s it. 🙌
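To make the “validated upstream” part concrete, here’s a minimal sketch of what a hypothetical caller might look like. OrderLine, UnitPrice and AppliedDiscountPercent are made-up names for illustration, not from any real codebase:

// Hypothetical caller: input validation already happened at the boundary,
// so the price calculation itself has nothing left to guard against.
public sealed class OrderLine
{
    // Set by a mapping layer that already rejected negative prices
    public decimal UnitPrice { get; init; }

    // Clamped to the 0–100 range when the promotion was configured
    public decimal AppliedDiscountPercent { get; init; }
}

public decimal GetLineTotal(OrderLine line, int quantity)
{
    // No re-validation, no rounding options: the invariants are guaranteed upstream
    var discounted = CalculateDiscountedPrice(line.UnitPrice, line.AppliedDiscountPercent);
    return discounted * quantity;
}

In a context like this, the simple two-line method is all the surface area you actually want to maintain.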
The extra guards don’t protect anything meaningful. The rounding configuration isn’t used. The negative checks will never trigger. Yet every future reader has to scan them, interpret them, and keep them in mind.
And this is my biggest problem with Vibe Coding.
What prompting should be used for
In itself, prompting isn’t the problem. Uncritical prompting is.
Using generative AI to produce a full solution is fine, but only if you treat that solution as a draft, not as an answer. The moment you paste it in, you own it. That means understanding all of it and trimming or adjusting it to what your context actually needs.
If you don’t do that, you’re not speeding up development, you’re outsourcing design decisions to a system that knows nothing about your domain, your constraints, or your long-term maintenance cost.
Where AI does shine, at least in my experience, is elsewhere.
I find it genuinely useful as:
- An explainer 🧠, when I’m hitting a compiler error or runtime exception that doesn’t immediately click
- A rubber duck 🐤, to sanity-check a solution I already have in mind
- A second perspective 👀, when I want to compare trade-offs or explore alternatives I might not have considered
In those cases, I stay in control. I’m still reasoning about the problem. I’m still deciding what belongs in the codebase. The AI accelerates understanding, not decision-making.
So, in conclusion
For me, the vibe isn’t over. AI is still a powerful tool for asking questions, exploring ideas, and speeding up work. But copy-pasting full solutions without understanding them is where things go wrong.
In fact, this post itself was written with the help of AI. Not by just taking its output for granted, but by reviewing and rewriting every part of it. The AI helped explore ideas and sharpen phrasing, but the decisions, structure, and conclusions stayed mine.
Use AI to support your thinking, not replace it. Prompting should accelerate decisions you already own, not quietly make them for you.