A few years ago, if you wanted to build software, you had two options. You either learned how to code… or you didn’t build the thing. Today, there’s a third option. You explain what you want in plain English, an AI generates the code, and you keep refining it through conversation. That might sound like science fiction (or like something your most annoying tech friend won’t stop talking about), but it’s very real. It’s called vibe coding, and it’s changing how people build software, sometimes in amazing ways and sometimes in ways that should make responsible engineers panic.

What is it?

When you overhear your nerdy friends or coworkers wax poetic about vibe coding, what they’re talking about is programming in plain language. Instead of writing code, you describe what you want in words, and the AI model generates actual working code in response. You then run the code to see what it does, and if something breaks, you ask the AI to fix the issue. You repeat that loop until the code behaves the way you imagined. If you take it to the next level and use a voice-to-text tool, you can throw your keyboard in the garbage!
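To make that concrete, a prompt like “write me a function that splits a dinner bill and adds a tip” might come back as something like this. It’s a hypothetical sketch of what a model could produce; the prompt, the function name, and the numbers are all invented for illustration.

```python
def split_bill(total, num_people, tip_percent=20):
    """Split a restaurant bill evenly, including tip."""
    tip = total * (tip_percent / 100)          # add the tip on top of the bill
    grand_total = total + tip
    return round(grand_total / num_people, 2)  # each person's share

print(split_bill(84.50, 4))  # 25.35
```

If the split comes out wrong, or you decide the tip should be optional, you don’t dig into the code yourself; you tell the model what to change and run it again.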

How does it work?

In nerd-speak, when you ask an LLM (a large language model) to write code, what’s really happening under the hood is something called next-token prediction. These models were trained on massive datasets filled not just with normal human language, but with actual source code from public repositories, so they absorb patterns of how code is structured, how variables are named, and how logic flows. The model looks at your prompt as a whole and figures out which parts are relevant when deciding what token (a small chunk of text or code) comes next. Then it builds the code one token at a time, so each new piece is statistically the most likely continuation given both what you said and what it has learned. That’s how a plain-English description of what you want turns into working code: the model keeps making smart predictions about what comes next, based on patterns it picked up from a huge amount of example text and code.
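If you boil that down to a toy sketch in Python, it looks roughly like this. Everything here is invented for illustration (the DummyModel, its hard-coded “probabilities”, the tiny vocabulary); a real model does the same append-the-most-likely-token loop, just with a learned scoring function and a vocabulary of tens of thousands of tokens.

```python
# A toy sketch of next-token prediction. DummyModel and its method are
# invented stand-ins; a real LLM scores a huge vocabulary of candidate
# tokens using billions of learned parameters.

class DummyModel:
    def next_token_probabilities(self, tokens):
        # Pretend the model has learned that a colon usually leads to
        # "return", and that "return" is usually followed by a value.
        if tokens[-1].endswith(":"):
            return {"return": 0.7, "print": 0.2, "pass": 0.1}
        return {"'hello'": 0.8, "42": 0.2}

def generate(model, prompt_tokens, max_new_tokens=2):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        # Score every candidate next token given everything written so far...
        probs = model.next_token_probabilities(tokens)
        # ...then append the most likely one (real tools usually sample instead).
        tokens.append(max(probs, key=probs.get))
    return tokens

print(generate(DummyModel(), ["def", "greet", "(", ")", ":"]))
# -> ['def', 'greet', '(', ')', ':', 'return', "'hello'"]
```

The important part is the loop: nothing is planned out in advance. Each token is chosen based only on the statistics of what came before it.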

Why does it matter?

Part of the reason vibe coding is becoming popular is that it makes building software more accessible. If you don’t have a background in programming (i.e., you’re nerd-adjacent), describing what you want in natural language is a whole lot easier than learning a programming language. And if you do have a programming background, it turns out AI can help you prototype ridiculously fast. Developers can spin up demos or simple tools in minutes. As a user experience designer, I use AI tools to create prototypes, which lets me actually get a feel for a potential solution before putting it in front of users.

It also pairs really well with quick experimentation. Because you’re constantly talking to the model, testing what it generates, and nudging it in the right direction, you can iterate on ideas almost as fast as you think of them. The goal of the workflow is to encourage trying things, seeing what happens, and adjusting on the fly.

What vibe coding really changes, though, is the role of the human. Instead of being a line-by-line coder, you become more of a director. You set the intent, you decide what “good enough” means, and you guide the AI through the messy parts. 

And much like other emerging tech, vibe coding comes with trade-offs. One of the biggest issues is code quality. Because you didn’t write the code yourself, you might not fully understand what the model generated. Vibe-coded projects often end up with code that’s oddly structured or inconsistent in style.

Problems also arise when AI models generate code with security risks; they don’t always choose the safest patterns. If you’re not scrutinizing the code, those risks can slip through and become actual problems later.
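As a hypothetical illustration (the database, table, and function names are all made up), here’s the difference between the kind of query a model might happily glue together from strings and the parameterized version a careful reviewer would insist on:

```python
import sqlite3

# Hypothetical in-memory database, just to make the example runnable.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user_unsafe(username):
    # The shortcut a model might reach for: gluing user input straight into
    # the SQL string. Input like "' OR '1'='1" turns this into an injection hole.
    query = f"SELECT * FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(username):
    # The safer pattern: pass the input as a parameter and let the database
    # driver handle quoting, so the input can never rewrite the query itself.
    return conn.execute("SELECT * FROM users WHERE name = ?", (username,)).fetchall()

print(find_user_unsafe("' OR '1'='1"))  # returns every row; the injection worked
print(find_user_safe("' OR '1'='1"))    # returns nothing, as it should
```

Both functions “work” in a quick demo, which is exactly why this kind of risk slips past a vibe-coding session that only checks whether the output looks right.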

In a previous episode, I talked about the black box problem of AI, where it’s not clear how a model arrived at its output. The same is true with vibe coding: it’s not always clear why the model generated one solution instead of another, and that kind of inconsistency can make it hard to feel fully confident in the result. The code these tools produce might give you the functionality you need, but it can be overly complex and full of unnecessary or confusing patterns, which can make debugging and maintenance a total pain.

Conclusion

Despite all this, vibe coding is reshaping how software is built. Instead of handcrafting code line by line, you describe what you want in natural language, and a powerful LLM turns that into working code. This approach brings real creative power and lowers the barrier to building software. Those who don’t know how to code can prototype ideas; experienced developers can move much faster.

From a design and user experience perspective, vibe coding is pretty transformative. It lets designers and product folks build working prototypes instead of static mockups. Suddenly, someone who understands the user experience but doesn’t know how to code can still produce something interactive. The catch is that it requires a certain level of AI literacy. To get the best results, you have to learn how to talk to the model clearly, how to frame your intent, and how to troubleshoot through conversation instead of code.

While vibe coding may never fully supplant expert software engineering, it could dramatically reshape how we prototype, experiment, and democratize software creation. As models grow more powerful, the balance might shift. And while vibe coding is one of the most exciting developments in human-AI collaboration, it still demands responsibility, care, and judgment… that, for now at least, only a human can deliver.