22nd March, 2026
🤙 Vibes vs 📢 Amplification
Hard cover. Harder truths.
To mark the one-year anniversary of the publication of my book How To Be Wrong, I’m excited to announce it’s now available for the first time in hardback.
The first copies will ship in April. Pre-sales are available today, at a special introductory price.
The book is also still available in paperback, as an ebook or audiobook.
Find a full list of the places where it’s sold on the official How To Be Wrong website.
Enjoy!
How To Be Wrong is the book New Zealand’s business community has needed for years, from the person best qualified to write it. Not another cheerleader for the innovation economy. Not another lone-genius myth. A rigorous, honest, and often uncomfortable account of what actually drives success, and why so much of what we tell ourselves about startups is, to put it plainly, wrong.
Good Vibes Only
vibe coding (n.) asking an AI agent to write software code that you cannot read or understand.
See also: confidence, misplaced; abstraction, comfortable.
There are currently two opinions competing for dominance.
First, AI is a massive bubble that will soon pop. Large language models (LLMs) are impressive but shallow. The economics don’t work, and the environmental cost is too large. We’re at the high point of the hype cycle. Soon we will file openai.com just ahead of pets.com in the alphabet of expensive lessons.
Second, software engineering as a profession is finished. Now, anybody can just describe the software they want or need and AI will build it for them. Learning to code? Irrelevant! A motivated twelve-year-old with a Claude Code subscription can do that job.
(As an aside: it is curious and revealing how many people seem to hold both of these opposing opinions concurrently.)
I don’t think either is right. But the contradiction does highlight that we don’t yet have a clear mental model for what these new tools actually are, what they are good at, and who will benefit the most from using them. Vibe coding hogs the headlines, but there is something much more interesting bubbling just below the fold…
1. We mostly never read the output
Writing software has always involved layers of abstraction.
When I was studying Computer Science at university, back in the late nineteen hundreds, one course required us to learn machine code. These are the instructions executed directly by computer chips. And it was painful. Even the trivial examples we were given - adding two numbers together or storing and retrieving a single binary value from memory - required lots of code and hours of effort.
That is a skill I’ve long since forgotten, and never really used in anger. Most software engineers today never need to think about the machine code, and would likely struggle to understand it if forced. We use compilers to take the source code we write as input and convert that into machine code for us. We trust their output.
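The layering is easy to see even from a high-level language. As a toy illustration, Python’s standard `dis` module will show the bytecode a trivial function compiles to (bytecode rather than machine code, but the principle is the same): a lower layer we trust and almost never read.

```python
import dis

def add(a, b):
    return a + b

# The interpreter doesn't execute our source text directly; it runs
# bytecode compiled from it. dis.dis prints that lower layer.
dis.dis(add)
```

Most working engineers could go an entire career without looking at this output, and lose nothing by it. That is what a trustworthy abstraction layer looks like.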
More recently, AI has added a new layer of abstraction. When we use a tool like Claude Code or Cursor the inputs are no longer source code, they are English prompts. The tools generate the source code for us based on these prompts. We’re one layer further up the stack.
When we’re vibe coding, there is no stack. It’s just a black box. The only way to verify the output is at the application or user level. That leaves us exposed to a whole class of bugs and security concerns that are difficult to spot if we don’t know what we’re looking for.
The opportunity for software engineers is to always have the full stack in mind, and be intentional about how we verify the output that is generated at each layer. In this next era the job is much less about curly brackets and semi-colons and much more about quality assurance.
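One way to make that intentional verification concrete is to treat generated code as a black box and pin down its observable behaviour with assertions, whatever layer produced it. A minimal sketch (the function and its spec here are hypothetical, not from any real agent output):

```python
# Hypothetical function returned by an AI agent; we may never read it closely.
def normalise_email(raw: str) -> str:
    return raw.strip().lower()

# Verification at the output layer: assert the properties we actually
# care about, independent of how the code was written.
def check(fn):
    assert fn("  Alice@Example.COM ") == "alice@example.com"
    assert fn("bob@test.org") == "bob@test.org"    # already clean: unchanged
    assert fn(fn("  X@Y.Z  ")) == fn("  X@Y.Z  ")  # applying it twice changes nothing

check(normalise_email)
```

The point is not this particular function; it’s that the checks survive regeneration. If the agent rewrites the implementation tomorrow, the same assertions still tell us whether the behaviour we depend on is intact.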
2. What’s the baseline?
When we worry about the quality of AI-generated code, we’re implicitly comparing it to an idealised standard of craftsmanship. Code that has been written carefully by skilled engineers, and reviewed thoroughly to ensure it contains no bugs.
That’s an imagined and unrealistic standard.
The actual alternative is code that is often written in a rush, reviewed by humans who are tired and distracted (because, honestly, who loves reviewing code?!), built on assumptions that seemed reasonable at the time, and shipped riddled with bugs nobody found because the test suite didn’t cover every edge case.
I speak from experience. I’m already humbled. Agents have improved code I wrote carefully. In one case, an agent found performance improvements that generated so many positive comments from users that I felt embarrassed taking the credit. In another, an agent completed a poorly constructed test suite and in the process found a number of bugs that had been sitting there quietly accumulating cost.
This doesn’t mean AI code is always perfect. Or even good. It means the benchmark is lower than we’re pretending, and the systematic verification that good tooling enables might actually be more rigorous than the human review process it’s replacing.
Understanding how long the code needs to live is also important. If we’re a business analyst asking AI to quickly build a dashboard to summarise a dataset, so it can be pasted into a presentation, the quality requirements of the underlying code are low. If we’re a team of engineers building an application with complex features and thousands or even hundreds of thousands of users, one that needs to be maintained for years to come, that’s significantly different. Conflating the two is a mistake.
3. The real world is messy too
The sharpest criticism of AI-generated code is that it’s probabilistic. We can’t formally verify it. How can we trust software that doesn’t behave the same way twice?
This is absolutely fair. But, again, it’s worth examining the embedded assumption that what came before was deterministic.
In practice, as soon as we add humans into the equation, formal proof becomes difficult, if not impossible. People are fallible. We do unexpected things sometimes. And I’m not just talking about “users”. Engineers don’t always read requirements. Testers don’t always follow the script. A lot like AI agents, as it turns out!
Even compilers, which are meant to be the reliable, mathematical layer of the stack, sometimes have their own challenges when it comes to consistent output.[1]
As the computer scientist C. A. R. Hoare said:
“There are two ways of constructing software: one way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult.”
Determinism has always been an aspiration, not a guarantee. The question isn’t whether AI adds uncertainty to the stack. Of course, like any new layer, it does! We should ask whether the uncertainty it adds is meaningfully worse than the uncertainty already there. I’m not sure it is.
The engineers who understand that distinction will build better software than the ones who are simply anxious about it. AI tools are amplifiers. They give engineers more leverage, not less. A 10x engineer with a great AI workflow is a 100x engineer. They can go further, faster and with more confidence than before.
Even if you have all the vibes, it’s hard to compete with that.
Header Photo by Firmbee.com on Unsplash
[1] Per Phil de Joux, GHC (the Glasgow Haskell Compiler), one of the most widely used and respected compilers in existence, has a long-standing documented issue where the same source code doesn’t reliably produce identical output across builds. See: gitlab.haskell.org/ghc/ghc/-/wikis/deterministic-builds