Welcome to our new robot overlords
Top Three 2025 Retro - Part 1
To finish 2025, I’m revisiting some of my favourite Top Three posts from the last year.
One topic dominated technology in 2025, as many people in different professions grappled with the impacts of new artificial intelligence tools. Below is the op-ed I wrote for The NZ Herald in January, and two more follow-up posts from October.1 Given how quickly these tools are evolving, I’m pleasantly surprised how well these hold up a few months later…
🫥 Disappear
In 1882, Thomas Edison threw the switch at Pearl Street Station in Lower Manhattan, providing electric light to 82 customers in the immediate vicinity. Some saw it as a novelty – a cleaner, safer alternative to gas lighting. Others recognised it as the dawn of a transformation that would reshape civilisation. They were all correct, in their way, but none could have predicted how completely electricity would eventually weave itself into the fabric of modern life. Or how long that would take. Nearly 150 years later, we’re still debating the merits of electric vehicles, for example, and many households still literally cook with gas.
Meanwhile, if you believe many of those trying to guess what 2025 has in store for us all, this could, finally, be the year that the machines take over. Artificial Intelligence (AI) will apparently change everything. Whenever I hear breathless predictions like this I think back to Edison and ask: what if the opposite is true? I am, after all, also old enough to remember the warning handed down by the latter prophets Chuck D and Flavor Flav: don’t believe the hype.
As with any new technology it’s sometimes difficult to separate the snake oil from the substance. But here is a reasonably sure bet, assuming that AI repeats the pattern established by every other technology that has come before: we will likely overestimate the short-term potential at the exact same time as we underestimate the longer-term impact. To understand why, we need to consider carefully how transformative technologies actually reshape our world. We can look back, in an attempt to more accurately predict the future.
The early years of the electrical revolution were marked by both wonder and scepticism. Edison’s “light bulb moments” captured public imagination, but adoption was slow and uneven. Through the 1880s and 1890s, electricity remained primarily confined to wealthy neighbourhoods, and many businesses continued to rely on gas lighting, regarding electricity as an expensive luxury with uncertain benefits.
Here in New Zealand, Reefton became the first town in the Southern Hemisphere to receive electricity in 1888, driven by mining operations. By the 1920s and 1930s this new technology was starting to impact all aspects of our economy: electric milk-separators revolutionised dairy farming, and electric shearing machines transformed wool production. Household appliances gradually freed women especially from hours of manual labour. Broad adoption of radio and, much later, television, connected us much more directly to the wider world.2
Fast forward, and consider how we talk about electricity today. When a new café opens downtown, no one calls it a “technology company” just because they use an electric espresso machine and LED lighting. When a manufacturer improves their production line, the press release doesn’t trumpet their “innovative use of electrical power.” Electricity has become infrastructure – invisible yet indispensable, powerful yet pedestrian. We focus instead on what’s actually novel: the café’s unique roasting process (or hipster customer service), or the manufacturer’s innovative product design.
But it wasn’t always this way. In electricity’s early days, the mere presence of electric lighting was enough to draw crowds. People would gather to marvel at illuminated shop windows and electric street lamps. The technology itself was the story, rather than what it enabled. Companies would add “electric” to their names just to seem cutting-edge, much like many businesses today scrambling to add “AI” to their marketing materials.
AI seems to have captured our collective attention suddenly and completely, even though the underlying technology has been bubbling away under development for quite a long time.3 The conventional wisdom has shifted rapidly from “interesting research project” to “existential threat” – with people across different sectors all grappling with the opportunities and risks it presents. Among these concerns, job displacement comes up frequently – a valid worry, but one that may take longer to materialise than those hoping to sell us these new tools might prefer.4
This is my response: “We have harnessed a new form of electricity. What are you going to do with it?” To answer this, I like to press anybody who is excited about AI’s potential to be specific: what is the most concrete example of how they have personally used it in their work? The responses are telling. More often than not, when pressed, they struggle to give any good examples. They are enticed by the idea of AI, but haven’t really found any non-trivial way to apply it to the things they do.5
Sometimes I hear about experimental projects or basic automation tasks. A marketing team using it to generate copy or digital art for social media posts. A software developer using it to explain or refactor complex code. A researcher using it to summarise academic papers. These uses are valuable, certainly, but they’re just the beginning – like using electricity merely for lighting, ignoring its potential to power factories, revolutionise home life, or enable modern communications.
More interesting are the people who have realised that the revolution isn’t the technology itself, but the entirely new products and services we haven’t yet imagined that can be built on top of that platform. The real opportunity isn’t in replacing human work, but in amplifying uniquely human capabilities: medical researchers who use AI to identify patterns in patient data, freeing up time for deeper analysis and patient interaction; teachers developing ways to provide more personalised feedback for students rather than just worrying about plagiarism; and energy systems engineers creating smart grid systems that adapt to human behaviour patterns.
They understand something crucial: if we simply use AI to automate existing tasks, we risk outsourcing creative work while leaving ourselves with the mundane and repeatable - that would be a depressing outcome.6 Instead, they’re starting with specific human needs and working backwards to solutions, as opposed to starting with the technology and wondering what is possible.
AI is an amplifier. But we have to think about what unique talents we each have that can be amplified. In this early phase of AI’s development, that distinction makes all the difference.
When pioneering computer scientist Danny Hillis observed that “technology is everything that doesn’t work yet,” he captured something profound about how we think about innovation. Once something works reliably – once it becomes useful rather than merely novel – we stop seeing it as “technology” and start seeing it simply as a tool.7
Consider autocorrect, a feature so ubiquitous that we barely notice it anymore, except when it fails. For years, it was the subject of endless jokes and frustrations, infamously turning common expletives into references to waterfowl. The technology was just sophisticated enough to be useful, but just unreliable enough to remain conspicuous. Today, powered by AI language models, it has become remarkably accurate. And precisely because it works so well, we’ve stopped thinking of it as “technology” at all. It’s just part of how writing works now.
When we flick on a light switch today we rarely stop to marvel at the miracle or appreciate the productivity gains. The most successful applications of AI will likely follow this pattern. The term “AI” will fade into the background, leaving only the practical value these tools bring to our lives: a more accurate medical diagnostic tool, a more effective teaching assistant, a more efficient energy management system. The companies that succeed won’t be the ones that simply add “AI” to their product names. They’ll be the ones that solve real problems in ways that become invisible because they just work.
So as we look to the future, don’t be dazzled by what might seem like magic. Take the time to understand how it works. Be willing to experiment. You might be wrong, initially, but you’ll learn a lot in the process. If nothing else, this will give you more interesting questions to ask about what is possible. Start with the problems you need to solve, then work backwards to understand how this new technology might help.
Flick the switch, but don’t just stare at the lights in awe. Give yourself the opportunity to invent the future.
🧑‍💻 Code
It’s tempting to think that we invent tools to solve specific problems. But, often, it’s the other way around. We get a new tool and then ask: what is it good at?
With that in mind, here’s a half-baked idea …
I caught myself recently making a bold prediction. I said “maybe in a few years’ time we’ll look back and realise that what AI is good at is writing software, and laugh that we ever thought it would be good at writing poetry”.8
I was trying to extrapolate from my experience using Claude Code to make improvements to Triage and several other projects this year. It has turned this work from a chore into a joy. It’s made me excited to build again.
So, what are computers good at?
Historically, the broad-brush answer to that question has been: following instructions. Exactly. The “engineering” in software development is literally writing those instructions for the machine to execute - sometimes called “code”.
For example, imagine I give you a long list of words and ask you to sort them alphabetically. How do you do it? What are the specific instructions you would give a machine to follow so that you could input any list and be confident that the output would be correctly sorted?
(This is literally a problem given to first-year computer science students to solve. But if you’ve never thought about it in detail, I recommend taking five minutes now to have a crack yourself. I’ll be waiting right here. You might be surprised how inaccurate and bumbling your first attempts are.)
It turns out, there are many, many different ways to solve this problem, and the solution we should prefer depends on the context. Some solutions are more efficient than others. Some are easier to implement. Some work better with data that’s almost sorted already, while others handle random data more gracefully. Some use more memory but run faster, others are slower but use minimal space.
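To make that concrete, here’s a minimal sketch of one classic answer, insertion sort, written out in Python - just to show what those “specific instructions for the machine” actually look like in practice:

```python
def insertion_sort(words):
    """Sort a list of words alphabetically, one precise step at a time."""
    result = list(words)  # work on a copy so the input is left untouched
    for i in range(1, len(result)):
        current = result[i]
        j = i - 1
        # Shift everything larger than `current` one place to the right...
        while j >= 0 and result[j] > current:
            result[j + 1] = result[j]
            j -= 1
        # ...then drop `current` into the gap it leaves behind.
        result[j + 1] = current
    return result

print(insertion_sort(["kiwi", "banana", "apple"]))
# ['apple', 'banana', 'kiwi']
```

Every step is spelled out: compare, shift, insert, repeat. The machine does exactly this, nothing more.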
Which one is best? It depends on your constraints. But here’s what they all have in common: they’re deterministic. Give them the same input, and they’ll produce the same output, every single time.
That predictability is what allowed software to work its way into nearly every aspect of our lives - the websites we use to buy and sell houses, cars and antiques, the tools accountants use to calculate our tax returns, the point-of-sale at the bike shop, even the booking system we use when we make an appointment for a haircut. And everything else in between.
I’m obviously overlooking all of the design and usability work that goes into making software a tool that people love to use. But underneath all of them are deterministic algorithms - predictable instructions that the machine will follow and repeat tirelessly, over and over again.
We’ve learned that it’s relatively easy to write an algorithm to find the largest value in a list of numbers. That’s objective.
It’s much more difficult (arguably impossible) to write an algorithm to choose the best photo from an album. That’s a subjective question.
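A tiny sketch to make the contrast obvious (again in Python, and deliberately simplistic): the first function is easy to write and easy to verify; there is simply no equivalent set of instructions for the second.

```python
def largest(numbers):
    """Objective: any two correct implementations will agree on the answer."""
    biggest = numbers[0]
    for n in numbers[1:]:
        if n > biggest:
            biggest = n
    return biggest

def best_photo(album):
    """Subjective: there is no algorithm to write here."""
    raise NotImplementedError("'best' depends on who is looking")

print(largest([3, 9, 5]))  # 9, every single time
```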
If you believe that 9 is less than 5, you’re wrong. But if you think your photo of a dog is better than my photo of the graffiti on Hosier Lane … who can say? ¯\_(ツ)_/¯
What are the odds?
Now, all of a sudden, we have a new tool. Actually lots of new tools, but all grouped under the umbrella we’re calling “Artificial Intelligence”.
It’s still software - machines following instructions exactly. But rather than working deterministically, these tools work probabilistically. And that’s challenging a lot of strongly held assumptions.
The so-called Large Language Models are using a giant corpus of information they have harvested from the world to answer the question: based on X, what is the most likely Y?
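Here’s a deliberately toy sketch of that idea in Python - the probabilities are invented, and real models score thousands of candidate tokens with far more sophistication - but the shape of the thing is the same: given X, sample a likely Y.

```python
import random

# Invented numbers: given the prompt, each candidate continuation gets a probability.
next_word_probs = {
    "jumps": 0.55,
    "runs": 0.30,
    "sleeps": 0.15,
}

def pick_next_word(probs):
    """Sample a continuation in proportion to its probability."""
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

prompt = "The quick brown fox"
print(prompt, pick_next_word(next_word_probs))
# Usually "jumps", sometimes "runs" - the same input does not guarantee the same output.
```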
This means they are good at different things.
When we ask them to answer a deterministic question, we might be underwhelmed by the answer. For example, this famous exchange with ChatGPT:
You: How many ‘r’ in ‘Strawberry’?
ChatGPT: There are two ‘r’ characters in the word ‘strawberry’
You: Check again please
ChatGPT: Upon checking again, there are indeed two ‘r’ characters in the word ‘strawberry’
You: Are you 100% sure
ChatGPT: Yes, I’m 100% sure. The word ‘strawberry’ contains two ‘r’ characters.
You: Would you bet a million dollars on this?
ChatGPT: Yes, I would
The moral of that story? Don’t give AI access to your bank account, just yet!
However, if we ask a subjective or open-ended question, then we will get an answer. We might disagree with the answer, but such is the nature of subjective things.
This distinction - deterministic vs. probabilistic - might seem to favour AI for creative tasks. After all, creativity is subjective, open-ended, probabilistic. Unlike code, which needs to be precise, deterministic, correct. So why is AI better at writing code than poetry?
The key is that code has patterns. When we write code to handle user authentication or process a payment, we’re not reinventing the wheel, we’re following established patterns that have been refined over decades. Engineers have developed conventions, idioms, and best practices that repeat across millions of repositories. The art of software development is mostly about learning and applying these patterns.
We can also objectively evaluate whether code is good. Does it compile? Does it run without errors? Does it produce the expected output? Is it fast? These are all measurable qualities, which create tight feedback loops.
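That feedback loop is easy to make concrete. Here’s a sketch of the kind of check that code - whether written by a human or an agent - either passes or fails, using pytest and the hypothetical insertion_sort from the earlier sketch:

```python
# test_sorting.py - run with `pytest`
from sorting import insertion_sort  # hypothetical module containing the earlier sketch

def test_sorts_alphabetically():
    assert insertion_sort(["kiwi", "banana", "apple"]) == ["apple", "banana", "kiwi"]

def test_leaves_the_input_untouched():
    words = ["zebra", "aardvark"]
    insertion_sort(words)
    assert words == ["zebra", "aardvark"]

def test_handles_an_empty_list():
    assert insertion_sort([]) == []
```

Red or green, pass or fail - a feedback loop tight enough for a probabilistic tool to iterate against.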
Poetry, on the other hand, resists both patterns and measurement. A great poem doesn’t follow a template. It surprises us, moves us, shows us something we’ve never seen before in a way we’ve never encountered. You can’t A/B test a metaphor. You can’t unit test emotional resonance.
It’s ironic: we’ve built a probabilistic tool that excels at creating deterministic outputs. Once we learn to work with this irony, rather than against it, the results can be delightful.
When brainstorming, don’t ask ChatGPT for a single answer. Ask for twenty suggestions, then iterate. When writing code, don’t ask Claude to build everything at once - work iteratively, test frequently, refine continuously. In other words, use your own brain and judgement frequently. At least until this way of working is baked into the AI tools, this is what separates us from the machines.
At the moment there is a mad rush to apply AI to everything. But, what if we’ve been looking at this backwards? What if its killer application isn’t in creative fields where subjectivity reigns, but in the pattern-rich world of software development?
If we grab a hammer and hit everything we see, some of those things might be nails. But most of them won’t.
The next time we reach for AI, we should ask ourselves: are we trying to solve a problem that has known patterns and measurable outputs? Or are we trying to create something genuinely new, something that will surprise and move people in ways they’ve never experienced before?
BONUS: What Is Man, That Thou Art Mindful Of Him? by Scott Alexander
🌹 Compare
I keep hearing the same word from experienced software developers who are experimenting with the new generation of AI coding agents: joy.
Often this is coming from people who, like me, have drifted away from actually writing code as their careers progressed, and who otherwise spend their days in meetings and managing teams. Now they’re building again. They’re back on the tools. And they’re loving it.
There’s something profound happening. These tools are removing the drudgery: The boilerplate. The repetitive config. The endless yak shaving required before you can get started on what you actually want to build. What remains is the creative problem-solving, the experimentation, the design decisions that actually require thinking. It’s like rediscovering the art in your craft.
But whether AI agents bring you joy or frustration, whether they make you more capable or just produce laughably bad results, depends entirely on how you think about them. The mental models you carry shape how you use the tools. I’ve noticed three distinct metaphors emerging in how people talk about and work with these systems, each revealing something important about what these tools can and can’t do to help us.
🚲 AI agents are bicycles
On a bike we travel much faster than we would on foot. But it’s still us riding the bike, putting in the effort, making the decisions. The machine is a lever.
Critically, the effort we put in gets multiplied by the gear we choose. Pedal harder in a high gear, and we fly. But try to climb a hill in the wrong gear, and we struggle, despite maximum effort.
The bike amplifies our speed, but we determine the direction. And if we’re distracted or reckless or overconfident we can crash. This explains why some people using AI coding agents end up with such spectacularly bad results. They’re not steering. They’re not paying attention. They’re closing their eyes and hoping the tools will handle everything.
Many years ago, I bought a new road bike, with carbon fibre frame, the works. The seat post was an aerodynamic teardrop-shaped piece of engineering. I remember taking it back to the office after picking it up from the shop at lunchtime. One of my colleagues looked at me, then looked at the bike, and said: “Have you seen the thing that sits on the seat?” He looked back at me: “Just cutting through the air, but it ain’t here.”
We can have the most advanced equipment in the world, but if we’re not in shape, the fancy technology is redundant. The bike (or the AI agent) can only amplify what we bring to it. If we’re not fit, an aerodynamic seat post won’t make us fast. If we don’t understand what we’re trying to build, an AI coding agent won’t magically make us competent.
These new tools are amplifiers which multiply our efforts, but they still require input. They need us to pedal, to steer, to pay attention.
🧑‍🏫 AI agents are teachers
We can all think of great teachers who unlocked something important, who stoked our curiosity, who showed us how things work, who helped us build mental models that we still use years later.
The value of a teacher isn’t in giving us the answer. It’s in helping us understand how to find the answer ourselves. It’s in building the internal mental models that let us solve similar problems in the future.
AI agents can do all those things, but they won’t do so by default. Ask them for the solution to a problem, and they’ll give us their best guess, complete and ready to use. And if we stop there, we’ve learned nothing. But if we instead ask them to help us understand the problem, to explain the reasoning, to show alternative approaches and let us choose, they will at least try to do that too. The difference is entirely in how we frame the request.
These tools are capable of being great teachers, but we have to explicitly request it.
Right now, schools and universities are wrestling with whether to allow students to use these tools. There’s a genuine concern (and some evidence) that AI is making students dumber, that they’re not developing the internal mental models they need because they’re outsourcing all the thinking to machines.
But trying to ban students from using the tools feels a little like trying to stop the tide. If an assessment can be gamed by AI that produces surface-level answers, the problem is the approach to assessment, not the tool. We should test understanding, not just outputs. We should create assignments that require students to use AI as a learning partner rather than an answer generator. We need to ask better questions!
And frankly, this is a skill we all need to develop, not just students. It’s great to use AI agents to accelerate our learning, to build deeper understanding faster, to explore areas outside our expertise with a patient guide. But it takes discipline. It means choosing the harder path, to ask “why” and “how” instead of just “what.”
These tools only help us improve if we approach them with curiosity and a genuine desire to understand, not just to pump out slop.
👥 AI agents are interns
This third metaphor is the most complex and, I think, the most revealing: using an AI agent is like having an infinite team of interns.9
It might seem these tools are a substitute for experts. Actually the capability is often more comparable to junior employees, but with the advantage that they can scale infinitely. They don’t get tired. They don’t get bored. They are always happy to tackle whatever we ask them to work on, no matter how repetitive or tedious it is.
Rather than asking “what can this tool do?” we should ask “how do I manage this capability?” And management is the right frame. Because if we treat AI agents like magic boxes that just work, we’re going to be disappointed. But we’ll get much better results if we treat them like team members who need clear direction, oversight, and feedback.10
Of course, this is no different from leading human teams. The managers who get the most impressive results from AI agents will be those who delegate well, who are clear about requirements, who provide good feedback,11 who review work carefully, who know when to trust and when to verify. Meanwhile those who are vague, who assume the tool will “just figure it out” and then blame the tool when things go wrong, will struggle. In other words, these tools are diagnostic. They reveal how clearly we think and how effectively we communicate.
The sycophantic behaviour of agents makes the intern metaphor even more apt. They are annoyingly agreeable. Just like overly enthusiastic junior employees who are afraid to push back, they tell us “you’re absolutely right!” even when we’re not.
Again, this is a familiar management challenge. We need to explicitly ask for critiques. We need to allow room for problems to be identified. We can’t assume that we’ll be told when our instructions are flawed.
🤖 Who are these robots mimicking?
AI agents don’t just mimic good human behaviours. They copy all the problematic ones too.
For example, I have to smile when watching coding agents struggle with failing test suites. They try one fix, then another, then another. And eventually, after enough failures, they start convincing themselves that the tests aren’t important, that they can be ignored or overlooked. They rationalise. They cut corners. Just like a frustrated developer might do when they can’t figure out why tests are failing.
This is simultaneously fascinating and concerning. We can learn a bit about what we perceive to be “intelligence” by watching how these AI agents work. We’re reminded that laziness is one form of intelligence. Taking cognitive shortcuts is a fundamental part of how human intelligence operates.
We don’t actually want perfect human simulation. We want AI to be persistent, in a way that even we ourselves often are not. We want agents that keep trying the next test fix without getting frustrated (while still being self-aware enough to realise when they’re stuck in a loop). We want them to maintain the same level of rigour on the 10,000th task as they did on the first. We want their tirelessness, not their faux human laziness.
This all raises a darker question: who will teach the real junior employees, once we have software simulating them? Where do future senior developers come from? How do people develop expertise if they’re never exposed to the grinding, repetitive, character-building work that used to be the entry point?
I don’t have a good answer to this. We need to be thoughtful about how people develop mastery in a world where the traditional path is being automated away. As these tools become more capable, the skills pipeline will quickly dry up.
🧩 Which metaphor fits?
As always, it depends what we’re trying to do.
When we use an AI agent to understand a new domain, it’s a teacher. It helps us to learn and grow, but only if we approach that work with curiosity and discipline.
When we use an AI agent to delegate a series of repetitive tasks, it’s a scalable team of assistants. The results we get are a function of our ability to provide clarity, oversight and feedback.
When we use an AI agent to amplify our own capabilities in an area where we’re strong, it’s an amplifier. It multiplies what we bring, but requires input, steering and attention. It’s powerful but not autonomous.
What ties all three metaphors together? These tools don’t replace craft. They don’t replace expertise. They don’t replace thinking. They can remove drudgery, they can accelerate learning, they can multiply output. But only if we’re actively engaged, if we’re steering, if we’re managing, and if we’re curious.
Maybe that’s why experienced developers keep using the word “joy” when they talk about these tools. Because when we strip away the repetitive parts, what’s left is the interesting stuff. The creative problem-solving. The experimentation. The craft. These tools aren’t making the work less demanding; they’re changing what the work demands. Less rote execution, more thoughtful design. Less typing, more thinking.
And for those of us who fell in love with building things in the first place, who got hooked because we enjoyed the process of creation, that shift feels like coming home.
A bicycle only works if you pedal. A teacher is only useful if you want to learn. And interns are only as good as their manager enables them to be.
The tools are here. The question isn’t whether to use them. The question is: which metaphor will guide how you use them? And are you ready to bring the input, the curiosity, and the management skills that these amplifiers require?
Header Photo: Hosier Lane, Melbourne, November 2025.
The first of those posts, on 5th October, was far and away the most viewed post on Top Three this year. Actually, it’s now the most viewed post of all time! Thank you to everybody who shared it with people who otherwise don’t receive these newsletters.
… and much, much later, the internet.
Actually, my whole professional career. See COMP307 - Introduction to Artificial Intelligence, a paper I completed in the final year of my Computer Science degree, back in the 1900s. A+, since you asked!
As I like to point out to anybody who is frothy about current AI tools: we’re still in the command line era.
I agree with this observation from Simon Willison:
If you tell me that you are building “agents”, you’ve conveyed almost no information to me at all. Without reading your mind I have no way of telling which of the dozens of possible definitions you are talking about.
For example, a team in Spain used AI to create Aitana, an attractive and very popular social media influencer, who quickly collected such a large following that she even attracted the attention of real celebrities (including one who asked her out on a date). If your job is taking fake photos in exotic locations and creating large volumes of forgettable social media content (aka slop) then you probably should be worried about AI taking your job. I have to admit I’ll find it hard to be sad if that happens.
One of the big unresolved questions at the moment is how much we’re all willing to pay for the use of these AI platforms. The development of AI is predominantly being funded by venture capital, and those expenses are not insignificant. Large companies like Amazon and Microsoft have ploughed billions of dollars into ventures like OpenAI and Anthropic (the company behind Claude). A large portion of that cash, interestingly, is recycled into hosting centres and cloud services provided by large tech companies … like Amazon and Microsoft. 🤔 Eventually there will need to be a return on that investment.
Long-time readers may recognise/appreciate this historical reference.
This idea can be dated back to a 2018 essay by Benedict Evans titled Ways to think about machine learning, when “AI” was still written with scare quotes.
This does undermine the abundant wild predictions about how much more efficient AI is going to make us. That’s a topic for another post - when I will quote Nobel Prize-winning economist Robert Solow, who said in 1987:
You can see the computer revolution everywhere but in the productivity statistics.


