A lot of people have declared that the end of programming is nigh. I think it’s a whole new beginning. Part 1 of this series explored the state of AI coding assistants. I concluded that they are not that good yet but will improve over time. In this piece, I’ll discuss where this technology is headed and its implications for who writes software and how companies develop it.
Here’s a brief outline of what I’ll cover:
The roadmap ahead
Where we are now
The impact on software engineers
The impact on non-programmers
The impact on SaaS companies
Key risks
The roadmap ahead
When the next big coding assistant comes out, how should we assess the sales pitch? Is the tool a step-change in automating software development, or just an incremental improvement on the tools we have today? The folks over at Sourcegraph have a great framework, the levels of code automation, that cuts through the hype:

They draw inspiration from the Society of Automotive Engineers (SAE) framework for levels of driving automation:

The Sourcegraph framework has three main stages: human-initiated code creation, AI-initiated code creation, and AI-led code creation.
To briefly summarize the framework, the human-initiated stage ranges from no AI assistance (driving analogy: the human driver does everything) to basic code completion to AI code creation (driving analogy: traffic-aware cruise control or lane centering). Here, the AI can write and fix longer blocks of code, but the human developer still controls the program's structure and design and reviews all of the AI's code.
The next stage is AI-initiated, where the human gives higher-level specifications for the AI to implement and test. In the beginning, humans will still give the code a high-level review, much as you would review work assigned to an intern or junior developer (driving analogy: the car drives itself at times, but the driver is ready to take control at any moment). Eventually, in the AI-led stage, the AI reaches a point where even the high-level review is unnecessary, just as a CEO or product manager might ask a senior software engineer to build something and trust them to deliver a working product (driving analogy: fully self-driving).
Where are we now?
I would peg us at level 2, where AI can generate a decent amount of code, but humans still control the overall development process and review the code. We've had a glimpse of what could be possible with agents like Devin, which incorporate higher-level planning and automated testing, but a human still has to review the output.
As these systems improve, they will check and review their own code more reliably, removing the human from the middle of the loop. In practice, a SWE tells the computer what to write, and the computer writes it.
This is where the big breakthrough happens: when we can program computers in natural language. We tell the computer what we want in plain English (e.g., build me a CRM-like platform to manage my blog subscribers with these specifications…), and the AI writes the program.
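To make that concrete, here is a minimal sketch of what such a self-reviewing loop might look like. Everything in it is hypothetical: the `generate`, `derive_tests`, and `run_tests` callables are stand-ins for whatever the underlying model and test harness actually provide, not any real API.

```python
from typing import Callable, List

# A sketch of a "no human in the middle" loop: the model drafts code,
# runs tests derived from the spec, and repairs its own failures.
# All three callables are hypothetical stand-ins, not a real API.

def build_from_spec(
    spec: str,
    generate: Callable[..., str],                 # draft or repair code
    derive_tests: Callable[[str], str],           # turn the spec into tests
    run_tests: Callable[[str, str], List[str]],   # return failure messages
    max_attempts: int = 5,
) -> str:
    tests = derive_tests(spec)
    code = generate(spec)
    for _ in range(max_attempts):
        failures = run_tests(code, tests)
        if not failures:
            return code  # ships without a line-by-line human review
        # feed the failures back so the model can repair its own draft
        code = generate(spec, previous=code, errors=failures)
    raise RuntimeError("Model could not converge; escalate to a human.")
```

The key design point is that test failures, not a human reviewer, drive the revision loop; a person only gets pulled in when the model cannot converge.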
Super SWEs
Some people worry that these LLMs will replace software engineers. I don’t.
If a high-powered SWE knows a language or framework well, it will be faster for them to write the code themselves than to review AI-generated code. But as I wrote in part 1, when they are less familiar with a language or framework, these AI assistants become a godsend.
As these tools improve, they add another level of abstraction and leverage to programming. Software engineering becomes less about writing the software and more about the engineering. Instead of writing every function and every for-loop, a software engineer can work at a higher level, focusing on systems design.
Take this diagram of how OAuth 2.0 works:

A really smart human had to come up with this. I think the design behind OAuth 2.0 or HTTPS (e.g., exchanging certificates and negotiating keys) is much less automatable. An AI agent might be able to write the code that implements the protocol, but it takes an engineer to tell it what to build and how to build it. Moreover, you'll still need someone with a highly technical background to get into the guts of the code these models generate.
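For contrast, here is roughly what the implementation half looks like: a sketch of the OAuth 2.0 authorization-code flow in Python, with placeholder endpoints, credentials, and scope. This is the part an AI agent could plausibly write; deciding that the protocol should work this way is the part that still takes an engineer.

```python
import secrets
from urllib.parse import urlencode

import requests  # third-party HTTP client

# Placeholder provider endpoints and app credentials, for illustration only.
AUTH_URL = "https://auth.example.com/authorize"
TOKEN_URL = "https://auth.example.com/token"
CLIENT_ID, CLIENT_SECRET = "my-app", "app-secret"
REDIRECT_URI = "https://myapp.example.com/callback"

# Step 1: send the user to the provider, with an anti-CSRF `state` value
# the server must verify when the user is redirected back.
state = secrets.token_urlsafe(16)
login_url = AUTH_URL + "?" + urlencode({
    "response_type": "code",   # ask for a one-time authorization code
    "client_id": CLIENT_ID,
    "redirect_uri": REDIRECT_URI,
    "scope": "contacts.read",
    "state": state,
})

# Step 2: after the user approves, the provider redirects back with the
# code, which the server exchanges for an access token.
def exchange_code(code: str) -> str:
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": REDIRECT_URI,
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    })
    resp.raise_for_status()
    return resp.json()["access_token"]  # sent as a Bearer token on API calls
```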
Scott Wu, CEO of Cognition Labs, phrased it nicely: software engineers in five to ten years will be a hybrid of today's product managers and technical architects.
To answer the question about SWE automation: I view AI programming assistants as a productivity boost rather than an automation threat. Companies will be able to build more with less. A fleet of 10 SWEs equipped with coding assistants might do the work of 15. And they'll be more capable, meaning they can ship more features and build better things.
The democratization of programming
I remember looking at stock images of coding when I was younger and thinking it was some crazy, complicated foreign language.

It turns out that it’s not rocket science. (If you want to learn, check out the first few lectures in Harvard’s CS50 with Professor David Malan. He’s a phenomenal lecturer. Highly recommend.)
In a few years, you might not need to take this course. Once AI programming assistants get good enough, anyone will be able to program. The language of choice: plain old English.
I think we're a ways away from my grandma asking ChatGPT to build her an app to schedule her bridge games and, voilà, there it is. But in the meantime, AI programming assistants are leveling the playing field in terms of who can code.
Take SQL. It’s a language for querying databases. With a thin LLM layer on top of a search bar, you can now ask “Find all enterprise customers in North America who signed up last August” and get a result. No SQL needed.
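Behind the scenes, that thin LLM layer is just translating the English into a query. It would look something like this, where the table, columns, and the dates for "last August" are made up for illustration:

```python
# The user's question, and the SQL an LLM layer might generate from it.
question = "Find all enterprise customers in North America who signed up last August"

generated_sql = """
SELECT *
FROM customers
WHERE plan = 'enterprise'
  AND region = 'North America'
  AND signup_date BETWEEN '2023-08-01' AND '2023-08-31';
"""
```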
What this means is that programming is no longer reserved for a skilled group of technicians who know the language. Anyone will be able to do it, using plain English.
Natural language programming could revitalize low-code/no-code platforms. These apps were all the rage a few years ago, but they're not that good. I've used Alteryx, which was OK but slower than writing the code myself. A few years ago, I tried four different platforms to build an app and couldn't do it. Even though there was "no code," it was more like programming with a GUI; all of these platforms still require you to understand the high-level structure of a program. Natural language AI assistants skip that intermediary: you just tell them what to do, and they'll build it.
Avoiding diSaaSter
If anyone can build any kind of software, what does it mean for software companies? Chamath Palihapitiya said it could have a hugely disruptive impact on SaaS companies. He claims it’s easy to replicate 80% of the features of any app at 10% of the cost.
The argument goes:
AI is good at coding
AI coding assistants reduce the cost of building software
People can use AI coding assistants to replicate SaaS products in two ways:
An influx of lower-priced competitors
Internal development: more companies opt to build software internally or use open-source options instead of buying easily replicated SaaS.
At first, this made sense to me. It's not that hard to build a CRM. A CRM doesn't rely on some NP-hard, UberPool-esque algorithm. It's really just a list of customers, each with certain attributes (e.g., contact info, role, company, previous interactions, stage of the sales process), which you manipulate (e.g., move customer 1 from free to paid subscriber, email customer 2). Slap a slick UI on top, and bingo, you have the core of Salesforce. Plus, if you're developing the software internally, you can customize it to your enterprise's needs. As Des Traynor says, many software products are just a database with a UI on top, and neither is impossibly difficult to replicate.
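To see how thin that "database with a UI" core really is, here is a toy sketch of the data model in Python. The fields and stages are illustrative, not Salesforce's actual schema:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class Stage(Enum):
    LEAD = "lead"
    FREE = "free"
    PAID = "paid"

@dataclass
class Customer:
    name: str
    email: str
    company: str
    stage: Stage = Stage.LEAD
    interactions: List[str] = field(default_factory=list)  # previous touchpoints

@dataclass
class CRM:
    customers: List[Customer] = field(default_factory=list)

    def upgrade(self, email: str) -> None:
        # e.g., move a customer from free to paid subscriber
        for c in self.customers:
            if c.email == email and c.stage == Stage.FREE:
                c.stage = Stage.PAID

    def log_interaction(self, email: str, note: str) -> None:
        for c in self.customers:
            if c.email == email:
                c.interactions.append(note)
```

Everything else (the slick UI, the integrations, the hosting) is where the real work lives, and as I argue below, that is also where the moat is.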
Two years ago, it might have taken a team of 10-20 people a long time to build a homemade CRM. Now a team of 2-5 people could build it in a few weeks with the help of GitHub Copilot and ChatGPT. (I've seen it happen, too: three friends and I took second place at the Dartmouth hackathon last weekend with a web app we built in 24 hours.)
This is especially true in the age of open source software where developers publish their code free for anyone to use. If just one software developer publishes a pretty-good open source CRM, anyone can copy their code and build on top of it.
To the relief of SaaS builders and investors, I think SaaS is safe from this kind of disruption, for two reasons. First, writing the code is a small part of the cost of software development. It doesn't account for hosting and maintaining an application (e.g., patching cycles, A/B testing environments, backups and restoration), even if you use a cloud provider.

Second, these companies have deeper moats than the products themselves, namely network effects. There are plenty of CRMs out there that do more or less the same thing as Salesforce. Salesforce is so strong because an entire ecosystem of community-built apps sits on top of the core CRM, and it's interoperable with lots of the other software a company uses. That is much harder to replicate.
The bear case
I think there are two factors that might impede natural language programming.
The main risk is the technology itself. As I wrote in Beyond Large Language Models, LLMs are not actually reasoning about the code they write. Though scaling transformers will improve these models' performance, as discussed in part 1, we may need new architectures to reach the next level. I'm optimistic, given how many researchers and companies are working on this problem.
The second risk concerns market size. The total addressable market for "tools to help with programming" isn't that large. ChatGPT has essentially replaced Stack Overflow, the go-to web forum for programming questions. But Stack Overflow isn't worth that much: it was acquired for $1.8Bn in 2021.
We can even do a back-of-the-napkin market sizing ourselves for GitHub Copilot: roughly 4.4 million software engineers in the U.S. x $39 per seat per month x 12 months per year = $2.06Bn in revenue per year. That's a sizable revenue opportunity, but a) it's not that big compared to, say, search and advertising, which is on the order of $100Bn-plus; b) if it's not a winner-take-all market and there are several competing coding assistants, there's only so much pie to go around; and c) it may not justify the massive cost of training these models (see Sequoia's estimate that the $50Bn spent on Nvidia GPUs last year generated only $3Bn in revenue).
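Here is that napkin math spelled out, using the same rough headcount and per-seat price as above:

```python
# Rough TAM estimate for an AI coding assistant priced like GitHub Copilot.
us_swes = 4_400_000   # approximate number of U.S. software engineers
seat_price = 39       # dollars per seat per month (the figure used above)
months = 12

annual_revenue = us_swes * seat_price * months
print(f"${annual_revenue / 1e9:.2f}Bn per year")  # -> $2.06Bn per year
```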
I can see a bull case for the technology: "companies will pay a lot more than $39/person for it," or "the TAM for low-code/no-code platforms is $44.5Bn according to Gartner," or "with natural language programming, everyone can program, so the TAM is even larger." But that only happens if these tools' capabilities improve dramatically. My guess is that AI programming assistants are probably not the next Google to invest in. But they might help someone build the next Google.
Conclusion
There's a lot of hype in AI, but I think programming is a proven, value-adding use case. LLMs can automate routine programming for software engineers, making them more productive and letting them focus on higher-level systems design.
Overall, I'm excited. I'm excited about the incredible things this technology could help create. Less than 1% of people in the world know how to code, and look at all the amazing things they have been able to build. Now imagine if that 1% were 5x more productive. Now also imagine if the number of people who know how to code increased 5-fold because they can use plain English, not JavaScript. We could build some really incredible things. Programming is far from over. If software is eating the world, AI is feeding it.