The Last Mile Problem of AI
Why AI Progress Seems to Be Slowing Down
The last mile of goods delivery is its most expensive part. Moving a package from a local depot to dozens of doors on busy city streets can take longer than flying it 1,000 miles, and it still requires human couriers. Because this door-to-door leg is so hard to automate or scale, logistics calls it the “last-mile problem.”
The term soon spread to self-driving technology, where the “last mile” means operating safely in difficult conditions: busy city centers, packed suburbs, unpaved roads, or bad weather. Clearing this stage is the key to large-scale adoption. A driverless Uber is useless if snow, road work, or a big bike race makes it fail.
Crossing the last mile of a technology is extremely hard, and progress often plateaus there. That is when the general public begins to notice development slowing down. In generative AI, we saw a major leap from GPT-3 to GPT-3.5, better known as ChatGPT, but progress since then has felt underwhelming. Many AI evangelists pointed to the rapid climb to GPT-3.5 as evidence of unstoppable AI advancement. If only that pace had been maintained. Instead, one model followed another with increasingly marginal improvements, culminating in the GPT-5 disaster.
While the market leader, OpenAI, entered the slow and painful journey through the last mile, competitors began to catch up.
In the winter of 2025, the Chinese company DeepSeek made waves online by reverse-engineering what was, at the time, OpenAI’s most advanced model. The reaction was intense. Some praised the move, especially in the open-source community, seeing it as a breakthrough that could accelerate AI progress by giving researchers access to a competitive model. Others reacted with alarm, fearing that China had made a decisive move, was taking the lead in AI, and might challenge U.S. dominance. The drama settled after a few weeks, as more Chinese and American models were released with similarly competitive capabilities.
The way to tell that we have crossed the last mile will be seeing generative AI finally pay off. So far, that has not happened.
To address common issues with LLMs, such as hallucinations, the field has shifted toward agentic AI. Proponents now claim that agents can replace humans in many areas. However, just as with autonomous driving, we still need to cross the last mile for that to happen.
As more models began to reach the last mile, public interest started to wane. AI vendors, unable to deliver on their earlier promises of exponential progress, grew increasingly desperate to maintain their valuations. Hype spiraled out of control. Bold claims about doubling the human lifespan or replacing half the office workforce appeared with increasing frequency. As “generative AI” lost its buzzword appeal, the term “agentic” surfaced to sustain investor FOMO. Yet actual progress remains limited, and the last-mile problem persists.
If you liked this newsletter, consider becoming a paid subscriber to support this work, or check out the AI Realist shop:
https://airealist.myshopify.com
- Every item you buy comes with a free month of paid subscription (two items = two months, and so on).
The hype is unbelievable, and the cross-investments have surpassed incestuous levels. It is no longer a slow-motion train wreck but a disaster at full speed. The underlying technology, while fascinating, is nowhere near ready for prime time; the complexity is increasing asymptotically, meaning real-life applicability in a general-purpose way remains off the charts.
Happy to see this bubble, if not bursting, at least deflating a little.
Since we cannot go back, I don't mind if this last mile never ends. 🙌