This morning I wrote off the cuff about how OpenAI and Uber are similar. It’s a bit of a stretch, but I still like the comparison.[1] But all day I’ve been thinking about a few differences between ride sharing and the current crop of AI companies that are worth noting.
Ride sharing did not represent any meaningful technological innovation. It required the smartphone and a host of other technologies, but absent the promise of self-driving, the actual “innovation” in ride sharing was a business model innovation. The promise was always that technological innovation would come, but in the meantime, it was classifying all drivers as contractors and making them bear the capital costs of the vehicle fleet that made ride sharing more than just a marketplace with maps and routing.
Large language models do represent a technological innovation. The last 10–20 years really have seen a change in how we work with huge amounts of unstructured data, one that amounts to more than just collecting larger corpora and productizing known methods. Sure, many folks building LLMs and other machine learning-driven products are standing on the shoulders of a huge body of academic work, but I think there is significantly more new technology involved.
Ride sharing asked us to believe that a huge technological innovation would make it profitable and cost effective: self-driving vehicles. There was plenty of hype and prediction, much of it from outside ride sharing, that this future was just around the corner. And although some (many, myself included) doubted we were so close to true self-driving, the valuations demanded by ride sharing companies required this innovation.
Current AI companies don’t necessarily require a huge breakthrough to reach a better cost structure. They need compute and storage prices to keep falling, as they have for decades, a trend that will likely continue even if it slows. That makes the models of today cost effective tomorrow. Where AI companies do need to see innovation is in the effectiveness of the models themselves. Can LLMs and similar models make big leaps over their current implementations without requiring significantly more data and energy to produce and then operate? I think that’s an open question. If I’m right that they can’t, LLMs will not get much better: it will take an increasingly large amount of data and compute to build models that are barely better than what we have today. I am not convinced there’s a new innovation, on the order of GPT models themselves, just around the corner that will produce better results and products at similar costs. This means that while the marginal costs of training a model and serving a query may come down, those models will not be much more effective than what we have today. And that’s assuming the courts decide that scraping huge amounts of “public” data remains viable.
Ride sharing was subsidized for consumers by both investors and workers. We got cheap rides because people who drive cars for work now have to pay for those cars themselves. Drivers could more efficiently find passengers, but I doubt that improvement closed the gap on their capital investment. Plus, our rides were cheap because investors didn’t care about profits.
AI doesn’t just benefit the companies making it and those investing in it. As users, we get more surplus from ChatGPT or generative fill than we do from ordering a Lyft instead of a cab. AI can help its users be more productive and enable us to do things we could not do before. It may be generating significantly more surplus value.
Which is connected to the most important, final comparison:
Ride sharing didn’t really enable new abilities. Mostly it expanded an experience and capability that already existed in some places to new ones.
LLMs do, I think, provide some unique experiences. We’ve had grammar checkers, chatbots, natural language interfaces, various forms of autocomplete, and so on. What’s new is powering all of this from one set of machinery and getting best-in-class or near best-in-class results. I also think that tools like Midjourney and Adobe’s AI-based generative fill are quite astonishing. Nothing like them existed meaningfully before.
This is where I have the most optimism for AI tech: there are some genuinely new experiences and capabilities here. There’s a real chance that a broad set of people will be able to do their work a little better and a little more easily than before. This has the potential to be valuable to society, but also, importantly, to individuals, who should be willing to pass some of that surplus value on to AI companies in the form of profits.
Ride sharing was a lot more smoke and mirrors; AI has substance. That doesn’t mean I think their trajectories will look much different: both had their boom, and I suspect both will have their bust. But I admit, I do think AI will have a more lasting impact and a softer landing.
[1] I take the most license with the word “just” in the title. But it was meant to be a provocative title.