Oh, wow. Matt Birchler wrote a response to my post this morning about OpenAI. Awesome to see a blog I follow respond to something I wrote.
A couple of quick responses to his disagreement. Matt wrote:
I’d actually disagree quite a bit here. Ride hailing is exponentially better than it was 10 years ago, and we’d be shocked how bad it was compared to today if we were teleported back then.
Maybe my recollection is off. But I started using Uber/Lyft probably closer to 8 or 9 years ago (so maybe a decade was wrong), and I remember an experience virtually indistinguishable from today's. If anything, my wait times were shorter and so were the prices. At least from the perspective of a frequent business traveler, my experience remains the same as it ever was: land in a new city, take out my phone, call rides that arrive in a mostly reasonable amount of time, go wherever I need to go. I just pay more and wait longer now than I did pre-COVID.
Matt also wrote:

Especially if we agree that LLMs won’t get meaningfully smarter, that means using the local models that Google has already announced for Android and Apple is clearly working on for iOS/macOS will be more viable, and will be completely free to run as much as you want.
I think this is fair, and I think my follow-up post better explains my thoughts. What I really mean is that our current models are about as good as we’ll see at low cost. If LLMs can get better at all, I expect the better models to remain expensive. So my original piece combined two predictions in a way that was unclear: better models will require significantly more cost and energy, if we can achieve them at all, because I don’t believe there is a technological breakthrough coming that will reduce that cost. In other words, today’s models will get cheap, but meaningfully better ones won’t. Matt continued:
Basically, I think if LLMs do get exponentially more useful, then server-side models that cost significant sums of money to run will continue to be prominent. But if they plateau in usefulness, then most people will run models locally on their devices most of the time because they’ll be quicker, more private, and basically just as good. And remember that our phones and computers are getting faster every year, so these LLMs will constantly run better and better than before.
I agree with this. If models get meaningfully more useful, they will cost a lot of money to run. And today’s models will get cheap enough to run locally.