Jason Becker
December 31, 2023

Finished my final book for the year. I made my modest goal, which is nice, but this year I still read much less than normal and less than I would have liked.

I’m bummed at how hard reading felt at various points, and I hope a part of my 2024 is creating more space for fiction.

It’s December 31st and my office is almost unusable because of the industrial dehumidifier, so I’m seriously considering nuking neovim and starting my configuration from scratch.

December 30, 2023

The Phoenix guides are as good as everyone says. A bit overwhelming for a non-web developer by trade, but it’s all in there. And I still appreciate how much of the code is generated. I’d prefer boilerplate I can find, read, and work out to hidden “magic”.

It took quite a while for the sun to break through the clouds, but eventually we had a hell of a view in Philly.

A view of Philadelphia from 53 floors up.

Cleaned up the menu on my blog a bit to create some space for some new ideas I hope to put up there in 2024.

I agree with everything in Why I’m skeptical of low-code. Nick Scialli’s four reasons are:

  1. They wanted truly custom functionality that the low-code solution could not handle.
  2. They implemented a bunch of custom functionality in a product-specific or even proprietary language, and now their pool of potential developer talent is tiny.
  3. Upgrades to the low-code platform would break their custom implementation.
  4. The underlying database structure was an absolute mess, especially after a bunch of incremental modifications.

I’ll add one more reason to be skeptical of low-code.

Low-code encourages executives to believe truly custom functionality is reasonable. When the first 80% works great and comes at nearly no cost, it’s easy to imagine that closing the remaining gap with custom functionality is worth it. In truth, low-code encourages customization. And customization is almost always bad ROI. If the process you’re using software to address is not your business’s core competency, anything but bog-standard commercial-off-the-shelf software should be looked at with great skepticism.

Organizations should use the “good enough” solution built somewhere else for anything other than their core business differentiators.

Last.fm, in addition to being increasingly difficult to use, won’t let me log in. It says I need a password reset. I can do so successfully, then when I try to log in, it says I need to reset the password again. Forever and ever.

December 29, 2023

I constantly think about building my own blogging system for my site, just because I think that’d feel good, but I’m also deeply disinterested in pretty much all the tech that’s a good fit for this scope. Maybe when I retire.

I just received the most thoughtful gift from the person in my life who has always given me the most thoughtful gifts. This one is so excellent that even they said, “lower your expectations from this point forward.”

I remain very happy with my Apple stuff. And curious about the minimum viable Linux box that could drive my Studio Display and be a good developer station.

Literally been trying to get a roofer to my house for 2 weeks. Twice I tried to go with someone who isn’t strictly a roofer, and they showed up and said nope, won’t touch that. The only glimmer I have is someone who said he can maybe come out to look at the end of next week if I call midweek.

December 28, 2023

More flooding. While we’re not home. I literally can’t get someone sooner than Friday afternoon to even estimate the cost of fixing the cause of the leak. I’m so stressed and pissed.

$55 in electricity from running these dehumidifiers and blowers non-stop, with seemingly no end in sight.

December 27, 2023

Uber burst onto the scene as a “major transportation disruptor,” vowing to eliminate private car ownership, introduce self-driving vehicles, and revolutionize transportation. But first, they needed to illegally break up a taxi cartel by providing massively subsidized car services in locations and at times that could not possibly be profitable. They also needed to screw workers.

A decade later, ride-sharing hasn’t evolved significantly since its launch. Costs have risen as consumers now pay the actual marginal cost of their rides. Transportation hasn’t undergone a transformative shift, but we do have a slightly better taxi system.

In a conversation with my friend Jake today, it struck me that the current landscape of “AI” mirrors this trajectory. The current crop of large language models are promising to disrupt work and change the whole world. It’s cheap for end users due to massive subsidies. Models are being built off of likely illegal practices.

In ten years, I expect that LLMs won’t be that much more useful than they are today. Using these AI services will be much more expensive, because we will no longer have queries massively subsidized. Work won’t have changed that much, but some select jobs and industries will be permanently impacted. We will end up with marginally better technology. We probably could have achieved similar outcomes without breaking the law, with less negative impact on workers, and less wealth creation and destruction. 1

  1. Because there will be a boom and bust as Silicon Valley moves capital to fuel a new bubble. Although, this time, it seems established companies may fuel investment as much as venture funds. ↩︎

I wonder why it always takes around 3-6 weeks to prepare end of year tax documents. The rules rarely change in December. The information is all digital and collected all year. So why does it seem to take everyone until the end of January for forms?

My interest in Ghost has declined now that I’ve learned it appears to store content (or at least communicate it over the API) in a custom data format defined by Lexical, an open source JavaScript text editor component from Meta. That’s the tail wagging the dog into a massive compatibility challenge.

This morning I wrote off the cuff about how OpenAI and Uber are similar. It’s a bit of a stretch, but I still like the comparison. 1 But all day I’ve been thinking about a few things that are worth noting that are quite different between ride sharing and the current crop of AI companies.

Ride sharing did not represent any meaningful technological innovations. It required the smart phone and a host of other technologies, but absent the promises of self-driving, the actual “innovation” in ride sharing was a business model innovation. The promise was always that technological innovation would come, but in the meantime, it was having all drivers be contractors and forcing them to pay the capital costs of a vehicle fleet that made ride sharing more than just a marketplace with maps and routing.

Large language models do represent a technological innovation. The last 10-20 years really have seen a change in how we work with huge amounts of unstructured data, one that amounts to more than just collecting larger corpora of data and productizing known methods. Sure, many folks building LLMs and other machine learning-driven products are resting on the shoulders of a huge corpus of academic work, but I think there is significantly more technology involved.

Ride sharing asked us to believe that a huge technological innovation would make it profitable and cost effective– namely self-driving vehicles. There was plenty of hype and prediction, much of it from outside of ride sharing, that this future was just around the corner. And although some (many, myself included) doubted we were so close to true self-driving, the valuations demanded by ride sharing companies required this innovation.

Current AI companies don’t necessarily require a huge breakthrough to have a better cost structure. They need compute and storage to go down in price, which they have for decades, and this trend will likely continue, even if it slows. That makes the models of today cost effective tomorrow. Where AI companies do need to see innovation is in the effectiveness of the models themselves. Can LLMs and similar models make big leaps over their current implementations without requiring significantly more data and energy to produce and then operate? I think that’s an open question. If I’m right, LLMs will not get much better. It will take an increasingly large amount of data and compute to build models that are barely better than what we have today. I am not convinced there’s a new innovation, like the GPT models themselves, just around the corner that will produce better results and products with similar costs. This means that while the marginal costs of training a model and serving a query may come down, those models won’t be much more effective than what we have today. And that’s assuming the courts decide that scraping huge amounts of “public” data remains viable.

Ride sharing was subsidized for consumers by both investors and workers. We got cheap rides because people who drive cars for work now have to pay for them. Drivers could more efficiently find passengers, but I doubt that improvement closed the gap on capital investments. Plus, our rides were cheap because investors didn’t care about profits.

AI doesn’t just benefit the companies that are making it and those that are investing in it. As users, we get more surplus from ChatGPT or generative fill than we do ordering a Lyft versus a cab. AI can help its users be more productive and enable us to do things we could not do before. This may be generating significantly more surplus value.

Which is connected to the most important, final comparison:

Ride sharing didn’t really enable new abilities. Mostly it expanded an experience and capability that existed in some places to new places.

LLMs do, I think, provide some unique experiences. Although we’ve had grammar checkers, chat bots, natural language interfaces, various forms of autocomplete, and so on, what’s new is powering this all from one set of machinery and getting best in class or near best in class results. I also think that tools like Midjourney and Adobe’s AI-based autofill are quite astonishing. This was not something that existed meaningfully before.

This is where I have the most optimism for AI tech– there are some new experiences and capabilities here. There’s a real chance that a broad set of people will be able to do their work a little better and a little easier than before. This has the potential to be valuable to society, but also, importantly, to individuals, who should be willing to pass on some of that surplus value in the form of profits to AI companies.

Ride sharing was a lot more smoke and mirrors. AI has substance. That doesn’t mean I think their trajectory will look much different– both had their boom, and I suspect both will have their bust. But I admit, I do think AI will have more lasting impact and a softer landing.

  1. I take the most license with the word “just” in the title. But it was meant to be a provocative title. ↩︎

Oh, wow. Matt Birchler wrote a response to my post this morning about OpenAI. Awesome to see a blog I follow respond to something I wrote.

A couple of quick responses to his disagreement. Matt wrote:

I’d actually disagree quite a bit here. Ride hailing is exponentially better than it was 10 years ago, and we’d be shocked how bad it was compared to today if we were teleported back then.

Maybe my recollection is off. But I started using Uber/Lyft probably closer to 8 or 9 years ago– so maybe a decade was wrong– and I remember an experience virtually indistinguishable from today. If anything, my wait times were shorter and so were the prices. At least from the perspective of a frequent business traveler, my experience remains the same as it ever was– land in a new city, take out my phone, call rides that arrive in a mostly reasonable amount of time, go wherever I need to go. I just pay more and wait longer now than I did pre-COVID.

Especially if we agree that LLMs won’t get meaningfully smarter, that means using the local models that Google has already announced for Android and Apple is clearly working on for iOS/macOS will be more viable, and will be completely free to run as much as you want.

I think this is fair, and I think my follow-up post better explains my thoughts. What I really mean is that our current models are about as good as we’ll see at low cost. If LLMs can get better at all, I expect the better models to remain expensive. So I think my original piece combined two predictions in a way that was unclear. I think better models will require significantly more cost and energy, if we can achieve them at all, because I think there is no technological breakthrough coming that will reduce that cost.

Basically, I think if LLMs do get exponentially more useful, then server-side models that cost significant sums of money to run will continue to be prominent. But if they plateau in usefulness, then most people will run models locally on their devices most of the time because they’ll be quicker, more private, and basically just as good. And remember that our phones and computers are getting faster every year, so these LLMs will constantly run better and better than before.

I agree with this. If they get more useful, they will cost a lot of money. Today’s models will get cheap enough.

Today’s “series” on AI is a great example of why I love that my blog is arranged reverse chronological by day and chronological within that day. Reading my blog’s home page makes sense and shows an evolution of thoughts. I just wish Hugo could more easily generate day pages.

December 26, 2023

Finally got the blowers out of my office, which is a total disaster zone, and I expect it will stay that way until we’re ready for repairs. Probably lost at least one house plant. Also, our dining room is completely torn apart and louder than ever. MIL’s room still not dry. 🫠

I was supposed to start allergy shots back in October. I called two weeks after my appointment like the instructions said and got a nasty person on the line who said they’d notify me when the shots were ready. I just got a bill that claims my insurance has already paid almost $2000 for allergy shots I’m not getting and that I owe $30.

I have never received any follow up. They were on my list of calls to make this week. I decided to not call for about 6 weeks after they were so nasty the first time.

The date my serum was supposedly prepped? The day I called to see when I was supposed to come in for the shots.

I guess that’s something I’ll probably be starting next week.

Today is a late Festivus, so on to my next grievance. HDCP is absolutely insane. Modern, new TVs, new receivers, new HDMI cables, and new Apple TVs will still just suddenly stop working, consistently, all the time. DRM, bad standards, and “cheap” electronics in expensive equipment.

Oh, and my original HomePods are officially unable to just… play music consistently.