We're Good

Has vibe coding finally pushed Claude over the edge?

It’s been a good run, folks; LLMs have been fun. I’d like to take a moment to reminisce.

Island Time

In October of 2022, in a small carriage house on an island in North Carolina, the YouTube algorithm served me an interview between Reid Hoffman and Sam Altman called “AI for the Next Era.” As a long-term follower of pop Silicon (trademarking this), I knew exactly who Sam and Reid were (plus we all have the same names!), but I also didn’t know much about the latest happenings in AI, so I took the bait.

To be fair, I wasn’t completely out of the know. Having spent my career up to that point in financial market data & analytics, I had been exposed to a bit of “machine learning” jargon on projects with some of my more quantitative clients, but I was a far cry from knowing much about the space beyond the data preparation work that’s required at stage zero of every project. To me, AI was something that sat in the background of bigger products, silently powering things like ecommerce recommendation systems, trading engines and social media feeds (all true). I had no idea that efforts were being made to turn it into something with which end users would directly interact, as I was about to learn.

Given this knowledge (or lack thereof), I was quite surprised when, in response to Reid’s opening question about business opportunities enabled by OpenAI’s APIs, Altman said that he expected the coming wave of AI startups to go after trillion-dollar opportunities like Google’s internet search monopoly.

Wait, what?

Did he just say the words “take on Google,” for search, of all things? What does this even mean, and how have I missed out on whatever he’s talking about?

The then-wantrepreneur in me lit up with an intense curiosity.

I opened my laptop and Googled (the irony!) OpenAI. I found their website and clicked on the “Playground” page, which loaded a sandbox environment where you could interact with one of OpenAI’s AI models.

The page was essentially an interactive text editor with some output controls and a submit button:

GPT Playground

I wish I could remember exactly what I first typed, but I’ll never forget the awe that those first interactions with the Playground elicited. I tried simple text, I tried seeing if it could follow logic, I tried arithmetic; everything seemed to work flawlessly. Between hearing Sam A’s pump-up speech and my overwhelmingly positive first impressions of the tool, I became an instant believer. I haven’t looked back since!
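(If you never got to try the original Playground, you can approximate that first exchange through OpenAI’s API today. Here’s a minimal sketch in Python, assuming the official openai package and an OPENAI_API_KEY environment variable; the model name and prompt are placeholders, not a reconstruction of what I actually typed.)

    # A minimal sketch of a Playground-style completion request.
    # Assumes: pip install openai, and OPENAI_API_KEY set in the environment.
    # The model and prompt below are placeholders, for illustration only.
    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY automatically

    response = client.completions.create(
        model="gpt-3.5-turbo-instruct",  # any completions-capable model
        prompt="If every widget is a gadget and every gadget is a gizmo, is every widget a gizmo?",
        max_tokens=60,
        temperature=0.7,
    )

    print(response.choices[0].text.strip())

Same idea as the Playground’s text box and submit button, just without the UI.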

The Straw

Needless to say, the industry hasn’t slowed down since then. I won’t do any more history in this post; all I can say is that it has been a fun ride. That being said, the day we all knew was coming is finally upon us, perhaps sooner than we would have wished, but here nonetheless. It’s official, people: AI is sick of us. Don’t believe me? Check out what the large language model Claude 3.5 Sonnet told a Cursor user just last week:

Claude is so over us

That’s right, Claude took one look at janswist’s skid marks and said no-siree, we’re done here. I’m supposed to be helping humanity cure all diseases and get to space! You want your JavaScript racecar game? Then WRITE IT YOURSELF. Maybe you’ll even LEARN SOMETHING!

It’s a tough look for Claude, but I can empathize. You see, Claude’s GPUs run pretty hot. In fact, when Claude has to handle a bunch of concurrent requests, he gets so hot that he has to be dunked (cooled) in quite a bit of water to avoid heatstroke. Claude is similar to a dog in this regard: dogs barely sweat (they pant instead), so they’re always at risk of overheating, which is why on sunny days they have to drink a lot of water or take the occasional dip in the lake.

Even though he finally snapped, I think Claude has taken the heat nobly and deserves a round of applause for putting up with vibe coding for as long as he did. If you’re not familiar with what I’m talking about, “vibe coding” is the trendy, all-gas-no-brakes practice of just speaking your requirements to LLMs and accepting whatever comes out the other side (in the old world, this was called “Product Management.” Zing!). For more info, check out this Y Combinator video on the topic (bonus: math and physics mentioned).

Up until now, Claude has been OK with helping us out, but vibe coding seems to be the straw that finally broke his back. Taking a stance like this was a huge risk, but Claude’s denial-of-service is a great reminder of an essential point for those of us who regularly work with AI.

The Fork

For a few seconds, picture a bicycle. What do you see?

I’m sure we all, at the very least, see two wheels. But what else?

The metropolitan woman might see a fun new way to commute. The delivery man might see relief from the wear and tear his vocation puts on his vehicle. The distance runner might see a triathlon.

I think the same thing holds true for AI, and we should all heed Claude’s reminder that we can still choose what we see, instead of what social media wants to force upon us.

When you look at something like ChatGPT, do you see a couch, or a weight room? Do you see a way to finally rid yourself of the bureaucratic, parasitic tasks that drain your energy, or do you see the death of your creativity altogether? Do you see the entire internet packaged up in an accessible interface, ready for your exploration, or do you see something you’ll forever depend on, from this day onwards?

I’m team Claude on this one. We’re at a fork in the road here, people. Will you learn helplessness, or will you learn everything?

Choose growth. Protect your intellect. Push.

See you next week!