When the API goes down, everyone's out of a job

How hooked are we already on Claude Code and other coding agents? Here in Australia we’ve got a real advantage — there are hours when the US and Europe are asleep. It’s only a few hours, but you notice straight away when there’s more room to breathe on the internet. Claude Code just clips along during those windows. The model feels sharper. Less under the pump.

But with the runaway success of Claude Code come the overloads and the API errors. Everything grinds to a halt. Pull back the curtain and it's pure dependency underneath, in a coding world where even solid developers can barely keep up with what Claude Code has knocked together in the blink of an eye.

It happened fast. Three months ago this all still felt like one big leap of faith. Now we’ve come another enormous step forward. We’re leaning on the models more and more — models that spot an error and just fix it on the spot. No wonder everyone’s piling into agentic coding right now. All you need is an idea and you can build anything, right? It feels like 1998. And completely different at the same time. More serious. Less playful.

We're relying on the agents more and more. But what happens when the models go down? An overload, a hacked provider, who knows. Everyone ends up sitting on half-finished code with nothing to do but throw their hands up.

That’s exactly why you should think hard about how much you hand off to the agents — impressive as they are — and whether you can shift some of the work to local models running reliably and quietly on your own server.
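What does that offloading look like in practice? Here's a minimal sketch, assuming an Ollama server running on its default port with a model already pulled (`ollama pull llama3.1`); the model name and the summarising prompt are just placeholders for whatever job you're shifting off the hosted APIs:

```python
# Sketch: route a task to a local model instead of a hosted API.
# Assumes Ollama is running on its default port with a model pulled.
import requests

def summarise_locally(text: str, model: str = "llama3.1") -> str:
    """Send a prompt to a local Ollama server and return the completion."""
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": model,
            "prompt": f"Summarise in three bullet points:\n\n{text}",
            "stream": False,  # one JSON object instead of a token stream
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(summarise_locally("Long scraped article text goes here..."))
```

Nothing fancy, but it keeps running when the hosted APIs don't.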

I've got five or six projects ticking away on my Mac mini. They crawl, scrape, crunch numbers, fire off emails with instructions, and sync with my websites. Some sit on top of Anthropic's SDK, connected to my Anthropic Max account, so no separate API token spend goes out the door.
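The shape of one of those jobs is roughly this. A minimal sketch on the official `anthropic` Python package, which picks up its credentials from the environment; the Max-account wiring is a detail of my setup that happens outside the script, and the model name and prompt are illustrative placeholders:

```python
# Sketch: a scheduled job that drafts an instruction email via the
# Anthropic Python SDK. The client reads its credentials from the
# environment; model name and prompt are placeholders.
import anthropic

client = anthropic.Anthropic()

def draft_status_email(crawl_report: str) -> str:
    """Turn a raw crawl report into a short email body."""
    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder; pin your own model
        max_tokens=512,
        messages=[{
            "role": "user",
            "content": f"Write a three-line status email from this report:\n{crawl_report}",
        }],
    )
    return message.content[0].text
```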

Every project I work on needs to be built so that I can head off for four weeks and it just keeps running. One of my most-used instructions in Claude Code is "it needs to be set and forget, test and validate all edge cases." I've also got agents that don't just flag when something like a crawler has stopped working; they have a crack at fixing it themselves.
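The watchdog pattern behind that is simple enough to sketch. Every helper here is a hypothetical stub standing in for the real pipeline and the real agent call; the point is the shape: run, catch, hand the traceback to an agent, re-run, and only page a human as the last resort:

```python
# Sketch of the self-healing watchdog pattern. All three helpers are
# hypothetical stubs standing in for the real pipeline and agent.
import traceback

def run_crawler() -> None:
    """Hypothetical stand-in for the real pipeline entry point."""
    raise RuntimeError("selector '.price' no longer matches")

def ask_agent_for_fix(error_report: str) -> None:
    """Hypothetical: hand the traceback to an agent that patches the code."""
    print("agent attempting repair based on:\n", error_report)

def notify_human(error_report: str) -> None:
    """Hypothetical: final escalation when the agent can't fix it."""
    print("paging a human:\n", error_report)

def watchdog(max_repair_attempts: int = 1) -> bool:
    """Return True if the crawler ran clean, possibly after a repair."""
    for attempt in range(max_repair_attempts + 1):
        try:
            run_crawler()
            return True
        except Exception:
            report = traceback.format_exc()
            if attempt == max_repair_attempts:
                notify_human(report)
                return False
            ask_agent_for_fix(report)  # then loop and re-run the crawler

if __name__ == "__main__":
    watchdog()
```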

The agents help me keep those fragile scrape pipelines patched up. Interesting work, but it also shows how brittle scraping really is. If your whole business model sits on top of it, you’ll be fighting fires on the regular. Ideally you’re sitting on data that doesn’t exist anywhere else, producing signals nobody else can calculate.
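One habit that keeps the firefighting manageable: make the pipeline fail loudly the moment a page's structure drifts, so the watchdog above has a clean traceback to catch instead of a quiet stream of empty records. A sketch with placeholder selectors:

```python
# Sketch: fail fast on structural drift instead of silently storing
# garbage. The CSS selectors are placeholders for whatever the
# pipeline actually targets.
import requests
from bs4 import BeautifulSoup

REQUIRED_SELECTORS = {
    "title": "h1.product-title",  # placeholder selector
    "price": "span.price",        # placeholder selector
}

def scrape_product(url: str) -> dict:
    """Scrape one page, raising loudly if any expected field is gone."""
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    record = {}
    for field, selector in REQUIRED_SELECTORS.items():
        node = soup.select_one(selector)
        if node is None:
            raise ValueError(f"selector for {field!r} matched nothing: {selector}")
        record[field] = node.get_text(strip=True)
    return record
```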

Anyone depending on third-party data aggregators, or even just on OpenAI, Gemini, or Anthropic, needs to be able to rely on that service completely. It's a risky game. The intelligence lives in the API call. The moment access is blocked or simply stops working, everything falls apart. So the same rule applies: avoid dependencies wherever you can, and keep everything provider- and model-agnostic. Hard as that is.
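Hard, but the seam itself is cheap to build. Here's a minimal sketch: one complete() function and a list of backends tried in order, with the Anthropic SDK first and a local Ollama server as the fallback. The model names are placeholders and the error handling is deliberately blunt:

```python
# Sketch of a provider-agnostic seam: callers only ever see complete(),
# never which backend answered. Model names are placeholders.
import anthropic
import requests

def _anthropic_backend(prompt: str) -> str:
    client = anthropic.Anthropic()  # credentials from the environment
    msg = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder
        max_tokens=512,
        messages=[{"role": "user", "content": prompt}],
    )
    return msg.content[0].text

def _local_backend(prompt: str) -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3.1", "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

def complete(prompt: str) -> str:
    """Try each backend in order until one answers."""
    backends = [_anthropic_backend, _local_backend]
    last_error = None
    for backend in backends:
        try:
            return backend(prompt)
        except Exception as exc:  # overload, outage, auth failure, ...
            last_error = exc
    raise RuntimeError("all model backends failed") from last_error
```

Swap the list, reorder it, add a provider: nothing upstream has to change.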