Mind the Gap
I stared at my laptop screen.
VM connection timeout after 60 seconds
Restarting Claude or your Mac sometimes resolves this. If it persists, share your debug logs to help us improve.
Oh-kay. Here we go again.
I'd been using Anthropic's new Claude Cowork for about a week and a half. It had already become indispensable, helping me plow through a big strategy report like a faithful work buddy. Then one morning it just… stopped. No warning. No explanation beyond that cryptic error message and its optimistic suggestion to restart everything, as if the software equivalent of "have you tried turning it off and on again" was going to cut it.
It didn't.
What followed was two days of intermittent troubleshooting. I logged a bug ticket with Anthropic (offered my contact info so they could follow up, never heard back). Then I cycled through everything I'd done the night before. I'd added a bunch of plugins. What if one of them was the culprit? I went into the application settings and disabled them one by one. No luck. I checked the Claude Developer Discord and found exactly one other person with the same issue, timestamped the same day as mine. Solidarity, but no answers.
Finally I tried Reddit, which in retrospect is where I should have started. I found the Cowork subreddit, searched for "virtual machine," and landed on a post from someone who'd stumbled onto a fix. It required a terminal command and came with a warning about losing chat history, but after studying their approach I realized there was a simpler way. I just had to delete a single corrupted file that Cowork would regenerate on its next startup. I tried it. It worked. I went back to that lonely Discord thread and posted the solution. Then I got on with my life.
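(For anyone who lands on this post while debugging the same error: the fix boiled down to deleting one file that Cowork rebuilds on launch. Here's the shape of it as a Python sketch. The path and file name below are placeholders, not the real ones, so verify against your own install and that Reddit thread before deleting anything.)

```python
# Sketch of the fix: delete a corrupted state file so the app
# regenerates a fresh copy on its next startup.
# NOTE: the path and "corrupted-state.json" are placeholders,
# not the actual file name; check the real one before removing anything.
from pathlib import Path

state_file = (
    Path.home() / "Library" / "Application Support" / "Claude" / "corrupted-state.json"
)

if state_file.exists():
    state_file.unlink()  # quit Claude first, then delete
    print(f"Removed {state_file}; relaunch Claude to regenerate it.")
else:
    print("Nothing to delete; that file isn't there.")
```

In practice it was a single delete in the terminal. The point is just that the file is safe to remove, because Cowork rebuilds it from scratch the next time it starts.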
A good two to three hours, gone. Workflow completely disrupted. All because of one tiny corrupted file.
This wasn't even my first rodeo with Cowork. When I initially got access, I couldn't get it to work for the first five days. I went through the usual ritual: quit and restarted Claude, rebooted my laptop, waited through several software updates (it was a "research preview," another way of saying beta software, and the developers were shipping fixes almost daily), until I eventually got it running by disconnecting my VPN. Which I'm positive I had tried before. But whatever.
I tell this story not because it's unusual, but because it's completely ordinary. This is what adopting new technology is really like. Don't be fooled by those sleek keynote product demos and social media launch videos. The reality is you, alone at your desk, trying to figure out why the thing that was supposed to make your life easier has instead consumed an entire afternoon.
It's always been this way, and it probably always will be.
Ever since I was young, I've been the kind of person who tries to fix things when they break. Cassette players, remote control cars, telephones. I'd patiently take them apart, stare at the guts, try to figure out what might be wrong, then put everything back together. There would often be a leftover piece. It rarely worked afterwards. But every now and then I'd manage to fix something, and those small victories gave me an optimism and determination I still carry today.
The objects have changed but the pattern hasn't. Technology continues to make promises that lie just beyond what it can reliably deliver. Even Apple, whose products are supposed to "just work," releases iOS updates that "just break" something that was finally "just working" after a year of incremental fixes. A "smart" dimmer for our outdoor lights suddenly needed a firmware update to stay compatible with an iPhone app update I didn't ask for. Technology doesn't care about our schedule or our patience. It just keeps moving forward. There's no option to turn back. Downgrading means giving up.
With AI, the gap between promise and reality has become especially vivid. I don't just mean keeping up with the frantic stream of updates and product releases (though I have a blog series just for that). I mean actually keeping up with how these tools work and how they might genuinely improve your life. The media tells you that if you don't figure this out you'll become obsolete, lose your job, and die in the street. (They don't actually say that last part, but it's implied.)
So you jump in. And you quickly discover that AI tools come with their own special flavour of frustration. Sometimes it's simple prompt-level stuff, learning how to ask for what you actually want. But increasingly these tools are sophisticated enough that when they break, they break in baffling ways. If something is labelled an "agent," that's essentially a warning label: it might stop working as expected at any point, and figuring out what went wrong will be your problem, not theirs.
When I first heard about Claude Cowork I was excited for all my friends and colleagues who would finally get to experience agentic AI without having to open a command terminal. I had been raving to anyone who would listen about Claude Code, Anthropic's powerful coding assistant, but the interface was intimidating, and even though it could do much more than write code, the name alone was a barrier. Cowork felt like the product that could bridge the gap between early adopters and everyone else.
And yet. Even as someone who lives and breathes this stuff, I spent days wrestling with it before I could actually use it. If the gap is that wide for me, how wide is it for someone who doesn't have decades of tinkering muscle memory to fall back on?
This is the gap worth paying attention to. Not the gap between what AI can do today and what it'll do tomorrow. The gap between the technology as it's marketed and the technology as it's actually experienced. Between the keynote demo and the Tuesday morning when nothing works and you're searching Reddit like it's 2009.
Those two to three hours weren't entirely wasted, though. Without intending to, I'd learned how Cowork actually works under the hood. I now know where the settings live for plugins, connectors, and the local MCP servers that Claude relies on to connect to other apps and services. If I see that error message again, I'll know exactly what to do. And the next time something different breaks, I'll diagnose it faster because I understand the architecture a little better.
The frustration was the education.
That's the uncomfortable truth about the gap. It's annoying. It eats your time. It makes you question whether any of this is worth the effort. But it's also where the real understanding comes from. The people who will be most capable with AI aren't the ones waiting for it to "just work." They're the ones willing to delete corrupted files and share their fixes with strangers on Discord.
Not everyone has the time or temperament for this, of course. My partner has found her own path. "Ask Plexi!" has become a mantra in our house since she started using Perplexity. It doesn't always have the right answer, but it usually gets you pointed in the right direction. Two years ago we had to rely on Reddit and Google Search, spending hours combing through possible clues. Now tools like Perplexity and ChatGPT do that sifting for us. In some real and measurable ways, life has gotten easier thanks to AI.
Just be careful about putting too much faith in the answers. Like most LLMs, these tools would rather give you a confident-sounding response than admit they don't know. When that happens, I like to think about the time we were driving through a remote town in Mexico, back before iPhones and reliable GPS, trying to find our way back to the highway. I rolled down the window and asked a local which way to go. He pointed confidently. We followed his directions.
He was completely wrong, but fortunately the next person we asked pointed us in the right direction.
At least he was trying to help. And on some level, so is this technology. It's just that "trying to help" and "actually helping" are two different things, and the distance between them is where we all live right now. So mind the gap.
Cover image generated with Midjourney. Editing assistance provided by Claude.