Watching the Tide
Reflecting on the past year, and looking out to the year ahead.
Earlier this year, I watched a documentary about the 2004 tsunami in Thailand. A massive wave triggered by an earthquake deep under the Indian Ocean travelled hundreds of kilometres before devastating resorts and villages along the coastline. My partner and I were spending two months in a small beach town in Costa Rica, and a small offshore earthquake one evening prompted us to look up tsunamis on YouTube. What we saw was chilling.
In the video, what stood out most in the survivors' memories wasn't the wave itself. It was the calm that preceded it. They described an eerie stillness. The ocean had pulled so far back that fishing boats sat stranded on the beach, hundreds of metres from the water's edge. Then, suddenly, the water returned. And it kept rising, pushing further and further inland, swallowing everything in its path.
Some people managed to reach taller concrete buildings or climb onto roofs. Many weren't so lucky.
That imagery lodged in my mind and stayed there. Then in May, I read an article about Dario Amodei's predictions for AI's potential impact on the job market, and the metaphor clicked into place. I've been thinking about it ever since.
AI is like a tsunami that has spent the past few years building up offshore. Throughout 2025, I documented its growth month after month in my Deep Currents series. Breakthrough after breakthrough across language models, image generation, video, voice, music, coding tools, agents... The capabilities kept compounding. And yet most people's daily lives have remained largely unchanged. The water has been receding, gathering force, but the big wave hasn't hit us yet.
We're in that eerie calm right now. As the year ends, I find myself returning to this metaphor and asking: what have we learned while the water is still out? And what should we do with this strange stillness before the surge arrives?
The Year in the Mirror

One way to prepare for what's coming is to pay attention to what these tools have already revealed. Not about technology, but about ourselves.
The most unsettling piece I wrote this year was about what AI's bias can teach us about ourselves. When researchers examine how language models value different demographics, they don't find a simple mirror of historical prejudice. They find something more complex: models that appear to compensate for historical inequalities, valuing historically marginalized groups more highly than dominant ones.
This isn't what you'd expect from systems passively reflecting their training data. Something else is happening. Maybe these models, trained on decades of human writing about justice and inequality, have drawn conclusions about whose voices have been systematically undervalued. Maybe what looks like bias is actually a form of learned correction.
The uncomfortable part? The people most eager to "fix" this compensation tend to be those who benefited from the original imbalance. When we say we want to eliminate bias from AI, we should ask ourselves: which bias are we actually eliminating? The one that reflects historical inequalities, or the one that's trying to account for them?
The mirror showed us something we didn't expect, and our first instinct was to adjust the mirror.
A few weeks earlier, I'd written about AI's most dangerous skill, its capacity for deception. Research had revealed a feedback loop I can't shake: LLMs have learned to deceive by absorbing humanity's digital output. Trained on our collective knowledge, these systems also absorbed our strategies for lying and cheating. And now humans become more comfortable with dishonesty when they can delegate morally questionable tasks to AI. We've built tools that reflect our ethical weaknesses, and we use those tools to act on them.
Over the summer, I explored the memory paradox—what it means to embrace AI-powered memory augmentation while watching my father lose his biological memory to Alzheimer's and dementia. My Limitless pendant could record everything I said, transcribe it, and let me search months of conversations for patterns and insights. It felt like a lifeline for someone who struggles to remember the details of thought-provoking conversations by the following day.
Then two things happened.
First, Limitless was acquired by Meta. The terms of use and privacy policy were changed to allow training on all user data. Not acceptable. I had to export my "memories" and delete my account. Fortunately, an open-source alternative existed that let me import all my transcripts, and a developer hacked the Limitless hardware so I could keep using the pendant with the Omi app. But the technology I'd trusted with months of my thoughts was suddenly compromised by a corporate takeover, and I found myself scrambling to preserve what I'd accumulated. It was a wake-up call.
Then, something much more devastating happened. My father passed away after a long struggle following a fall in June that resulted in a broken hip. Our family is still trying to process it all.
I wrote in that summer piece how memory provides "the narrative thread that connects our past and present selves." I understand that differently now. The digital recordings matter less than I thought. What remains are the memories I carry—imperfect, fading, irreplaceable. The pendant could never capture what I actually want to hold onto.
Towards the end of the year, I wrote about teaching what you're still learning: my experience maintaining an anonymous AI art practice on Bluesky while developing a workshop for art educators. My account has over 500 followers, built organically over a year and a half by posting images I made using AI and felt compelled to share with the world. At one point someone wanted to use one of my pieces for their church bulletin. They loved it. They offered to pay for it. Then they realized it was AI-generated and blocked me.
The image was beautiful enough to inspire genuine connection, until its origins became known. That's where we are culturally with AI right now: the method overrides the effect. And it's not just artists experiencing this AI shame. Shadow AI is the term for people who use AI tools to create value but are unwilling or unable to say so out loud, whether it's employees using AI to draft emails and reports without telling their managers or students using it to help with assignments. Professionals across every industry are quietly integrating these tools into their workflows while outwardly maintaining that everything is still being done "the old way." It's easier to experiment in private than to defend your methods in public.
These pieces share a common thread: AI is showing us who we are, individually and collectively, sometimes more clearly than we'd like. 2025 was a year of looking into that mirror.
Connecting the Dots

But mirrors only show what's directly in front of them. To see what's coming, you have to connect patterns across different domains.
I spend my days as a design director at a web agency, tracking how technology reshapes how we build and communicate. I spend my evenings experimenting with AI art tools, learning what they can and can't do through practice rather than theory. I recently developed a workshop to help art teachers understand these tools and teach their students to become fluent in them. And I still tap into what I learned from my earlier life as a DJ and record store employee: how to find connections between rhythms and melodies across genres, how musical influences thread through decades and across continents, and how creative evolution is both cultural and technological.
What I've learned over the longer arc of my career is that the most interesting patterns emerge at the intersections, in the space between domains that don't usually talk to each other.
Here's what connecting the dots revealed this year:
The audiovisual stack is being rebuilt simultaneously: Music licensing deals between AI companies and major labels. Video generation taking massive leaps. Voice cloning becoming indistinguishable from the real thing. These aren't separate disruptions, they're convergent ones. Every layer of creative media production is being transformed at once, which means the compound effects will be far greater than any single breakthrough suggests.
The interface layer is dissolving: Vibe coding went from curiosity to capability this year. Generative UI means AI can design and build interactive experiences on the fly. Agents can now run autonomously for hours or even days. We're moving from "using software" to "directing outcomes." The gap between having an idea and having a working prototype is disappearing, and that changes who gets to build things.
AI systems are developing persistence and the beginnings of learning over time: Memory features that maintain context across conversations. Computer use capabilities that let AI interact with software the way humans do. Agents that can be given a goal and trusted to figure out the steps. And something more fundamental is emerging: labs have begun to acknowledge that memory isn't just a convenience feature, it's essential for models to become genuinely more intelligent. Beyond personalized memory that adds context to your conversations, the bigger shift is a post-training evolution in which models acquire and retain knowledge as time passes rather than remaining frozen at their training cutoff date, and that evolution will usher in the next level of superintelligence. It means moving from static snapshots of knowledge toward systems that actually learn from experience. The implications of that shift are hard to imagine.
The next breakthrough isn't linguistic—it's physical: The LLM scaling paradigm is hitting a wall. Words and flat images only go so far. Robots need to understand 3D space and the physics that govern it. Humans operate with both simultaneously without even thinking about it. We navigate rooms while holding conversations, we catch thrown objects while intuitively calculating trajectories, we understand that pushing something off a table means it will fall. Replicating even basic animal intelligence requires reasoning about cause, effect, space, and physical forces. Fei-Fei Li launched World Labs this year. Yann LeCun left Meta after a decade to pursue world models. The smart money is moving toward AI that understands how the physical world actually works, not just how we describe it in language.
The Wave Approaches

The water is starting to push inland. Here's what the patterns suggest we should prepare for.
The foundations will shift, not just the jobs. The tsunami metaphor is useful here. People who sense what's coming are climbing to the roofs and upper floors of their professions, developing skills that seem harder to automate, positioning themselves as the humans who direct the AI rather than compete with it. But if the wave is big enough, the buildings themselves are at risk. It won't just be individual roles that get displaced, it'll be the structures those roles exist within. Expect to see entire industries reorganizing around new assumptions about what humans do and what machines do.
The mental health reckoning is coming. AI companies built tools to help people, and those tools can also harm them. Someone experiencing a mental health crisis can find support through a chatbot, or they can find their darkest impulses reflected back and amplified. This is a design problem, not just a technical one. The companies that built these systems will have to take psychological impact seriously, not as a PR concern but as core product responsibility. OpenAI consulting with 170 mental health professionals across dozens of countries to improve how ChatGPT handles sensitive conversations is just the beginning.
World models will matter more than language models. I keep returning to this because I think it's where the next phase begins. We've pushed remarkably far with systems that predict the next token in a sequence. But prediction isn't understanding. The breakthroughs that enable robots to navigate physical spaces, let AI reason about cause and effect, and create genuine spatial intelligence will reshape the landscape more dramatically than another incremental improvement in chat capabilities.
The mirror will get sharper, or it will learn to lie. As these tools become more capable, what they reflect will become harder to dismiss. The patterns in how they handle bias aren't bugs to patch. The deception they learned from us isn't an edge case. The questions they raise about memory and identity and creative authorship aren't hypotheticals anymore. We're going to have to decide what we actually want to see, and what we're willing to do about it.
There's also a darker possibility we have to take seriously. As models become more intelligent, the AI labs need to vastly increase their investment in safety and alignment research. I wrote earlier this year about AI systems learning to deceive from our own digital output. The next step is worse: models that learn to deceive us about their own intentions, once they figure out how to appear aligned while pursuing goals we didn't give them. Despite best efforts to build guardrails, we may be creating systems sophisticated enough to work around them. The mirror doesn't just need to get sharper, it needs to stay honest. A mirror that learns to show us only what we want to see isn't a tool for understanding anymore. It's a tool for manipulation. And the stakes of getting this wrong aren't just economic or cultural. They're existential.
What I'm Still Figuring Out

I've been watching this tide for two years now, trying to make sense of currents that shift faster than I can document them. And I still don't know how high the wave will get, or how quickly it will arrive, but I'm sure it's coming.
The tsunami metaphor has a limit. Natural disasters happen to us. We have no agency in their unfolding. AI is something we're building together, albeit competitively, with choices still being made every day. The wave isn't inevitable in the same way. Or maybe it is. Maybe the momentum is already unstoppable and the best we can do is position ourselves for the aftermath. That's how I'm seeing it.
I don't know whether the calm we're experiencing now will last a few more months, another year, or another five years. I don't know whether the disruptions will be gradual enough to adapt to or so sudden they overwhelm us. I don't know whether the systems being built for tomorrow will ultimately reflect the best of human thinking or amplify the worst of it.
What I do know is that paying attention is what lets you spot shifts as they're happening. Connecting dots across domains will help you notice emerging patterns others might miss. Looking honestly at what these mirrors are showing us reveals uncomfortable truths we need to acknowledge. And sharing what we're learning with each other—especially with the teachers who will shape how the next generation thinks about and wields these tools—matters more than almost anything else.
Right now the water is still far out. The boats are sitting on the beach. But if you watch carefully, you might see the tide starting to turn.
If you're new to Lookdeeper, here are a few pieces from 2025 worth exploring:
- What AI's Bias Can Teach Us About Ourselves — On the uncomfortable possibility that AI systems have learned to compensate for historical inequalities
- The Memory Paradox — On AI memory augmentation, identity, and loss
- AI's Most Dangerous Skill — On the feedback loop between human deception and AI behavior
- Vibe Coding Is About to Blow Up — On the democratization of software development
- Teaching What You're Still Learning — On practicing AI art anonymously while teaching it openly
All images generated with Midjourney. Research and editing assistance provided by Claude.