Teaching What You’re Still Learning
A comment appeared on one of my Bluesky posts late one night: “This is so beautiful! Would you mind if we used it for our church bulletin? We’ll pay you for it of course.” I was so flattered! But by the next day, the comment was gone. So was the commenter. I’d been blocked.
They’d figured out the image was AI-generated.
This is the strange territory where AI art exists right now—a space where images can move people enough to offer payment one moment and trigger instant rejection the next, where the methodology matters more than the output, and where strangers feel entitled to insult and bully creators for their choice of medium. Block lists circulate specifically targeting AI artists. Hostile comments appear from people who’ve never engaged with the work or the artists making it. The message is clear: you’re not welcome here, your work has no value, and your presence in creative spaces is inherently offensive.
What strikes me most about this backlash isn’t the intensity—new technologies always provoke resistance—but the confidence behind it. People who couldn’t explain how a diffusion model works, who’ve never used these tools beyond a few frustrated attempts, who have no framework for distinguishing between legitimate concerns and reflexive dismissal, nevertheless feel certain they understand everything they need to know. The methodology is theft. The output is soulless. The practitioners are lazy. Case closed.
This gap between opinion and understanding is what makes AI art education so urgent right now. Not to evangelize it or convince skeptics it’s the future, but to create the conditions for informed discussion rather than tribal positioning. And the path to that goal isn’t arguing with entrenched adults on social media—it’s teaching the teachers who’ll work with young people whose views haven’t calcified yet.

For two years I’ve maintained an anonymous AI art practice on Bluesky, building over 500 organic followers who respond to beauty regardless of methodology. The anonymity wasn’t originally strategic—I just wanted to experiment without the instant judgment. But watching the community that’s formed around this type of work has revealed something the backlash obscures: a growing number of people are using these tools for creative expression, finding community, and having fun. For the most part they’re not tech evangelists or corporate shills. They’re religious people looking for contemplative images. Gardeners who appreciate botanical subjects. People with physical limitations that make traditional art-making difficult, or time constraints that have kept creative expression out of reach.
My follower count dips whenever anti-AI sentiment surges and people discover what they’ve been looking at and, I assume, decide they no longer approve. But the core audience has grown steadily because they’re not in evaluation mode—they’re in creation mode. And when you’re actually making things rather than theorizing about whether they should exist, the questions shift entirely.
At the same time, I’m developing a workshop for the art department at a private school in Vancouver: a dozen teachers with varying comfort around technology, some teaching Photoshop and media arts, others ceramics and 2D work. This is where the real opportunity lies. Not in arguments with strangers online, but in giving educators the foundation to help students think critically about this technology rather than just absorb whatever narrative reaches them first.

The backlash against AI art rests on several legitimate concerns that deserve serious consideration. Training data copyright raises real questions about intellectual property. Job displacement for commercial illustrators and photographers represents genuine economic disruption. The environmental cost of training large image models matters. The ethics of mimicking living artists’ styles demands careful thought. These aren’t reflexive fears—they’re substantive objections to real problems the technology creates.
But much of the hostility isn’t rooted in these legitimate concerns. It’s rooted in something deeper and less examined: the belief that art’s value lies primarily in the human struggle and skill involved in creation rather than in what the creation evokes in the viewer. This isn’t wrong exactly—it’s one valid framework for thinking about art. But it’s not the only framework, and treating it as self-evident truth rather than contested aesthetic philosophy shuts down conversation before it starts.
The church commission story I opened with illustrates this perfectly. The output moved someone enough to offer payment. The image clearly succeeded at whatever criteria they were applying—beauty, appropriateness for religious context, emotional resonance. But once they learned the methodology, none of that mattered. The process invalidated the product entirely. This reveals something fundamental about competing value systems around art-making that rarely gets examined in the heat of online arguments.
What makes this moment particularly urgent is how confidently people hold positions they haven’t really examined. The same dynamics that make social media excellent for rapid information sharing make it terrible for nuanced discussion. Complex questions about copyright, labor, creativity, and technological change get flattened into tribal signaling. You’re either pro-AI (tech bro, corporate shill, enemy of “real” artists) or anti-AI (Luddite, gatekeeper, defender of “real art”). The actual substance gets lost.
This is why teaching matters more than debating. You can’t reason someone out of a position they didn’t reason themselves into, and most people on both sides of this divide haven’t engaged with the substance deeply enough to have reasoned positions. They generally have tribal affiliations shaped by social media algorithms and in-group signaling. Changing minds in that context is nearly impossible.
But young people? Students encountering this technology in educational settings? They’re still forming their frameworks. They haven’t yet decided whether process or product matters more, whether new tools represent threat or opportunity, whether creative expression requires traditional skill development or just requires expressing something worth communicating. These questions are still open for them in ways they aren’t for adults whose positions have hardened.

The workshop I’m building takes demystification and exploration as its core approach, but the demystification matters more than the exploration. These teachers need to understand how diffusion models actually work—not superficially, but deeply enough to evaluate student work and make informed institutional decisions. They need to understand the legitimate concerns about copyright and labour alongside the technology’s actual capabilities and limitations. They need frameworks for distinguishing between thoughtful critique and reflexive dismissal.
Most importantly, they need to understand all this well enough to teach it—not as propaganda for either side, but as a complex technology with real affordances and real costs that students should learn to navigate critically. Understanding that diffusion models start with noise and gradually refine toward coherence, that training data quality shapes outputs in specific ways, that different models have different strengths and limitations—this knowledge transforms abstract anxiety into concrete assessment. Students can ask better questions. Teachers can provide better guidance.
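For teachers who want more than a hand-wavy description, the “start with noise, refine toward coherence” idea can be shown in a few lines of code. This is a deliberately toy sketch in plain Python, not a real diffusion model: real models use a neural network to *predict* the noise at each step, whereas here the clean target signal is handed to the loop directly, which no actual model gets. The function names and step schedule are my own illustrative choices.

```python
import math
import random

def toy_denoise(target, steps=50, seed=0):
    """Toy illustration of iterative refinement: begin with pure noise
    and nudge each value a small fraction of the way toward a known
    'clean' signal on every step. Real diffusion models replace the
    known target with a learned prediction of the noise."""
    rng = random.Random(seed)
    x = [rng.gauss(0, 1) for _ in target]  # step 0: pure noise
    for _ in range(steps):
        # blend 10% of the remaining gap to the target per step
        x = [xi + 0.1 * (ti - xi) for xi, ti in zip(x, target)]
    return x

def mse(a, b):
    """Mean squared error, to watch the noise shrink."""
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) / len(a)

if __name__ == "__main__":
    target = [math.sin(i / 5) for i in range(32)]  # the 'clean' signal
    noisy = toy_denoise(target, steps=0)    # no refinement: still noise
    refined = toy_denoise(target, steps=50)
    print(mse(noisy, target), mse(refined, target))
```

Running it shows the error collapsing by several orders of magnitude over fifty steps, which is the intuition worth conveying in a classroom: coherence emerges gradually from noise, step by step, rather than appearing all at once.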
What I’ve learned from maintaining the anonymous practice alongside developing the workshop is that you can’t just teach this stuff theoretically. You need active practice to understand what the tools can and can’t do, where they fail, why certain approaches work. The hundreds of failed attempts teach more than the successful outputs. Working within constraints (Gemini’s limited generation options, Midjourney’s complexity) and improvising when tools behave unexpectedly reveal things the documentation doesn’t cover.
But practice alone isn’t enough either. Teaching forces a different kind of understanding. You can’t just tell educators “type words and images appear” when they need to evaluate student work. And you can’t wave away concerns about copyright and labour disruption when they’re looking to develop institutional policies. You have to understand the technology deeply enough to explain it to the skeptics, to address legitimate objections seriously, and to distinguish between real risks and imagined ones.
This recursive loop—practice informing teaching, teaching demanding deeper understanding, and understanding shaping the practice—is what makes education possible in the first place. And it’s what’s largely absent from the online discourse, where people confidently dismiss or defend technology they don’t really understand.

The opportunity right now isn’t to win the online argument. It’s to reach young people before their views calcify into tribal positions. Teaching teachers creates a multiplier effect—a dozen educators reaching hundreds of students over the coming years, shaping how a generation thinks about creative tools and technological change. Not telling them what to think, but giving them frameworks for thinking critically rather than reacting tribally.
This is important because the current discourse is failing everyone. People making genuine creative work face harassment and exclusion from strangers in online communities that should be about art rather than methodology. And people with legitimate concerns about labour and copyright risk getting lumped in with reactionary Luddites. The technology itself gets mythologized as either miraculous or catastrophic rather than understood as a complex tool with specific affordances and limitations. And young people trying to figure out how to navigate this landscape get caught in the crossfire.
Education doesn’t solve everything. Many concerns about AI art are valid and won’t be resolved through better understanding. Job displacement is real. Copyright questions remain unresolved. The environmental costs of training models matter. But education creates the conditions for discussing these issues seriously rather than simply choosing sides. It creates space for nuance in a discourse that’s currently dominated by certainty.
After nearly two years of maintaining an anonymous practice and developing this workshop, I’ve added an Art section to Lookdeeper. The secret identity was never sustainable—I can’t expect to teach something I’m ashamed to practice publicly. So I’ve uploaded nearly two years of work, now with some context and intention behind it. For now, it’s a simple tiled gallery organized by month, though each piece has a story. There are no romanticized narratives about artistic vision, but there’s a clear thread of experimentation behind each image, and a rough timeline that illustrates how much the tools, and my skills, have improved.
As for the teaching itself, this is just one small workshop for a single high school art department, but teaching teachers reaches students who are still forming their views, still open to complexity, and still capable of thinking critically rather than just choosing sides. I believe this is where real progress can happen. Not by having arguments with strangers online, but by giving the next generation better tools for thinking about creative expression, technological change, and what actually matters when we look at art.
All images generated with Midjourney. Editing assistance provided by Claude.