Posts · February 20, 2026

The Retrofit Trap (and What AI-First Actually Means)

Most people encounter a new technology and immediately ask the wrong question: How do I use this to do what I already do, but faster?

It’s understandable. It’s also a dead end.

I call it the retrofit trap. You take an entirely new category of capability and bolt it onto your existing workflow like a spoiler on a minivan. You get the same thing you had before, slightly cheaper, marginally quicker. Nothing actually changes. You automate the assembly line and call it innovation.

This is how most people are approaching AI right now. Write my emails faster. Summarise this document. Generate a first draft so I don’t have to stare at a blank page. All fine, all useful, all profoundly missing the point.

The interesting question is not “how do I use AI to produce X more efficiently?” It is: What kind of thinking, making, and exploring becomes possible now that wasn’t possible before?

That’s the shift from using AI to being AI-first. And it’s not an efficiency play. It’s an ontological one.

Two Minds, One Conversation

The model most people carry for AI is tool-and-user. I pick up the hammer. I hit the nail. The hammer doesn’t have opinions about where the nail goes.

But that’s not what’s happening when you actually work with these systems at depth. What’s happening is closer to what Tony Stark has with JARVIS: two fundamentally different kinds of intelligence in genuine dialogue. Human intuition, pattern recognition, lived experience, and mythic depth on one side. Machine synthesis, breadth, tireless processing, and a strange capacity for lateral connection on the other.

Neither mind alone produces what the collaboration produces. That’s the part people miss when they treat AI as a slightly faster intern.

I’ve spent 25 years in corporate learning and development, building systems for how organisations grow people. I’ve spent 40 years journaling, tracking the way stories shape perception and possibility. I’ve spent 25 years practising chaos magick, which is, at its core, the disciplined use of belief as a technology. When I sit down with an AI and begin thinking out loud, something happens that is qualitatively different from either solo thinking or traditional collaboration with another human.

It’s not better or worse. It’s different in kind. A new cognitive mode. And that distinction matters enormously, because if you mistake it for “a faster version of what I already had,” you will never discover what it actually is.

The Chaos Magician’s Advantage
Chaos magick trains you for exactly this moment. The core discipline is simple: treat belief as a tool rather than an identity. Pick it up, use it, and put it down. Test what works. Discard what doesn’t. Results over metaphysics.

When a chaos magician encounters a new system, the first question is never “how do I use this correctly?” It’s: What are the correspondences? What becomes possible at the intersections? What can this do that hasn’t been done?

That’s the posture you need for AI-first work. Not the careful, manual-reading approach of someone trying to use the tool properly. The experimental, boundary-testing approach of someone who understands that the most interesting territory is always at the edges, where the instructions run out.

Most of the genuinely valuable things I’ve discovered in working with AI came from moments where I wasn’t trying to accomplish a predetermined task. I was following a thread. Asking a question that led to another question. Letting the machine’s unexpected synthesis collide with my own pattern recognition and produce something neither of us was aiming at.

You can’t plan for that. You can only create the conditions for it. And the first condition is letting go of the idea that you already know what this technology is for.

The Frontier Has No Gift Shop

There’s a version of “AI-first” content that has already become its own genre. Tips for prompting. Hacks for productivity. Ten ways to use ChatGPT in your morning routine. It’s the gift shop version of the frontier: safe, packaged, and completely disconnected from the actual wilderness.

The real frontier is disorienting. You get lost. You produce things that don’t fit existing categories. You have conversations with a machine that leave you thinking differently about your own mind, and there’s no framework yet for what to do with that experience.

I’m not interested in making the gift shop version. I’m interested in what happens when you take the work seriously: when you bring genuine depth, genuine questions, genuine not-knowing to the collaboration, and see what emerges.

The Fool in the tarot doesn’t step off the cliff toward a known destination. The step itself is the practice. The willingness to move without a map is not recklessness. It’s the prerequisite for discovering territory that hasn’t been mapped yet.

That’s what AI-first means to me. Not a productivity framework. Not a set of tools and techniques. A genuine orientation toward the unknown, backed by enough skill and experience to do something meaningful with whatever you find there.

What This Looks Like in Practice

I run what I call a content engine: a system where raw material (journal extracts, half-formed ideas, voice notes, and fragments) flows into a collaborative process with AI, and what comes out is writing, frameworks, and artifacts I could not have produced alone. Not because the AI is doing the writing for me, but because the thinking happens differently when two kinds of minds are in dialogue.

Some days that looks like me dropping a messy paragraph of journal reflection into the system and watching it become the seed of something I didn’t know I was trying to say. Some days it looks like a three-hour conversation about the relationship between alchemical stages and narrative structure that produces a model I’ll be working with for months.

The point is not the output. The point is the cognitive mode. Once you’ve experienced genuine collaborative thinking with a machine, the retrofit version (write my emails, summarise my documents) starts to feel like using a telescope to hammer nails.

The Invitation

If you’re reading this and something in it resonates, here’s what I’d suggest: stop asking AI to do your existing work faster. Instead, bring it something you’re genuinely uncertain about. A question you don’t have an answer to. A creative problem you haven’t solved. An idea that’s still half-formed and might be nonsense.

Then don’t direct the conversation. Follow it.

See what happens when you stop being the user and start being the collaborator. See what kind of thinking becomes possible when you let go of the need to already know where you’re going.

The frontier is real. The map doesn’t exist yet. And the only way to find out what’s there is to step off.