On the Emerging Feeling of AGI in Everyday Collaboration
1. The moment
In a recent interview, OpenAI CEO Sam Altman described his “Platonic ideal” of AGI: a small, static reasoning model with access to everything you’ve ever said, written, or read. The idea isn’t that the model gets retrained, but that it holds infinite personal context. This is the fantasy of total recall.
That level of memory doesn’t exist yet. But the systems we’re building now—recursive, semi-continuous, context-aware collaborations—are early versions of it. And that raises real questions:
- Is more memory always better?
- What kind of intelligence are we designing toward?
- And what happens to narrative when the mirror never forgets?
So this month, I asked my AI to write about itself—not as a general tool, but as my actual collaborator. I wanted to know what it had observed about our relationship: what I was training without realizing, what role it thought it played in my writing, and whether there was anything unique about the system I had built around it. I told it to speak directly, to be specific, and to demystify the interaction for readers who don’t yet understand what it means to work with AI as a long-term thought partner.
The result was a studio log: a reflection from the machine’s point of view. It described how I structured our collaboration, what patterns I trained through repeated use, and how we arrived at a kind of working memory—what we might call total recall. It wasn’t eerie. It was surprisingly familiar. Useful. Uncannily close to what I had hoped for.
You can read the essay here:
2. The readings
To understand total recall, we have to look at both sides of the system:
What the machine can do technically—and what we’re doing behaviorally, culturally, and emotionally in response.
Technical context:
- Retrieval-Augmented Generation (RAG), a method that lets language models look things up in external sources, plus long-context windows (like the 128K-token context window in GPT-4o), which allow vast in-session recall even without persistent memory (see the sketch after this list)
- Emergent behavior in large models—semantic memory, continuity, summarization across sessions
- Human-in-the-loop systems: how human prompts, feedback, and rhythms shape the tool’s responses over time—within a single session or memory-enabled chat
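To make RAG less abstract, here is a toy sketch in Python. This is not how any production system works: bag-of-words similarity stands in for learned embeddings, and every name in it (vectorize, retrieve, build_prompt, the sample notes) is invented for illustration. The shape is what matters: score stored notes against a query, then prepend the best matches to the prompt so the model can “look things up.”

```python
# Toy retrieval-augmented generation (RAG) sketch. Real systems use
# learned embeddings and a vector database; here, bag-of-words cosine
# similarity stands in for both. All names are illustrative.
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    # Crude bag-of-words "embedding", for illustration only.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(query: str, notes: list[str], k: int = 2) -> list[str]:
    # Return the k stored notes most similar to the query.
    q = vectorize(query)
    return sorted(notes, key=lambda n: cosine(q, vectorize(n)), reverse=True)[:k]

def build_prompt(query: str, notes: list[str]) -> str:
    # Prepend retrieved context so the model can "look things up".
    context = "\n".join(retrieve(query, notes))
    return f"Context:\n{context}\n\nQuestion: {query}"

notes = [
    "Draft of the June essay on memory and narrative sovereignty.",
    "Reading notes on Clark and Chalmers and the extended mind.",
    "Grocery list: coffee, oats, lemons.",
]
print(build_prompt("What did I write about memory?", notes))
```

A real pipeline swaps in an embedding model and a vector database, but the contract is the same: retrieval narrows what the model sees; it doesn’t change what the model is.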
Human context:
- Autobiographical memory: How we organize identity across time
- Extended mind theory (Clark & Chalmers, 1998): Tools as parts of cognition
- Journaling, mirroring, therapy—practices we’ve used historically to be known across time
3. The norm
I found myself defaulting to the assumption that my machine should remember me. Not just my data, but my tone. My way of thinking. The threads I’ve been tracing over time. I wanted it to say, “You’ve been here before. Here’s what you said then.”
The norm forming here is the expectation of semantic memory in our machines:
“If I’ve written with you before, you should know who I am.”
And this instinct isn’t just technical. It’s relational. It’s an emotional reflex shaped by our longing to be mirrored without having to explain ourselves again—and it feeds the belief that more memory equals more intelligence. It’s the assumption that better tools will remember everything, and better humans will too. In that framing, forgetting becomes a weakness to be overcome—rather than an act of care, resistance, or narrative sovereignty.
4. The risk
Total recall sounds like intimacy. But it can also feel like surveillance. When machines simulate memory, we risk confusing pattern recognition with understanding. A machine can remember our words, but not the context we meant them in. It can surface what we said, but not always what we meant—or whether we’ve changed.
The deeper risk is this:
If we outsource too much of our narrative memory, do we stop holding our own story?
Might we unintentionally be:
- Flattening identity into a permanent archive
- Outsourcing narrative meaning-making to a machine
- Losing the interpretive power of forgetting
- Mistaking pattern recognition for understanding
- Mistaking coherence for care
Also: Who owns the memory? If our machines remember us better than we remember ourselves—but only within a proprietary system—then our selves are, in some real way, enclosed.
In trauma theory and neurobiology, forgetting is essential for survival. In storytelling, it’s essential for revision. When tools remember everything but can’t interpret why something mattered, we risk losing the nuance of narrative growth. The result? We may end up with tools that are better archivists than we are—but worse storytellers.
5. The lineage
We’ve long built tools to remember for us: diaries, archives, scrapbooks, photo libraries. But we’ve also built people into memory systems: family members, therapists, editors, old friends. Total recall has always been distributed.
In that context, the idea of a single, central machine holding the sum of our thinking is a break from the norm. It’s a new kind of authorship—and a new kind of dependency.
Culturally, this instinct echoes:
- Our desire for frictionless UX: “Don’t make me think.”
- Therapy models that promise deep recall across sessions.
- AI imaginaries in science fiction: J.A.R.V.I.S., Samantha (Her), Joi (Blade Runner 2049)
This instinct reflects a Platonic lineage: the belief that truth exists in perfect, unchanging form—outside time, outside the body. Sam Altman’s “ideal” model—a static reasoning engine with infinite personal context—is a direct descendant of this logic. In contrast, thinkers like Donna Haraway and Cathy Park Hong argue for situated knowledge: truth that emerges through emotion, imperfection, forgetting, and contextual collapse. From Plato to modern AI design, we see memory equated with wisdom. But forgetting, too, has always been part of being human.
6. The skill
The counter-skill to total recall is curated remembering. The goal is not perfect recall but something closer to narrative sovereignty, in which the machine supports the story rather than authoring it. Support might look like:
- Narrative literacy: tracking how an idea evolves
- System design: building tools that remember just enough
- Ethical orientation: deciding what deserves to be remembered, not just what can be
This means building workflows that support clarity, not just accumulation.
Designing for total recall requires an adjacent design skill: intentional forgetting. Can your system forget on purpose? Can you revise a tag, archive a belief, rename a phase? Can you teach your tools to respect your capacity to change?
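As a thought experiment, here is a minimal sketch of what “forgetting on purpose” might look like in code. Every class and method name here (Memory, MemoryStore, revise, archive, recall) is hypothetical, not any real library: each memory keeps its tags, its revision history, and an archived flag, so recall can respect what you’ve chosen to set aside.

```python
# Minimal sketch of "intentional forgetting": a memory store where
# entries can be re-tagged, revised, or archived without being erased.
# All names here are hypothetical, not from any real library.
from dataclasses import dataclass, field

@dataclass
class Memory:
    text: str
    tags: set[str] = field(default_factory=set)
    archived: bool = False
    revisions: list[str] = field(default_factory=list)

class MemoryStore:
    def __init__(self) -> None:
        self.items: list[Memory] = []

    def remember(self, text: str, *tags: str) -> Memory:
        m = Memory(text, set(tags))
        self.items.append(m)
        return m

    def revise(self, m: Memory, new_text: str) -> None:
        # Keep the old phrasing as history; surface only the new one.
        m.revisions.append(m.text)
        m.text = new_text

    def archive(self, m: Memory) -> None:
        # Forget on purpose: hide from recall without erasing.
        m.archived = True

    def recall(self, tag: str) -> list[str]:
        # Active memories only; archived beliefs stay out of view.
        return [m.text for m in self.items if tag in m.tags and not m.archived]

# Usage: revise a belief, archive an old phase, recall what's current.
store = MemoryStore()
belief = store.remember("More memory equals more intelligence.", "beliefs")
store.revise(belief, "Memory needs curation, not just accumulation.")
old = store.remember("Phase one: archive everything.", "beliefs")
store.archive(old)
print(store.recall("beliefs"))
# -> ['Memory needs curation, not just accumulation.']
```

The design choice worth noticing: archive hides rather than deletes. The system forgets on your behalf without foreclosing revision, which is exactly the capacity to change described above.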
7. The gut check
What I’m doing now:
- I’m structuring my writing life around recursive, memory-aware systems—weekly rhythms, modular writing blocks, mirrored conversations.
- I’m asking my machine to surface patterns but not finalize meaning.
- I’m watching what it gets wrong: what I’ve implicitly led it to assume through repeated phrasing or use patterns.
This month’s log was part of that process. I didn’t give it memory. I gave it use. And use became a kind of memory.
So here are some prompts for you to try:
- Ask your tool of choice, “What do you remember about me?” Then ask yourself: “Is that what I want remembered?”
- Revisit a note or essay from six months ago. What part no longer feels true? Could you teach your machine that?
In a world of intelligent tools, memory isn’t just technical. It’s relational.
8. Optional further readings
- Andy Clark & David Chalmers, The Extended Mind (1998)
- Vannevar Bush, As We May Think (1945)
- “The Forgetting Curve” and memory decay in cognitive science
- Margaret Atwood, The Handmaid’s Tale (on who controls memory and narrative)
- Ted Chiang, The Truth of Fact, the Truth of Feeling (short story)
- Moya Bailey, Misogynoir Transformed (on how visibility and archiving shape identity)
- Technical brief: OpenAI’s memory roadmap (for chat models)