Basic Memory
Basic Memory Team

We Read the AI Memory Research So You Don't Have To

Dozens of papers. One question: how should AI actually remember things?

AI memory is really having its moment. In the last three months, researchers have published dozens of papers on how to make AI agents remember things across conversations. One survey alone had 47 authors. The field is moving so fast that papers are citing other papers from the same month.

We read them. Most of them, anyway. A few things stood out from the bunch.

Your brain doesn’t use one filing cabinet

The paper that stuck with us most is called “The AI Hippocampus.” A team of researchers mapped AI memory systems against the human brain and found something that keeps showing up across the literature: systems that separate memory into distinct types consistently outperform systems that throw everything into one pile.

Your brain does this naturally. You have:

Episodic memory. Yesterday’s meeting. The moment you realized the database schema was wrong. Events, in order, with context.

Semantic memory. What you know. You don’t remember every conversation about JavaScript, but you know JavaScript. This is distilled knowledge, stripped of the specific moments that created it.

Procedural memory. How you do things. Typing. Debugging. The muscle memory of your craft.

Working memory. What you’re thinking about right now. The problem in front of you. Temporary, focused, gone when you move on.

The researchers found that AI systems mimicking this separation perform better at recall, reasoning, and long-term coherence. The ones that dump conversations, facts, and instructions into the same bucket get confused. Makes sense. You wouldn’t store a recipe and a diary entry in the same place and expect to find either one quickly.
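None of this requires anything exotic to sketch. Here's a minimal, hypothetical Python illustration of the separation; the class and method names are ours, not from any paper:

```python
from dataclasses import dataclass, field

@dataclass
class AgentMemory:
    """Hypothetical agent memory with one store per memory type."""
    episodic: list = field(default_factory=list)    # events, in order, with context
    semantic: dict = field(default_factory=dict)    # distilled facts, keyed by topic
    procedural: dict = field(default_factory=dict)  # how-to instructions, keyed by task
    working: list = field(default_factory=list)     # current focus; cleared between tasks

    def record_event(self, event: str) -> None:
        self.episodic.append(event)

    def learn_fact(self, topic: str, fact: str) -> None:
        self.semantic[topic] = fact

    def recall(self, topic: str):
        # A semantic lookup never has to wade through the diary.
        return self.semantic.get(topic)

memory = AgentMemory()
memory.record_event("2024-05-01: realized the database schema was wrong")
memory.learn_fact("javascript", "dynamically typed, prototype-based")
memory.recall("javascript")  # the distilled knowledge, not the conversation
```

The point of the toy: recalling what you know about JavaScript doesn't require scanning every event that ever mentioned it, because the stores are kept apart.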

Sleeping on it actually works

Multiple papers converged on the same finding: memory consolidation is a big deal.

A paper called MemFly (even research is starting to sound like an AI project these days) used information bottleneck theory, a formal way of measuring the trade-off between compression and usefulness, to show that layered memory, where raw information gets reviewed, compressed, and promoted to long-term storage, outperforms flat memory on every metric they tested.

Another paper, TraceMem (again with the names), found that when you extract narrative arcs from conversations instead of just pulling out isolated facts, recall improves significantly. Your brain does this too. You don’t remember a meeting as a list of bullet points. You remember a story: we started here, realized this, decided that.

The implication is clear. AI memory systems that periodically review raw memories and consolidate them into refined knowledge do better than systems that just accumulate forever.

Your brain does this while you sleep. The research suggests AI should do it too.
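As a toy illustration of the idea (not MemFly's actual algorithm, just the shape of it): a consolidation pass reviews the raw log, promotes what keeps recurring, and lets the one-offs go.

```python
from collections import Counter

def consolidate(raw_log, min_mentions: int = 2) -> dict:
    """Hypothetical 'sleep-time' pass: review raw entries, promote topics
    that recur into a compact long-term store, and drop the rest."""
    mentions = Counter()
    for entry in raw_log:
        for word in entry.lower().split():
            mentions[word] += 1
    # Only what came up repeatedly gets promoted to long-term storage.
    return {topic: n for topic, n in mentions.items() if n >= min_mentions}

raw = [
    "discussed schema migration with the team",
    "schema migration blocked on the auth service",
    "lunch order: noodles",
]
long_term = consolidate(raw)
# "schema" and "migration" survive consolidation; "noodles" does not.
```

Real systems would compress whole narratives rather than count words, but the lifecycle is the same: accumulate raw, review periodically, keep less than you collected.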

The forgetting problem

Here’s a finding a lot of people keep running up against: AI memory systems are terrible at forgetting.

They just accumulate. Every fact, every conversation snippet, every piece of context piles up. Your AI memory is like its own plastic island, drifting around, never biodegrading.

Old information sits next to new information with no way to tell which is current. A paper on self-organizing memory systems (EverMemOS) found that conflicting memories, where old facts directly contradict newer ones, degrade performance significantly. And most systems have no mechanism to detect or resolve this.

Your brain handles this with elegant brutality. Memories decay unless reinforced. The stuff you revisit gets stronger. The stuff you don’t fades.

Current AI memory systems don’t do this. The 47-author survey identified forgetting as one of the six core memory operations, and the one that most systems handle worst or not at all.
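Here's a rough sketch of what reinforcement-plus-decay could look like, using exponential decay with a half-life. This is our illustration, not a mechanism from any of the papers:

```python
import time

class DecayingMemory:
    """Hypothetical store where memories fade unless reinforced."""

    def __init__(self, half_life_days: float = 30.0):
        self.half_life = half_life_days * 86400  # seconds
        self.items = {}  # key -> (value, last_access_time)

    def remember(self, key, value):
        self.items[key] = (value, time.time())

    def recall(self, key):
        # Accessing a memory reinforces it: the decay clock resets.
        value, _ = self.items[key]
        self.items[key] = (value, time.time())
        return value

    def strength(self, key, now=None):
        # Halves every half_life seconds since the last access.
        _, last = self.items[key]
        age = (now or time.time()) - last
        return 0.5 ** (age / self.half_life)

    def prune(self, threshold=0.1, now=None):
        # Forgetting: drop anything not reinforced in a long time.
        stale = [k for k in self.items if self.strength(k, now) < threshold]
        for k in stale:
            del self.items[k]
        return stale
```

The stuff you revisit gets its clock reset and stays strong; the stuff you don't falls below the threshold and gets pruned. Detecting contradictions is a separate, harder problem, but decay at least stops the pile from growing forever.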

What we took away from all of this

Three things:

Separation matters. Daily logs, curated knowledge, workflow instructions, and active conversation should live in different places, organized differently, accessed differently. Mixing them is a design mistake the research keeps validating. Basic Memory handles this through projects, note types, and tags — keeping different kinds of knowledge organized so your AI can find the right thing at the right time.

Review and consolidation are not optional. Raw memory accumulation is a dead end. Systems need a process for distilling what matters, connecting ideas, and letting the rest go. The papers call this “sleep-time compute.” We call it the memory-reflect skill. The memory-lifecycle and memory-defrag skills handle the rest — archiving what’s no longer active and keeping your knowledge base clean as it grows.

Transparency solves problems that algorithms can’t. The forgetting problem and the contradiction problem are genuinely hard to solve in code. But they get a lot easier when you can see the memory. You can spot the outdated note. You can notice that two things contradict each other. You can fix it. Basic Memory stores everything as plain text files you can open, read, and edit anytime. Systems that hide their memory from the user can’t offer this, no matter how sophisticated the algorithm.
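For instance, when memory lives as plain Markdown files on disk, "find the notes that might be outdated" is a short script anyone can run, or just read. The directory layout and age threshold here are hypothetical:

```python
import time
from pathlib import Path

def flag_stale_notes(notes_dir: str, max_age_days: int = 90):
    """Return notes not modified within max_age_days: candidates
    for review, consolidation, or deletion by a human."""
    cutoff = time.time() - max_age_days * 86400
    return [
        path
        for path in Path(notes_dir).rglob("*.md")
        if path.stat().st_mtime < cutoff
    ]
```

No retrieval algorithm needed: the filesystem already knows what hasn't been touched, and you can open any flagged file to decide for yourself.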

We’re a small team. We read these papers because this is what we build. Every finding either validates a decision we already made or points us in the direction of what to build next.


Basic Memory gives your AI memory it can read, edit, and keep. Try it free →