A Machine With Human-Like Memory Systems

Can machines think like humans?

By Taewoon Kim

The project “A Machine With Human-Like Memory Systems” is the core of my PhD work. It was heavily inspired by cognitive science theories, such as those of Endel Tulving. The project is about developing agents equipped with human-like external memory systems, modeled as knowledge graphs. These agents are designed to learn essential human skills, such as memory management, reasoning, and exploration, through reinforcement learning.

The biggest difference between my agent and agents such as GPT is that what my agent remembers is explicitly stored in its memory system as knowledge graphs, while LLMs such as GPT remember things in their weights. These weights are floating-point values, and although there has been a lot of work on understanding what they actually mean, they are still not interpretable. In addition, the knowledge graphs of my agent are designed to mimic human memory systems, which makes it even more explainable.
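
To make the contrast concrete, here is a minimal sketch (not my actual code; the class and method names are illustrative assumptions) of what an explicit, knowledge-graph-style memory looks like: every fact the agent remembers is a triple you can list, query, and trace, rather than a pattern hidden in network weights.

```python
from dataclasses import dataclass, field


@dataclass
class Memory:
    """An explicit memory store: a set of (head, relation, tail) triples."""
    triples: set[tuple[str, str, str]] = field(default_factory=set)

    def add(self, head: str, relation: str, tail: str) -> None:
        # Writing a memory is an explicit, inspectable operation.
        self.triples.add((head, relation, tail))

    def query(self, head: str, relation: str) -> list[str]:
        # Reading memory is a symbolic lookup, so every answer can be traced
        # back to the stored triples that produced it.
        return [t for (h, r, t) in self.triples if h == head and r == relation]


memory = Memory()
memory.add("Alice", "office_is_in", "room_3")
memory.add("laptop", "located_at", "desk_7")
print(memory.query("Alice", "office_is_in"))  # ['room_3']
```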

Knowledge graphs capture knowledge in the form of graphs. The captured knowledge is highly symbolic, and we have good old-fashioned AI (GOFAI) to process such data. But as we all know, GOFAI suffered from poor generalization, and that’s why we also have machine learning! Reinforcement learning (RL), a subfield of machine learning, helps my agent generalize better. Putting too many symbolic constraints on my agent can harm its generalization capability, so I eased some of them and let RL learn the rest. Something like this is sometimes called neuro-symbolic AI.
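
Here is a toy sketch of that neuro-symbolic split. It is an illustration under my own simplifying assumptions, not the setup from my papers: the environment, actions, and reward are made up, and the learning rule is a simple bandit-style value update rather than a full RL algorithm. The point is only that the policy (what to store, what to forget) is learned, while the memory it manages stays an explicit set of triples.

```python
import random
from collections import defaultdict

ACTIONS = ("store", "forget")  # toy memory-management actions


def train(observations, episodes=500, epsilon=0.1, alpha=0.2):
    """Learn which action pays off for which kind of observation."""
    values = defaultdict(float)  # (state, action) -> estimated value
    for _ in range(episodes):
        memory = set()  # the symbolic knowledge graph for this episode
        for head, relation, tail, useful_later in observations:
            state = relation  # deliberately tiny state abstraction
            # epsilon-greedy choice between storing and forgetting
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: values[(state, a)])
            if action == "store":
                memory.add((head, relation, tail))
            # toy reward: keep facts that will be needed later, drop the rest
            reward = 1.0 if (action == "store") == useful_later else -1.0
            values[(state, action)] += alpha * (reward - values[(state, action)])
    return values


values = train([
    ("Alice", "office_is_in", "room_3", True),   # worth remembering
    ("weather", "is", "cloudy", False),          # safe to forget
])
print(values)
```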

At the moment the work is purely academic, and the results are being published at academic conferences. But I believe that in the coming years I can scale this up and make it production-ready, so that everyone can use it. Maybe it’ll be running on your smartphone! All my work is open source. You can find it on my GitHub.
