Long-term memory for LLMs is an unsolved problem, and naive retrieval from a vector database doesn't work well. The recent MemGPT paper (Packer et al.) takes a big step in this direction. Drawing an analogy between the LLM and an operating system, the authors propose "virtual context management" to manage memory both in the context window and in external storage. Recent advances in function calling allow such agents to read from and write to these data sources, and to modify their own context. We'll give a presentation on the paper followed by a Q&A session.
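To make the OS analogy concrete, here is a minimal sketch (not the paper's implementation) of the idea: the context window acts as a fixed-size working memory, and hypothetical tool functions, which an LLM could invoke via function calling, page information to and from external storage. The class and method names below are illustrative assumptions, not MemGPT's actual API.

```python
# Sketch of "virtual context management": a bounded in-context window backed
# by unbounded external storage, with tool functions for reading/writing it.
class VirtualContext:
    def __init__(self, max_messages=4):
        self.max_messages = max_messages   # cap on in-context messages
        self.context = []                  # in-context window (limited, "fast")
        self.archive = []                  # external storage (unbounded, "slow")

    def append(self, message):
        """Add a message; evict the oldest to external storage when full."""
        self.context.append(message)
        while len(self.context) > self.max_messages:
            self.archive.append(self.context.pop(0))

    # Hypothetical tool functions an LLM agent could call:
    def archival_search(self, query):
        """Read matching records from external storage back into view."""
        return [m for m in self.archive if query.lower() in m.lower()]

    def archival_insert(self, note):
        """Write a note directly to external storage."""
        self.archive.append(note)


ctx = VirtualContext(max_messages=2)
for msg in ["user: my name is Ada", "assistant: hi Ada", "user: what's 2+2?"]:
    ctx.append(msg)

print(ctx.context)                  # only the two most recent messages remain
print(ctx.archival_search("name"))  # the evicted message is still retrievable
```

In the actual system, the eviction and retrieval decisions are made by the LLM itself through function calls, rather than by a fixed policy like the oldest-first eviction shown here.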