Open Brain System

A reference implementation of the open-source, AI-integrated brain system: pgvector, MCP, and Supabase wired together for human-AI memory.


The Definition Worth Defending

Defining the AI-Integrated Memory Architecture

An open brain is a user-owned, database-backed knowledge system that stores personal thoughts and context as vector embeddings. Unlike traditional knowledge management, an open brain is designed for machine consumption first. It uses the Model Context Protocol (MCP) to let any AI agent, such as Claude or ChatGPT, query a private database without relying on proprietary SaaS intermediaries.

Open Brain vs. Building a Second Brain

This architecture differs fundamentally from Tiago Forte's "Building a Second Brain" (BASB) methodology. BASB is a human-centric workflow focused on the capture, organization, and distillation of notes for human retrieval. Tools like Obsidian or Notion facilitate this process through folders and tags, but they remain static silos unless manually queried by a user.

An open brain replaces manual curation with semantic search. While an Obsidian vault requires the user to remember where a note lives or use keyword searches, an open brain uses pgvector to enable AI agents to retrieve relevant context based on mathematical proximity in a vector space. The shift is from note-taking for humans to context-provisioning for AI.
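"Mathematical proximity" here usually means cosine similarity between embedding vectors: notes about related topics point in similar directions in the vector space. A toy sketch, with 3-dimensional vectors standing in for real 1536-dimensional embeddings:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity: 1.0 = same direction, near 0.0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings; a real system stores model-generated 1536-dim vectors.
note_gardening = [0.9, 0.1, 0.0]
note_plants    = [0.8, 0.2, 0.1]
note_taxes     = [0.0, 0.1, 0.9]
query          = [0.85, 0.15, 0.05]  # "what did I plant last spring?"

# The gardening-related notes score far higher than the unrelated one,
# even though no keywords are compared at all.
scores = {
    "gardening": cosine_similarity(query, note_gardening),
    "plants":    cosine_similarity(query, note_plants),
    "taxes":     cosine_similarity(query, note_taxes),
}
```

No keyword ever matches; retrieval falls out of vector geometry alone, which is what lets an AI agent find a note the user forgot they wrote.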

An open brain is not a digital notebook; it is a persistent, agent-readable memory layer that decouples personal data from the LLM provider.

Why 'Open' Matters in 2026

Data Gravity and Sovereignty

Personal memory is among the highest-leverage assets an individual owns. Relying on SaaS vendors for this layer creates a dangerous abstraction in which the user's cognitive history is subject to pricing changes, censorship, or platform death. An open brain keeps data gravity with the user by relying on self-hosted or managed open-source databases.

The Role of MCP

The Model Context Protocol (MCP) serves as the universal interface. By implementing MCP, a user avoids tool lock-in; if a superior LLM replaces current market leaders, the new client simply plugs into the existing MCP server to access the same memory bank. This contrasts with proprietary solutions like Supermemory.ai, which wrap data in closed ecosystems.

Infrastructure Economics

The cost of maintaining an open brain is negligible compared to subscription-based AI memories. A stack built on Postgres and pgvector can handle 50,000 entries for under $10 per month, and often as little as $0.30 on lean configurations. Open-source projects like Khoj demonstrate the viability of this approach by prioritizing local or user-controlled indexing over closed cloud silos.
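The arithmetic behind that estimate is simple: vector storage dominates, and 50,000 embeddings fit in a few hundred megabytes. A quick sketch, assuming 1536-dimensional single-precision vectors (pgvector stores 4-byte floats) and ignoring index and row overhead:

```python
# Back-of-envelope storage estimate for a 50,000-entry open brain.
entries = 50_000
dimensions = 1536          # e.g. OpenAI text-embedding-3-small
bytes_per_float = 4        # pgvector stores single-precision floats

vector_bytes = entries * dimensions * bytes_per_float
vector_mb = vector_bytes / (1024 ** 2)
# Roughly 293 MB of raw vector data -- comfortably inside
# inexpensive managed-Postgres tiers.
```

Even doubling that figure for HNSW index overhead and the text itself stays well inside the cheapest managed-Postgres tiers.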

Feature          | Proprietary AI Memory | Open Brain (MCP/pgvector)
Data Ownership   | Vendor-controlled     | User-owned (Postgres)
Interoperability | API-locked            | Universal via MCP
Cost Structure   | Monthly subscription  | Infrastructure cost (low)

The Stack Worth Using

The Canonical Technical Stack

A production-ready open brain relies on a specific set of primitives to ensure low latency and high retrieval accuracy. Supabase provides the ideal foundation, offering managed Postgres, authentication, and storage in a single package. The core of the system is pgvector, an extension that allows Postgres to store and query embeddings using cosine similarity or Euclidean distance.

Integration and Retrieval

To make this data accessible, an MCP server acts as the bridge between the database and the AI client. For the frontend, developers typically use Next.js for a dashboard view or simple HTML for lightweight interaction. The embeddings themselves are generated via APIs like OpenAI's text-embedding-3-small or open-source alternatives such as Nomic Embed for those requiring full local privacy.
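Before embedding, long notes are typically split into chunks so each vector represents a single idea. A minimal, hypothetical chunker; the 500-character window and 50-character overlap are illustrative defaults, not values prescribed by this site:

```python
def chunk_text(text: str, max_chars: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows for embedding.

    The overlap keeps a sentence that straddles a boundary from being
    lost to both neighboring chunks.
    """
    if max_chars <= overlap:
        raise ValueError("max_chars must exceed overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + max_chars])
        start += max_chars - overlap
    return chunks

note = "word " * 300          # a ~1500-character note
chunks = chunk_text(note)     # each chunk gets its own embedding and row
```

Each chunk is then sent to the embedding API and stored as its own row, so retrieval surfaces the relevant passage rather than an entire document.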

Database Implementation

Implementing an open brain requires enabling the vector extension and defining a table that can store both the raw text and its corresponding embedding vector. The following SQL demonstrates the basic setup:

-- Enable the pgvector extension
CREATE EXTENSION IF NOT EXISTS vector;

-- Create a table for personal memories
CREATE TABLE brain_memories (
  id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
  content text NOT NULL,
  embedding vector(1536), -- 1536 dimensions for OpenAI embeddings
  created_at timestamp with time zone DEFAULT timezone('utc'::text, now())
);

-- Create an index for fast semantic search
CREATE INDEX ON brain_memories USING hnsw (embedding vector_cosine_ops);

This schema allows the AI to perform a similarity search by calculating the distance between a user's current query embedding and the stored vectors in the brain_memories table.
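Concretely, retrieval orders rows by pgvector's cosine-distance operator, e.g. `SELECT content FROM brain_memories ORDER BY embedding <=> $1 LIMIT 5`. What that operator computes can be sketched in plain Python, with toy 3-dimensional vectors standing in for real embeddings:

```python
import math

def cosine_distance(a: list[float], b: list[float]) -> float:
    """pgvector's <=> operator: 1 - cosine similarity (lower = closer)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1 - dot / (norm_a * norm_b)

# Toy "brain_memories" rows: (content, embedding).
memories = [
    ("Planted tomatoes in the raised bed", [0.9, 0.1, 0.0]),
    ("Filed quarterly tax estimates",      [0.0, 0.2, 0.9]),
    ("Watered the herb garden",            [0.8, 0.3, 0.1]),
]

query_embedding = [0.85, 0.2, 0.05]  # "what's growing in the garden?"

# Equivalent of: ORDER BY embedding <=> query_embedding LIMIT 2
top = sorted(memories, key=lambda row: cosine_distance(query_embedding, row[1]))[:2]
```

In production the HNSW index performs this ranking approximately and in sublinear time, rather than scanning every row as the sketch does.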

What This Site Covers

Navigation Guide

This site serves as a technical manual for deploying and optimizing an open brain. The content is structured to move from theoretical foundations to concrete implementation.

For those seeking the opinionated reference implementation, visit novcog.dev.

Questions answered

What readers usually ask next.

What is an open brain system in AI?
An open brain is a user-owned, database-backed knowledge system that stores personal thoughts and memories as vector embeddings. Unlike closed SaaS tools, it uses the Model Context Protocol (MCP) to allow any AI—such as Claude or ChatGPT—to access your private context directly from a Postgres database without intermediaries.
How is an open brain different from a second brain?
A traditional 'second brain' focuses on human-readable organization and manual retrieval. An open brain is designed for machine readability, using vector embeddings to allow AI agents to perform semantic searches and retrieve relevant context automatically via API.
What is MCP (Model Context Protocol) in the context of an open brain?
MCP is an open standard that enables AI models to connect seamlessly to external data sources. In an open brain architecture, the MCP server acts as the bridge, allowing LLMs to query your Postgres/pgvector database without needing a custom integration for every new AI tool.
Can I build an open brain system for free?
Yes, you can build one using open-source components. By leveraging the free tiers of Supabase for Postgres hosting and deploying your own MCP server on a local machine or a free cloud tier, the initial setup cost is effectively zero.
Is Obsidian an open brain system?
No. While Obsidian is a powerful tool for personal knowledge management (PKM), it stores data as flat Markdown files. An open brain specifically requires a vector database and a protocol like MCP to make that data programmatically accessible to AI agents in real-time.
What's the difference between an open brain and Tiago Forte's Second Brain?
Tiago Forte's Building a Second Brain (BASB) is a methodology for organizing information for human consumption using systems like PARA. An open brain is a technical architecture designed to turn that information into a queryable memory layer for AI agents.
Why use pgvector for an AI brain?
pgvector extends PostgreSQL to support vector similarity search, which is essential for RAG (Retrieval-Augmented Generation). It allows the system to find 'conceptually similar' memories rather than just matching keywords, all while keeping your data in a reliable, relational database.
How much does it cost to run an open brain?
Operating costs are extremely low because you aren't paying for a monthly SaaS subscription. Depending on your data volume and hosting choice (e.g., Supabase), the monthly infrastructure cost typically ranges between $0.10 and $0.30.
Does Supermemory MCP count as an open brain?
If it allows for user-owned data storage and uses the Model Context Protocol to expose that data to various LLMs without a proprietary silo, it fits the open brain philosophy. The key is whether you own the database or if the data is locked in their cloud.
Is NovCog Brain an open brain system?
NovCog focuses on cognitive augmentation and AI memory, but to be a true 'open brain,' it must adhere to the open-standard architecture (like MCP) and user-owned storage. Most proprietary AI memory tools are 'closed brains' because they lack this interoperability.
Can I migrate my notes from Obsidian to an open brain?
Yes. You can script the migration by parsing your Markdown files and passing them through an embedding model (like OpenAI's text-embedding-3-small) to store them as vectors in a Postgres database with pgvector.
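A migration along those lines can be sketched as follows; `embed` and `store` are hypothetical stand-ins for a real embedding API call and a Postgres insert, and the cleanup handles only frontmatter and wiki-links:

```python
import re
from pathlib import Path

# YAML frontmatter block at the top of an Obsidian note.
FRONTMATTER = re.compile(r"\A---\n.*?\n---\n", re.DOTALL)

def markdown_to_plain(md: str) -> str:
    """Strip frontmatter and [[wiki-link]] brackets from an Obsidian note."""
    md = FRONTMATTER.sub("", md)
    md = re.sub(r"\[\[([^\]|]+)(\|[^\]]+)?\]\]", r"\1", md)  # [[Link|alias]] -> Link
    return md.strip()

def migrate_vault(vault_dir: str, embed, store) -> None:
    """Walk a vault and hand (path, text, vector) triples to `store`.

    `embed` is a hypothetical callable wrapping an embedding API;
    `store` is a hypothetical callable wrapping an INSERT into
    the brain_memories table.
    """
    for path in Path(vault_dir).rglob("*.md"):
        text = markdown_to_plain(path.read_text(encoding="utf-8"))
        if text:
            store(str(path), text, embed(text))
```

A real migration would also chunk long notes before embedding and batch the API calls, but the skeleton above covers the parse-embed-store loop.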
What should I read first to learn about open brain systems?
Start with guides on the Model Context Protocol (MCP) and pgvector implementation. Specifically, look for 'zero-to-hero' setup guides that demonstrate how to link a Supabase instance to an MCP server for AI-driven memory retrieval.