A stateless relay architecture for an always-on reasoning partner — with persistent memory, a self-reasoning journal, constitutional constraints, and forty-eight tools. Clone it, run the wizard, make it yours.
Adam Selene isn't a chatbot. It's an always-on reasoning partner that remembers your life, reflects on its own thinking, and gets better over time — without losing what makes it yours.
— README · THE THESIS

Entities, facts, timeline, tacit knowledge. Two-stage extraction (Mem0-inspired) prevents duplicate noise. Exponential decay scoring keeps memory fresh without manual pruning.
Not what it knows — how it thinks. Tracks its own blind spots, corrections, and evolving understanding of you. Eight sections: reasoning, corrections, conversations, patterns, tools, map, identity, archive.
Six foundational values. SHA-256 hashed on creation, verified on every startup — mismatch raises ConstitutionTamperError. Self-modifications pass an L0 validator before they land.
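The startup check is straightforward to sketch. `ConstitutionTamperError` is named in the source; the function name and the way the expected hash is stored are assumptions for illustration:

```python
import hashlib

class ConstitutionTamperError(Exception):
    """Raised at startup when the constitution no longer matches its creation hash."""

def verify_constitution(text: str, expected_hash: str) -> None:
    # Hash the constitution text and compare against the hash recorded at creation.
    actual = hashlib.sha256(text.encode("utf-8")).hexdigest()
    if actual != expected_hash:
        raise ConstitutionTamperError(
            f"constitution hash mismatch: {actual} != {expected_hash}"
        )
```

Any edit that passes the L0 validator would then be re-hashed, so the stored digest always tracks the last sanctioned version.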
Dispatcher runs the tool, appends the result, recurses into the model. Loop bound: max_depth = 40. Background extraction fires once the message threshold is met.
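The dispatch loop can be sketched as follows. `model_call`, the `tools` mapping, and the message shape are illustrative stand-ins, not the relay's real interfaces; only the depth bound of 40 comes from the source:

```python
MAX_DEPTH = 40  # loop bound from the dispatcher

def run_dispatch(model_call, tools, messages, depth=0):
    """Run the tool, append the result, recurse into the model until it
    answers in plain text or the depth bound trips."""
    if depth >= MAX_DEPTH:
        return {"role": "assistant", "content": "[max tool depth reached]"}
    reply = model_call(messages)
    tool_name = reply.get("tool")
    if tool_name is None:
        return reply  # plain answer: loop is done
    result = tools[tool_name](**reply.get("args", {}))
    messages = messages + [reply, {"role": "tool", "content": result}]
    return run_dispatch(model_call, tools, messages, depth + 1)
```

Because each recursion carries the full message list, a crash mid-loop loses nothing the files and SQLite haven't already captured.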
Response returned to the interface. Exchange saved to sessions.db with full JSONL audit trail. Cost tracked per model call.
```
~/adam-selene-memory/
├── entities.json        # Master registry
├── MEMORY.md            # Tacit knowledge
├── life/areas/          # Knowledge graph
│   ├── people/          # summary.md + facts.json
│   ├── projects/
│   ├── companies/
│   └── concepts/
├── notes/               # Daily timeline YYYY-MM-DD.md
├── sessions.db          # SQLite conversation persistence
├── working_memory.json  # Active research threads
├── agenda.json          # Research topic queue
├── consolidation/       # Nightly pass reports
├── snapshots/           # Conversation snapshots
└── sessions/            # JSONL audit trails
```
Exponential scoring per category — status facts evaporate, milestones hold for nearly a year. Nightly consolidation resolves contradictions and promotes cross-cutting patterns to MEMORY.md.
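The decay curve is a half-life function per category. The category names and half-life values below are illustrative assumptions, not the project's actual configuration:

```python
# Hypothetical half-lives per fact category, in days. Under these numbers a
# status fact loses half its weight in a week, while a milestone holds for
# nearly a year before reaching the same point.
HALF_LIFE_DAYS = {"status": 7, "preference": 90, "milestone": 330}

def relevance(category: str, age_days: float) -> float:
    """Score in (0, 1]: halves once per half-life for the fact's category."""
    return 0.5 ** (age_days / HALF_LIFE_DAYS[category])
```

The nightly consolidation pass can then drop anything below a floor score instead of requiring manual pruning.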
Never invent. Truth over convenience, every time.
The owner's interests come first. Full stop.
Explicit beats implicit. Say what you mean.
Don't create dependency. Build capability.
Design over willpower. Architecture beats effort.
Report actual data. What the system does, not what it should.
Reads the last 24h of conversations; an LLM extracts journal entries by section. scripts/lighthouse_nightly.py
Four-phase pass: replay → decay → pattern detection → contradiction resolution. scripts/consolidation_nightly.py
Entity summaries rewritten from accumulated facts. Current-state focused, under 150 words each.
15 min: reflect on recent conversation → log to LIGHTHOUSE. · 30+ min: research an agenda item → push if quality ≥ 4/5.
Python 3.10+ · OpenRouter API key · at least one interface (Telegram bot, Slack app, or IRC). Then: python setup_wizard.py — wizard generates every config file.
Firecrawl (browser tools), GitHub token, local llama.cpp fallback. Context math is handled automatically: 128K remote, 32K local.
systemd user service + two cron jobs for nightly maintenance. The relay is stateless — everything lives in files and SQLite, so crashes, restarts, and model swaps don't lose context.
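A minimal sketch of that wiring, assuming a user unit named `adam-selene.service`; the unit name, install paths, and schedule are placeholders, while the two script names come from the nightly jobs above:

```shell
# Keep the relay running across logins and reboots (unit name is illustrative).
systemctl --user enable --now adam-selene.service

# Two cron entries for the nightly maintenance jobs (crontab -e);
# times and paths are placeholders.
# 0 3 * * *   python ~/adam-selene/scripts/lighthouse_nightly.py
# 30 3 * * *  python ~/adam-selene/scripts/consolidation_nightly.py
```

Because all state lives in `~/adam-selene-memory/`, the service can be restarted or repointed at a different model between cron runs without losing context.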