lmagi (language model Augmented Generative Intelligence) is an experimental Artificial General Intelligence (AGI) framework that enhances Large Language Models with formal reasoning capabilities. It's not just a chatbot - it's a reasoning machine that thinks through problems using logical premises, validates conclusions through truth tables, and maintains a persistent memory system of its reasoning process.
┌─────────────────────────────────────────────────────────────────┐
│ USER INTERFACE │
│ (NiceGUI Web Application) │
│ http://localhost:8080 │
└────────────────────────────┬────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────┐
│ OPENMIND LAYER │
│ (Orchestrates Two Concurrent Loops) │
│ │
│ ┌──────────────────────┐ ┌──────────────────────┐ │
│ │ MAIN LOOP │ │ REASONING LOOP │ │
│ │ User Input Queue │ │ Autonomous (10s) │ │
│ │ Processes Chat │ │ Internal Thinking │ │
│ └──────────┬───────────┘ └──────────┬───────────┘ │
│ │ │ │
│ └───────────────┬───────────────┘ │
└─────────────────────────────┼──────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────┐
│ FUNDAMENTAL AGI LAYER │
│ (Wraps Reasoning Engine) │
└────────────────────────────┬────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────┐
│ SOCRATIC REASONING ENGINE │
│ (Premise → Logic → Conclusion) │
│ │
│ 1. Add Premise (validate) │
│ 2. Generate new premises (up to 5) │
│ 3. Draw Conclusion (via LLM) │
│ 4. Validate via Truth Tables │
│ 5. Save to Memory System │
└────────────────────────────┬────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────┐
│ LOGIC TABLES │
│ (Propositional Logic Validation) │
│ │
│ • Truth table generation │
│ • Tautology checking │
│ • Modus ponens inference │
│ • Boolean operators (and, or, not, xor, nand, nor, →) │
└────────────────────────────┬────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────┐
│ MEMORY SYSTEM │
│ (Persistent File Storage) │
│ │
│ memory/stm/ → User conversations (timestamped) │
│ memory/logs/ → Reasoning logs & premises │
│ memory/truth/ → Validated logical truths │
│ mindx/ → Internal reasoning artifacts │
└─────────────────────────────────────────────────────────────────┘
Most chatbots simply forward your question to an LLM. lmagi is different:
- Input → Treated as a logical premise
- Processing → Generates additional related premises
- Reasoning → Validates through formal logic (truth tables)
- Output → A logically sound conclusion
- Memory → Everything saved for future reference
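The premise-to-memory pipeline above can be sketched in a few lines. The function and parameter names here are illustrative stand-ins, not the actual lmagi API; the stub callables take the place of the LLM and the logic tables.

```python
# Minimal sketch of the lmagi pipeline: input becomes a premise, related
# premises are generated, a conclusion is drawn, validated, and remembered.
def reason(user_input, generate, validate, memory):
    """Treat input as a premise, expand it, validate, and persist."""
    premises = [user_input]                    # input → logical premise
    for _ in range(2):                         # generate related premises
        premises.append(generate(premises[-1]))
    conclusion = generate(" ".join(premises))  # reason over all premises
    if validate(conclusion):                   # formal-logic check
        memory.append(conclusion)              # save for future reference
    return conclusion

# Stubs stand in for the LLM and truth-table validator:
mem = []
out = reason("What is truth?", lambda p: f"derived({p})", lambda c: True, mem)
```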
- You type a question in the web interface
- lmagi treats it as a premise
- Generates a reasoned conclusion
- Saves the conversation to memory/stm/
- Runs independently every 10 seconds
- Uses the last prompt to "think" internally
- Generates conclusions without user input
- Saves thoughts to memory/logs/thoughts.json
- Toggle on/off via UI switch
lmagi can use any of 5 different LLM providers:
| Provider | Default Model | Use Case |
|---|---|---|
| OpenAI | gpt-4o | Best reasoning quality |
| Groq | mixtral-8x7b-32768 | Fast inference |
| Together.ai | mistralai/Mixtral-8x7B-Instruct | Alternative cloud hosting |
| AI71 | tiiuae/falcon-180B-chat | Falcon models |
| Ollama | User's local models | Privacy/offline operation |
Switching is seamless - select from the menu, and all reasoning uses that model.
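One way such seamless switching can work is a single registry mapping provider names to response functions, so the reasoning code never changes when the backend does. The registry contents below are stand-in stubs, not the real chatter.py wrappers.

```python
# Hedged sketch of provider dispatch: one call site, swappable backends.
PROVIDERS = {
    "openai": lambda prompt: f"[gpt-4o] {prompt}",
    "groq":   lambda prompt: f"[mixtral] {prompt}",
    "ollama": lambda prompt: f"[local] {prompt}",
}

def generate(provider, prompt):
    """Route a prompt to the currently selected provider."""
    return PROVIDERS[provider](prompt)

print(generate("groq", "hello"))  # same call site, different backend
```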
Let's trace what happens when you ask: "What is the meaning of life?"
You type: "What is the meaning of life?"
↓
NiceGUI chat interface captures input
↓
Added to internal_queue (asyncio.Queue)
↓
Displayed as user message bubble
# openmind.py - main_loop()
prompt = await self.internal_queue.get() # Get your question
self.prompt = prompt # Store for autonomous reasoning
conclusion = await self.get_conclusion_from_agi(prompt)

# automind.py - FundamentalAGI
self.agi.reasoning.add_premise(environment_data)
conclusion = self.agi.reasoning.draw_conclusion()

This is where the magic happens:
# SocraticReasoning.py
def draw_conclusion(self):
    # Start with your premise
    premises = ["What is the meaning of life?"]

    # Generate related premises (up to 5)
    for i in range(5):
        new_premise = self.chatter.generate_response(current_premise)
        # Example: "Life has purpose", "Purpose requires meaning", etc.
        premises.append(new_premise)

    # Use premises as context for conclusion
    conclusion = self.chatter.generate_response(premises_context)

    # Validate conclusion is logically sound
    if self.logic_tables.tautology(conclusion):
        self.save_truth(conclusion)

    return conclusion

Every interaction creates multiple files:
memory/stm/1730404800.json
{
"instruction": "What is the meaning of life?",
"response": "The meaning of life is found in..."
}
memory/logs/premises.json
{
"premises": [
"What is the meaning of life?",
"Life has purpose",
"Purpose requires meaning",
...
],
"conclusion": "The meaning of life is..."
}
memory/logs/conclusions.txt
Premises: ['What is the meaning of life?', ...]
Conclusion: The meaning of life is...
memory/truth/{timestamp}_truth.json
{
"truth": "The meaning of life is...",
"timestamp": "2024-10-31T16:00:00"
}
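The memory files shown above follow a simple pattern: a Unix-timestamp filename holding one JSON record. A self-contained sketch of how such a record could be written (using the standard json module; the project itself lists ujson, and the path names mirror the layout above but write to a temp directory here):

```python
# Persist one exchange as memory/stm/<unix-timestamp>.json.
import json
import tempfile
import time
from pathlib import Path

def save_conversation(root, instruction, response):
    """Write a timestamped short-term-memory record and return its path."""
    stm = Path(root) / "memory" / "stm"
    stm.mkdir(parents=True, exist_ok=True)
    path = stm / f"{int(time.time())}.json"
    path.write_text(json.dumps(
        {"instruction": instruction, "response": response}, indent=2))
    return path

p = save_conversation(tempfile.mkdtemp(),
                      "What is the meaning of life?",
                      "The meaning of life is found in...")
```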
Response appears in chat bubble
↓
Page auto-scrolls to bottom
↓
Ready for next question
While you're thinking of your next question, lmagi is thinking too:
# reasoning_loop() - runs every 10 seconds
while True:
    prompt = self.prompt  # Last user input
    conclusion = await self.get_conclusion_from_agi(prompt)

    # Saved separately as "internal thoughts"
    save_internal_reasoning({
        "prompt": prompt,
        "conclusion": conclusion,
        "timestamp": now
    })

    await asyncio.sleep(10)  # Think again in 10s

lmagi/
├── lmagi.py # 🚀 MAIN APPLICATION - Start here
│ # Creates web UI, handles routes
│
├── automind/ # 🧠 AGI REASONING COMPONENTS
│ ├── openmind.py # Orchestrator: manages loops & models
│ ├── automind.py # FundamentalAGI wrapper class
│ ├── agi.py # Core AGI class
│ ├── SocraticReasoning.py # Premise-based reasoning engine
│ └── logic.py # Propositional logic & truth tables
│
├── webmind/ # 🌐 WEB & API INTEGRATION
│ ├── api.py # API key management (.env handling)
│ ├── chatter.py # LLM model wrappers (5 providers)
│ ├── ollama_handler.py # Local Ollama integration
│ └── html_head.py # HTML meta tags for UI
│
├── memory/ # 💾 PERSISTENT STORAGE (created at runtime)
│ ├── stm/ # Short-term memory: conversations
│ ├── logs/ # Reasoning process logs
│ ├── truth/ # Validated logical truths
│ ├── ltm/ # Long-term memory (planned)
│ └── episodic/ # Episodic memory (planned)
│
├── mindx/ # 🤔 INTERNAL REASONING (created at runtime)
│ └── agency/ # Executable folder (future)
│
├── gfx/ # 🎨 STATIC ASSETS
│ ├── easystyle.css # UI styling
│ └── *.jpg, *.png # Graphics
│
├── requirements.txt # 📦 Python dependencies
├── setup.sh # ⚙️ Automated environment setup
├── manage.sh # 🛠️ Management utilities
└── .env # 🔑 API keys (create this!)
- Python 3.9+ - Core language
- asyncio - Concurrent execution (dual loops)
- NiceGUI - Web framework (wraps FastAPI)
- FastAPI - ASGI web server (included with NiceGUI)
- ujson - Fast JSON serialization for memory
- python-dotenv - Environment variable management
- openai - OpenAI API client
- groq - Groq API client
- together - Together.ai API client
- ai71 - AI71 Falcon models
- aiohttp - Ollama HTTP streaming
- itertools - Truth table combinations
- pathlib - Cross-platform file operations
- logging - Comprehensive logging system
Unlike typical chatbots, lmagi validates conclusions through propositional logic:
- Truth tables verify tautologies
- Modus ponens for inference
- Premise validation before acceptance
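The three checks above can be illustrated in a few lines of propositional logic: truth-table enumeration with itertools, a tautology test, and modus ponens. This mirrors the idea behind logic.py, not its actual API.

```python
# Minimal propositional-logic checks: truth tables, tautology, modus ponens.
from itertools import product

def implies(p, q):
    """Material implication: p → q."""
    return (not p) or q

def is_tautology(expr, n_vars):
    """True if expr holds under every assignment of its variables."""
    return all(expr(*vals) for vals in product([False, True], repeat=n_vars))

def modus_ponens(p, p_implies_q):
    """Given p and p → q are both true, q must be true."""
    return p and p_implies_q

# p → p always holds; p → q does not
print(is_tautology(lambda p: implies(p, p), 1))     # True
print(is_tautology(lambda p, q: implies(p, q), 2))  # False
```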
Every thought process is logged:
- See which premises were considered
- Track internal reasoning loops
- Review truth validations
- Audit the entire reasoning chain
Nothing is forgotten:
- All conversations timestamped and saved
- Logical truths accumulate over time
- Internal thoughts captured separately
- Memory can inform future reasoning (future feature)
Switch between 5 different LLM providers without changing code:
- Compare reasoning across models
- Fall back if one API is down
- Use local models (Ollama) for privacy
- Optimize cost vs. quality
The system can reason independently:
- Continuous background processing
- Generates insights without prompts
- Expands on previous conversations
- Toggle on/off as needed
- Input a complex problem
- Watch lmagi break it into premises
- Get logically validated conclusions
- Review the reasoning chain in logs
- Feed research questions
- Autonomous mode generates related insights
- Truth tables validate logical consistency
- Memory accumulates research findings
- Learn formal logic through examples
- See how premises lead to conclusions
- Understand truth table validation
- Study Socratic reasoning method
- Explore philosophical questions
- Multi-premise reasoning for depth
- Logical validation of arguments
- Persistent knowledge base
- Compare how different models reason
- Test same question across providers
- Analyze reasoning quality differences
- Benchmark performance
# Concurrent execution without blocking
main_loop() # User interactions
↕ (independent)
reasoning_loop()  # Autonomous thinking

# Non-blocking message handling
await internal_queue.put(user_message)
        ↓
await internal_queue.get()  # Processed in order

# Real-time response streaming
async for line in response.content:
    chunk = json.loads(line)
    display_partial(chunk["response"])

stm (short-term) → logs (processing) → truth (validated)
      ↓                                      ↓
 timestamped                          eternal truths
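The dual-loop pattern sketched above can be made concrete with a small self-contained asyncio program: one coroutine drains a queue of user messages while a second "thinks" on a timer. The intervals and cycle count are shortened so the demo finishes quickly; the real loops run forever with a 10-second reasoning interval.

```python
# Self-contained sketch of the dual-loop architecture.
import asyncio

async def main_loop(queue, log):
    """Consume user messages in order; None is a shutdown sentinel."""
    while True:
        msg = await queue.get()  # blocks until a message arrives
        if msg is None:
            return
        log.append(f"answered: {msg}")

async def reasoning_loop(log, cycles):
    """Stand-in for the autonomous loop (real code: while True + 10 s)."""
    for _ in range(cycles):
        await asyncio.sleep(0.01)
        log.append("internal thought")

async def demo():
    queue, log = asyncio.Queue(), []
    await queue.put("hello")
    await queue.put(None)
    # Both loops run concurrently without blocking each other
    await asyncio.gather(main_loop(queue, log), reasoning_loop(log, 2))
    return log

log = asyncio.run(demo())
```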
- Ollama Integration Partial: Web UI exists but not connected to SocraticReasoning
- Autonomous Loop Repetition: Uses same prompt repeatedly (could be enhanced)
- No Memory Retrieval: Memory saved but not yet queried for context
- Single Conclusion: Doesn't explore multiple reasoning paths
- English Only: No multi-language support
- LTM Integration: Long-term memory folder exists but unused
- Episodic Memory: Multimodal storage planned
- Agency System: mindx/agency/ folder suggests autonomous agents
- Memory-Informed Reasoning: Use past truths in new reasoning
- Multi-Path Exploration: Generate multiple conclusions, pick best
- OpenAI/Groq/Together: 1-3 seconds (network dependent)
- Ollama: Sub-second (local, hardware dependent)
- Reasoning Overhead: ~0.1s (premise generation + logic validation)
- Base Application: ~50MB
- Per Conversation: ~2-5KB (JSON files)
- Truth Tables: Grows with logical complexity
- Logs: Append-only, can grow large over time
- Concurrent Users: Limited (NiceGUI single-user by design)
- Memory Storage: Unlimited (filesystem-based)
- Model Switching: Zero downtime
- Autonomous Load: Minimal (10s intervals)
1. Setup Environment
   ./setup.sh
   source venv/bin/activate
2. Add API Key
   echo 'OPENAI_API_KEY=your-key-here' >> .env  # or use UI after starting
3. Run Application
   python lmagi.py  # Visit http://localhost:8080
4. First Interaction
   - Click FAB menu (top-left)
   - Select your LLM provider
   - Type a question
   - Watch the reasoning unfold
5. Explore Memory
   ls -ltr memory/stm/            # Your conversations
   cat memory/logs/premises.json  # Reasoning process
   cat memory/logs/thoughts.json  # Autonomous thoughts
The name says it all: language model Augmented Generative Intelligence
- Language Model: Uses state-of-the-art LLMs
- Augmented: Enhanced with formal logic and reasoning
- Generative: Creates new insights and conclusions
- Intelligence: Aspires toward AGI through structured thinking
lmagi embodies the idea that true intelligence requires:
- Premises: Starting from known facts
- Reasoning: Logical progression through ideas
- Validation: Checking conclusions for consistency
- Memory: Learning from past interactions
- Autonomy: Independent thought processes
It's not about replacing human thinking - it's about creating a reasoning companion that thinks through problems methodically, transparently, and persistently.
Project Stats:
- Lines of Code: ~2,051
- Python Files: 13
- Supported LLMs: 5
- Memory Systems: 4 (stm, logs, truth, mindx)
- Concurrent Loops: 2
- License: MIT (c) 2024 Gregory L. Magnusson