
lmagi - Complete Application Overview

What is lmagi?

lmagi (language model Augmented Generative Intelligence) is an experimental Artificial General Intelligence (AGI) framework that enhances Large Language Models with formal reasoning capabilities. It's not just a chatbot - it's a reasoning machine that thinks through problems using logical premises, validates conclusions through truth tables, and maintains a persistent memory system of its reasoning process.

The Big Picture

┌─────────────────────────────────────────────────────────────────┐
│                         USER INTERFACE                          │
│                    (NiceGUI Web Application)                    │
│                     http://localhost:8080                       │
└────────────────────────────┬────────────────────────────────────┘
                             │
                             ▼
┌─────────────────────────────────────────────────────────────────┐
│                         OPENMIND LAYER                          │
│              (Orchestrates Two Concurrent Loops)                │
│                                                                 │
│  ┌──────────────────────┐        ┌──────────────────────┐     │
│  │   MAIN LOOP          │        │  REASONING LOOP      │     │
│  │   User Input Queue   │        │  Autonomous (10s)    │     │
│  │   Processes Chat     │        │  Internal Thinking   │     │
│  └──────────┬───────────┘        └──────────┬───────────┘     │
│             │                               │                  │
│             └───────────────┬───────────────┘                  │
└─────────────────────────────┼──────────────────────────────────┘
                              │
                              ▼
┌─────────────────────────────────────────────────────────────────┐
│                    FUNDAMENTAL AGI LAYER                        │
│                  (Wraps Reasoning Engine)                       │
└────────────────────────────┬────────────────────────────────────┘
                             │
                             ▼
┌─────────────────────────────────────────────────────────────────┐
│                   SOCRATIC REASONING ENGINE                     │
│                  (Premise → Logic → Conclusion)                 │
│                                                                 │
│  1. Add Premise (validate)                                     │
│  2. Generate new premises (up to 5)                            │
│  3. Draw Conclusion (via LLM)                                  │
│  4. Validate via Truth Tables                                  │
│  5. Save to Memory System                                      │
└────────────────────────────┬────────────────────────────────────┘
                             │
                             ▼
┌─────────────────────────────────────────────────────────────────┐
│                        LOGIC TABLES                             │
│              (Propositional Logic Validation)                   │
│                                                                 │
│  • Truth table generation                                      │
│  • Tautology checking                                          │
│  • Modus ponens inference                                      │
│  • Boolean operators (and, or, not, xor, nand, nor, →)        │
└────────────────────────────┬────────────────────────────────────┘
                             │
                             ▼
┌─────────────────────────────────────────────────────────────────┐
│                       MEMORY SYSTEM                             │
│                  (Persistent File Storage)                      │
│                                                                 │
│  memory/stm/          → User conversations (timestamped)       │
│  memory/logs/         → Reasoning logs & premises              │
│  memory/truth/        → Validated logical truths               │
│  mindx/               → Internal reasoning artifacts           │
└─────────────────────────────────────────────────────────────────┘

Core Concepts

1. Augmented Intelligence, Not Just Prompting

Most chatbots simply forward your question to an LLM. lmagi is different:

  • Input → Treated as a logical premise
  • Processing → Generates additional related premises
  • Reasoning → Validates through formal logic (truth tables)
  • Output → A logically sound conclusion
  • Memory → Everything saved for future reference

2. Dual-Mode Operation

User Interaction Mode

  • You type a question in the web interface
  • lmagi treats it as a premise
  • Generates a reasoned conclusion
  • Saves the conversation to memory/stm/

Autonomous Reasoning Mode

  • Runs independently every 10 seconds
  • Uses the last prompt to "think" internally
  • Generates conclusions without user input
  • Saves thoughts to memory/logs/thoughts.json
  • Toggle on/off via UI switch

3. Multi-Model Architecture

lmagi can use any of 5 different LLM providers:

Provider      Default Model                     Use Case
OpenAI        gpt-4o                            Best reasoning quality
Groq          mixtral-8x7b-32768                Fast inference
Together.ai   mistralai/Mixtral-8x7B-Instruct   Alternative cloud hosting
AI71          tiiuae/falcon-180B-chat           Falcon models
Ollama        user's local models               Privacy/offline operation

Switching is seamless: select a provider from the menu, and all subsequent reasoning uses that model.
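Provider switching of this kind is often built on a simple registry mapping provider names to default models. A hypothetical sketch (the names mirror the table above, but the structure is illustrative, not lmagi's actual code):

```python
# Hypothetical provider registry; values are the default models from the
# table above. Illustrative only - not lmagi's real data structure.
PROVIDERS = {
    "openai": "gpt-4o",
    "groq": "mixtral-8x7b-32768",
    "together": "mistralai/Mixtral-8x7B-Instruct",
    "ai71": "tiiuae/falcon-180B-chat",
    "ollama": "user-selected",  # resolved from the user's local model list
}

def select_model(provider, override=None):
    """Return the model to use for a provider, honoring an explicit override."""
    if provider not in PROVIDERS:
        raise ValueError(f"unknown provider: {provider}")
    return override or PROVIDERS[provider]
```

Because every provider sits behind the same lookup, the reasoning code never needs to know which backend is active.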

How It Works: A Complete Interaction Flow

Let's trace what happens when you ask: "What is the meaning of life?"

Step 1: User Input (UI Layer)

You type: "What is the meaning of life?"
                    ↓
NiceGUI chat interface captures input
                    ↓
Added to internal_queue (asyncio.Queue)
                    ↓
Displayed as user message bubble

Step 2: OpenMind Processing

# openmind.py - main_loop()
prompt = await self.internal_queue.get()  # Get your question
self.prompt = prompt  # Store for autonomous reasoning
conclusion = await self.get_conclusion_from_agi(prompt)

Step 3: AGI Reasoning Layer

# automind.py - FundamentalAGI
self.agi.reasoning.add_premise(environment_data)
conclusion = self.agi.reasoning.draw_conclusion()

Step 4: Socratic Reasoning Engine

This is where the magic happens:

# SocraticReasoning.py (simplified)
def draw_conclusion(self):
    # Start with your premise
    premises = ["What is the meaning of life?"]

    # Generate related premises (up to 5), each building on the last
    for _ in range(5):
        new_premise = self.chatter.generate_response(premises[-1])
        # Example: "Life has purpose", "Purpose requires meaning", etc.
        premises.append(new_premise)

    # Use the accumulated premises as context for the conclusion
    premises_context = "\n".join(premises)
    conclusion = self.chatter.generate_response(premises_context)

    # Keep the conclusion only if it is logically sound
    if self.logic_tables.tautology(conclusion):
        self.save_truth(conclusion)

    return conclusion

Step 5: Memory Persistence

Every interaction creates multiple files:

memory/stm/1730404800.json
{
  "instruction": "What is the meaning of life?",
  "response": "The meaning of life is found in..."
}

memory/logs/premises.json
{
  "premises": [
    "What is the meaning of life?",
    "Life has purpose",
    "Purpose requires meaning",
    ...
  ],
  "conclusion": "The meaning of life is..."
}

memory/logs/conclusions.txt
Premises: ['What is the meaning of life?', ...]
Conclusion: The meaning of life is...

memory/truth/{timestamp}_truth.json
{
  "truth": "The meaning of life is...",
  "timestamp": "2024-10-31T16:00:00"
}
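The short-term-memory write can be sketched in a few lines. This is an illustrative helper, not lmagi's exact function (and it uses stdlib json rather than the ujson lmagi depends on, to keep the sketch dependency-free):

```python
import json  # lmagi uses ujson; stdlib json keeps this sketch self-contained
import time
from pathlib import Path

def save_stm(instruction, response, root="memory/stm"):
    """Persist one exchange as a timestamped JSON file (illustrative helper)."""
    Path(root).mkdir(parents=True, exist_ok=True)
    path = Path(root) / f"{int(time.time())}.json"  # e.g. memory/stm/1730404800.json
    path.write_text(json.dumps({"instruction": instruction,
                                "response": response}, indent=2))
    return path
```

Naming files by Unix timestamp gives a natural chronological ordering when listing the directory.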

Step 6: Display Response

Response appears in chat bubble
                    ↓
Page auto-scrolls to bottom
                    ↓
Ready for next question

Simultaneous: Autonomous Reasoning

While you're thinking of your next question, lmagi is thinking too:

# reasoning_loop() - runs every 10 seconds (simplified)
while True:
    prompt = self.prompt  # Last user input
    conclusion = await self.get_conclusion_from_agi(prompt)

    # Saved separately as "internal thoughts"
    save_internal_reasoning({
        "prompt": prompt,
        "conclusion": conclusion,
        "timestamp": datetime.now().isoformat()
    })

    await asyncio.sleep(10)  # Think again in 10s

Project Structure

lmagi/
├── lmagi.py                    # 🚀 MAIN APPLICATION - Start here
│                               # Creates web UI, handles routes
│
├── automind/                   # 🧠 AGI REASONING COMPONENTS
│   ├── openmind.py            # Orchestrator: manages loops & models
│   ├── automind.py            # FundamentalAGI wrapper class
│   ├── agi.py                 # Core AGI class
│   ├── SocraticReasoning.py   # Premise-based reasoning engine
│   └── logic.py               # Propositional logic & truth tables
│
├── webmind/                    # 🌐 WEB & API INTEGRATION
│   ├── api.py                 # API key management (.env handling)
│   ├── chatter.py             # LLM model wrappers (5 providers)
│   ├── ollama_handler.py      # Local Ollama integration
│   └── html_head.py           # HTML meta tags for UI
│
├── memory/                     # 💾 PERSISTENT STORAGE (created at runtime)
│   ├── stm/                   # Short-term memory: conversations
│   ├── logs/                  # Reasoning process logs
│   ├── truth/                 # Validated logical truths
│   ├── ltm/                   # Long-term memory (planned)
│   └── episodic/              # Episodic memory (planned)
│
├── mindx/                      # 🤔 INTERNAL REASONING (created at runtime)
│   └── agency/                # Executable folder (future)
│
├── gfx/                        # 🎨 STATIC ASSETS
│   ├── easystyle.css          # UI styling
│   └── *.jpg, *.png           # Graphics
│
├── requirements.txt            # 📦 Python dependencies
├── setup.sh                    # ⚙️ Automated environment setup
├── manage.sh                   # 🛠️ Management utilities
└── .env                        # 🔑 API keys (create this!)

Key Technologies

Backend

  • Python 3.9+ - Core language
  • asyncio - Concurrent execution (dual loops)
  • NiceGUI - Web framework (wraps FastAPI)
  • FastAPI - ASGI web server (included with NiceGUI)
  • ujson - Fast JSON serialization for memory
  • python-dotenv - Environment variable management

LLM Integration

  • openai - OpenAI API client
  • groq - Groq API client
  • together - Together.ai API client
  • ai71 - AI71 Falcon models
  • aiohttp - Ollama HTTP streaming

Logic & Reasoning

  • itertools - Truth table combinations
  • pathlib - Cross-platform file operations
  • logging - Comprehensive logging system

What Makes lmagi Unique?

1. Formal Logic Integration

Unlike typical chatbots, lmagi validates conclusions through propositional logic:

  • Truth tables verify tautologies
  • Modus ponens for inference
  • Premise validation before acceptance
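The tautology check can be illustrated with a brute-force truth table built from itertools. This is a minimal sketch; the expression syntax and variable handling are simplified compared with logic.py:

```python
from itertools import product

def is_tautology(expr, variables):
    """True if expr evaluates to True under every truth assignment."""
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        # Evaluate the boolean expression with this row's assignment
        if not eval(expr, {"__builtins__": {}}, env):
            return False
    return True

# "p or not p" holds in every row of its truth table; "p and q" does not.
print(is_tautology("p or not p", ["p"]))    # → True
print(is_tautology("p and q", ["p", "q"]))  # → False
```

Modus ponens fits the same machinery: if both `p` and `p implies q` are accepted truths, checking `(p and (not p or q)) and not q` for unsatisfiability licenses the inference of `q`.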

2. Transparent Reasoning

Every thought process is logged:

  • See which premises were considered
  • Track internal reasoning loops
  • Review truth validations
  • Audit the entire reasoning chain

3. Persistent Memory

Nothing is forgotten:

  • All conversations timestamped and saved
  • Logical truths accumulate over time
  • Internal thoughts captured separately
  • Memory can inform future reasoning (future feature)

4. Model Agnostic

Switch between 5 different LLM providers without changing code:

  • Compare reasoning across models
  • Fall back if one API is down
  • Use local models (Ollama) for privacy
  • Optimize cost vs. quality

5. Autonomous Thinking

The system can reason independently:

  • Continuous background processing
  • Generates insights without prompts
  • Expands on previous conversations
  • Toggle on/off as needed

Real-World Use Cases

1. Complex Problem Solving

  • Input a complex problem
  • Watch lmagi break it into premises
  • Get logically validated conclusions
  • Review the reasoning chain in logs

2. Research & Analysis

  • Feed research questions
  • Autonomous mode generates related insights
  • Truth tables validate logical consistency
  • Memory accumulates research findings

3. Educational Tool

  • Learn formal logic through examples
  • See how premises lead to conclusions
  • Understand truth table validation
  • Study Socratic reasoning method

4. Philosophical Inquiry

  • Explore philosophical questions
  • Multi-premise reasoning for depth
  • Logical validation of arguments
  • Persistent knowledge base

5. Multi-Model Experimentation

  • Compare how different models reason
  • Test same question across providers
  • Analyze reasoning quality differences
  • Benchmark performance

Technical Innovations

Dual-Loop Architecture

# Concurrent execution without blocking
main_loop()          # User interactions
    ↕ (independent)
reasoning_loop()     # Autonomous thinking
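A minimal, runnable sketch of the pattern (names and intervals are illustrative; the real loops live in openmind.py and think every 10 seconds):

```python
import asyncio

async def main_loop(queue, log):
    # Process user prompts as they arrive; None is a stop sentinel.
    while True:
        prompt = await queue.get()
        if prompt is None:
            return
        log.append(f"answered: {prompt}")

async def reasoning_loop(log, cycles=2, interval=0.01):
    # Think on a timer, independently of user input.
    for _ in range(cycles):
        log.append("autonomous thought")
        await asyncio.sleep(interval)

async def demo():
    queue, log = asyncio.Queue(), []
    await queue.put("hello")
    await queue.put(None)
    # gather() runs both loops concurrently on one event loop
    await asyncio.gather(main_loop(queue, log), reasoning_loop(log))
    return log

print(asyncio.run(demo()))
```

Both coroutines share one event loop, so neither blocks the other: user input is handled the moment it is queued, while the reasoning timer keeps ticking.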

Queue-Based Input

# Non-blocking message handling
await internal_queue.put(user_message)
    ↓
await internal_queue.get()  # Processed in order

Streaming Ollama Integration

# Real-time response streaming
async for line in response.content:
    chunk = json.loads(line)
    display_partial(chunk["response"])
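Ollama streams newline-delimited JSON, and the parsing step can be shown without a live server. A sketch of that step (the real code iterates aiohttp's response.content; the chunk shape matches Ollama's documented streaming format):

```python
import json

def parse_stream(lines):
    """Yield the text of each NDJSON chunk, as Ollama emits them."""
    for line in lines:
        if not line.strip():
            continue
        chunk = json.loads(line)
        # Each streamed chunk carries a partial "response"; the final
        # chunk carries "done": true and no response text.
        yield chunk.get("response", "")

# Simulated stream: two partial chunks, then the final "done" marker.
stream = ['{"response": "Hel"}', '{"response": "lo"}', '{"done": true}']
print("".join(parse_stream(stream)))  # → Hello
```

Because each line is a complete JSON object, the UI can display partial text as soon as a line arrives rather than waiting for the full response.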

Hierarchical Memory

stm (short-term)  →  logs (processing)  →  truth (validated)
        ↓                                         ↓
    timestamped                              eternal truths

Current Limitations & Future Roadmap

Known Limitations

  1. Ollama Integration Partial: Web UI exists but not connected to SocraticReasoning
  2. Autonomous Loop Repetition: Uses same prompt repeatedly (could be enhanced)
  3. No Memory Retrieval: Memory saved but not yet queried for context
  4. Single Conclusion: Doesn't explore multiple reasoning paths
  5. English Only: No multi-language support

Future Enhancements (Based on Code Structure)

  1. LTM Integration: Long-term memory folder exists but unused
  2. Episodic Memory: Multimodal storage planned
  3. Agency System: mindx/agency/ folder suggests autonomous agents
  4. Memory-Informed Reasoning: Use past truths in new reasoning
  5. Multi-Path Exploration: Generate multiple conclusions, pick best

Performance Characteristics

Response Time

  • OpenAI/Groq/Together: 1-3 seconds (network dependent)
  • Ollama: Sub-second (local, hardware dependent)
  • Reasoning Overhead: ~0.1s (premise generation + logic validation)

Memory Usage

  • Base Application: ~50MB
  • Per Conversation: ~2-5KB (JSON files)
  • Truth Tables: Grows with logical complexity
  • Logs: Append-only, can grow large over time

Scalability

  • Concurrent Users: Limited (NiceGUI single-user by design)
  • Memory Storage: Unlimited (filesystem-based)
  • Model Switching: Zero downtime
  • Autonomous Load: Minimal (10s intervals)

Getting Started: First Steps

  1. Setup Environment

    ./setup.sh
    source venv/bin/activate
  2. Add API Key

    echo 'OPENAI_API_KEY=your-key-here' >> .env
    # or use UI after starting
  3. Run Application

    python lmagi.py
    # Visit http://localhost:8080
  4. First Interaction

    • Click FAB menu (top-left)
    • Select your LLM provider
    • Type a question
    • Watch the reasoning unfold
  5. Explore Memory

    ls -ltr memory/stm/     # Your conversations
    cat memory/logs/premises.json  # Reasoning process
    cat memory/logs/thoughts.json  # Autonomous thoughts

Philosophy Behind lmagi

The name says it all: language model Augmented Generative Intelligence

  • Language Model: Uses state-of-the-art LLMs
  • Augmented: Enhanced with formal logic and reasoning
  • Generative: Creates new insights and conclusions
  • Intelligence: Aspires toward AGI through structured thinking

lmagi embodies the idea that true intelligence requires:

  1. Premises: Starting from known facts
  2. Reasoning: Logical progression through ideas
  3. Validation: Checking conclusions for consistency
  4. Memory: Learning from past interactions
  5. Autonomy: Independent thought processes

It's not about replacing human thinking - it's about creating a reasoning companion that thinks through problems methodically, transparently, and persistently.


Project Stats:

  • Lines of Code: ~2,051
  • Python Files: 13
  • Supported LLMs: 5
  • Memory Systems: 4 (stm, logs, truth, mindx)
  • Concurrent Loops: 2
  • License: MIT (c) 2024 Gregory L. Magnusson