XYZ - Free AI Speed Reading App

Read 3x faster with AI-powered pacing. 100% local, 100% private.

License: MIT · PRs Welcome

XYZ is a free, open-source speed reading application that runs entirely in your browser. No servers, no logins, no tracking—just you and your books.

✨ Features

  • 🚀 RSVP Technology - Rapid Serial Visual Presentation displays words at a fixed point, eliminating eye movement
  • 🧠 AI-Powered Pacing - Local LLM analyzes text complexity and adjusts speed automatically
  • 🔒 100% Private - Everything runs in your browser. Your books never leave your device
  • 📚 EPUB Support - Upload any EPUB file from your library
  • ⚡ 50-2000 WPM - Adjustable reading speed to match your comfort level
  • 📱 Works Offline - Once loaded, works without internet connection
  • 🔄 P2P Sync - Sync between devices via QR code (no cloud needed)
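
The RSVP timing behind the WPM setting is simple arithmetic: at a given words-per-minute rate, each word is flashed for 60000 / WPM milliseconds, and each word is anchored at a fixed recognition point so the eye never moves. A minimal sketch of how this could work — the function names and the length-scaling heuristic are illustrative, not the app's actual API:

```typescript
// Base display time per word at a given words-per-minute setting.
function wordDurationMs(wpm: number): number {
  return 60_000 / wpm;
}

// Many RSVP readers let long words linger slightly; this length-based
// scaling is a common heuristic, not necessarily what this app does.
function scaledDurationMs(word: string, wpm: number): number {
  const base = wordDurationMs(wpm);
  const lengthFactor = 1 + Math.max(0, word.length - 6) * 0.05;
  return base * lengthFactor;
}

// RSVP centers each word on an "optimal recognition point" (ORP),
// slightly left of the word's middle, so the eye stays fixed.
function orpIndex(word: string): number {
  return Math.min(Math.floor(word.length * 0.35), word.length - 1);
}
```

At 300 WPM the base duration is 200 ms per word; a 20-character word like "internationalization" would be held for roughly 340 ms under this scaling.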

🎯 Why XYZ?

Traditional speed reading apps force you to process simple words like "hello" at the same speed as complex philosophical concepts. Your brain doesn't work that way.

XYZ uses a local AI model (running entirely in your browser via WebLLM) to analyze text density and automatically adjust pacing:

  • Simple passages → Speed up
  • Complex ideas → Slow down

This creates a natural reading rhythm that matches how your brain actually processes information.
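
One way such pacing could be wired up — purely illustrative, since the app delegates complexity scoring to a local LLM rather than a hand-written heuristic, and these function names are not its real API:

```typescript
// Map a complexity score (0 = trivial, 1 = very dense) to an effective
// reading speed. The score itself would come from the local LLM; this
// linear mapping is just one plausible pacing policy.
function effectiveWpm(baseWpm: number, complexity: number): number {
  const clamped = Math.min(1, Math.max(0, complexity));
  // Speed up simple passages by up to 50%, slow dense ones by up to 50%.
  const factor = 1.5 - clamped; // 1.5 at complexity 0, 0.5 at complexity 1
  return Math.round(baseWpm * factor);
}

// A cheap stand-in scorer based on average word length; the real
// pipeline would ask the in-browser model instead.
function heuristicComplexity(sentence: string): number {
  const words = sentence.split(/\s+/).filter(Boolean);
  if (words.length === 0) return 0;
  const avgLen = words.reduce((sum, w) => sum + w.length, 0) / words.length;
  return Math.min(1, Math.max(0, (avgLen - 3) / 7)); // ~3 chars → 0, ~10 chars → 1
}
```

With a 400 WPM base, a trivial passage would read at 600 WPM and a maximally dense one at 200 WPM.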

🚀 Quick Start

  1. Visit xyz.com
  2. Upload an EPUB file (or try the demo)
  3. Start reading!

No installation, no account, no setup.

💻 Development

# Clone the repo
git clone https://github.com/arpheno/lalange.git
cd lalange

# Install dependencies
npm install

# Start dev server
npm run dev

# Run tests
npm test -- --run

# Build for production
npm run build

🏗️ Tech Stack

  • Frontend: React + TypeScript + Vite
  • Styling: Tailwind CSS
  • AI: WebLLM (local LLM inference via WebGPU)
  • Storage: RxDB + IndexedDB (local-first)
  • Sync: WebRTC (peer-to-peer)
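
QR-code sync over WebRTC implies the session offer travels inside the QR payload instead of through a signaling server. A toy sketch of such a payload envelope — the envelope format is invented here for illustration, and a browser build would use `btoa`/`atob` where this uses Node's `Buffer`:

```typescript
// Serialize a WebRTC session description into a compact string that can
// be rendered as a QR code, and decode it on the peer device.
interface SyncOffer {
  v: number;   // payload version (invented field)
  sdp: string; // WebRTC offer SDP
}

function encodeOffer(offer: SyncOffer): string {
  const json = JSON.stringify(offer);
  // Base64 keeps the payload QR-alphabet friendly.
  return Buffer.from(json, "utf8").toString("base64");
}

function decodeOffer(payload: string): SyncOffer {
  const json = Buffer.from(payload, "base64").toString("utf8");
  return JSON.parse(json) as SyncOffer;
}
```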

📖 Documentation

🤝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add some amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

🙏 Acknowledgments

  • WebLLM for enabling local LLM inference (embedded under Apache License 2.0)
  • Project Gutenberg for free public domain books
  • The open source community

Made with ❤️ by Arphen

No logins. No tracking. Just reading.
