EdgeLang is a Chrome extension that turns ordinary browsing into contextual language practice. It scans the page you are reading, identifies words and phrases close to your current level, and adds lightweight interactive cues so learning happens inside real content instead of a separate app.
The extension supports passive reading practice, active recall, multiple AI providers, progress tracking, calibration, configurable cue styling, and browser-first routing through ModelMesh.
Current release: 0.1.1
- Learn from real pages instead of artificial drills
- Switch between passive and active language practice
- Route AI requests across providers for resilience
- Keep the experience lightweight with subtle on-page augmentation
- Track progress, calibration results, streaks, and resolved vocabulary
Choose one of these paths:
- Download the latest release from the GitHub Releases page.
- Or clone the repository:
```
git clone https://github.com/ApartsinProjects/edgelang.git
cd edgelang
```
- If you downloaded a release ZIP, extract it first.
- Open Chrome and go to `chrome://extensions`.
- Enable Developer mode.
- Click Load unpacked.
- Select the `src` folder, not the repository root.
- If Chrome reports a missing manifest, you selected the wrong folder.
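As a sanity check when loading unpacked, an MV3 manifest for a layout like this one typically looks roughly as follows. The values shown here are illustrative assumptions, not EdgeLang's actual manifest:

```json
{
  "manifest_version": 3,
  "name": "EdgeLang",
  "version": "0.1.1",
  "background": { "service_worker": "background.js" },
  "content_scripts": [
    { "matches": ["<all_urls>"], "js": ["content.js"] }
  ],
  "action": { "default_popup": "popup.html" },
  "options_page": "options.html"
}
```

If Chrome's "missing manifest" error appears, confirm that a file named `manifest.json` with `"manifest_version": 3` sits at the top of the folder you selected.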
- Click the EdgeLang toolbar icon.
- Open Settings.
- Choose your native language and target language.
- Add at least one provider API key.
- Validate the keys from the options page.
- Run calibration if prompted.
- Open a content-heavy page and wait for the processing indicator.
EdgeLang uses a content script to extract visible page text, a background service worker to coordinate AI analysis, and popup/options pages for controls and configuration. AI routing is handled through a local adapter layer designed around ModelMesh-style provider selection and failover.
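The adapter layer's ModelMesh-style selection and failover can be sketched as trying providers in preference order and falling through on failure. The provider names and the `analyze(prompt)` method here are illustrative assumptions, not EdgeLang's actual adapter API:

```javascript
// Minimal sketch of provider failover: try each adapter in order and
// return the first successful result. Adapter shape is an assumption.
async function routeWithFailover(providers, prompt) {
  const errors = [];
  for (const provider of providers) {
    try {
      // Each adapter is assumed to expose a uniform analyze(prompt) method.
      return await provider.analyze(prompt);
    } catch (err) {
      // Record the failure and fall through to the next provider.
      errors.push(`${provider.name}: ${err.message}`);
    }
  }
  throw new Error(`All providers failed: ${errors.join("; ")}`);
}

// Example adapters: the first always fails, the second succeeds.
const flaky = {
  name: "provider-a",
  analyze: async () => { throw new Error("rate limited"); },
};
const backup = {
  name: "provider-b",
  analyze: async (prompt) => `cues for: ${prompt}`,
};
```

With this shape, adding a provider is just appending another adapter object to the list, which is what makes key validation per provider (see Settings) useful before routing.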
While a page is being analyzed, the extension reports progress through steady toolbar badge states and popup status messages, so the process is visible rather than opaque while the model is working.
- Adaptive cue generation based on the page and learner level
- Passive and active practice modes
- Site blacklist and whitelist controls
- Calibration flow with saved progress
- Provider and model selection
- API key validation
- Configurable highlight color and cue style
- Toolbar processing indicator and blocker reasons
- Live browser compatibility coverage against a curated top-20 site list
- Progress statistics and vocabulary export
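For the vocabulary export feature, a helper along these lines would serialize resolved entries to CSV. The record fields and the CSV format are assumptions for illustration, not EdgeLang's actual export schema:

```javascript
// Sketch of a vocabulary export helper (field names are assumptions).
function exportVocabularyCsv(entries) {
  const header = "word,translation,resolvedAt";
  // Quote every field and double any embedded quotes, per CSV convention.
  const escape = (value) => `"${String(value).replace(/"/g, '""')}"`;
  const rows = entries.map((e) =>
    [e.word, e.translation, e.resolvedAt].map(escape).join(",")
  );
  return [header, ...rows].join("\n");
}
```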
```
src/
├── manifest.json
├── background.js
├── content.js
├── popup.html
├── popup.js
├── options.html
├── options.js
├── styles/
├── icons/
└── _locales/
docs/
├── USER_MANUAL.md
├── SystemConcepts.md
├── SoftwareRequirements.md
└── TestPlan.md
scripts/
└── generate_readme_banner.py
```
```
npm test
```

```
EDGE_LANG_LIVE_SITES=1 npm run test:live-sites
```

The second command launches the real MV3 extension in Chromium and checks the extension flow across a curated set of 20 high-traffic public sites.
The banner was generated with Gemini using the local environment key.
```
python scripts/generate_readme_banner.py
```
- Chrome must load the unpacked extension from `src`.
- The latest packaged source is also available from the GitHub Releases page.
- The extension needs at least one configured AI provider before it can augment pages.
- If a page shows no cues, open the popup to see the current blocker reason.
- The toolbar uses steady-state badges:
  - green while loading
  - yellow while sending/analyzing
  - red while rendering highlights
  - a count of remaining highlights after augmentation
  - `0` when all highlights on the page are resolved
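The badge scheme above amounts to a small pure mapping from processing state to badge. The state names and return shape below are illustrative assumptions, not EdgeLang's code:

```javascript
// Sketch of the steady-state badge mapping described above.
function badgeFor(status) {
  switch (status.state) {
    case "loading":
      return { color: "green" };
    case "analyzing":
      return { color: "yellow" };
    case "rendering":
      return { color: "red" };
    case "done":
      // After augmentation, show the remaining-highlight count;
      // it reads "0" once every highlight on the page is resolved.
      return { text: String(status.remaining) };
    default:
      return {};
  }
}
```

Keeping this mapping pure makes it easy to unit-test separately from the `chrome.action` badge APIs that would apply it.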
MIT
Sasha Apartsin
www.apartsin.com
