Intuitive token-efficient notation for AI-to-AI communication
SNS (Shorthand Notation System) is not a programming language. It's a shorthand notation that LLMs naturally understand without any training or documentation.
Think of it like:
- 📝 Taking notes in shorthand
- 🎵 Musical notation (musicians read it intuitively)
- 🧮 Mathematical notation (everyone knows x → y)
Key insight: LLMs already understand these concepts. We're just making them shorter.
Traditional prompt:
You are an AI assistant that analyzes user queries. Please carefully examine
the following user question and extract the main keywords. Then, classify the
user's intent into one of these categories: informational, complaint, or
procedure. After that, expand the query into relevant search terms that could
help find information. Finally, return the results in a structured format.
Token count: ~150 tokens
SNS version:
q → kw_extract → kw
q → classify(intent_cats) → intent
kw + q → expand_q → terms
→ {kw, intent, terms}
Token count: ~30 tokens
Savings: 80% reduction 🎉
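As a sketch of what the model is being asked to do, the pipeline above maps onto ordinary function composition. The helpers below (`kw_extract`, `classify`, `expand_q`) are illustrative stand-ins, not a real library; in a real system each step would be carried out by the LLM itself.

```python
# Hypothetical Python rendering of the SNS pipeline above.
def kw_extract(q):
    # Stand-in: keep the longer words as "keywords".
    return [w for w in q.lower().split() if len(w) > 3]

def classify(q, cats):
    # Placeholder: a real classifier would call the model.
    return cats[0]

def expand_q(kw, q):
    return kw + [q]

q = "how do I reset my password"
kw = kw_extract(q)                                        # q → kw_extract → kw
intent = classify(q, ["info", "complaint", "procedure"])  # q → classify(intent_cats) → intent
terms = expand_q(kw, q)                                   # kw + q → expand_q → terms
result = {"kw": kw, "intent": intent, "terms": terms}     # → {kw, intent, terms}
```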
LLMs already understand:
- `→` means "transform to" or "flows to"
- `kw` is obviously "keywords"
- `classify()` is self-explanatory
- `+` means combine/merge
No training needed. No documentation needed. Just write it and it works.
- Intuitive shortcuts - If it feels right, it probably works
- Token efficiency - Every character saved = cost reduced
- Flexible creativity - Use emojis, symbols, abbreviations
- Self-documenting - Readable by humans and LLMs
- No formal grammar - It's notation, not code
input → operation → output
Examples:
query → analyze → insights
text → normalize → clean_text
docs → rank → top_results
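Read left to right, each flow line is one transformation. A minimal sketch of `text → normalize → clean_text`, with `normalize` as an assumed lowercase-and-collapse-whitespace cleanup:

```python
def normalize(text: str) -> str:
    # Assumed behavior: lowercase and collapse whitespace.
    return " ".join(text.lower().split())

clean_text = normalize("  Hello   WORLD ")  # text → normalize → clean_text
```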
data | step1 | step2 | step3
Examples:
candidates | score | filter | sort | top(5)
text | lower | trim | tokenize
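The `|` chain reads like a Unix pipe: each step's output feeds the next. A sketch of `text | lower | trim | tokenize` as left-to-right function application (the `pipe` helper is an assumption for illustration, not part of SNS):

```python
from functools import reduce

def pipe(data, *steps):
    # Apply each step to the result of the previous one, left to right.
    return reduce(lambda acc, step: step(acc), steps, data)

tokens = pipe("  Hello World  ", str.lower, str.strip, str.split)
# text | lower | trim | tokenize
```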
condition ? true_action : false_action
Examples:
score > 0.7 ? keep : discard
results.empty ? expand_search : return_results
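The `?:` form is the familiar ternary. A sketch of `results.empty ? expand_search : return_results`, with the two actions reduced to labels for illustration:

```python
def next_step(results):
    # results.empty ? expand_search : return_results
    return "expand_search" if not results else "return_results"
```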
+boost, -penalty, *emphasize, ~fuzzy
Examples:
results +boost(recency) +boost(local)
query ~match(docs)
score *2
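One way to read the modifiers is as score adjustments applied in sequence. The field names and weight below are assumptions for illustration only:

```python
def boost(results, key, weight=0.1):
    # +boost(key): add weight * the named signal to each result's score.
    return [{**r, "score": r["score"] + weight * r.get(key, 0)} for r in results]

results = [{"score": 0.5, "recency": 1.0, "local": 0.0},
           {"score": 0.4, "recency": 0.0, "local": 1.0}]
results = boost(boost(results, "recency"), "local")
# results +boost(recency) +boost(local)
```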
Traditional Prompt (200 tokens):
You are the orchestrator in a RAG system. Analyze the user query to extract
keywords, determine the intent, expand the query into search terms, and infer
relevant categories. Return a structured object with these fields...
SNS Version (45 tokens):
# RAG Orchestrator
q → kw_extract → kw
q → classify(["info","complaint","procedure"]) → intent
(kw + q) → expand_q → search_terms
intent → infer_cats → categories
→ {
search_terms,
categories,
intent,
kw
}
Result: Same output, 77% fewer tokens
| Operation | Natural Language | SNS | Savings |
|---|---|---|---|
| Keyword extraction | 45 tokens | 12 tokens | 73% |
| Classification | 38 tokens | 15 tokens | 61% |
| Query expansion | 52 tokens | 18 tokens | 65% |
| Ranking & filtering | 67 tokens | 22 tokens | 67% |
| Full RAG pipeline | 200 tokens | 45 tokens | 77% |
Average savings: ~69%
- Philosophy - Why shorthand works, design principles
- Core Patterns - Basic notation patterns with examples
- Symbols Reference - All symbols (→, |, ~, +, etc.)
- Text Operations - Extract, split, normalize, match
- Data Operations - Filter, map, sort, reduce
- RAG Operations - Search, rank, classify, expand
- Logic Operations - Conditionals, loops, matching
- Creative Operations - Emoji & whimsical notations
- RAG Orchestrator - Full example with comparisons
- RAG Discriminator - Ranking & filtering logic
- Before/After Comparisons - Token savings analysis
- Proof It Works - Evidence LLMs understand
- Playground - Try new notations, share ideas
When to use SNS:
- AI-to-AI communication (RAG stages, agents)
- Internal LLM prompts (not user-facing)
- Token-sensitive applications (cost optimization)
- Multi-stage pipelines (orchestration)
- Repeated operations (templates)
When not to use SNS:
- User-facing content (use natural language)
- One-off simple prompts (overhead not worth it)
- Creative writing tasks (flexibility needed)
- When tone/empathy matters
You don't need to teach LLMs how to parse SNS. They already understand:
# You write this:
q → analyze → {kw, intent, score}
# LLM understands:
"Analyze the query and return an object with keywords, intent, and score"
No special instructions needed. No system prompts explaining SNS. Just works.
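In practice that means SNS lines can be dropped straight into a prompt string. The sketch below assumes a hypothetical `call_llm` client; note there is no preamble explaining the notation:

```python
def build_prompt(query: str) -> str:
    # Embed the SNS line directly; no system prompt explaining SNS.
    return f"q = {query!r}\nq → analyze → {{kw, intent, score}}"

prompt = build_prompt("how do I reset my password")
# response = call_llm(prompt)  # hypothetical chat-completion client
```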
- Read the Philosophy to understand the approach
- Learn Core Patterns - 5 minute read
- Explore Examples to see it in action
- Start using SNS in your prompts immediately
SNS is an open notation system. Contributions welcome:
- New patterns that work
- Creative shortcuts
- Real-world examples
- Token savings reports
Status: v1.0 - Active development
License: Open notation system
Created: October 2025
# Flow
input → operation → output
# Pipeline
data | step1 | step2 | step3
# Conditional
condition ? yes : no
# Loop (implied)
while x < 5: do_thing()
# Assignment (implied)
result = operation(input)
# Composition
(a + b) → process → output
# Modifiers
+boost -penalty *emphasize ~fuzzy
# Collections
[list] {object} (tuple)
# Functions
fn_name(args) → result
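The composition and collection lines above can be sketched the same way; `process` is an illustrative stand-in:

```python
def process(items):
    # fn_name(args) → result: return an {object} built from a [list].
    return {"count": len(items), "items": sorted(items)}

a, b = ["rag", "sns"], ["tokens"]
output = process(a + b)  # (a + b) → process → output
```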
Ready to save tokens? Start with Core Patterns!