# SNS Playground

Experimental space for trying new notations, testing ideas, and pushing the boundaries of shorthand.

This is where we experiment with SNS notation: try wild ideas, test unconventional symbols, and see what works!

**Philosophy**: If it's logical and LLMs understand it, it's valid SNS.

**Rules**: There are no rules. Experiment freely!
### Challenge: Express a Full RAG Orchestrator in Under 30 Tokens

**Attempt A (32 tokens)**:

```sns
q→kw→expand→terms
q→cls(intents)→i
i→cats
→{terms,cats,i,kw}
```

**Attempt B (25 tokens)**:

```sns
q→kw+expand→t
q→cls→i→cats
→{t,cats,i,kw}
```

**Attempt C (20 tokens)**:

```sns
q→{kw→expand→t,cls→i→cats}
```

**Test with an LLM**: Do they all work? Which is most readable?
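As a sanity check that Attempt C really encodes a complete flow, here is a minimal Python sketch of what the 20-token notation expands to. Every helper here (`kw_extract`, `expand`, `classify`, `infer_cats`) is a hypothetical stand-in invented for illustration, not part of any real library.

```python
# Hypothetical stand-ins for the stages named in the notation.
def kw_extract(q):       # q → kw
    return [w for w in q.lower().split() if len(w) > 3]

def expand(kw):          # kw → expand → t
    return kw + [k + "s" for k in kw]

def classify(q):         # q → cls → i
    return "info" if "how" in q.lower() else "other"

def infer_cats(intent):  # i → cats
    return {"info": ["docs", "faq"], "other": ["general"]}[intent]

def orchestrate(q):
    """q→{kw→expand→t, cls→i→cats} → {t, cats, i, kw}"""
    kw = kw_extract(q)
    i = classify(q)
    return {"t": expand(kw), "cats": infer_cats(i), "i": i, "kw": kw}
```

For example, `orchestrate("how do I reset my password")` yields keywords `["reset", "password"]`, intent `"info"`, and the corresponding expanded terms and categories.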
### Arrow Variations

```sns
data ~> fuzzy_process     # squiggly for fuzzy
data => strong_transform  # fat arrow for emphasis
data --> explicit_flow    # dashed for clear flow
data ~~> very_fuzzy       # double squiggle
```

### Box Operators

```sns
data [>] boxed_process
data <] reverse_process
data [X] terminate
```

**Test**: Do LLMs understand creative arrows?
### Emoji Pipelines

**Attempt**:

```sns
📝 → 🔍 → 📊 → ⚖️ → ✅
# Document → Search → Analyze → Rank → Approve
```

**More complex**:

```sns
👤 💭 → 🔍 🗂️ → ⚖️ 📄 → 🎯 ✨ → 📦 👍
# User thought → Search files → Rank docs → Target best → Package good
```

**Question**: Is this too cryptic or delightfully concise?
### Natural-Language Hybrids

**Pattern A - Verb-first**:

```sns
extract keywords from query → kw
classify query into intents → intent
expand kw with context → terms
```

**Pattern B - SNS with natural conjunctions**:

```sns
q → kw_extract → kw
then q → classify → intent
then kw + intent → expand → terms
```

**Pattern C - Natural language with SNS operators**:

```sns
take query → extract keywords
query → classify as intent
combine keywords with intent → expand to search terms
```

**Test**: Does the added natural language help or hurt?
### Mathematical Notation

**Set theory heavy**:

```sns
results = {r ∈ candidates | r.score > 0.7}
expanded = ⋃(cats → search(q))
final = (results ∪ expanded) \ duplicates
```

**Functional notation**:

```sns
f(q) = {kw: κ(q), intent: ι(q), terms: τ(κ(q), q)}
where:
  κ = extract_keywords
  ι = classify_intent
  τ = expand_terms
```

**Lambda calculus style**:

```sns
λq. (λkw. (λi. {kw, i, τ(kw,i)})(ι(q)))(κ(q))
```

**Test**: Too academic or elegantly compact?
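The set-theory and functional lines above map almost one-to-one onto Python comprehensions, which is one way to check the notation is well-defined. The candidate data and the κ/ι/τ stubs below are invented for illustration.

```python
from types import SimpleNamespace

# Hypothetical scored candidates.
candidates = [SimpleNamespace(name="a", score=0.9),
              SimpleNamespace(name="b", score=0.4),
              SimpleNamespace(name="c", score=0.8)]

# results = {r ∈ candidates | r.score > 0.7}  →  a set comprehension
results = {r.name for r in candidates if r.score > 0.7}

# f(q) = {kw: κ(q), intent: ι(q), terms: τ(κ(q), q)} with stand-in κ, ι, τ
kappa = lambda q: q.split()    # κ = extract_keywords (stub)
iota = lambda q: "info"        # ι = classify_intent (stub)
tau = lambda kw, q: kw + [q]   # τ = expand_terms (stub)
f = lambda q: {"kw": kappa(q), "intent": iota(q), "terms": tau(kappa(q), q)}
```

The one-liner `f` is essentially the lambda-calculus form with the nesting flattened out.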
### Visual Layouts

**Vertical flow**:

```sns
query
  ↓ extract
keywords
  ↓ expand
search_terms
  ↓ search
results
```

**Tree structure**:

```sns
query
├─→ kw_extract → keywords
├─→ classify → intent
└─→ expand(kw, intent) → terms
    └─→ {keywords, intent, terms}
```

**Box diagrams**:

```sns
┌─────────┐
│  query  │
└────┬────┘
     ↓
┌─────────┐
│ analyze │
└────┬────┘
     ↓
┌─────────┐
│ results │
└─────────┘
```

**Test**: Does visual structure help understanding?
### Sentence and Outline Styles

**Sentence style**:

```sns
given: query from user
extract: keywords
determine: intent (info, complaint, procedure)
combine: keywords and intent
expand: into search terms
infer: relevant categories from intent
package: all results together
return: search parameters
```

**Outline style**:

```sns
1. Extract keywords from query
2. Classify intent
3. Expand query
   3a. Use keywords
   3b. Use intent for context
4. Infer categories
5. Return {terms, categories, intent, keywords}
```

**Test**: Is this clearer for non-technical users?
### Onomatopoeia Style

```sns
query *whoosh* → cleaned
cleaned *chop* → keywords
keywords *boom* expand → terms
terms *zoom* search → results
results *ding* → success
```

Or:

```sns
data *splat* → normalized
normalized *zip* → processed
processed *pop* → output
```

**Test**: Do LLMs understand this? (Probably yes; they're trained on comics and social media!)
### Extreme Compression

**Super compact multi-assign**:

```sns
{kw, intent, terms, cats} = {
  q→kw_extract,
  q→classify(intents),
  kw+q→expand,
  intent→infer_cats
}
```

**One-line everything**:

```sns
→{kw:q→kw_extract,intent:q→cls(i),terms:kw+q→expand,cats:intent→infer}
```

**Semicolon separator**:

```sns
kw=q→extract; i=q→cls(intents); t=kw+q→expand; cats=i→infer; →{kw,i,t,cats}
```

**Test**: What's the limit before it becomes unreadable?
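The multi-assign form has a close Python analogue via tuple unpacking, which also exposes a detail the notation hides. The stage functions below are hypothetical stubs.

```python
# Hypothetical stage functions standing in for the notation's named steps.
kw_extract = lambda q: q.split()
classify = lambda q: "info"
expand = lambda kw, q: kw + ["account"]
infer_cats = lambda intent: ["docs"]

q = "reset my password"

# {kw, intent, terms, cats} = {q→kw_extract, q→classify, kw+q→expand, intent→infer_cats}
kw, intent = kw_extract(q), classify(q)
terms, cats = expand(kw, q), infer_cats(intent)
```

Note the ordering constraint the compact form glosses over: `terms` depends on `kw` and `cats` depends on `intent`, so the four right-hand sides cannot all be evaluated before any name is bound.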
### Domain-Specific Dialects

**RAG dialect**:

```sns
# Special RAG operators
q ⟿ docs        # retrieval
docs ⟿ q        # rank by query relevance
result ⟿ n      # take top n
docs ∥ q        # parallel rank
docs ⊗ boost    # apply boost
```

**Data-processing dialect**:

```sns
# Data operators
df ↦ clean      # dataframe map
df ⇶ filter     # filter rows
df ⊞ agg        # aggregate
df ⊠ join(df2)  # join
```

**Test**: Would domain-specific symbols help specialized use cases?
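For the data-processing dialect, each operator corresponds closely to a standard pandas call, so an LLM expanding the notation has an obvious target. This is a sketch with a toy frame; the column names are invented.

```python
import pandas as pd

df = pd.DataFrame({"id": [1, 2, 3], "score": [0.91, 0.42, 0.83], "cat": ["a", "b", "a"]})
df2 = pd.DataFrame({"id": [1, 2, 3], "label": ["x", "y", "z"]})

cleaned = df.assign(score=df["score"].round(1))   # df ↦ clean    (map/transform)
filtered = df[df["score"] > 0.7]                  # df ⇶ filter   (filter rows)
agg = df.groupby("cat")["score"].mean()           # df ⊞ agg      (aggregate)
joined = df.merge(df2, on="id")                   # df ⊠ join(df2)
```

The dialect buys brevity precisely because the expansion target (here pandas) is unambiguous within the domain.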
### Markdown as Semantics

**Emphasis for importance**:

```sns
query → **high_priority_op** → result
data → _optional_step_ → output
item → ~~deprecated_func~~ → new_func
```

**Code spans for types**:

```sns
query: `str` → analyze: `Analysis` → result: `SearchParams`
```

**Test**: Does formatting convey additional meaning to LLMs?
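The backtick-typed flow reads naturally as Python type hints. A minimal sketch, where `Analysis` and `SearchParams` are hypothetical types invented to match the notation:

```python
from dataclasses import dataclass

@dataclass
class Analysis:        # hypothetical intermediate type
    keywords: list
    intent: str

@dataclass
class SearchParams:    # hypothetical output type
    terms: list
    intent: str

def analyze(query: str) -> Analysis:
    return Analysis(keywords=query.split(), intent="info")

def to_params(a: Analysis) -> SearchParams:
    return SearchParams(terms=a.keywords, intent=a.intent)

# query: `str` → analyze: `Analysis` → result: `SearchParams`
result = to_params(analyze("reset password"))
```

In this reading, the backticked names are a type signature for the pipeline, not just decoration.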
### Casual Language

**Attempt**:

```sns
hey, take this query → pull out keywords → cool
now take same query → figure out intent → nice
mix keywords with intent → expand the terms → sweet
intent tells us categories → grab those → perfect
bundle it all up → ship it out
```

**More natural**:

```sns
so we've got this query, right?
first thing → extract keywords
same time → classify what they want
then → take those keywords → expand them
based on intent → infer categories
finally → pack it all together → done
```

**Test**: Does casual language hurt or help LLM understanding?
### Symbol-Free SNS

**Word-only SNS**:

```sns
query becomes keywords via extract
query becomes intent via classify
keywords plus intent becomes terms via expand
intent becomes categories via infer
return object with terms and categories and intent and keywords
```

**Minimal punctuation**:

```sns
extract keywords from query as kw
classify query into intents as intent
expand kw using intent as terms
infer categories from intent as cats
return kw intent terms cats
```

**Test**: Are symbols necessary or just nice-to-have?
### Balanced Approach

```sns
query 🔍 search(docs) → candidates
candidates ⚖️ rank_by(relevance, q) → ranked
ranked ✂️ filter(score > 0.7) → relevant
if relevant.length < 3:
  intent 🎯 infer_cats → new_cats
  q 🔍 expand_search(new_cats) → more
  relevant ++ more ✨ dedupe → relevant
relevant | sort(desc) | top(10) 📦 → results
```

**Test**: Is this the sweet spot?
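Because this balanced form includes control flow, it is worth checking that it expands mechanically into working code. A sketch in Python; the search backends (`search`, `expand_search`, `infer_cats`) are toy stand-ins, not real APIs.

```python
def hybrid_search(q, intent, search, infer_cats, expand_search):
    # query 🔍 search(docs) → candidates, then ⚖️ rank by score
    ranked = sorted(search(q), key=lambda c: c["score"], reverse=True)
    # ✂️ filter(score > 0.7) → relevant
    relevant = [c for c in ranked if c["score"] > 0.7]
    if len(relevant) < 3:  # too few hits: 🎯 infer categories, 🔍 expand
        more = expand_search(q, infer_cats(intent))
        seen, merged = set(), []
        for c in relevant + more:  # relevant ++ more ✨ dedupe
            if c["id"] not in seen:
                seen.add(c["id"])
                merged.append(c)
        relevant = merged
    # relevant | sort(desc) | top(10) 📦 → results
    return sorted(relevant, key=lambda c: c["score"], reverse=True)[:10]

# Toy stand-ins for the search backends.
search = lambda q: [{"id": 1, "score": 0.9}, {"id": 2, "score": 0.5}]
expand_search = lambda q, cats: [{"id": 1, "score": 0.9}, {"id": 3, "score": 0.8}]
infer_cats = lambda intent: ["docs"]

results = hybrid_search("reset password", "info", search, infer_cats, expand_search)
```

Every line of the notation corresponds to one or two lines of the expansion, which suggests the shorthand is carrying real structure rather than just compressing words.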
### Reactive/Signal Notation

**Signal notation**:

```sns
$query → $keywords
$query → $intent
[$keywords, $intent] → $terms
$intent → $categories
effect: when $categories changes → refresh search
computed: $search_params = {$terms, $categories, $intent}
```

**Reactive operators**:

```sns
query => keywords            # derives from
query => intent
[keywords, intent] => terms
intent => categories
```

**Test**: Does reactive notation convey useful semantics?
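The useful semantics here is that derived values track their sources. A minimal sketch of that behavior, not any real reactive framework; the `Signal` class is invented for illustration and recomputes derived values lazily on read.

```python
class Signal:
    """Minimal reactive cell: derived signals recompute from sources on read."""
    def __init__(self, value=None, compute=None, sources=()):
        self._value, self._compute, self._sources = value, compute, sources

    def get(self):
        if self._compute:
            return self._compute(*[s.get() for s in self._sources])
        return self._value

    def set(self, value):
        self._value = value

# $query → $keywords, $query → $intent, [$keywords, $intent] → $terms
query = Signal("reset password")
keywords = Signal(compute=lambda q: q.split(), sources=(query,))
intent = Signal(compute=lambda q: "info", sources=(query,))
terms = Signal(compute=lambda kw, i: kw + [i], sources=(keywords, intent))
```

After `query.set("new query")`, reading `terms` reflects the new source value, which is exactly what the `=>` operator is meant to convey.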
### Your Turn

**Template**:

```sns
# Describe what you're trying to express:

# Your notation:

# Why you think it works:

# Test with an LLM:
```

**Share your experiments**:
- What worked?
- What didn't?
- Any surprises?
- New patterns discovered?
### Success: Emoji-Hybrid Flow

```sns
q 🔍 → {kw, intent}
kw + intent → 🎯 search → 📄
📄 ⚖️ rank → ✂️ filter(>0.7) → ✨
✨.length < 3 ? 🔄 expand : ✅
→ 📦 results
```

- Token count: 28 (vs. 200+ in natural language)
- Savings: 86%
- Tested: works with GPT-4, Claude, Llama
- Status: ✅ Production ready

### Success: Set-Theory Pipeline

```sns
R = {r ∈ C | r.score > θ}
E = ⋃{search(cat, q) | cat ∈ infer(intent)}
F = (R ∪ E) \ duplicates
sort(F, desc) ∩ [0:n]
```

- Status: ✅ Works! Great for set-heavy operations
### Success: Commented SNS

```sns
# Process query
q → kw_extract → keywords
q → classify(intents) → intent

# Expand search
combine keywords and intent → expand → search_terms

# Infer context
intent → infer_categories → categories

# Return structured
→ {search_terms, categories, intent, keywords}
```

- Status: ✅ Most readable, still 70% savings
### Failure: Obscure Unicode

```sns
x ¿¿ y ‽‽ z ※※ result
```

- Problem: LLMs don't reliably recognize rare Unicode symbols
- Lesson: Stick to common symbols

### Failure: Over-Compression

```sns
q→k→e→t→s→r
```

- Problem: Lost all readability; hard to maintain
- Lesson: Balance token savings with clarity

### Failure: Mixed Styles

```sns
query → analyze
get_intent(query)
expand_terms ← keywords + intent
categories = infer(intent)
```

- Problem: Mixing too many styles creates confusion
- Lesson: Pick a style and stick to it
### How to Test Your Notation

**Step 1: Write a traditional prompt**

[Your task in natural language]

**Step 2: Write the SNS version**

[Your experimental notation]

**Step 3: Test with an LLM**

Prompt the LLM with both and compare outputs.

**Step 4: Measure**
- Token count (before/after)
- Output quality (identical?)
- Readability (1-10 scale)
- Maintenance (easy to update?)

**Step 5: Share results**

```sns
# What worked:
# What didn't:
# Token savings:
# Recommended: YES/NO
```
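The token count in Step 4 can be approximated without calling an API. The regex below is a crude proxy invented for this sketch; exact counts depend on the target model's tokenizer (e.g. tiktoken for OpenAI models), and arrows like `→` may cost more than one token in some vocabularies.

```python
import re

def rough_tokens(text):
    """Crude tokenizer proxy: count word runs and individual symbols.
    Real measurements require the target model's own tokenizer."""
    return len(re.findall(r"\w+|[^\w\s]", text))

natural = ("Extract keywords from the query, classify the intent, "
           "expand the terms, and return all of them.")
sns = "q→{kw→expand→t,cls→i→cats}"

savings = 1 - rough_tokens(sns) / rough_tokens(natural)
```

Use the proxy only for quick before/after comparisons; always confirm headline savings percentages with the real tokenizer before reporting them.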
### Challenge: Token Golf

Current record: RAG orchestrator in 20 tokens.

Your attempt:

[Your notation here]

Can you go lower while maintaining readability?

### Challenge: Emoji-Only

Goal: express a complex operation using only emoji plus minimal text.

Example to beat:

```sns
👤💭 🔍 🗂️ ⚖️ 📊 ✂️ 🎯 ✨ 📦 ✅
```

Your attempt:

[Your emoji notation]

### Challenge: Hybrid Sweet Spot

Goal: best balance of SNS and natural language.

Current best: 70% token savings, 9/10 readability.

Your attempt:

[Your hybrid notation]
## Contributing

Want to add your experiments to this playground?

**Format**:

### Experiment: [Name]

**Goal**: [What you're trying to achieve]

**Notation**:

```sns
[Your notation]
```

**Results**:
- Token savings: X%
- Tested with: [LLM names]
- Readability: X/10
- Status: ✅ / ⚠️ / ❌

**Notes**: [Any observations]

---
## Experimental Principles
### What Makes a Good Experiment?
1. **Clear goal**: What are you trying to improve?
2. **Measurable**: Can you quantify success?
3. **Testable**: Can you verify LLM understanding?
4. **Comparable**: How does it compare to alternatives?
5. **Documented**: Can others reproduce it?
### How to Know If It Works
✅ **Good signs**:
- LLM produces correct output
- Token count significantly reduced
- Still readable by humans
- Maintainable in production
- Team members can understand it
❌ **Warning signs**:
- LLM asks for clarification
- Output differs from natural language version
- Impossible to read after 1 week
- No one else understands it
- Saves <30% tokens
---
## Next Steps
**After Experimenting**:
1. Test your best notation in production
2. Measure real-world savings
3. Share results with community
4. Iterate based on feedback
**Resources**:
- [Core Patterns](core-patterns.md) - Established patterns
- [Symbols Reference](symbols.md) - Available symbols
- [Examples](examples/orchestrator.md) - Production code
---
## The Spirit of the Playground
> "SNS is notation, not dogma. If it works for you and your LLMs understand it, it's valid. Experiment freely, share generously, iterate constantly."
**Remember**: The best notation is the one that:
- Saves tokens
- LLMs understand
- Humans can read
- Works in production
**Happy experimenting!** 🚀
---
**Last Updated**: October 2025
**Status**: Active experimentation welcome
**Contributions**: Open to all
Return to [Index](index.md) | Visit [Examples](examples/orchestrator.md)