Get started with SNS-Core in 5 minutes!
1. Open model.sns and copy the entire contents.
2. Paste the model.sns content into any LLM (ChatGPT, Claude, Gemini, etc.).
3. Ask the LLM:

   ```
   Convert this prompt to SNS notation:
   [Your natural language prompt here]
   ```

4. Copy the SNS notation and use it in your AI system!
You provide:

```
Analyze the user query and extract keywords.
Then classify the intent as question, complaint, or request.
Return structured results.
```

LLM converts to:

```
q → kw_extract → kw
q → classify(["question","complaint","request"]) → intent
→ {kw, intent}
```
Token savings: ~50 tokens → ~15 tokens (70% reduction)
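The three SNS lines above read as ordinary function calls. A minimal Python sketch of the same dataflow — note that `kw_extract` and `classify` here are illustrative placeholders, not part of SNS-Core:

```python
def kw_extract(q: str) -> list[str]:
    # Placeholder: a real system would call an LLM or a keyword model.
    stopwords = {"the", "a", "is", "my", "i"}
    return [w.strip("!?.,").lower() for w in q.split() if w.lower() not in stopwords]

def classify(q: str, labels: list[str]) -> str:
    # Placeholder intent classifier: naive rules, for illustration only.
    if "?" in q:
        return "question"
    if any(w in q.lower() for w in ("broken", "refund", "angry")):
        return "complaint"
    return "request"

# q → kw_extract → kw ; q → classify([...]) → intent ; → {kw, intent}
q = "Please update my shipping address"
result = {"kw": kw_extract(q), "intent": classify(q, ["question", "complaint", "request"])}
print(result)
```

The point of SNS is that the notation carries exactly this structure, without the surrounding prose.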
Read Core Patterns to understand:

- Flow: `a → b → c`
- Pipeline: `a | b | c`
- Conditional: `x ? y : z`
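For readers who think in code, the three patterns correspond roughly to familiar constructs. A Python analogy (the toy steps `a`, `b`, `c` are made up for illustration, not part of the spec):

```python
from functools import reduce

# Flow: a → b → c — the output of each step feeds the next
def a(x): return x + 1
def b(x): return x * 2
def c(x): return x - 3

result = c(b(a(10)))  # 10 → 11 → 22 → 19

# Pipeline: a | b | c — the same chaining, written left to right
result2 = reduce(lambda acc, step: step(acc), [a, b, c], 10)

# Conditional: x ? y : z — Python's ternary expression
x = True
branch = "y" if x else "z"

print(result, result2, branch)
```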
Natural Language:

```
Filter the documents to keep only those with score above 0.7,
then sort them by relevance, and return the top 5
```

Convert to SNS:

```
docs | filter(score > 0.7) | sort(relevance) | top(5)
```
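That SNS line reads the same way a list pipeline does in code. A sketch with made-up document dicts:

```python
docs = [
    {"id": 1, "score": 0.9, "relevance": 0.8},
    {"id": 2, "score": 0.5, "relevance": 0.9},  # dropped by the filter
    {"id": 3, "score": 0.8, "relevance": 0.95},
]

# docs | filter(score > 0.7) | sort(relevance) | top(5)
kept = [d for d in docs if d["score"] > 0.7]
ranked = sorted(kept, key=lambda d: d["relevance"], reverse=True)
top5 = ranked[:5]
print([d["id"] for d in top5])  # → [3, 1]
```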
Test it with your LLM—it works!
Take an internal AI-to-AI prompt from your system and convert it:

- Identify operations (extract, classify, filter, etc.)
- Connect them with arrows (→) or pipes (|)
- Use abbreviations (query → q, keywords → kw)
- Remove filler words ("please", "carefully", "then")
- Test with your LLM
Before (Natural Language):

```
You are the orchestrator in a RAG pipeline. Analyze the user's query
to extract keywords and classify the intent into one of these categories:
informational, transactional, or navigational. Then expand the keywords
into search terms and infer the relevant document categories. Finally,
return a structured object containing the search terms, categories,
intent, and keywords.
```
Token count: ~150 tokens
Cost (at 10K queries/day): ~$45/day
After (SNS):

```
q → kw_extract → kw
q → classify(["info","transactional","nav"]) → intent
(kw + q) → expand_q → terms
intent → infer_cats → cats
→ {terms, cats, intent, kw}
```
Token count: ~30 tokens
Cost (at 10K queries/day): ~$9/day
Savings: $36/day = $1,080/month 💰
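The savings figures above follow from simple arithmetic; a sketch you can adapt to your own volumes. The ~$30-per-million-token price is an assumption implied by the numbers above ($45/day for 1.5M tokens) — substitute your provider's actual rate:

```python
PRICE_PER_M_TOKENS = 30.0   # assumed rate; replace with your provider's pricing
QUERIES_PER_DAY = 10_000

def daily_cost(tokens_per_query: int) -> float:
    total_tokens = tokens_per_query * QUERIES_PER_DAY
    return total_tokens / 1_000_000 * PRICE_PER_M_TOKENS

before = daily_cost(150)   # natural-language prompt: $45/day
after = daily_cost(30)     # SNS prompt: $9/day
print(f"${before - after:.0f}/day = ${(before - after) * 30:.0f}/month")
```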
Give this to ChatGPT/Claude (no context needed):

```
q → kw_extract → kw
q → classify(["positive","negative","neutral"]) → sentiment
→ {kw, sentiment}

q = "I love this product!"
```

Expected output:

```json
{
  "kw": ["love", "product"],
  "sentiment": "positive"
}
```

It works! 🎉
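If you automate this smoke test, the LLM's reply can be validated programmatically. A sketch (the reply string is hardcoded here to stand in for a real API response):

```python
import json

# Simulated LLM reply to the SNS program above
reply = '{"kw": ["love", "product"], "sentiment": "positive"}'
data = json.loads(reply)

# Check that the output honors the → {kw, sentiment} contract
assert isinstance(data["kw"], list)
assert data["sentiment"] in {"positive", "negative", "neutral"}
print("output matches the SNS contract")
```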
```
# Extract and analyze
text → extract_keywords → keywords

# Classify
text → classify(categories) → category

# Pipeline
data | step1 | step2 | step3

# Conditional
score > 0.7 ? keep : discard

# Filter collection
[items] | filter(condition)

# Multiple operations
input → op1 → result1
input → op2 → result2
→ {result1, result2}

# Combine inputs
(input1 + input2) → process → output

# Check and branch
valid ? process : reject
```
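The last two entries — combining inputs and branching — can likewise be read as ordinary code. A Python analogy with illustrative function names:

```python
def process(payload: str) -> str:
    return payload.upper()

def reject(payload: str) -> str:
    return "rejected"

# (input1 + input2) → process → output
input1, input2 = "hello ", "world"
output = process(input1 + input2)

# valid ? process : reject
valid = len(output) > 0
routed = process(output) if valid else reject(output)
print(output, routed)
```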
- Convert 1 internal AI prompt to SNS
- Test it with your LLM
- Measure token savings
- Convert all high-volume internal prompts
- Calculate monthly cost savings
- Share results with team
- Consider training an SLM on SNS (see SLM Training Guide)
- Build SNS into your deployment pipeline
- Contribute your use case back to the community
- Questions: GitHub Discussions
- Bug Reports: GitHub Issues
- Examples: See examples/ directory
- Full Docs: See README.md
- Start small: Convert one prompt first
- Use model.sns: Let the LLM do the conversion
- Verify quality: Compare outputs to natural language version
- Measure savings: Track token counts
- Share learnings: Contribute back to the community
Ready to save 60-85% on your AI costs? Let's go! 🚀
Repository: github.com/EsotericShadow/sns-core