- Rename `--metrics` flag to `--tokens` for clarity
- Add `--cost` flag to enable cost estimation for each model
- Update README with comprehensive multi-model comparison example
- Include new CLI options in configuration and help documentation
- Improve documentation to highlight token and cost tracking benefits
This change introduces more granular insights into model interactions, allowing users to:
- Compare token consumption across different models
- Estimate computational costs
- Make informed decisions about model selection
- Understand resource utilization during multi-model comparisons
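The renamed flags can be exercised directly from the command line. A sketch of typical invocations, based on the examples added in this change (model names are illustrative; the `--cost` invocation assumes the documented behavior that it automatically enables `--tokens`):

```shell
# Display token usage per model (formerly --metrics)
aia my_prompt -m gpt-4o,claude-3-sonnet --tokens

# Add cost estimation; --cost implies --tokens
aia my_prompt -m gpt-4o,claude-3-sonnet --cost
```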
**README.md** — 78 additions, 0 deletions
@@ -79,6 +79,47 @@ For more information on AIA visit these locations:

---

## Concurrent Multi-Model Comparison

One of AIA's most powerful features is the ability to send a single prompt to multiple AI models simultaneously and compare their responses side-by-side, complete with token usage and cost tracking.

```bash
# Compare responses from 3 models with token counts and cost estimates
```
**docs/guides/models.md** — 78 additions, 0 deletions
@@ -167,6 +167,84 @@ Model Details:

- **Error Handling**: Invalid models are reported but don't prevent valid models from working
- **Batch Mode Support**: Multi-model responses are properly formatted in output files

### Token Usage and Cost Tracking

One of AIA's most powerful capabilities is real-time tracking of token usage and cost estimates across multiple models. This enables informed decisions about model selection based on both quality and cost.

#### Enabling Token Tracking

```bash
# Display token usage for each model
aia my_prompt -m gpt-4o,claude-3-sonnet --tokens

# Include cost estimates (automatically enables --tokens)
```

Assign specific roles to each model in multi-model mode to get diverse perspectives on your prompts. Each model receives a prepended role prompt that shapes its perspective.