Analyzed 12 GitHub MCP tools for response structure and usefulness. Average rating: 4.00/5 — most tools are well-designed for agentic work. Key findings: security tools return massive payloads (95KB), some REST endpoints ignore pagination, while GraphQL-based tools deliver optimal structure. Three tools flagged for improvement.
# Full Structural Analysis Report

## Executive Summary

| Metric | Value |
| --- | --- |
| Tools Analyzed | 12 |
| Total Tokens (Today) | 37,725 |
| Average Usefulness Rating | 4.00/5 |
| Best Rated Tool | get_file_contents: 5/5 |
| Worst Rated Tool | get_me: 1/5 |
| Context-Efficient Tools | 7 tools under 500 tokens |
| Oversized Responses | 3 tools exceeded MCP limits |
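The token figures throughout this report are measured per tool response. The report does not state how they were counted, so the sketch below assumes a common ~4 characters-per-token heuristic; the helper name and sample payload are illustrative only.

```python
import json

def estimate_tokens(payload) -> int:
    """Rough token estimate for a tool response, assuming ~4 characters per token.

    This heuristic is an assumption; the report does not document its exact
    counting method, and a real analysis would use the model's own tokenizer.
    """
    text = payload if isinstance(payload, str) else json.dumps(payload)
    return max(1, len(text) // 4)

# Illustrative only: a trimmed-down shape of a list_branches response.
sample_response = [
    {"name": "main", "commit": {"sha": "abc123"}, "protected": True},
]
print(estimate_tokens(sample_response))  # small responses land well under 500 tokens
```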
## Usefulness Ratings for Agentic Work

| Tool | Toolset | Rating | Assessment |
| --- | --- | --- | --- |
| get_file_contents | repos | ⭐⭐⭐⭐⭐ | Excellent - minimal overhead, just content |
| list_issues | issues | ⭐⭐⭐⭐⭐ | Excellent - rich GraphQL structure with full metadata |
| list_discussions | discussions | ⭐⭐⭐⭐⭐ | Excellent - clean GraphQL with cursor pagination |
| list_branches | repos | ⭐⭐⭐⭐⭐ | Excellent - minimal array, essential data only |
| search_repositories | search | ⭐⭐⭐⭐⭐ | Excellent - comprehensive metadata for discovery |
| list_commits | repos | ⭐⭐⭐⭐⭐ | Excellent - balanced detail for commit analysis |
| get_label | labels | ⭐⭐⭐⭐⭐ | Excellent - minimal, complete label metadata |
| search_issues | search | ⭐⭐⭐⭐ | Good - full issue bodies, some user object redundancy |
| list_pull_requests | pull_requests | ⭐⭐⭐ | Adequate - 18KB payload with duplicated repo objects |
| list_workflows | actions | ⭐⭐⭐ | Adequate - pagination broken, returns all 192 workflows |
| list_code_scanning_alerts | security | ⭐⭐ | Limited - 95KB payload, no pagination, hard to parse |
| get_me | context | ⭐ | Unusable - 403 error in the GitHub Actions integration context |
## Tool-by-Tool Analysis

| Tool | Toolset | Tokens | Type | Rating | Notes |
| --- | --- | --- | --- | --- | --- |
| get_file_contents | repos | — | — | ⭐⭐⭐⭐⭐ | Clean text response with minimal overhead. Just returns file contents. Perfect for agents - efficient and actionable. |
| list_branches | repos | 100 | array | ⭐⭐⭐⭐⭐ | Minimal, clean array response. Just essential data: name, SHA, protection status. Highly efficient for agents. |
| get_label | labels | 50 | object | ⭐⭐⭐⭐⭐ | Minimal response with just label metadata. Efficient and complete for label operations. |
| list_discussions | discussions | 275 | object | ⭐⭐⭐⭐⭐ | GraphQL structure with cursor pagination. Clean response with category, user, timestamps. Well-organized and efficient for agents. |
| list_commits | repos | 325 | array | ⭐⭐⭐⭐⭐ | Clean commit history with message, author, committer. Good balance of detail. Efficient for commit analysis tasks. |
| search_repositories | search | 350 | object | ⭐⭐⭐⭐⭐ | Excellent search results with rich metadata. Includes stars, topics, language, dates. All needed discovery data present. Immediately actionable. |
| list_issues | issues | 1,650 | object | ⭐⭐⭐⭐⭐ | GraphQL response with cursor pagination. Rich issue details including full body, labels, user info. Excellent structure for agents. Complete and actionable. |
| search_issues | search | 2,600 | object | ⭐⭐⭐⭐ | Comprehensive search results with full issue bodies. Includes extensive user info with many URL fields. Some redundancy in the user object, but overall good. Actionable for finding specific issues. |
| list_workflows | actions | 3,900 | oversized | ⭐⭐⭐ | Response too large (15KB) for a single workflow with perPage=1. Returns the full list of 192 workflows regardless of the perPage parameter; pagination appears broken. Usable but inefficient. |
| list_pull_requests | pull_requests | 4,650 | oversized | ⭐⭐⭐ | Response too large (18KB) for MCP, requires a file system read. Includes full repo objects for both head and base with all fields duplicated. Heavy redundancy. Usable but context-inefficient. |
| list_code_scanning_alerts | security | 23,800 | oversized | ⭐⭐ | Massive payload (95KB) for security alerts. Returns extensive alert details with full rule descriptions, locations, and analysis. Pagination is not respected. Context-heavy and hard to process. Needs filtering options. |
| get_me | context | 0 | error | ⭐ | 403 error - not accessible by integration. The tool is unavailable in the GitHub Actions context, making it useless for agentic workflows running in CI. |
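The list_workflows finding above can be reproduced with a simple check: request a small page and compare the requested page size with what actually comes back. The sketch below operates on an already-fetched response body; the field names (`total_count`, `workflows`) follow the GitHub REST workflows payload shape, and the helper itself is illustrative rather than part of the MCP server.

```python
def pagination_respected(response: dict, requested_per_page: int) -> bool:
    """Check whether a list_workflows-style response honored the requested page size.

    Assumes the REST shape {"total_count": N, "workflows": [...]}; the report
    observed 192 workflows coming back even with perPage=1, which this check flags.
    """
    returned = len(response.get("workflows", []))
    return returned <= requested_per_page

# Illustrative response mimicking the broken behavior described above.
broken = {"total_count": 192, "workflows": [{"id": i, "name": f"wf-{i}"} for i in range(192)]}
print(pagination_respected(broken, requested_per_page=1))  # False -> pagination ignored
```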
## 30-Day Trend Summary

| Metric | Value |
| --- | --- |
| Data Points | 26 records (2 analysis runs) |
| Date Range | Feb 6 - Feb 10, 2026 |
| Tools Tracked | 12 unique tools |
| Average Daily Tokens | ~38,000 tokens per analysis |
| Rating Trend | Stable at 4.00-4.07 average |
**Notable Changes:**

- list_issues token count increased from 650 → 1,650 (154% increase) due to richer issue body content in the latest analysis
- list_pull_requests token count increased from 2,800 → 4,650 (66% increase) with full repo duplication
- list_workflows token count increased from 3,200 → 3,900 (22% increase)
- list_code_scanning_alerts jumped from unmeasured to 23,800 tokens on its first full test
## Recommendations

🟢 **High-Value Tools** (Rating 4-5, recommend for frequent use):

- get_file_contents, list_branches, get_label — ultra-efficient, minimal context
- list_issues, list_discussions — excellent GraphQL structure
- search_repositories, list_commits — well-balanced metadata

🟡 **Use with Caution** (Rating 3, context-heavy):

- list_pull_requests — consider requesting minimal fields or fetching a single PR instead of listing
- list_workflows — pagination broken, consider filtering by name
- search_issues — prefer list_issues with filters when possible

🔴 **Needs Improvement** (Rating 1-2):

- list_code_scanning_alerts — add filtering parameters, respect pagination, reduce payload size
- get_me — fix permissions for the GitHub Actions integration context

💡 **Context-Efficient Alternatives** (see the sketch after this list):

- Instead of list_pull_requests → use search_pull_requests with minimal fields
- Instead of list_code_scanning_alerts → add severity/state filters to reduce payload
- Instead of list_workflows → use get_workflow when the workflow name is known
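As one concrete example of the filter-based alternative, the snippet below sketches what a narrower list_code_scanning_alerts call could look like as an MCP tools/call request. The severity and state argument names mirror the underlying GitHub code scanning REST parameters, but treat the exact names accepted by the MCP server as an assumption to verify against its tool input schema; the owner/repo values are hypothetical.

```python
# A minimal sketch, assuming the MCP server exposes severity/state/perPage
# arguments on list_code_scanning_alerts (verify against the tool's input schema).
filtered_alerts_request = {
    "method": "tools/call",
    "params": {
        "name": "list_code_scanning_alerts",
        "arguments": {
            "owner": "example-org",   # hypothetical repository owner
            "repo": "example-repo",   # hypothetical repository name
            "state": "open",          # drop fixed/dismissed alerts
            "severity": "high",       # only high-severity findings
            "perPage": 10,            # cap the page size
        },
    },
}

print(filtered_alerts_request)
```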
## Visualizations

### Response Size by Toolset

Analysis: The security toolset dominates with 23,800 tokens on average. The actions and pull_requests toolsets are also heavy (3,900-4,650 tokens). The repos, discussions, and labels toolsets are highly efficient (<500 tokens on average).

### Usefulness Ratings by Toolset

Analysis: Most toolsets score 4+ (good to excellent). The security and context toolsets score lower due to oversized payloads and permission errors respectively. Green bars (repos, discussions, labels, issues, search) indicate excellent agentic workflow support.

### Daily Token Trend (30-Day Window)

Analysis: Token usage increased significantly from Feb 6 (12,750 tokens) to Feb 10 (37,725 tokens), a 196% increase. This reflects deeper testing, including security alerts and larger issue/PR payloads. The trend suggests response sizes are growing as more complete data is returned.

### Token Size vs Usefulness Scatter

Analysis: The sweet spot is the top-left quadrant (low tokens, high usefulness). Most repos, search, discussions, and labels tools cluster there. Security and pull_requests tools sit in the bottom-right (high tokens, lower usefulness). The ideal tools for agents are those rated 5 with <500 tokens.
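A rough reproduction of that scatter, using the per-tool token counts and star ratings from the tool-by-tool table above (matplotlib is assumed to be installed; get_file_contents is omitted because its token count is not reported):

```python
import matplotlib.pyplot as plt

# (tokens, rating) pairs taken from the tool-by-tool table above.
tools = {
    "get_label": (50, 5),
    "list_branches": (100, 5),
    "list_discussions": (275, 5),
    "list_commits": (325, 5),
    "search_repositories": (350, 5),
    "list_issues": (1650, 5),
    "search_issues": (2600, 4),
    "list_workflows": (3900, 3),
    "list_pull_requests": (4650, 3),
    "list_code_scanning_alerts": (23800, 2),
    "get_me": (0, 1),
}

fig, ax = plt.subplots()
for name, (tokens, rating) in tools.items():
    ax.scatter(tokens, rating)
    ax.annotate(name, (tokens, rating), fontsize=7)

ax.set_xscale("symlog")  # symlog keeps the zero-token get_me point plottable
ax.set_xlabel("Response size (tokens)")
ax.set_ylabel("Usefulness rating (stars)")
ax.set_title("Token Size vs Usefulness")
plt.tight_layout()
plt.show()
```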