Maestro Mobile Testing Skill for Claude Code / AI Agents #2985
This looks ace! 🎉 My first bit of feedback would be to soften some of the more opinionated bits that work for your project but might not fit all contexts. e.g. use IDs if there are translations AND they're available, AND ask the user first if they prefer them over test duplication. Typically, folks fluent in Maestro who aren't doing i18n go in the other direction and optimise for text labels, which create human-readable tests, which makes for maintainable tests (although the latter may matter less for agent-maintained tests). Secondly, I'd consider evolving some of the useful patterns that exist. e.g. consider some of the recipes in the docs, highlighting that
Thanks @Fishbowler, really appreciate the feedback! Just shipped v1.1.0 addressing both points:
1. Softened the selector stance: Pattern 1 is now "Selector Strategy: testID vs Text" with a context-dependent decision table. The original "always use testIDs, never text" was too opinionated; you're right that single-language projects benefit from text selectors for readability.
2. Added idiomatic patterns from the docs.
Also updated the new-test checklist to reference all three patterns.
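For anyone skimming the thread, the two selector styles the decision table weighs look roughly like this in a flow (the label and testID below are invented examples, not taken from the skill itself):

```yaml
# Sketch only; "Sign In" and "auth-submit" are made-up examples.
# Text selector: human-readable, but breaks under i18n:
- tapOn: "Sign In"

# testID selector: stable across translations, less readable:
- tapOn:
    id: "auth-submit"
```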
Maestro Mobile Testing — Agent Skill for Claude Code
Hey Maestro community! 👋
I built a Maestro-focused agent skill that gives AI coding assistants (Claude Code, and any agent supporting the open skills format) deep knowledge of Maestro patterns and best practices for mobile E2E testing.
What is it?
An installable skill that teaches AI agents how to write correct Maestro tests — covering patterns that LLMs frequently get wrong out of the box:
- No `async/await`, no `fetch()` in GraalJS scripts: use `http.get()` and `json()` instead
- `clearState` does NOT clear the iOS Keychain: `expo-secure-store` tokens persist across resets
- `extendedWaitUntil` to avoid race conditions
- `when:` conditions for guest vs authenticated states
- `{domain}-{element}` and `{domain}-{element}-{id}` testID patterns
- `mobile-dev-inc/action-maestro-cloud@v2` for CI
Install
Repository
🔗 github.com/tovimx/maestro-mobile-testing-skill
Why?
Maestro has great docs, but AI coding assistants don't always know the nuances: they'll generate `async/await` in GraalJS scripts, miss the Keychain gotcha, or write flaky tests without proper `extendedWaitUntil`. This skill front-loads all those patterns so the AI gets it right the first time.
Works with any React Native / Expo app. The skill is framework-agnostic: it covers Maestro patterns, not app-specific logic.
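To make a couple of those patterns concrete, here's a minimal hypothetical flow sketch; the app id, sub-flow path, and testIDs are invented for illustration:

```yaml
# Hypothetical flow: appId, file path, and testIDs are made up.
appId: com.example.myapp
---
- launchApp:
    clearState: true        # note: does NOT clear the iOS Keychain

# Conditional sub-flow: only log in when the guest screen is showing
- runFlow:
    when:
      visible: "Continue as guest"
    file: flows/login.yaml

# Explicit wait instead of a fixed sleep, to avoid race conditions
- extendedWaitUntil:
    visible:
      id: "home-header"     # {domain}-{element} testID pattern
    timeout: 10000
```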
Feedback welcome!
This is the first Maestro-specific skill in the ecosystem (there are equivalents for Playwright and Cypress). Would love feedback from the community on what patterns to add or improve.
Thanks for building such a great testing framework! 🎉