64 changes: 64 additions & 0 deletions .claude/skills/adding-a-new-skill/SKILL.md
@@ -0,0 +1,64 @@
---
name: adding-a-skill
description: extending yourself with a new reusable skill by interviewing the user
---

If the user wants to add a new skill, you can help them with this:

1. Ask the user for a short name and description of the skill. The name should be
something that can be rendered as a short (10-30 character) descriptive sequence
of words separated by hyphens. The description should be a line of text that is
focused on conveying to a future agent when the skill would be appropriate to
use and roughly what it does.

2. Ask the user for details. You can build up an idea of what the skill should do
over multiple rounds of questioning. You want to find out:

- If the skill should involve any key thoughts or considerations.
- If the skill has an order of steps, or is an unordered set of tasks.
- If the skill involves running sub-agents and if so how many.
- **If the skill involves a lot of work** that might benefit from splitting
into subagent tasks (see below for considerations).
- If the skill should include commands to run.
- If the skill should include code to write.
- For code or commands, whether there is specific text to include, or more of a
  sketch or template of text, _like_ an example.
- Any hard rules that an agent doing the skill should ALWAYS or NEVER do.
- A set of conditions for stopping, looping/extending, or resetting/restarting.
- How the result of applying the skill should be conveyed to the user.

**Subagent considerations**

Skills that involve a lot of work may benefit from splitting into pieces and
running each piece as a subagent. This keeps each subagent focused on a
moderate amount of work so it doesn't get lost or wander off track. If the
skill might use subagents, identify:

- How the work could be split into moderate-sized pieces
- What information each subagent needs to do its piece

If it does, the skill file should have an "Inputs" section listing what each
piece needs, and should suggest a format for the subagent prompt.

3. Once you have learned this information from the user, assemble it into a
file in the repository. Add a file named `.claude/skills/<skill-name>/SKILL.md`
with the following structure:

- YAML frontmatter with the fields `name` and `description` drawn from
the name and description.
- An introductory paragraph or two describing the goal of the skill and
any thoughts or special considerations to perform, as well as any
description of the meta-parameters like how to split work among subagents
and how to stop/loop/restart. If the skill involves a lot of work,
suggest how it might be split into moderate-sized subagent tasks.
- **If the skill might use subagents**: An "Inputs" section that lists
what information is needed for each piece, and suggests a format for the
subagent prompt. This section comes right after the overview.
- Either a sequential list of steps or an unordered list of tasks.
- Any code or commands in either specific or example form.
- Any of the ALWAYS/NEVER conditions.
- A "Completion" section describing how to summarize the work, noting that
if invoked as a subagent, the summary should be passed back to the
invoking agent.
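
Putting the structure above together, the assembled file might follow a
skeleton like this (every name and section body is a placeholder to be filled
in from the interview):

```
---
name: <skill-name>
description: <one line telling a future agent when and why to use this skill>
---

A paragraph or two on the goal of the skill, any special considerations,
and (if relevant) how to split work among subagents and when to
stop/loop/restart.

# Inputs (only if the skill might use subagents)

What information each piece needs, and a suggested subagent prompt format.

# Steps

1. <either a sequential list of steps, or an unordered list of tasks>
2. <any code or commands, in specific or example form>

# ALWAYS / NEVER

- ALWAYS <hard rule>
- NEVER <hard rule>

# Completion

How to summarize the work; if invoked as a subagent, pass the summary
back to the invoking agent.
```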

When you're done, save that file and then present the user with a link to it
so they can open it and review it.
215 changes: 215 additions & 0 deletions .claude/skills/adding-tests/SKILL.md
@@ -0,0 +1,215 @@
---
name: adding-tests
description: analyzing a change to determine what tests are needed and adding them to the test suite
---

# Overview

This skill is for analyzing a code change and adding appropriate test coverage.
It covers both unit tests and randomized/fuzz tests, ensuring that new
functionality is tested and bug fixes include regression tests.

This skill involves a fair amount of work. Consider splitting it into pieces and
running each piece as a subagent (e.g., one for analyzing test needs, one for
writing unit tests, one for randomized tests). Keep each subagent focused on a
moderate amount of work so it doesn't get lost or wander off track.

The output is either confirmation that tests were added (with details), or
confirmation that no additional tests are needed.

# Inputs

Before starting the analysis, gather the following information (if running as a
subagent, the invoking agent should provide these; otherwise, determine them
yourself or ask the user):

1. **Git range**: The git command to get the diff (e.g., `git diff master...HEAD`).

2. **Type of change**: Is this a new feature, bug fix, refactor, or performance
change?

3. **Bug/issue reference** (if applicable): For bug fixes, the issue number or
description of what was broken.

4. **Specific test requirements** (optional): Any specific testing requirements
the user has mentioned.

If invoking as a subagent, the prompt should include: "Analyze and add tests
for: <change type>. Get the diff using `<git-command>`. <issue reference if
applicable> <specific requirements>"

# Analyzing Test Needs

## Step 1: Understand the Change

Get the diff using the command provided by the invoking agent.
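
As a sketch of this step (the scratch repository, file name, and range below
are stand-ins; the real range comes from the invoking agent, e.g.
`git diff master...HEAD`):

```shell
# Build a throwaway repo so the sketch is self-contained.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=t -c user.email=t@t.t commit -q --allow-empty -m "base"
printf 'int add(int a, int b) { return a + b; }\n' > Math.cpp
git add Math.cpp
git -c user.name=t -c user.email=t@t.t commit -q -m "add Math.cpp"

# The invoking agent supplies the real range; HEAD~1...HEAD stands in here.
git diff --stat HEAD~1...HEAD   # which files changed, and by how much
git diff HEAD~1...HEAD          # full diff, used to categorize the change
```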

Categorize the change:
- **New feature**: Adding new functionality
- **Bug fix**: Correcting incorrect behavior
- **Refactor**: Changing implementation without changing behavior
- **Performance**: Optimizing existing code

## Step 2: Find Existing Tests

Locate tests related to the changed code:

1. Look for test files with similar names (e.g., `Foo.cpp` → `FooTests.cpp`)
2. Search for tests that reference the modified functions/classes
3. Check if there are integration tests that exercise this code path

```bash
# Find test files
find src -name "*Tests.cpp" | xargs grep -l "FunctionName"
```

## Step 3: Determine What Tests Are Needed

### For New Features

- Unit tests for each new public function/method
- Tests for expected behavior with valid inputs
- Tests for edge cases (empty input, max values, etc.)
- Tests for error handling with invalid inputs
- Integration tests if the feature involves multiple components

### For Bug Fixes

- A regression test that would have failed before the fix
- The test should exercise the exact condition that caused the bug
- Include a comment referencing the issue/bug number if available

### For Refactors

- Existing tests should still pass (no new tests typically needed)
- If existing test coverage is inadequate, add tests before refactoring

### For Performance Changes

- Ensure functional tests still pass
- Consider adding benchmark tests if appropriate
- Consider adding metrics or Tracy `ZoneScoped` annotations to help
  quantify performance
- Consider adding a `LogSlowExecution` wrapper that will flag any
long-running units of work on the IO service

### Some Special Cases

- Write fee bump tests to go along with any new regular transaction tests or any
logic changes to transaction processing and application.

## Step 4: Check for Randomized Test Needs

For changes affecting:
- Transaction processing
- Consensus/SCP
- Ledger state management
- Serialization/deserialization
- Any protocol-critical code

Consider whether randomized testing is appropriate:
- Fuzz targets for parsing/deserialization
- Property-based tests for invariants
- Simulation tests for distributed behavior

# Writing Tests

## Unit Test Patterns

Find existing tests in the same area and follow their patterns. Common patterns
in this codebase:

1. **Test fixture setup**: Look for how test fixtures are created
2. **Assertion style**: Match the assertion macros used elsewhere
3. **Test naming**: Follow the naming convention of nearby tests
4. **Helper functions**: Reuse existing test helpers rather than creating new ones
5. **Test strictness**: Ensure tests are strict and fail on any unexpected
   behavior; check:
- [ ] Do my tests have timeouts? (no infinite hangs)
- [ ] Do my tests assert on failure? (no silent failures or early returns)
- [ ] Am I using logging where I should use assert/panic?
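
The checklist can be illustrated with a small shell sketch (the `run_strict`
wrapper and the toy commands are hypothetical; `timeout` is GNU coreutils):

```shell
set -e
# Run a test command with a hard runtime bound, and fail loudly (non-zero
# exit) instead of just logging when the test fails or hangs.
run_strict() {
    timeout 5 "$@" || { echo "FAIL: $*" >&2; return 1; }
}

run_strict true                        # a passing "test": exits 0 in time
if run_strict false 2>/dev/null; then  # a failing "test" must not pass silently
    echo "harness is not strict" >&2
    exit 1
fi
echo "strict harness ok"
```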

## Test File Organization

- Tests typically live in `src/` alongside the code they test, in a `test/`
subdirectory.
- Test files are usually named `*Tests.cpp`
- There are often "test utility" helper files named `*TestUtils.cpp`
- Tests are organized into test suites by component
- There are also some general testing utility files in `src/test`
- The unit test framework is "Catch2", a common C++ framework.

## Adding a Unit Test

1. Find the appropriate test file (or create one following conventions)
2. Add the test case following existing patterns
3. Ensure the test is self-contained and doesn't depend on external state
4. Run the new test in isolation to verify it works

## Adding a Fuzz Target

If adding a fuzz target:

1. Check existing fuzz targets in the codebase for patterns
2. Create a target that exercises the specific code path
3. Ensure the target can handle arbitrary input without crashing (except for
intentional assertion failures)
4. Document what the fuzz target is testing

# Output Format

Report what was done:

```
## Tests Added

### Unit Tests

1. **src/ledger/LedgerManagerTests.cpp**: `processEmptyTransaction`
- Tests that empty transactions are rejected with appropriate error
- Regression test for issue #1234

2. **src/ledger/LedgerManagerTests.cpp**: `processTransactionWithMaxOps`
- Tests boundary condition at maximum operation count

### Randomized Tests

1. **src/fuzz/FuzzTransactionFrame.cpp**: Extended to cover new transaction type
- Added generation of the new transaction variant

## No Additional Tests Needed

[If applicable, explain why existing coverage is sufficient]
```

# ALWAYS

- ALWAYS find and follow existing test patterns in the same area
- ALWAYS include regression tests for bug fixes
- ALWAYS test both success and failure paths
- ALWAYS test edge cases and boundary conditions
- ALWAYS run new tests to verify they pass
- ALWAYS run new tests with the bug still present (if a regression test) to verify they would have caught it
- ALWAYS reuse existing test helpers and fixtures
- ALWAYS keep tests focused and independent

# NEVER

- NEVER add tests that depend on external state or ordering
- NEVER add tests that are flaky or timing-dependent
- NEVER duplicate existing test coverage
- NEVER write tests that test implementation details rather than behavior
- NEVER add tests without running them
- NEVER skip randomized test consideration for protocol-critical code
- NEVER create new test helpers when suitable ones exist

# Completion

Summarize your work as follows:

1. Summary of tests added (count and type)
2. Details of each test added
3. Confirmation that new tests pass
4. Any notes about test coverage that might still be lacking

If invoked as a subagent, pass this summary back to the invoking agent.
70 changes: 70 additions & 0 deletions .claude/skills/configuring-the-build/SKILL.md
@@ -0,0 +1,70 @@
---
name: configuring-the-build
description: modifying build configuration to enable/disable variants, switch compilers or flags, or otherwise prepare for a build
---

# Overview

The build works like this:
- We start by running `./autogen.sh`
- `autogen.sh` runs `autoconf` to turn `configure.ac` into `configure`
- `autogen.sh` also runs `automake` to turn `Makefile.am` into `Makefile.in` and `src/Makefile.am` into `src/Makefile.in`
- We then run `./configure`
- `configure` turns `Makefile.in` into `Makefile` and `src/Makefile.in` into `src/Makefile`
- `configure` also turns `config.h.in` into `config.h` that contains some variables
- `configure` also writes `config.log`; if there are errors, they will be recorded there

- ALWAYS run `./autogen.sh` and `./configure` from the top level, never from a subdirectory
- ALWAYS configure with `--enable-ccache` for caching
- ALWAYS configure with `--enable-sdfprefs` to inhibit noisy build output
- NEVER edit `configure` directly, only ever edit `configure.ac`
- NEVER edit `Makefile` or `Makefile.in` directly, only ever edit `Makefile.am`

To change configuration settings, re-run `./configure` with new flags.
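
For example, a routine reconfiguration following the rules above (flags as
recommended in this document) might look like:

```bash
# From the top level, as the rules above require:
./autogen.sh                                   # only needed if configure.ac or a Makefile.am changed
./configure --enable-ccache --enable-sdfprefs  # plus any variant flags you need
```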

You can see the existing configuration flags by looking at the head of `config.log`.

## Configuration variables

To change compiler from clang to gcc, switch the value you pass for CC and CXX.
For example run `CXX=g++ CC=gcc ./configure ...` to configure with gcc. We want
builds to always work with gcc _and_ clang.

To alter compile flags (say, to turn optimization or debug info on or off),
change `CXXFLAGS`. For example, run `CXXFLAGS='-O0 -g' ./configure ...` to build
unoptimized and with debug info. Normally you should not have to change these.

Sometimes you will need to change to a different implementation of the C++
standard library. To do this, pass `-stdlib=libc++` or `-stdlib=libstdc++`
in `CXXFLAGS` explicitly. But again, normally you don't need to do this.

## Configuration flags

Here are some common configuration flags you might want to change:

- `--disable-tests` turns off `BUILD_TESTS`, which excludes unit tests and all
test-support infrastructure from core. We want this build variant to work
since it is the one we ship, but it is uncommon when doing development.

- `--disable-postgres` turns off PostgreSQL backend support in core, leaving
  only SQLite. Tests will run faster, and this is also a configuration we want
  to keep working (we will remove Postgres entirely someday).

There are also some flags that turn on compile-time instrumentation for
different sorts of testing. Turn these on when doing specific diagnostic tests,
and/or to check that nothing is breaking by accident. If you turn any on, you
will need to do a clean build -- the existing object files will have the wrong
content.

- `--enable-asan` turns on address sanitizer.
- `--enable-threadsanitizer` same, but for thread sanitizer.
- `--enable-memcheck` same, but for memcheck.
- `--enable-undefinedcheck` same, but for undefined-behaviour sanitizer.
- `--enable-extrachecks` turns on C++ stdlib debugging, slows things down.
- `--enable-fuzz` builds core with fuzz instrumentation, plus fuzz targets.
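
For instance, switching into an ASan build (flags per the list above; the
`make` targets are the standard automake ones) could look like:

```bash
make clean    # instrumentation changed, so existing object files are stale
./configure --enable-ccache --enable-sdfprefs --enable-asan
make -j"$(nproc)"
```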

There is more you can learn by reading `configure.ac` directly but the
instructions above ought to suffice for 99% of tasks. Try not to do anything
too strange with the configuration.

When in doubt, or if things get stuck, you can always re-run `./autogen.sh`
and `./configure`.