
Testing: Add performance benchmark suite #38

@knowlen

Description

As identified in the PR #32 review, we should add performance benchmarks to verify that the refactored architecture actually improves performance and to track it over time.

Proposed Benchmark Suite

1. Architecture Benchmarks

Compare the old monolithic client against the new mixin-based architecture (a rough timing sketch follows this list):

  • Client initialization time
  • Method resolution time
  • Memory usage comparison
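
As a rough sketch of the method-resolution measurement, attribute lookup on the mixin-based class can be timed with timeit; the method name below is a placeholder, not necessarily a real esologs method:

import timeit

# Time attribute lookup on the Client class (no credentials needed).
# 'get_character_by_id' is a placeholder; substitute a real client method.
lookup_time = timeit.timeit(
    "getattr(Client, 'get_character_by_id', None)",
    setup="from esologs import Client",
    number=1_000_000,
)
print(f'Per-lookup time: {lookup_time * 1000:.0f} ns')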

2. Import Time Benchmarks

# Measure import times
import time

start = time.perf_counter()
from esologs import Client
end = time.perf_counter()
print(f'Import time: {end - start:.4f}s')
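
Because Python caches modules in sys.modules, re-importing in the same process measures nothing after the first run; spawning a fresh interpreter per measurement gives a more honest number. A minimal sketch:

import subprocess
import sys

# Each run uses a clean interpreter so the import is never cached.
code = (
    "import time; t = time.perf_counter(); "
    "import esologs; "
    "print(time.perf_counter() - t)"
)
times = [
    float(subprocess.run([sys.executable, '-c', code],
                         capture_output=True, text=True, check=True).stdout)
    for _ in range(5)
]
print(f'Median import time: {sorted(times)[len(times) // 2]:.4f}s')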

3. API Call Benchmarks

  • Single API call performance
  • Concurrent API calls (see the sketch after this list)
  • Large response handling
  • Token refresh performance
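
Assuming the client exposes async methods (adjust if the API is synchronous), the concurrent-call case could be sketched with asyncio.gather; the constructor arguments and get_character_by_id are placeholders:

import asyncio
import time

from esologs import Client

async def time_concurrent_calls(n: int = 10) -> float:
    # Placeholder construction; substitute real auth/URL handling.
    client = Client(url='https://www.esologs.com/api/v2/client', headers={})
    start = time.perf_counter()
    # get_character_by_id is a placeholder method name.
    await asyncio.gather(*(client.get_character_by_id(id=1) for _ in range(n)))
    return time.perf_counter() - start

print(f'10 concurrent calls: {asyncio.run(time_concurrent_calls()):.3f}s')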

4. Memory Usage Profiling

import tracemalloc

tracemalloc.start()
# ... create client and make calls here ...
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()
print(f'Current memory: {current / 1024 / 1024:.2f} MB')
print(f'Peak memory: {peak / 1024 / 1024:.2f} MB')
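
To catch leaks during extended usage (one of the success metrics below), tracemalloc snapshots can be compared before and after a batch of repeated calls; a sketch, with the workload left as a placeholder:

import tracemalloc

tracemalloc.start()
before = tracemalloc.take_snapshot()
# ... repeat the client workload many times here ...
after = tracemalloc.take_snapshot()
for stat in after.compare_to(before, 'lineno')[:10]:
    # Allocations that keep growing across repetitions suggest a leak.
    print(stat)
tracemalloc.stop()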

Implementation Tools

  • pytest-benchmark for benchmark tests (see the example after this list)
  • memory_profiler for memory analysis
  • py-spy for profiling
  • GitHub Actions integration for tracking over time
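
A minimal pytest-benchmark test for the initialization metric might look as follows; the Client constructor arguments are placeholders and should match the real signature:

from esologs import Client

def test_client_init_benchmark(benchmark):
    # The 'benchmark' fixture comes from the pytest-benchmark plugin; it calls
    # the target repeatedly and records timing statistics.
    # URL and headers are placeholders; supply real credentials in practice.
    benchmark(Client, url='https://www.esologs.com/api/v2/client', headers={})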

Benchmark Scenarios

  1. Minimal usage (auth only)
  2. Single endpoint usage
  3. Full API usage
  4. Concurrent operations
  5. Large result set handling

Success Metrics

  • Client initialization < 100ms
  • Method resolution < 1ms
  • Memory overhead < 50MB for basic usage
  • No memory leaks during extended usage

CI Integration

  • Run benchmarks on PR submissions
  • Compare against baseline
  • Fail if regression > 10%
  • Generate performance reports
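
The baseline comparison and regression gate could lean on pytest-benchmark's built-in comparison support (for example, saving a baseline with --benchmark-autosave and failing on regressions with --benchmark-compare-fail=mean:10%), assuming baselines are stored as CI artifacts.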
