
API Documentation

Browser Cleaner API

The auto-browser-cleaner provides both a Python API and a REST API for programmatic access.

Python API

Core Classes

BrowserCleaner

Main class for performing browser cleaning operations.

from auto_browser_cleaner import BrowserCleaner

# Initialize with config file
cleaner = BrowserCleaner('config.yaml')

# Run cleaning process
results = cleaner.clean_all()

Methods:

  • clean_all() - Clean all configured browsers
  • clean_browser(browser_name) - Clean specific browser
  • get_statistics() - Get cleaning statistics
  • validate_config() - Validate configuration

RetentionPolicy

Manage data retention policies.

from auto_browser_cleaner.retention import RetentionPolicy

policy = RetentionPolicy()
policy.add_rule('facebook.com', days=7)
policy.add_rule('*.google.com', days=30)

Methods:

  • add_rule(domain, days) - Add retention rule
  • remove_rule(domain) - Remove retention rule
  • should_delete(url, date) - Check if item should be deleted
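The documentation does not specify how wildcard rules like `*.google.com` are matched or how `should_delete` combines a rule with an item's age. The following is a minimal self-contained sketch of plausible semantics, assuming shell-style (fnmatch) wildcard matching on the URL's hostname and a default retention period for unmatched domains; the real `RetentionPolicy` implementation may differ.

```python
from datetime import datetime, timedelta
from fnmatch import fnmatch
from urllib.parse import urlparse

# Stand-in rule store; in real code, use RetentionPolicy.add_rule() instead.
rules = {'facebook.com': 7, '*.google.com': 30}
default_days = 30  # assumed fallback for domains with no matching rule

def should_delete(url, visited, now):
    """Return True when the visit is older than its matching rule allows."""
    host = urlparse(url).hostname or ''
    days = default_days
    for pattern, rule_days in rules.items():
        if fnmatch(host, pattern):
            days = rule_days
            break
    return now - visited > timedelta(days=days)

now = datetime(2023, 10, 15)
print(should_delete('https://facebook.com/feed', now - timedelta(days=10), now))  # True
print(should_delete('https://mail.google.com/', now - timedelta(days=10), now))   # False
```

A 10-day-old Facebook visit exceeds its 7-day rule and is deleted, while the same-age Google visit is kept under its 30-day rule.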

BrowserDetector

Detect installed browsers.

from auto_browser_cleaner.browsers import BrowserDetector

detector = BrowserDetector()
browsers = detector.detect_all()

Methods:

  • detect_all() - Detect all installed browsers
  • detect_browser(name) - Detect specific browser
  • get_browser_paths() - Get browser data paths

Configuration API

from auto_browser_cleaner.config import Config

# Load configuration
config = Config.load('config.yaml')

# Access settings
browsers = config.get_browsers()
retention = config.get_retention_policies()
schedule = config.get_schedule()

# Modify configuration
config.set_retention_policy('example.com', days=14)
config.save('config.yaml')

Scheduling API

from auto_browser_cleaner.scheduler import Scheduler

scheduler = Scheduler('config.yaml')
scheduler.start()  # Start background scheduler

REST API

The web dashboard provides a REST API accessible at /api/v1/.

Endpoints

GET /api/v1/status

Get application status.

Response:

{
    "status": "running",
    "version": "1.0.0",
    "last_run": "2023-10-15T10:30:00Z"
}
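A client consuming this payload will usually want to parse `last_run` into a timezone-aware datetime. A small sketch using only the standard library (the payload below is the example response above; the staleness check is illustrative, not part of the API):

```python
import json
from datetime import datetime, timezone

# Example payload as returned by GET /api/v1/status.
payload = json.loads('''{
    "status": "running",
    "version": "1.0.0",
    "last_run": "2023-10-15T10:30:00Z"
}''')

# datetime.fromisoformat() rejects a trailing "Z" before Python 3.11,
# so normalize it to an explicit UTC offset first.
last_run = datetime.fromisoformat(payload['last_run'].replace('Z', '+00:00'))
age = datetime(2023, 10, 16, tzinfo=timezone.utc) - last_run
print(payload['status'], age.total_seconds() // 3600)  # running 13.0
```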

POST /api/v1/clean

Trigger cleaning process.

Request Body:

{
    "browsers": ["chrome", "firefox"],
    "dry_run": false
}

Response:

{
    "job_id": "abc123",
    "status": "started",
    "message": "Cleaning process initiated"
}

GET /api/v1/statistics

Get cleaning statistics.

Response:

{
    "total_cleaned": 1500,
    "space_freed": "250MB",
    "last_run": "2023-10-15T10:30:00Z",
    "by_browser": {
        "chrome": {"items": 800, "size": "150MB"},
        "firefox": {"items": 700, "size": "100MB"}
    }
}
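Note that `space_freed` and the per-browser `size` fields are human-readable strings, not byte counts, so they need parsing before arithmetic. The item counts, however, can be aggregated directly; a sketch of a client-side cross-check against the reported total (the consistency check is illustrative, not an API guarantee):

```python
import json

# Statistics payload as returned by GET /api/v1/statistics.
stats = json.loads('''{
    "total_cleaned": 1500,
    "space_freed": "250MB",
    "last_run": "2023-10-15T10:30:00Z",
    "by_browser": {
        "chrome": {"items": 800, "size": "150MB"},
        "firefox": {"items": 700, "size": "100MB"}
    }
}''')

# Cross-check the per-browser item counts against the reported total.
items = sum(browser['items'] for browser in stats['by_browser'].values())
print(items == stats['total_cleaned'])  # True
```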

GET /api/v1/config

Get current configuration.

Response:

{
    "browsers": ["chrome", "firefox"],
    "retention_policies": {
        "default": 30,
        "facebook.com": 7
    },
    "schedule": "0 2 * * *"
}

PUT /api/v1/config

Update configuration.

Request Body:

{
    "retention_policies": {
        "example.com": 14
    }
}
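The documentation does not state whether a partial `PUT` body replaces the configuration wholesale or is merged into it. The request body above only makes sense under merge semantics (otherwise it would drop `browsers` and `schedule`), so the sketch below assumes a shallow merge of nested objects; this is an assumption about the server's behavior, not documented API contract.

```python
# Current configuration, as returned by GET /api/v1/config.
config = {
    "browsers": ["chrome", "firefox"],
    "retention_policies": {"default": 30, "facebook.com": 7},
    "schedule": "0 2 * * *",
}
update = {"retention_policies": {"example.com": 14}}

# Assumed semantics: nested dicts are merged rather than replaced,
# so existing policies survive a partial update.
for key, value in update.items():
    if isinstance(value, dict) and isinstance(config.get(key), dict):
        config[key].update(value)
    else:
        config[key] = value

print(config["retention_policies"])
# {'default': 30, 'facebook.com': 7, 'example.com': 14}
```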

Authentication

API endpoints require authentication when enabled in configuration:

curl -H "Authorization: Bearer <token>" http://localhost:8080/api/v1/status

Error Handling

All API methods raise specific exceptions:

  • ConfigurationError - Invalid configuration
  • BrowserNotFoundError - Browser not detected
  • CleaningError - Error during cleaning process
  • ValidationError - Invalid input parameters
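Calling code can layer handlers from most to least specific. Only the exception names are documented, so the classes below are stand-ins defined locally for illustration; real code would import them from auto_browser_cleaner, and `clean_browser` here is a hypothetical function standing in for the library call.

```python
# Stand-ins for the documented exception types; import the real
# classes from auto_browser_cleaner in actual code.
class ConfigurationError(Exception): pass
class BrowserNotFoundError(Exception): pass
class CleaningError(Exception): pass

def clean_browser(name):
    """Hypothetical stand-in for BrowserCleaner.clean_browser()."""
    if name not in ('chrome', 'firefox'):
        raise BrowserNotFoundError(name)
    return {'browser': name, 'items': 0}

try:
    result = clean_browser('safari')
except BrowserNotFoundError as exc:
    result = {'error': f'browser not detected: {exc}'}
except (ConfigurationError, CleaningError) as exc:
    result = {'error': str(exc)}

print(result)  # {'error': 'browser not detected: safari'}
```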

Examples

Basic Usage

from auto_browser_cleaner import BrowserCleaner

# Simple cleaning
cleaner = BrowserCleaner()
results = cleaner.clean_all()
print(f"Cleaned {results.total_items} items")

Custom Retention Policies

from auto_browser_cleaner.retention import RetentionPolicy

policy = RetentionPolicy()
# Keep social media for 7 days
policy.add_rule('*.facebook.com', days=7)
policy.add_rule('*.twitter.com', days=7)
# Keep work sites for 90 days
policy.add_rule('*.company.com', days=90)

cleaner = BrowserCleaner(retention_policy=policy)
results = cleaner.clean_all()

Scheduled Cleaning

from auto_browser_cleaner import BrowserCleaner
from auto_browser_cleaner.scheduler import Scheduler

# Clean every day at 2 AM
cleaner = BrowserCleaner()
scheduler = Scheduler()
scheduler.schedule('0 2 * * *', cleaner.clean_all)
scheduler.start()

Statistics and Reporting

from auto_browser_cleaner.statistics import StatisticsCollector
from auto_browser_cleaner.reports import ReportGenerator

# Collect statistics
stats = StatisticsCollector()
data = stats.get_summary()

# Generate report
reporter = ReportGenerator()
reporter.generate_html_report(data, 'cleaning_report.html')