Honeyman Project - Validation Test Cases

This document provides comprehensive test cases to validate each detection capability of the Honeyman Project. These tests should be run in a controlled environment to ensure proper system functionality.

🛡️ Test Environment Setup

Prerequisites

  • Isolated test network (separate from production)
  • Test devices: Laptop, smartphone, USB devices
  • Wireless adapters with monitor mode support
  • Administrative access to test systems

Safety Notice

⚠️ IMPORTANT: All tests should be conducted in isolated environments. Never run these tests on production networks or systems without explicit authorization.

📡 WiFi Detection Tests

Test Case W1: Evil Twin Access Point Detection

Objective: Verify detection of evil twin access points

Setup:

  1. Identify a legitimate WiFi network in range
  2. Create a fake access point with the same SSID but a different BSSID (a sample config sketch follows this list)
  3. Configure similar signal strength and security settings
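
The contents of /etc/hostapd/evil_twin.conf are not given in this document; a minimal sketch for an open 2.4 GHz evil twin might look like the following (the interface, SSID, and channel are placeholders that must mirror the legitimate network):

# Hypothetical evil_twin.conf — mirror the legitimate network's SSID and channel
sudo tee /etc/hostapd/evil_twin.conf >/dev/null <<'EOF'
interface=wlan1
driver=nl80211
ssid=LegitNetworkName
hw_mode=g
channel=6
EOF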

Test Steps:

# Using hostapd to create evil twin
sudo hostapd /etc/hostapd/evil_twin.conf &

# Monitor detection
tail -f logs/wifi_enhanced.log | grep -i "evil_twin"

Expected Results:

  • Detection within 30 seconds of evil twin activation
  • Alert showing "evil_twin_same_ssid" threat type
  • Threat score >= 0.6
  • Dashboard shows "HIGH" or "CRITICAL" alert

Pass Criteria:

  • ✅ Detection occurs within 60 seconds
  • ✅ Threat score >= 0.6
  • ✅ Correct threat classification

Test Case W2: Beacon Flooding Attack

Objective: Verify detection of beacon flooding attacks

Setup:

  1. Configure wireless adapter in monitor mode
  2. Prepare a beacon flooding script or tool (an SSID list can be generated as sketched after this list)
  3. Set flood rate to >100 beacons per minute
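
The /tmp/ssid_list.txt file used in the test steps is not defined elsewhere in this document; one quick way to generate a disposable list (the names are arbitrary):

# Generate 50 throwaway SSID names for the flood
for i in $(seq 1 50); do echo "HONEYTEST_NET_$i"; done > /tmp/ssid_list.txt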

Test Steps:

# Using mdk4 for beacon flooding
sudo mdk4 wlan0mon b -f /tmp/ssid_list.txt -s 1000

# Monitor detection
grep -i "beacon_flood" logs/wifi_enhanced.log

Expected Results:

  • Detection within 10 seconds of flooding start
  • Alert showing "beacon_flooding" threat type
  • Beacon rate calculation shown in logs
  • CRITICAL threat level classification

Pass Criteria:

  • ✅ Detection within 30 seconds
  • ✅ Beacon rate accurately measured (>100/min)
  • ✅ Threat score >= 0.8

Test Case W3: Deauthentication Attack

Objective: Verify detection of WiFi deauthentication attacks

Setup:

  1. Identify target network and client
  2. Configure the wireless adapter in monitor mode (see the airmon-ng sketch after this list)
  3. Prepare deauth attack tools
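
Monitor mode setup is assumed by the aireplay-ng command below; with the aircrack-ng suite, the usual sequence is as follows (the interface name wlan0 is an assumption):

# Stop interfering processes, then enable monitor mode (typically creates wlan0mon)
sudo airmon-ng check kill
sudo airmon-ng start wlan0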

Test Steps:

# Using aireplay-ng for deauth attack
sudo aireplay-ng -0 10 -a [TARGET_BSSID] wlan0mon

# Monitor detection
grep -i "deauth" logs/wifi_enhanced.log

Expected Results:

  • Detection of excessive deauth frames
  • Identification of attack pattern
  • Source MAC address logging
  • HIGH threat level alert

Pass Criteria:

  • ✅ Deauth attack detected within 60 seconds
  • ✅ Attack source identified
  • ✅ Threat score >= 0.5

📱 BLE Detection Tests

Test Case B1: Flipper Zero Detection

Objective: Verify detection of Flipper Zero or similar devices

Setup:

  1. Acquire a Flipper Zero or simulate one with an ESP32
  2. Configure the device with the Nordic UART service
  3. Enable BLE advertising with suspicious patterns (a visibility pre-check follows this list)
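
Before expecting a detection, it is worth confirming that the test device is actually advertising. A minimal pre-check using BlueZ's bluetoothctl (the name filter is an assumption; Flipper devices typically advertise a "Flipper" name):

# Scan for ~30 seconds, then list what was discovered
timeout 30 bluetoothctl scan on
bluetoothctl devices | grep -i "flipper"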

Test Steps:

# Monitor BLE detection
python3 ble_enhanced_detector.py &
tail -f logs/ble_enhanced.log | grep -i "flipper\|suspicious"

Expected Results:

  • Device fingerprint analysis
  • Nordic UART service detection
  • Threat classification as "suspicious_device"
  • HIGH threat level assignment

Pass Criteria:

  • ✅ Device detected within 45 seconds
  • ✅ Nordic UART service identified
  • ✅ Threat score >= 0.7

Test Case B2: BLE Rapid Scanning Behavior

Objective: Verify detection of rapid BLE scanning patterns

Setup:

  1. Configure the test device for rapid connect/disconnect cycles (a stand-in sketch follows this list)
  2. Set the appearance/disappearance rate to more than 5 cycles per 5 minutes
  3. Use unique MAC address for tracking
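
simulate_ble_scanner.py is part of the project's test tooling and is not reproduced here. As a rough stand-in, a BlueZ advertising loop can produce the same more-than-5-appearances-in-5-minutes pattern (the timings are assumptions, and bluetoothctl behavior varies by version):

# Hypothetical stand-in for simulate_ble_scanner.py --rapid-mode:
# advertise for ~20s, go silent for ~20s, six times (~4 minutes total)
for i in $(seq 1 6); do
  # keep bluetoothctl alive so the advertisement stays registered
  { echo "advertise on"; sleep 20; echo "advertise off"; } | bluetoothctl >/dev/null
  sleep 20  # silent gap: the device "disappears"
done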

Test Steps:

# Simulate rapid scanning with Python script
python3 simulate_ble_scanner.py --rapid-mode &

# Monitor detection
grep -i "rapid_appearance\|frequent" logs/ble_enhanced.log

Expected Results:

  • Pattern recognition of rapid appearances
  • Behavioral analysis scoring
  • Alert for "frequent_appearance_pattern"
  • MEDIUM threat classification

Pass Criteria:

  • ✅ Pattern detected after 5+ rapid appearances
  • ✅ Behavioral score calculated correctly
  • ✅ Threat score >= 0.3

Test Case B3: Proximity Attack Simulation

Objective: Verify detection of very close BLE devices

Setup:

  1. Position the test device very close to the detector (<1 meter)
  2. Configure high transmission power
  3. Monitor RSSI readings (an optional spot-check follows this list)
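
For an optional RSSI spot-check from the detector host, BlueZ's btmgmt can report per-device RSSI during discovery (output format varies across BlueZ versions):

# Discover nearby devices and show their reported RSSI values
sudo btmgmt find | grep -i rssi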

Test Steps:

# Position device close and monitor
# RSSI should be > -30 dBm
grep -i "proximity\|rssi" logs/ble_enhanced.log

Expected Results:

  • RSSI measurement > -30 dBm
  • Proximity attack detection
  • Distance estimation < 1 meter
  • MEDIUM threat alert

Pass Criteria:

  • ✅ High RSSI detected (> -30 dBm)
  • ✅ Proximity threat identified
  • ✅ Accurate distance estimation

💻 Web Honeypot Tests

Test Case H1: Credential Harvesting Detection

Objective: Verify capture of login attempts on honeypot portal

Setup:

  1. Access corporate portal at http://localhost:8080
  2. Prepare test credentials for submission
  3. Monitor form submission logging

Test Steps:

# Submit credentials through web form
curl -X POST http://localhost:8080/api/login-attempt \
  -H "Content-Type: application/json" \
  -d '{"username":"testuser","password":"testpass"}'

# Monitor logs
grep -i "credential\|login" logs/opencanary.log

Expected Results:

  • Credential capture in logs
  • User agent and timestamp recorded
  • Source IP identification
  • Form submission details logged

Pass Criteria:

  • ✅ Credentials captured accurately
  • ✅ Metadata (IP, User-Agent) logged
  • ✅ Timestamp within 1 second of submission

Test Case H2: Port Scanning Detection

Objective: Verify detection of network port scanning

Setup:

  1. Configure port scanning tool (nmap)
  2. Scan multiple honeypot service ports
  3. Monitor connection attempts

Test Steps:

# Perform port scan
sudo nmap -sS -p 1-1000 localhost

# Monitor detection
grep -i "portscan\|scan" logs/opencanary.log

Expected Results:

  • Multiple port connection attempts logged
  • Source IP and port information
  • Scanning pattern recognition
  • MEDIUM to HIGH threat classification

Pass Criteria:

  • ✅ Multiple port attempts detected
  • ✅ Scanning pattern identified
  • ✅ Source accurately logged

Test Case H3: Document Access Monitoring

Objective: Verify canary document access detection

Setup:

  1. Access document portal at http://localhost:8080/documents.html
  2. Attempt to download sensitive documents
  3. Monitor access attempts

Test Steps:

# Access document portal and click documents
# Monitor access logs
grep -i "document_access" logs/web_access.log

Expected Results:

  • Document access attempts logged
  • Document names and categories recorded
  • User session tracking
  • Access denied simulation

Pass Criteria:

  • ✅ Document access logged
  • ✅ User session tracked
  • ✅ Access attempt details captured

🔌 USB Detection Tests

Test Case U1: Unknown Device Insertion

Objective: Verify detection of USB device insertion

Setup:

  1. Prepare unknown USB device (flash drive, etc.)
  2. Monitor USB subsystem events
  3. Insert device while monitoring

Test Steps:

# Monitor USB detection
python3 usb_detection_enhanced.py &

# Insert USB device
# Monitor logs
tail -f logs/usb_enhanced.log
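
Independent of the detector, standard Linux tooling (udevadm and lsusb, shipped with most distributions) can confirm the same insertion event and IDs:

# Watch kernel/udev USB events while inserting the device
udevadm monitor --udev --subsystem-match=usb

# After insertion, confirm the vendor/product IDs the detector reported
lsusb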

Expected Results:

  • USB insertion event detected
  • Device enumeration information
  • Vendor/Product ID logging
  • Device type classification

Pass Criteria:

  • ✅ Insertion detected immediately
  • ✅ Device information captured
  • ✅ Classification performed

Test Case U2: HID Device Detection

Objective: Verify detection of Human Interface Devices

Setup:

  1. Connect HID device (keyboard, mouse)
  2. Monitor HID-specific enumeration
  3. Detect rapid input patterns

Test Steps:

# Connect HID device with rapid input
# Monitor HID-specific detection
grep -i "hid\|keyboard\|mouse" logs/usb_enhanced.log

Expected Results:

  • HID device classification
  • Input pattern analysis
  • Suspicious behavior detection
  • MEDIUM threat if unusual patterns

Pass Criteria:

  • ✅ HID device correctly classified
  • ✅ Input patterns analyzed
  • ✅ Behavioral scoring applied

Test Case U3: Mass Storage Analysis

Objective: Verify analysis of USB mass storage devices

Setup:

  1. Connect USB flash drive or external storage
  2. Monitor filesystem enumeration
  3. Analyze device properties

Test Steps:

# Connect mass storage device
# Monitor storage analysis
grep -i "storage\|filesystem\|mount" logs/usb_enhanced.log

Expected Results:

  • Storage device recognition
  • Filesystem type detection
  • Capacity and properties logging
  • Basic content analysis

Pass Criteria:

  • ✅ Storage device detected
  • ✅ Properties correctly identified
  • ✅ Security analysis performed

📊 Correlation Testing

Test Case C1: Multi-Vector Attack Correlation

Objective: Verify correlation across different attack vectors

Setup:

  1. Execute WiFi evil twin attack
  2. Simultaneously attempt credential harvesting
  3. Insert suspicious USB device
  4. Monitor correlation engine

Test Steps:

# Start multi-vector attacks simultaneously
./test_multi_vector_attack.sh

# Monitor correlation
grep -i "correlation\|multi" logs/multi_vector.log

Expected Results:

  • Cross-protocol threat correlation
  • Temporal relationship identification
  • Elevated threat scoring
  • CRITICAL alert generation

Pass Criteria:

  • ✅ Multiple vectors correlated
  • ✅ Temporal analysis performed
  • ✅ Combined threat score elevated

Test Case C2: Timeline Correlation

Objective: Verify temporal correlation of related events

Setup:

  1. Execute attacks in sequence with timing
  2. Monitor timeline analysis
  3. Verify correlation windows

Test Steps:

# Execute timed sequence of attacks
python3 test_timeline_correlation.py

# Monitor timeline analysis
grep -i "timeline\|sequence" logs/correlation.log

Expected Results:

  • Timeline reconstruction
  • Event sequence analysis
  • Correlation confidence scoring
  • Attack campaign identification

Pass Criteria:

  • ✅ Timeline accurately reconstructed
  • ✅ Sequence correlation identified
  • ✅ Confidence scores calculated

Test Case C3: Behavioral Analysis Validation

Objective: Verify behavioral pattern recognition

Setup:

  1. Generate consistent attack patterns
  2. Vary attack timing and intensity
  3. Monitor behavioral analysis

Test Steps:

# Generate behavioral patterns
python3 test_behavioral_patterns.py

# Monitor behavioral analysis
grep -i "behavior\|pattern" logs/behavioral.log

Expected Results:

  • Pattern recognition accuracy
  • Behavioral baseline establishment
  • Anomaly detection
  • Learning algorithm adaptation

Pass Criteria:

  • ✅ Patterns accurately identified
  • ✅ Baselines established
  • ✅ Anomalies detected

🎯 Dashboard Validation Tests

Test Case D1: Real-time Data Display

Objective: Verify dashboard displays threat data in real-time

Setup:

  1. Access enhanced dashboard
  2. Generate test threats
  3. Monitor dashboard updates

Test Steps:

# Access dashboard at http://72.60.25.24:8080/enhanced_dashboard.html
# Generate test threats
# Verify real-time updates

Expected Results:

  • Threat counters update within 10 seconds
  • Charts reflect new data
  • Timeline shows recent events
  • Status indicators accurate

Pass Criteria:

  • ✅ Updates within 10 seconds
  • ✅ Data accuracy maintained
  • ✅ Visual elements functional

Test Case D2: API Response Validation

Objective: Verify API endpoints return correct data

Setup:

  1. Generate known threat data
  2. Query API endpoints
  3. Validate response accuracy

Test Steps:

# Test API endpoints
curl http://72.60.25.24:8080/api/threats/stats
curl http://72.60.25.24:8080/api/threats/recent
curl http://72.60.25.24:8080/api/threats/correlations

# Verify response data
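
JSON formatting and the <2 s response-time target can both be checked from the command line, assuming jq is installed:

# Validate JSON structure
curl -s http://72.60.25.24:8080/api/threats/stats | jq .

# Measure total response time against the <2s benchmark
curl -s -o /dev/null -w "time_total: %{time_total}s\n" \
  http://72.60.25.24:8080/api/threats/stats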

Expected Results:

  • Accurate threat statistics
  • Proper JSON formatting
  • Correct data relationships
  • Reasonable response times (<2s)

Pass Criteria:

  • ✅ Data accuracy verified
  • ✅ Response format correct
  • ✅ Performance acceptable

Test Case D3: Mobile Responsiveness

Objective: Verify dashboard functionality on mobile devices

Setup:

  1. Access dashboard on mobile device/browser
  2. Test responsive design
  3. Verify functionality

Test Steps:

# Access dashboard with mobile user agent
# Test touch interactions
# Verify layout adaptation
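
A quick command-line approximation of this check (the user-agent string is an arbitrary example; touch interactions still require a real device or browser emulation):

# Fetch the dashboard with a mobile User-Agent and inspect the response
curl -s -A "Mozilla/5.0 (iPhone; CPU iPhone OS 17_0 like Mac OS X)" \
  http://72.60.25.24:8080/enhanced_dashboard.html | head -n 20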

Expected Results:

  • Layout adapts to screen size
  • Touch interactions work
  • All data remains accessible
  • Performance acceptable

Pass Criteria:

  • ✅ Responsive design functional
  • ✅ Mobile interactions work
  • ✅ Data accessibility maintained

📋 Test Execution Schedule

Automated Testing

# Run automated test suite
./run_validation_tests.sh

# Generate test report
./generate_test_report.sh

Manual Testing Checklist

  • WiFi Detection Tests (W1-W3)
  • BLE Detection Tests (B1-B3)
  • Web Honeypot Tests (H1-H3)
  • USB Detection Tests (U1-U3)
  • Correlation Tests (C1-C3)
  • Dashboard Tests (D1-D3)

Performance Benchmarks

  • Detection Time: <60 seconds for all threat types
  • False Positive Rate: <10% for all capabilities
  • System Resource Usage: <80% CPU, <4GB RAM
  • Dashboard Response Time: <2 seconds

🔄 Continuous Validation

Regular Testing Schedule

  • Daily: Automated basic functionality tests
  • Weekly: Manual validation of key capabilities
  • Monthly: Complete test suite execution
  • Quarterly: Performance benchmark validation

Test Result Documentation

All test results should be documented with:

  • Test execution timestamp
  • Pass/fail status for each test case
  • Performance metrics collected
  • Any anomalies or issues identified
  • Recommendations for improvements

This comprehensive test suite ensures the Honeyman Project maintains its detection capabilities and performance standards throughout its deployment lifecycle.