This document provides comprehensive test cases to validate each detection capability of the Honeyman Project. Run these tests in a controlled environment to confirm proper system functionality.
Prerequisites:
- Isolated test network (separate from production)
- Test devices: Laptop, smartphone, USB devices
- Wireless adapters with monitor mode support
- Administrative access to test systems
Test W1 Objective: Verify detection of evil twin access points
Setup:
- Identify a legitimate WiFi network in range
- Create a fake access point with the same SSID but different BSSID
- Configure similar signal strength and security settings
Test Steps:
```bash
# Using hostapd to create evil twin
sudo hostapd /etc/hostapd/evil_twin.conf &

# Monitor detection
tail -f logs/wifi_enhanced.log | grep -i "evil_twin"
```
Expected Results:
- Detection within 30 seconds of evil twin activation
- Alert showing "evil_twin_same_ssid" threat type
- Threat score >= 0.6
- Dashboard shows "HIGH" or "CRITICAL" alert
Pass Criteria:
- ✅ Detection occurs within 60 seconds
- ✅ Threat score >= 0.6
- ✅ Correct threat classification
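A minimal hostapd configuration for the Setup step might look like the sketch below. The interface name, SSID, and channel are placeholders, not project-supplied values: mirror the SSID and security settings of the legitimate network identified in Setup so the detector sees a duplicate SSID on a different BSSID.

```
# /etc/hostapd/evil_twin.conf  (illustrative sketch only)
interface=wlan1        # adapter used for the fake AP
driver=nl80211
ssid=CorpWiFi          # same SSID as the legitimate AP
hw_mode=g
channel=6              # same or adjacent channel as the target
```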
Test W2 Objective: Verify detection of beacon flooding attacks
Setup:
- Configure wireless adapter in monitor mode
- Prepare beacon flooding script or tool
- Set flood rate to >100 beacons per minute
Test Steps:
```bash
# Using mdk4 for beacon flooding
sudo mdk4 wlan0mon b -f /tmp/ssid_list.txt -s 1000

# Monitor detection
grep -i "beacon_flood" logs/wifi_enhanced.log
```
Expected Results:
- Detection within 10 seconds of flooding start
- Alert showing "beacon_flooding" threat type
- Beacon rate calculation shown in logs
- CRITICAL threat level classification
Pass Criteria:
- ✅ Detection within 30 seconds
- ✅ Beacon rate accurately measured (>100/min)
- ✅ Threat score >= 0.8
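The beacon-rate calculation the logs should show can be sketched as a trailing-window count. This is an illustrative stand-in, not the project's actual implementation; the function name and window length are assumptions, and the 100/min threshold comes from the Setup step above.

```python
def beacon_rate_per_min(timestamps, window=60.0):
    """Return beacons/min over the trailing `window` seconds of timestamps."""
    if not timestamps:
        return 0.0
    latest = timestamps[-1]
    recent = [t for t in timestamps if latest - t <= window]
    return len(recent) * (60.0 / window)

# A flood: 200 beacons in the last minute far exceeds the >100/min threshold.
rate = beacon_rate_per_min([i * 0.3 for i in range(200)])
print(rate > 100)  # True
```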
Test W3 Objective: Verify detection of WiFi deauthentication attacks
Setup:
- Identify target network and client
- Configure wireless adapter in monitor mode
- Prepare deauth attack tools
Test Steps:
```bash
# Using aireplay-ng for deauth attack
sudo aireplay-ng -0 10 -a [TARGET_BSSID] wlan0mon

# Monitor detection
grep -i "deauth" logs/wifi_enhanced.log
```
Expected Results:
- Detection of excessive deauth frames
- Identification of attack pattern
- Source MAC address logging
- HIGH threat level alert
Pass Criteria:
- ✅ Deauth attack detected within 60 seconds
- ✅ Attack source identified
- ✅ Threat score >= 0.5
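Attack-source identification can be sketched as a per-source frame counter. The threshold value and frame-tuple layout here are assumptions for illustration, not the project's actual parameters.

```python
from collections import Counter

DEAUTH_THRESHOLD = 10  # frames from one source per window; assumed value

def deauth_sources(frames, threshold=DEAUTH_THRESHOLD):
    """Return source MACs whose deauth frame count meets the threshold."""
    counts = Counter(src for ftype, src in frames if ftype == "deauth")
    return {mac for mac, n in counts.items() if n >= threshold}

# Ten deauth frames from one MAC trip the detector; the lone beacon does not.
frames = [("deauth", "aa:bb:cc:dd:ee:ff")] * 10 + [("beacon", "11:22:33:44:55:66")]
print(deauth_sources(frames))  # {'aa:bb:cc:dd:ee:ff'}
```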
Test B1 Objective: Verify detection of Flipper Zero or similar devices
Setup:
- Acquire Flipper Zero or simulate with ESP32
- Configure device with Nordic UART service
- Enable BLE advertising with suspicious patterns
Test Steps:
```bash
# Monitor BLE detection
python3 ble_enhanced_detector.py &
tail -f logs/ble_enhanced.log | grep -i "flipper\|suspicious"
```
Expected Results:
- Device fingerprint analysis
- Nordic UART service detection
- Threat classification as "suspicious_device"
- HIGH threat level assignment
Pass Criteria:
- ✅ Device detected within 45 seconds
- ✅ Nordic UART service identified
- ✅ Threat score >= 0.7
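Fingerprinting hinges on the Nordic UART Service UUID, which Flipper Zero uses for its BLE serial link. A minimal matcher is sketched below; the function name is hypothetical, not part of the detector's API.

```python
NORDIC_UART_UUID = "6e400001-b5a3-f393-e0a9-e50e24dcca9e"  # Nordic UART Service

def is_suspicious_device(advertised_uuids):
    """Flag a device advertising the Nordic UART service (hypothetical helper)."""
    return NORDIC_UART_UUID in {u.lower() for u in advertised_uuids}

print(is_suspicious_device(["6E400001-B5A3-F393-E0A9-E50E24DCCA9E"]))  # True
print(is_suspicious_device(["0000180f-0000-1000-8000-00805f9b34fb"]))  # False
```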
Test B2 Objective: Verify detection of rapid BLE scanning patterns
Setup:
- Configure test device for rapid connect/disconnect cycles
- Set appearance/disappearance rate >5 times per 5 minutes
- Use unique MAC address for tracking
Test Steps:
```bash
# Simulate rapid scanning with Python script
python3 simulate_ble_scanner.py --rapid-mode &

# Monitor detection
grep -i "rapid_appearance\|frequent" logs/ble_enhanced.log
```
Expected Results:
- Pattern recognition of rapid appearances
- Behavioral analysis scoring
- Alert for "frequent_appearance_pattern"
- MEDIUM threat classification
Pass Criteria:
- ✅ Pattern detected after 5+ rapid appearances
- ✅ Behavioral score calculated correctly
- ✅ Threat score >= 0.3
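The "more than 5 appearances in 5 minutes" rule from Setup can be checked with a simple sliding window. This sketch assumes timestamps in seconds and is not the detector's actual code.

```python
def frequent_appearance(events, window=300.0, threshold=5):
    """True if a device appears more than `threshold` times within `window` seconds."""
    events = sorted(events)
    for i, start in enumerate(events):
        in_window = sum(1 for t in events[i:] if t - start <= window)
        if in_window > threshold:
            return True
    return False

# Six appearances inside five minutes trips the alert; three spread out does not.
print(frequent_appearance([0, 50, 100, 150, 200, 250]))  # True
print(frequent_appearance([0, 400, 800]))                # False
```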
Test B3 Objective: Verify detection of very close BLE devices
Setup:
- Position test device very close to detector (<1 meter)
- Configure high transmission power
- Monitor RSSI readings
Test Steps:
# Position device close and monitor
# RSSI should be > -30 dBm
grep -i "proximity\|rssi" logs/ble_enhanced.logExpected Results:
- RSSI measurement > -30 dBm
- Proximity attack detection
- Distance estimation < 1 meter
- MEDIUM threat alert
Pass Criteria:
- ✅ High RSSI detected (> -30 dBm)
- ✅ Proximity threat identified
- ✅ Accurate distance estimation
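Distance estimation from RSSI typically uses the log-distance path-loss model. The calibration constants below (assumed RSSI at 1 m, path-loss exponent) are illustrative and would need tuning per adapter and environment.

```python
def estimate_distance(rssi, tx_power=-59, path_loss_n=2.0):
    """Log-distance path-loss estimate in meters.
    tx_power: assumed RSSI at 1 m; path_loss_n: 2.0 approximates free space."""
    return 10 ** ((tx_power - rssi) / (10 * path_loss_n))

# An RSSI stronger than -30 dBm puts the device well under a meter away.
print(estimate_distance(-59))      # 1.0 (by definition of tx_power)
print(estimate_distance(-25) < 1)  # True
```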
Test H1 Objective: Verify capture of login attempts on the honeypot portal
Setup:
- Access corporate portal at http://localhost:8080
- Prepare test credentials for submission
- Monitor form submission logging
Test Steps:
```bash
# Submit credentials through web form
curl -X POST http://localhost:8080/api/login-attempt \
  -H "Content-Type: application/json" \
  -d '{"username":"testuser","password":"testpass"}'

# Monitor logs
grep -i "credential\|login" logs/opencanary.log
```
Expected Results:
- Credential capture in logs
- User agent and timestamp recorded
- Source IP identification
- Form submission details logged
Pass Criteria:
- ✅ Credentials captured accurately
- ✅ Metadata (IP, User-Agent) logged
- ✅ Timestamp within 1 second of submission
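A quick way to verify the metadata requirements is to validate the captured log entry's fields. The field names below are assumptions about the log schema, not OpenCanary's documented format; adjust them to match the actual entries.

```python
import json

REQUIRED = {"username", "password", "src_ip", "user_agent", "timestamp"}

def validate_capture(log_line):
    """Check that a captured-credential log entry carries the expected fields."""
    try:
        entry = json.loads(log_line)
    except json.JSONDecodeError:
        return False
    return REQUIRED.issubset(entry)

line = ('{"username":"testuser","password":"testpass","src_ip":"127.0.0.1",'
        '"user_agent":"curl/8.0","timestamp":"2025-01-01T00:00:00Z"}')
print(validate_capture(line))                # True
print(validate_capture('{"username":"x"}'))  # False
```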
Test H2 Objective: Verify detection of network port scanning
Setup:
- Configure port scanning tool (nmap)
- Scan multiple honeypot service ports
- Monitor connection attempts
Test Steps:
# Perform port scan
nmap -sS -p 1-1000 localhost
# Monitor detection
grep -i "portscan\|scan" logs/opencanary.logExpected Results:
- Multiple port connection attempts logged
- Source IP and port information
- Scanning pattern recognition
- MEDIUM to HIGH threat classification
Pass Criteria:
- ✅ Multiple port attempts detected
- ✅ Scanning pattern identified
- ✅ Source accurately logged
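Scanning-pattern recognition can be approximated by counting distinct ports touched per source IP. The threshold is an assumed value for illustration, not the project's tuned setting.

```python
from collections import defaultdict

SCAN_PORT_THRESHOLD = 20  # distinct ports from one source; assumed value

def scanning_ips(attempts, threshold=SCAN_PORT_THRESHOLD):
    """Return source IPs that touched at least `threshold` distinct honeypot ports."""
    ports = defaultdict(set)
    for ip, port in attempts:
        ports[ip].add(port)
    return {ip for ip, seen in ports.items() if len(seen) >= threshold}

# One host probing 29 ports is flagged; a single page fetch is not.
attempts = [("10.0.0.5", p) for p in range(1, 30)] + [("10.0.0.9", 80)]
print(scanning_ips(attempts))  # {'10.0.0.5'}
```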
Test H3 Objective: Verify canary document access detection
Setup:
- Access document portal at http://localhost:8080/documents.html
- Attempt to download sensitive documents
- Monitor access attempts
Test Steps:
# Access document portal and click documents
# Monitor access logs
grep -i "document_access" logs/web_access.logExpected Results:
- Document access attempts logged
- Document names and categories recorded
- User session tracking
- Access denied simulation
Pass Criteria:
- ✅ Document access logged
- ✅ User session tracked
- ✅ Access attempt details captured
Test U1 Objective: Verify detection of USB device insertion
Setup:
- Prepare unknown USB device (flash drive, etc.)
- Monitor USB subsystem events
- Insert device while monitoring
Test Steps:
# Monitor USB detection
python3 usb_detection_enhanced.py &
# Insert USB device
# Monitor logs
tail -f logs/usb_enhanced.logExpected Results:
- USB insertion event detected
- Device enumeration information
- Vendor/Product ID logging
- Device type classification
Pass Criteria:
- ✅ Insertion detected immediately
- ✅ Device information captured
- ✅ Classification performed
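Device classification from enumeration data can be sketched as a vendor lookup plus the USB base class code. The vendor table here is a tiny illustrative subset; production code would consult the full usb.ids database, and the helper name is hypothetical.

```python
# Tiny illustrative vendor table; real code would use the usb.ids database.
KNOWN_VENDORS = {"0781": "SanDisk", "046d": "Logitech"}

# USB base class codes from the USB specification.
CLASS_NAMES = {0x03: "HID", 0x08: "mass_storage", 0x09: "hub"}

def classify_usb(vendor_id, product_id, device_class):
    """Rough classification from enumeration data (hypothetical helper)."""
    return {
        "vendor": KNOWN_VENDORS.get(vendor_id.lower(), "unknown"),
        "product_id": product_id,
        "type": CLASS_NAMES.get(device_class, "other"),
    }

print(classify_usb("0781", "5567", 0x08))
# {'vendor': 'SanDisk', 'product_id': '5567', 'type': 'mass_storage'}
```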
Test U2 Objective: Verify detection of Human Interface Devices (HID)
Setup:
- Connect HID device (keyboard, mouse)
- Monitor HID-specific enumeration
- Detect rapid input patterns
Test Steps:
# Connect HID device with rapid input
# Monitor HID-specific detection
grep -i "hid\|keyboard\|mouse" logs/usb_enhanced.logExpected Results:
- HID device classification
- Input pattern analysis
- Suspicious behavior detection
- MEDIUM threat if unusual patterns
Pass Criteria:
- ✅ HID device correctly classified
- ✅ Input patterns analyzed
- ✅ Behavioral scoring applied
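One common HID heuristic flags keystroke timing too fast and uniform to be human, the signature of BadUSB-style injection. The cutoff below is an assumed value, not the project's tuned parameter.

```python
HUMAN_MIN_INTERVAL = 0.03  # seconds between keystrokes; assumed cutoff

def injected_keystrokes(timestamps, min_interval=HUMAN_MIN_INTERVAL):
    """Flag keystroke timing too fast to be human (BadUSB-style injection)."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return bool(gaps) and sum(gaps) / len(gaps) < min_interval

# A scripted injector typing every 5 ms is flagged; a human pattern passes.
print(injected_keystrokes([i * 0.005 for i in range(50)]))  # True
print(injected_keystrokes([0.0, 0.2, 0.5, 0.9]))            # False
```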
Test U3 Objective: Verify analysis of USB mass storage devices
Setup:
- Connect USB flash drive or external storage
- Monitor filesystem enumeration
- Analyze device properties
Test Steps:
# Connect mass storage device
# Monitor storage analysis
grep -i "storage\|filesystem\|mount" logs/usb_enhanced.logExpected Results:
- Storage device recognition
- Filesystem type detection
- Capacity and properties logging
- Basic content analysis
Pass Criteria:
- ✅ Storage device detected
- ✅ Properties correctly identified
- ✅ Security analysis performed
Test C1 Objective: Verify correlation across different attack vectors
Setup:
- Execute WiFi evil twin attack
- Simultaneously attempt credential harvesting
- Insert suspicious USB device
- Monitor correlation engine
Test Steps:
# Start multi-vector attacks simultaneously
./test_multi_vector_attack.sh
# Monitor correlation
grep -i "correlation\|multi" logs/multi_vector.logExpected Results:
- Cross-protocol threat correlation
- Temporal relationship identification
- Elevated threat scoring
- CRITICAL alert generation
Pass Criteria:
- ✅ Multiple vectors correlated
- ✅ Temporal analysis performed
- ✅ Combined threat score elevated
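Cross-vector correlation can be sketched as time-window clustering with a score bonus per distinct vector. The window length and weights below are illustrative, not the project's actual formula.

```python
def correlate(events, window=120.0):
    """Cluster events within `window` seconds and score the densest cluster.
    Scoring weights are illustrative, not the project's actual formula."""
    events = sorted(events)                      # (timestamp, vector, score) tuples
    best = 0.0
    for i, (t0, _, _) in enumerate(events):
        cluster = [e for e in events[i:] if e[0] - t0 <= window]
        vectors = {v for _, v, _ in cluster}     # distinct attack vectors
        score = max(s for _, _, s in cluster) + 0.1 * (len(vectors) - 1)
        best = max(best, min(score, 1.0))
    return round(best, 2)

# Evil twin + credential harvesting + USB insert within two minutes:
events = [(0, "wifi", 0.6), (30, "web", 0.5), (60, "usb", 0.5)]
print(correlate(events))  # 0.8, multiple vectors elevate the combined score
```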
Test C2 Objective: Verify temporal correlation of related events
Setup:
- Execute attacks in sequence with timing
- Monitor timeline analysis
- Verify correlation windows
Test Steps:
# Execute timed sequence of attacks
python3 test_timeline_correlation.py
# Monitor timeline analysis
grep -i "timeline\|sequence" logs/correlation.logExpected Results:
- Timeline reconstruction
- Event sequence analysis
- Correlation confidence scoring
- Attack campaign identification
Pass Criteria:
- ✅ Timeline accurately reconstructed
- ✅ Sequence correlation identified
- ✅ Confidence scores calculated
Test C3 Objective: Verify behavioral pattern recognition
Setup:
- Generate consistent attack patterns
- Vary attack timing and intensity
- Monitor behavioral analysis
Test Steps:
# Generate behavioral patterns
python3 test_behavioral_patterns.py
# Monitor behavioral analysis
grep -i "behavior\|pattern" logs/behavioral.logExpected Results:
- Pattern recognition accuracy
- Behavioral baseline establishment
- Anomaly detection
- Learning algorithm adaptation
Pass Criteria:
- ✅ Patterns accurately identified
- ✅ Baselines established
- ✅ Anomalies detected
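Baseline-and-anomaly scoring is often a z-score test against a learned baseline. This sketch assumes a per-minute event count as the metric; the threshold and metric choice are assumptions, not the project's implementation.

```python
import statistics

def is_anomalous(baseline, value, z_threshold=3.0):
    """Flag `value` if it lies more than `z_threshold` std devs from the baseline mean."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return abs(value - mean) > z_threshold * stdev

baseline = [10, 11, 9, 10, 12, 10, 11, 9]  # e.g. probe frames per minute
print(is_anomalous(baseline, 10))  # False: within normal variation
print(is_anomalous(baseline, 60))  # True: clear outlier
```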
Test D1 Objective: Verify the dashboard displays threat data in real time
Setup:
- Access enhanced dashboard
- Generate test threats
- Monitor dashboard updates
Test Steps:
# Access dashboard at http://72.60.25.24:8080/enhanced_dashboard.html
# Generate test threats
# Verify real-time updatesExpected Results:
- Threat counters update within 10 seconds
- Charts reflect new data
- Timeline shows recent events
- Status indicators accurate
Pass Criteria:
- ✅ Updates within 10 seconds
- ✅ Data accuracy maintained
- ✅ Visual elements functional
Test D2 Objective: Verify API endpoints return correct data
Setup:
- Generate known threat data
- Query API endpoints
- Validate response accuracy
Test Steps:
# Test API endpoints
curl http://72.60.25.24:8080/api/threats/stats
curl http://72.60.25.24:8080/api/threats/recent
curl http://72.60.25.24:8080/api/threats/correlations
# Verify response dataExpected Results:
- Accurate threat statistics
- Proper JSON formatting
- Correct data relationships
- Reasonable response times (<2s)
Pass Criteria:
- ✅ Data accuracy verified
- ✅ Response format correct
- ✅ Performance acceptable
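Beyond eyeballing curl output, the JSON shape can be checked programmatically. The field names below are assumptions about the stats payload, not the documented API schema; adapt them to the actual response.

```python
import json

def valid_stats_response(body):
    """Sanity-check a /api/threats/stats payload (field names are assumptions)."""
    try:
        data = json.loads(body)
    except json.JSONDecodeError:
        return False
    return isinstance(data.get("total_threats"), int) and "by_severity" in data

print(valid_stats_response('{"total_threats": 12, "by_severity": {"HIGH": 3}}'))  # True
print(valid_stats_response("not json"))                                           # False
```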
Test D3 Objective: Verify dashboard functionality on mobile devices
Setup:
- Access dashboard on mobile device/browser
- Test responsive design
- Verify functionality
Test Steps:
# Access dashboard with mobile user agent
# Test touch interactions
# Verify layout adaptationExpected Results:
- Layout adapts to screen size
- Touch interactions work
- All data remains accessible
- Performance acceptable
Pass Criteria:
- ✅ Responsive design functional
- ✅ Mobile interactions work
- ✅ Data accessibility maintained
```bash
# Run automated test suite
./run_validation_tests.sh

# Generate test report
./generate_test_report.sh
```
The automated suite covers:
- WiFi Detection Tests (W1-W3)
- BLE Detection Tests (B1-B3)
- Web Honeypot Tests (H1-H3)
- USB Detection Tests (U1-U3)
- Correlation Tests (C1-C3)
- Dashboard Tests (D1-D3)
Performance targets:
- Detection Time: <60 seconds for all threat types
- False Positive Rate: <10% for all capabilities
- System Resource Usage: <80% CPU, <4GB RAM
- Dashboard Response Time: <2 seconds
Testing schedule:
- Daily: Automated basic functionality tests
- Weekly: Manual validation of key capabilities
- Monthly: Complete test suite execution
- Quarterly: Performance benchmark validation
All test results should be documented with:
- Test execution timestamp
- Pass/fail status for each test case
- Performance metrics collected
- Any anomalies or issues identified
- Recommendations for improvements
This comprehensive test suite ensures the Honeyman Project maintains its detection capabilities and performance standards throughout its deployment lifecycle.