🌍 GlobeHands (EarthControls)
A futuristic, touch-free web application that lets users interact with a 3D Earth in real time using native hand gestures via their device's webcam.
Imagine rotating, zooming, and interacting with a digital globe—just like Minority Report—right from your browser, securely and without needing a mouse or touchscreen.
- Gesture-Driven UX: Rotate, zoom, and scale a 3D globe purely through hand movements.
- Real-Time Hand Tracking: Utilizes Google's MediaPipe for ultra-low latency (<100ms) hand landmark detection.
- Privacy-First (Browser Native): All computer vision processing happens directly in the browser via WebAssembly. Neither video feeds nor hand data ever leave your device.
- Immersive 3D Rendering: High-performance React Three Fiber environment featuring procedural starfields, realistic lighting, and smooth animations.
- Visual Feedback: Live camera overlay with a landmark skeleton UI and a dynamic gesture recognition badge to guide interactions.
This project modernizes the 3D-web stack by merging advanced computer vision with declarative Three.js integration.
- Core Framework: Next.js 14 (App Router), TypeScript
- 3D Engine: React Three Fiber (`@react-three/fiber`), `@react-three/drei`
- Computer Vision: Google MediaPipe Hands (`@mediapipe/hands`, `@mediapipe/camera_utils`)
- State Management: Zustand (for reactive, decoupled gesture state handling)
- Styling: Tailwind CSS

- Camera Feed: The user's webcam is captured securely, entirely on the client side (`<CameraFeed />`).
- Landmark Detection: MediaPipe extracts 21 3D spatial points (landmarks) per hand.
- Gesture Classification: Custom math logic (`GestureClassifier.ts`) calculates finger curl, pinch distance, and palm movement to determine the active gesture.
- State Update: The resulting gesture event is stored in a lightning-fast Zustand `GestureStore`.
- 3D Transformation: The `GlobeController` translates active gestures into actual 3D math (quaternions for rotation, field-of-view adjustments for zoom) and seamlessly updates `<GlobeModel />` via R3F's `useFrame` loop.
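The gesture-classification step can be sketched in plain TypeScript. The `Landmark` type, the `isPinch` helper, and the `PINCH_THRESHOLD` value below are illustrative assumptions, not the project's actual identifiers; only the landmark indices (4 = thumb tip, 8 = index fingertip) follow the real MediaPipe Hands convention:

```typescript
// Sketch of pinch detection from MediaPipe hand landmarks.
// Landmark indices follow MediaPipe Hands: 4 = thumb tip, 8 = index fingertip.
interface Landmark {
  x: number; // normalized [0, 1] image coordinates
  y: number;
  z: number; // relative depth
}

// Assumed tuning value in normalized image units.
const PINCH_THRESHOLD = 0.05;

function distance(a: Landmark, b: Landmark): number {
  return Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);
}

function isPinch(landmarks: Landmark[]): boolean {
  // A pinch is detected when the thumb tip and index fingertip are close.
  return distance(landmarks[4], landmarks[8]) < PINCH_THRESHOLD;
}
```

The same pattern extends to finger curl (angles between joint landmarks) and palm movement (frame-to-frame wrist displacement).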
| Hand Gesture | Action on Globe |
|---|---|
| Open Palm + Move | Rotate / Pan the globe |
| Pinch (1 hand) | Zoom In |
| Spread (1 hand) | Zoom Out |
| Two-Hand Pinch & Spread | Scale the globe up / down |
| Fist (Closed hand) | Stop / Lock position |
| Index Point Up | Reset globe to default orientation |
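The table above can be sketched as a simple mapping from recognized gestures to controller actions. All names here (`Gesture`, `GlobeAction`, `gestureToAction`) are illustrative, not the project's actual identifiers:

```typescript
// Illustrative gesture-to-action mapping mirroring the table above.
type Gesture =
  | "openPalmMove"
  | "pinch"
  | "spread"
  | "twoHandPinchSpread"
  | "fist"
  | "indexPointUp";

type GlobeAction = "rotate" | "zoomIn" | "zoomOut" | "scale" | "lock" | "reset";

function gestureToAction(gesture: Gesture): GlobeAction {
  switch (gesture) {
    case "openPalmMove":      return "rotate";
    case "pinch":             return "zoomIn";
    case "spread":            return "zoomOut";
    case "twoHandPinchSpread": return "scale";
    case "fist":              return "lock";
    case "indexPointUp":      return "reset";
  }
}
```

Keeping this mapping in one exhaustive switch makes it easy to rebind gestures later without touching the 3D code.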
```
├── app/
│   └── page.tsx     # Main UI Layout wrapping 3D context & UI overlays
├── components/
│   ├── Camera/      # MediaPipe WebCam initialization & Skeleton Overlay
│   ├── Gesture/     # Hand logic routing, Store configuration, UI Labels
│   ├── Globe/       # R3F Canvas, Scene Lighting, Globe Meshes & Controllers
│   └── UI/          # Settings panels, Gesture Guides
├── hooks/           # Custom hooks bridging MediaPipe and Three.js logic
└── lib/             # Math utilities for gesture calculations
```
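The rotation math mentioned in the pipeline (quaternions in the globe controller) presumably lives in `lib/`. A minimal sketch of the underlying math, using plain TypeScript in place of three.js's `Quaternion` class (function names are illustrative):

```typescript
// Minimal quaternion helpers standing in for three.js's
// Quaternion.setFromAxisAngle and Quaternion.multiply.
type Quat = { x: number; y: number; z: number; w: number };

// Build a rotation quaternion from a normalized axis and an angle in radians.
function quatFromAxisAngle(axis: [number, number, number], angle: number): Quat {
  const h = angle / 2;
  const s = Math.sin(h);
  return { x: axis[0] * s, y: axis[1] * s, z: axis[2] * s, w: Math.cos(h) };
}

// Compose two rotations (Hamilton product a * b): apply b, then a.
function quatMultiply(a: Quat, b: Quat): Quat {
  return {
    w: a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z,
    x: a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,
    y: a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x,
    z: a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w,
  };
}
```

Composing small per-frame quaternions like this avoids the gimbal-lock problems that per-axis Euler rotation would introduce.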
- Node.js 18+
- A modern web browser (Canvas, WebGL2, and WebAssembly support required — Chrome 90+, Edge 90+, Firefox 95+)
- A working webcam
- Clone the repository:

  ```bash
  git clone https://github.com/yourusername/globe-control.git
  cd globe-control
  ```

- Install dependencies:

  ```bash
  pnpm install # or npm install / yarn install
  ```

- Run the development server:

  ```bash
  pnpm dev
  ```

- Navigate to `http://localhost:3000`.

- Allow camera access when prompted by the browser to begin testing gesture interactions.
Contributions, issues, and feature requests are welcome!
Feel free to check out the underlying vision in the PRD.md file to see the project's roadmap and upcoming milestones.