┌──────────────────────────┐
│      OpenVLA Model       │
│ (vision-language-action) │
└───────────┬──────────────┘
            │
            │ Tokenized / Decoded Output
            ▼
┌──────────────────────────┐
│     OpenVLA Adapter      │
│  • Extract START(x,y)    │
│  • Extract GOAL(x,y)     │
│  • Extract OBST(...)     │
└───────────┬──────────────┘
            │
            │ Planning Cues
            ▼
┌──────────────────────────┐
│   Physics-Informed NN    │
│  (PINN solving Eikonal)  │
│                          │
│  PDE: |∇ϕ(x,y)| = 1      │
│  BC: ϕ(goal) = 0         │
│  Obstacles → Penalty     │
└───────────┬──────────────┘
            │
            │ Learned Potential Field ϕ(x,y)
            ▼
┌──────────────────────────┐
│     Path Extraction      │
│  • Follow −∇ϕ downhill   │
│  • From START to GOAL    │
│  • Avoid obstacles       │
└───────────┬──────────────┘
            │
            │ Planned Trajectory
            ▼
┌──────────────────────────┐
│     Executable Path      │
│ (to robot / controller)  │
└──────────────────────────┘
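The adapter stage pulls START/GOAL/OBST cues out of the decoded model output. A minimal sketch of that parsing, assuming the cues appear as tagged tuples like `START(0.1,0.2)` (the tag names come from the diagram; the exact tuple layout and function name are illustrative, not the repo's actual format):

```python
import re

def parse_planning_cues(decoded: str):
    """Extract START/GOAL/OBST planning cues from decoded OpenVLA text.

    Assumes cues are emitted as tagged tuples, e.g.
    "START(0.1,0.2) GOAL(0.9,0.8) OBST(0.5,0.5,0.1)".
    """
    def floats(tag):
        # Find every "TAG(...)" occurrence and split its contents into floats.
        return [tuple(map(float, m.split(",")))
                for m in re.findall(rf"{tag}\(([^)]*)\)", decoded)]

    start = floats("START")[0]
    goal = floats("GOAL")[0]
    obstacles = floats("OBST")  # here: (cx, cy, radius) triples
    return start, goal, obstacles
```

The adapter's job is only to turn free-form model output into the numeric planning cues the PINN stage consumes.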
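The PINN stage trains ϕ against three terms shown in the diagram: the Eikonal residual |∇ϕ| = 1, the boundary condition ϕ(goal) = 0, and an obstacle penalty. A sketch of that combined loss on a grid, using finite differences in place of autograd; the weights, penalty shape, and grid discretization are assumptions (an actual PINN would evaluate the same terms at sampled collocation points through automatic differentiation):

```python
import numpy as np

def eikonal_loss(phi, goal, obstacle_mask, h=1.0, w_bc=10.0, w_obs=10.0):
    """Physics-informed loss for the Eikonal PDE on a 2-D grid.

    phi           -- candidate potential field, shape (H, W)
    goal          -- (row, col) index where phi must vanish
    obstacle_mask -- boolean grid marking obstacle cells
    """
    gy, gx = np.gradient(phi, h)                      # finite-difference ∇ϕ
    pde = np.mean((np.hypot(gy, gx) - 1.0) ** 2)      # enforce |∇ϕ| = 1
    bc = phi[goal] ** 2                               # enforce ϕ(goal) = 0
    # Illustrative penalty: push phi up to at least 1 inside obstacles,
    # so descent on phi steers around them.
    obs = (np.mean(np.maximum(0.0, 1.0 - phi[obstacle_mask]))
           if obstacle_mask.any() else 0.0)
    return pde + w_bc * bc + w_obs * obs
```

Minimizing this loss over the field (or over network weights producing it) drives ϕ toward a signed-distance-like potential centered on the goal.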
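The final planning step follows −∇ϕ downhill from START until it reaches GOAL. A minimal sketch on a gridded potential, assuming nearest-cell gradient lookup (the step size, tolerance, and function name are illustrative; a production extractor would likely interpolate sub-cell gradients):

```python
import numpy as np

def extract_path(phi, start, goal, step=1.0, tol=1.5, max_iters=10_000):
    """Trace a path by descending -∇ϕ from start toward goal.

    phi is a 2-D potential grid with phi(goal) = 0 and values growing
    with distance; start/goal are (row, col) grid coordinates.
    """
    gy, gx = np.gradient(phi)               # ∂ϕ/∂row, ∂ϕ/∂col
    pos = np.array(start, dtype=float)
    path = [pos.copy()]
    for _ in range(max_iters):
        # Look up the gradient at the nearest in-bounds grid cell.
        r, c = np.clip(pos.astype(int), 0, np.array(phi.shape) - 1)
        grad = np.array([gy[r, c], gx[r, c]])
        norm = np.linalg.norm(grad)
        if norm < 1e-9:                     # flat spot: stop
            break
        pos = pos - step * grad / norm      # unit step downhill
        path.append(pos.copy())
        if np.linalg.norm(pos - goal) < tol:
            break                           # close enough to the goal
    return np.array(path)
```

Because obstacles were penalized into high-ϕ regions during training, the same downhill rule that reaches the goal also steers the trajectory around them.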
ChittiAI/openvlapp