ChittiAI/openvlapp

OpenVLA: An Open-Source Vision-Language-Action Model

    ┌──────────────────────────┐
    │      OpenVLA Model       │
    │ (vision-language-action) │
    └───────────┬──────────────┘
                │
                │  Tokenized / Decoded Output
                ▼
    ┌──────────────────────────┐
    │   OpenVLA Adapter        │
    │  • Extract START(x,y)    │
    │  • Extract GOAL(x,y)     │
    │  • Extract OBST(...)     │
    └───────────┬──────────────┘
                │
                │  Planning Cues
                ▼
    ┌──────────────────────────┐
    │   Physics-Informed NN    │
    │  (PINN solving Eikonal)  │
    │                          │
    │   PDE: |∇ϕ(x,y)| = 1     │
    │   BC:  ϕ(goal) = 0       │
    │   Obstacles → Penalty    │
    └───────────┬──────────────┘
                │
                │  Learned Potential Field ϕ(x,y)
                ▼
    ┌──────────────────────────┐
    │   Path Extraction        │
    │  • Follow −∇ϕ downhill   │
    │  • From START to GOAL    │
    │  • Avoid obstacles       │
    └───────────┬──────────────┘
                │
                │  Planned Trajectory
                ▼
    ┌──────────────────────────┐
    │   Executable Path        │
    │  (to robot / controller) │
    └──────────────────────────┘
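The two lower stages of the diagram — solving the Eikonal PDE |∇ϕ| = 1 with ϕ(goal) = 0 for a potential field, then extracting a path by following −∇ϕ downhill — can be sketched on a discrete grid. The repository's planner learns ϕ with a PINN; the snippet below instead swaps in a classical Godunov fast-sweeping solve of the same PDE as a stand-in, which is handy for sanity-checking the path-extraction step. All names here (`solve_eikonal`, `extract_path`) are illustrative and not taken from the repository's code.

```python
import numpy as np

def solve_eikonal(obstacles, goal, n_sweeps=8, h=1.0):
    """Godunov fast-sweeping solve of |grad(phi)| = 1 with phi(goal) = 0.
    obstacles: 2D bool array, True where a cell is blocked (phi stays large)."""
    H, W = obstacles.shape
    INF = 1e6
    phi = np.full((H, W), INF)
    phi[goal] = 0.0
    # Alternate sweep directions so information propagates along all diagonals.
    sweeps = [(range(H), range(W)),
              (range(H - 1, -1, -1), range(W)),
              (range(H), range(W - 1, -1, -1)),
              (range(H - 1, -1, -1), range(W - 1, -1, -1))]
    for _ in range(n_sweeps):
        for rows, cols in sweeps:
            for i in rows:
                for j in cols:
                    if obstacles[i, j] or (i, j) == goal:
                        continue
                    # Smallest upwind neighbor in each axis direction.
                    a = min(phi[i - 1, j] if i > 0 else INF,
                            phi[i + 1, j] if i < H - 1 else INF)
                    b = min(phi[i, j - 1] if j > 0 else INF,
                            phi[i, j + 1] if j < W - 1 else INF)
                    if abs(a - b) >= h:
                        cand = min(a, b) + h       # 1D update
                    else:                           # 2D quadratic update
                        cand = (a + b + np.sqrt(2 * h * h - (a - b) ** 2)) / 2
                    phi[i, j] = min(phi[i, j], cand)
    return phi

def extract_path(phi, start, max_steps=10_000):
    """Greedy steepest descent on phi from start; stops at the goal (phi = 0)."""
    H, W = phi.shape
    path, cur = [start], start
    for _ in range(max_steps):
        i, j = cur
        nbrs = [(i + di, j + dj)
                for di in (-1, 0, 1) for dj in (-1, 0, 1)
                if (di, dj) != (0, 0) and 0 <= i + di < H and 0 <= j + dj < W]
        nxt = min(nbrs, key=lambda p: phi[p])
        if phi[nxt] >= phi[cur]:   # no lower neighbor: reached the minimum
            break
        path.append(nxt)
        cur = nxt
    return path
```

Because every non-goal cell's Godunov value is strictly larger than the upwind neighbor it was computed from, the learned (or solved) field has no spurious local minima, so greedy descent is guaranteed to terminate at the goal; obstacle cells keep a large ϕ and are never selected.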

About

OpenVLA: An open-source vision-language-action model for robotic manipulation.

Languages

  • Python 99.9%
  • Makefile 0.1%