Autonomous navigation in unknown environments is fundamentally limited by uncertainty arising from sensing, state estimation, and environment dynamics.
This work presents a modular navigation framework that explicitly models uncertainty, risk, and irreversibility, enabling dynamic replanning under partial observability.
The framework integrates classical planning algorithms with learned heuristics while preserving formal guarantees, and extends decision-making through risk-aware planning, safety constraints, and resource-aware strategies.
The system is evaluated through controlled experiments, parameter sweeps, and ablation studies, with emphasis on reproducibility and quantitative analysis.
Autonomous robots operating in real-world environments must make decisions under uncertainty.
Key challenges include:
- incomplete or evolving maps
- noisy sensing and state estimation
- dynamic obstacles
- safety-critical constraints
Traditional planning methods assume deterministic and fully known environments, which limits their applicability.
This work investigates how navigation systems can explicitly incorporate uncertainty, risk, and safety into planning and decision-making.
This work is structured around the following research questions:
- RQ1: We investigate whether neural approximations of heuristic functions can reduce computational cost while preserving the guarantees of classical planners such as A*.
- RQ2: We explore belief-space representations and risk-aware cost formulations for planning under partial observability.
- RQ3: We study risk-weighted planning and risk budget constraints, analyzing trade-offs between optimality and safety.
- RQ4: We introduce returnability constraints and analyze feasibility thresholds to prevent entry into unsafe or unrecoverable states.
- RQ5: We integrate anomaly detection mechanisms, including innovation-based intrusion detection and integrity monitoring.
- RQ6: We investigate energy-aware and connectivity-aware planning, along with adaptive safe-mode mechanisms.
- RQ7: We explore decentralized coordination strategies and risk allocation across agents.
- RQ8: We examine extensions involving language-based interaction and trust-aware decision-making.
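For RQ1, the guarantee-preservation idea can be made concrete by clamping a learned heuristic with a known admissible bound: since `min(h_learned, h_admissible)` never exceeds the true cost-to-go, A* keeps its optimality guarantee even when the learned model overestimates. A minimal sketch on a grid (the "learned" heuristic here is any callable stand-in, not the repository's actual model):

```python
import heapq

def astar(grid, start, goal, h):
    """A* on a 4-connected grid; grid[r][c] == 1 marks an obstacle.
    Returns (path cost, number of node expansions), or (None, expansions)."""
    rows, cols = len(grid), len(grid[0])
    g = {start: 0}
    heap = [(h(start), 0, start)]
    expansions = 0
    while heap:
        _, gc, node = heapq.heappop(heap)
        if gc > g[node]:
            continue  # stale queue entry
        expansions += 1
        if node == goal:
            return gc, expansions
        r, c = node
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nb[0] < rows and 0 <= nb[1] < cols and grid[nb[0]][nb[1]] == 0:
                ng = gc + 1
                if ng < g.get(nb, float("inf")):
                    g[nb] = ng
                    heapq.heappush(heap, (ng + h(nb), ng, nb))
    return None, expansions

def manhattan(goal):
    return lambda n: abs(n[0] - goal[0]) + abs(n[1] - goal[1])

def clamp_admissible(learned_h, admissible_h):
    # min() with an admissible heuristic is itself admissible
    # (min(h_l, h_a) <= h_a <= h*), so A* optimality is preserved.
    return lambda n: min(learned_h(n), admissible_h(n))
```

In the repository the learned heuristic would come from a trained regressor; any callable works in this sketch, at the price that clamping caps how much the learned estimate can speed up search.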
We consider a robot operating in an unknown environment with:
- partial observability
- uncertain state estimation
- dynamic environmental changes
The objective is to compute navigation strategies that optimize:
- path efficiency
- safety
- robustness
- resource usage
while explicitly accounting for uncertainty and risk.
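One simple way to make this trade-off concrete is a risk-weighted edge cost, cost = length + λ·risk, where λ controls how strongly risk is penalized relative to path efficiency. A minimal sketch using Dijkstra's algorithm (the graph encoding and parameter names are illustrative assumptions, not the framework's API):

```python
import heapq

def risk_aware_shortest(graph, start, goal, lam):
    """Dijkstra where each edge carries (length, risk) and the combined
    edge cost is length + lam * risk. Assumes goal is reachable."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, (length, risk) in graph.get(u, {}).items():
            nd = d + length + lam * risk
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    # Reconstruct the chosen path.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path))
```

With λ = 0 the planner takes the shortest path regardless of risk; raising λ makes it detour through safer edges, which is exactly the optimality-versus-safety trade-off studied in RQ3.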
The framework combines:
- Classical planning algorithms (A*, graph search)
- Probabilistic reasoning (belief representation, uncertainty)
- Risk-aware cost modeling
- Learning-based heuristic estimation
- Constraint-aware decision-making
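The probabilistic-reasoning component can be illustrated with the standard log-odds Bayes update for an occupancy cell, where repeated noisy "occupied" measurements sharpen the belief. This is a generic textbook sketch, not the framework's actual belief model:

```python
import math

def log_odds(p):
    return math.log(p / (1.0 - p))

def to_prob(l):
    return 1.0 / (1.0 + math.exp(-l))

def update_cell(l, p_hit):
    """Bayes update in log-odds form: add the measurement's log-odds
    to the cell's current belief (prior log-odds assumed 0, i.e. p=0.5)."""
    return l + log_odds(p_hit)

# Start from an uninformed prior and apply three sensor hits that
# each report "occupied with probability 0.7".
l = 0.0
for _ in range(3):
    l = update_cell(l, 0.7)
```

Working in log-odds keeps the update a cheap addition per cell, which is why occupancy-grid implementations commonly use it.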
The system is structured into modular components, each corresponding to a research question.
- Code: `contributions/01_learned_astar/`
  Run: `python contributions/01_learned_astar/experiments/eval_astar_learned.py`
  Addresses: RQ1
- Code: `contributions/03_belief_risk_planning/`
  Addresses: RQ2, RQ3
- Code: `contributions/04_irreversibility_returnability/`
  Addresses: RQ4
- Code: `contributions/05_safe_mode_navigation/`
  Addresses: RQ3, RQ6
- Code: `contributions/06_energy_connectivity/`
  Addresses: RQ6
- Code: `contributions/08_security_ids/`
  Addresses: RQ5
- Code: `contributions/09_multi_robot/`
  Addresses: RQ7
- Code: `contributions/10_human_language_ethics/`
  Addresses: RQ8
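As a sketch of the returnability idea behind `contributions/04_irreversibility_returnability/` (function names and the graph encoding here are illustrative assumptions, not the module's API): before committing to a move, verify that a path back to a safe home state still fits within the remaining budget, and refuse moves into states from which no return exists.

```python
from collections import deque

def return_cost(graph, src, home):
    """BFS distance from src back to home; None if home is unreachable."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == home:
            return dist[u]
        for v in graph.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return None

def move_is_returnable(graph, candidate, home, remaining_budget, move_cost=1):
    """Accept a move only if, after spending move_cost, a feasible
    return path to home still fits within the remaining budget."""
    back = return_cost(graph, candidate, home)
    return back is not None and move_cost + back <= remaining_budget
```

A state with no outgoing path home (an irreversible "trap") is rejected outright, and a reachable state is rejected once the budget no longer covers the round trip, which is the feasibility-threshold behavior described in RQ4.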
The framework follows a structured experimental approach:
- parameter sweeps
- multi-seed evaluation
- ablation studies
- comparative analysis
All experiments generate:
- CSV logs
- quantitative metrics
- visualizations
Outputs are stored under `contributions/*/results/`.
The system demonstrates:
- reduced node expansions using learned heuristics
- improved safety through risk-aware planning
- robustness under uncertainty
- detection of anomalous behavior
Detailed results are available per module.
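The anomaly-detection behavior can be illustrated with the standard innovation (NIS) test on a scalar Kalman filter: flag a measurement whose normalized innovation exceeds the 95% chi-square bound, and reject it so it cannot corrupt the state estimate. This is a generic sketch with assumed noise parameters, not the code in `contributions/08_security_ids/`:

```python
def innovation_ids(measurements, x0=0.0, p0=1.0, q=0.01, r=0.1, thresh=3.84):
    """Scalar random-walk Kalman filter with an innovation test.
    A measurement is flagged (and rejected) when nu^2 / S exceeds the
    chi-square 95% threshold for 1 degree of freedom (3.84)."""
    x, p = x0, p0
    flags = []
    for z in measurements:
        p = p + q              # predict (random-walk process model)
        s = p + r              # innovation covariance S
        nu = z - x             # innovation (measurement residual)
        if nu * nu / s > thresh:
            flags.append(True)   # anomalous: reject, keep the prediction
            continue
        flags.append(False)
        k = p / s              # Kalman gain
        x = x + k * nu         # measurement update
        p = (1.0 - k) * p
    return flags
```

Rejecting a flagged measurement, rather than fusing it, is what keeps a spoofed reading from dragging the estimate away and masking subsequent attacks.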
The results indicate that:
- uncertainty must be explicitly modeled
- risk-aware planning improves safety
- learned components enhance efficiency
- modular design enables systematic evaluation
This work presents a unified framework for navigation under uncertainty, integrating planning, learning, and safety-aware decision-making.
It provides a foundation for future research in:
- safe autonomy
- uncertainty-aware control
- multi-agent systems
supported by structured experiments, a modular implementation, and reproducible pipelines.
```
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```

Run an example experiment:

```
python contributions/01_learned_astar/experiments/eval_astar_learned.py
```

Panagiota Grosdouli
Electrical and Computer Engineering, Democritus University of Thrace

Apache License 2.0