# RL Algorithms with FNO (Fourier Neural Operator) Encoder

This repository contains implementations of several reinforcement learning algorithms enhanced with FNO encoders, across different environments.
## Installation

Common dependencies:

```bash
pip install torch numpy wandb tqdm matplotlib
```

For Carla Lane PPO:

```bash
pip install gym d4rl
```

For Rainbow (Atari):

```bash
pip install atari-py opencv-python
```

For State DMC SAC:

```bash
pip install hydra-core dmc2gym imageio termcolor tensorboard
```

For Offline RBVE:

```bash
pip install "gym[atari]"
```

## Carla Lane PPO

Three variants of the PPO implementation for the Carla lane-keeping task:
### PPO-pytorch

```bash
cd Carla-Lane-PPO/PPO-pytorch
python main.py
```

### PPO-pytorch-FNO

```bash
cd Carla-Lane-PPO/PPO-pytorch-FNO
python main.py <seed>
# Example: python main.py 2024
```

Or use the provided script:

```bash
./run.sh
```

### PPO-pytorch-FNO-proj

```bash
cd Carla-Lane-PPO/PPO-pytorch-FNO-proj
python main.py <seed>
# Example: python main.py 2024
```

Or use the provided script:

```bash
./run.sh
```

Note:
- Requires the `carla-lane-v0` environment from d4rl
- Logs to wandb project 'CARLA_PPO'
- Uses 4-frame stacking with grayscale conversion
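The 4-frame stacking with grayscale conversion can be sketched as follows. This is a minimal NumPy illustration of the idea, not the project's actual preprocessing code; the class and function names are hypothetical:

```python
from collections import deque

import numpy as np


def to_grayscale(rgb_frame: np.ndarray) -> np.ndarray:
    """Convert an (H, W, 3) RGB frame to (H, W) grayscale via luminance weights."""
    return (rgb_frame @ np.array([0.299, 0.587, 0.114])).astype(np.float32)


class FrameStacker:
    """Keep the last `k` grayscale frames as a (k, H, W) observation."""

    def __init__(self, k: int = 4):
        self.frames = deque(maxlen=k)

    def reset(self, frame: np.ndarray) -> np.ndarray:
        gray = to_grayscale(frame)
        # Fill the whole stack with the first frame of the episode.
        for _ in range(self.frames.maxlen):
            self.frames.append(gray)
        return np.stack(self.frames)

    def step(self, frame: np.ndarray) -> np.ndarray:
        # Drop the oldest frame, append the newest, return the (k, H, W) stack.
        self.frames.append(to_grayscale(frame))
        return np.stack(self.frames)
```

Stacking consecutive frames gives the policy access to motion information that a single image cannot provide.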
## Rainbow

Rainbow DQN implementation with an FNO encoder for Atari games.

```bash
cd Rainbow
python rainbow.py --game <game_name> --architecture fno-model [options]
```

Key arguments:
- `--game`: Atari game name (e.g., 'space_invaders', 'breakout')
- `--architecture`: choose from 'canonical', 'data-efficient', or 'fno-model'
- `--seed`: random seed (default: 2025)
- `--T-max`: training steps (default: 50M)
- `--gpu-id`: GPU ID to use

Example:

```bash
python rainbow.py --game space_invaders --architecture fno-model --seed 2025
```

Note:
- Logs to wandb automatically
- Saves models and memory periodically
## State DMC SAC

Soft Actor-Critic (SAC) with an FNO encoder for DeepMind Control Suite tasks.

```bash
cd State-DMC-SAC
python train.py env=<env_name> [options]
```

Available environments:
- `cartpole_swingup`
- `cheetah_run`
- `ball_in_cup_catch`
- and other DMC tasks

Example:

```bash
python train.py env=cartpole_swingup seed=1
```

Or use the provided script:

```bash
./run.sh
```

Configuration:
- Edit `config/train.yaml` for training parameters
- Edit `config/agent/sac.yaml` for agent-specific parameters
- Logs to wandb project 'state_based_fno'
## Offline RBVE

Offline RL with an FNO encoder for Atari games, using pre-collected datasets.

```bash
cd Offline-RBVE
python main_lr.py
```

Configuration:

Edit the `ArgumentStorage` dictionary in `main_lr.py`:
- `env_name`: Atari environment (e.g., "Boxing-v0")
- `data_path`: path to the pre-collected dataset
- `learning_rate`: learning rate (default: 1e-4)
- `num_iterations`: number of training iterations

Note:
- Requires a pre-processed offline Atari dataset
- Runs multiple learning-rate experiments in parallel using threading
- Saves models and metrics automatically
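The threaded learning-rate sweep can be sketched like this. The `run_experiment` function is a hypothetical stand-in for the actual training routine in `main_lr.py`:

```python
import threading


def run_experiment(learning_rate: float, results: dict) -> None:
    """Placeholder for one training run; the real version trains an agent."""
    results[learning_rate] = f"trained with lr={learning_rate}"


def sweep(learning_rates):
    """Launch one thread per learning rate and wait for all runs to finish."""
    results: dict = {}
    threads = [
        threading.Thread(target=run_experiment, args=(lr, results))
        for lr in learning_rates
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

Each thread writes to a distinct key, so no locking is needed here; note that for CPU-bound training, threads share the GIL, and the parallelism mainly helps when runs are GPU- or I/O-bound.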
## ViZDoom Agents

Agent implementations for ViZDoom environments.

Available agents:
- `a2c_agent.py` - Advantage Actor-Critic
- `dqn_agent.py` - Deep Q-Network
- `ddqn_agent.py` - Double DQN
- `ppo_agent.py` - Proximal Policy Optimization
- `reinforce_agent.py` - REINFORCE algorithm

Usage:

Import the desired agent from `algos.agents` and integrate it with your training loop:

```python
from algos.agents.a2c_agent import A2CAgent
from algos.preprocessing.stack_frame import StackFrame

# Initialize agent
agent = A2CAgent(input_shape, action_size, seed, device,
                 gamma, alpha, beta, update_every, actor_m, critic_m)

# Training loop
for episode in episodes:
    state = env.reset()
    # ... training logic
```

## FNO Encoder

Each project includes an `fno.py` file with Fourier Neural Operator implementations:
- `FNO1d` - for 1D state spaces (State-DMC-SAC)
- `FNO2d` - for 2D image observations (Carla, Rainbow, Offline-RBVE)

The FNO encoder provides efficient spectral feature extraction for RL tasks.
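As a rough illustration of the idea behind a 1D FNO layer (this is not the repository's `fno.py`, and the function name and shapes are assumptions), a spectral convolution transforms the signal to Fourier space, mixes channels on only the lowest `modes` frequencies, zeroes the rest, and transforms back:

```python
import numpy as np


def spectral_conv1d(x: np.ndarray, weights: np.ndarray, modes: int) -> np.ndarray:
    """Minimal 1D spectral convolution, the core operation of an FNO layer.

    x:       (in_channels, n) real-valued signal
    weights: (modes, out_channels, in_channels) complex channel-mixing weights
    """
    x_ft = np.fft.rfft(x, axis=-1)                       # (in_channels, n//2 + 1)
    out_ft = np.zeros((weights.shape[1], x_ft.shape[-1]), dtype=complex)
    for k in range(modes):
        out_ft[:, k] = weights[k] @ x_ft[:, k]           # mix channels per mode
    # Frequencies above `modes` stay zero: a learned low-pass filter.
    return np.fft.irfft(out_ft, n=x.shape[-1], axis=-1)  # (out_channels, n)
```

Truncating to the lowest modes is what makes the operator cheap and resolution-agnostic: the number of parameters depends on `modes`, not on the input length.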
## Logging

All projects support Weights & Biases (wandb) logging. Configure your wandb entity in the respective training scripts.
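The entity is typically set where the run is initialized. A minimal sketch (the project name comes from the Carla section above; `your-entity` is a placeholder to replace with your own):

```python
import wandb

# mode="disabled" lets you dry-run without a wandb account or network access.
wandb.init(project="CARLA_PPO", entity="your-entity",
           config={"seed": 2024}, mode="disabled")
```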
## Citation

If you use this code, please cite the relevant papers for the algorithms and the FNO encoder.