Running an Algorithm
This guide explains how to run algorithms in CREW Wildfire. You'll learn how to execute algorithms, configure parameters, and find your results.
Prerequisites
Before running any algorithm, ensure you have:
- Completed the CREW Wildfire installation
- Started the Docker container
- Navigated to the CREW Wildfire directory and activated the crew conda environment (`conda activate crew`)
Basic Command Structure
To run an algorithm, use the following command structure:
python algorithms/ALGORITHM_NAME/__main__.py envs.level=LEVEL_NAME envs.seed=SEED envs.max_steps=STEPS
Where:
- ALGORITHM_NAME: The algorithm implementation to use (e.g., CAMON, COELA, HMAS_2)
- LEVEL_NAME: The preset level to run (see Preset Levels)
- SEED: Random seed for reproducibility
- STEPS: Maximum number of steps for the simulation
You may also add no_graphics=True to skip rendering and save computation. Note, however, that this disables POV and minimap observations, leaving only encoded minimaps and ground-truth data.
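As a concrete sketch of how these pieces assemble (the level name comes from the examples below, but the seed and step count here are illustrative values, not recommendations):

```shell
# Illustrative invocation of CAMON on the small scouting level; adjust values to your experiment.
ALG=CAMON
LEVEL=Scout_Fire_small
CMD="python algorithms/$ALG/__main__.py envs.level=$LEVEL envs.seed=0 envs.max_steps=500 no_graphics=True"
echo "$CMD"  # prints the assembled command; run it directly once the values look right
```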
Available Algorithms
CREW Wildfire provides several state-of-the-art multi-agent algorithms:
- CAMON: Cooperative Agents for Multi-Object Navigation
- COELA: Cooperative Embodied Learning Agents with LLM-based Communication
- Embodied: Embodied LLM Agents Learn to Cooperate in Organized Teams
- HMAS_2: Hybrid Multi-agent System v2
Choosing a Level
CREW Wildfire offers various preset levels for different scenarios:
- Tree cutting missions
- Fire scouting and detection
- Firefighter transport operations
- Civilian rescue missions
- Fire suppression tasks
- Full environment scenarios
For a complete list of available levels and their details, see Preset Levels. To create custom levels, refer to Creating Custom Levels.
Running Examples
1. Run CAMON on the Cut_Trees_Sparse_small level:
   `python algorithms/CAMON/__main__.py envs.level=Cut_Trees_Sparse_small envs.seed=SEED envs.max_steps=STEPS`
2. Run COELA on the Scout_Fire_small level:
   `python algorithms/COELA/__main__.py envs.level=Scout_Fire_small envs.seed=SEED envs.max_steps=STEPS`
3. Run Embodied on the Suppress_Fire_Extinguish level:
   `python algorithms/Embodied/__main__.py envs.level=Suppress_Fire_Extinguish envs.seed=SEED envs.max_steps=STEPS`
4. Run HMAS-2 on the full environment (substitute the full-environment level name from Preset Levels):
   `python algorithms/HMAS_2/__main__.py envs.level=LEVEL_NAME envs.seed=SEED envs.max_steps=STEPS`
Running Asynchronously
For long-running or parallel simulations, you can run the algorithm in the background by creating a shell script:
1. Create a file named run_algorithm.sh:

```shell
#!/bin/bash
python algorithms/ALGORITHM_NAME/__main__.py envs.level=LEVEL_NAME envs.seed=SEED envs.max_steps=STEPS &
python algorithms/ALGORITHM_NAME/__main__.py envs.level=LEVEL_NAME envs.seed=SEED envs.max_steps=STEPS &
python algorithms/ALGORITHM_NAME/__main__.py envs.level=LEVEL_NAME envs.seed=SEED envs.max_steps=STEPS &
...
wait
```

2. Make it executable: `chmod +x run_algorithm.sh`
3. Run it: `./run_algorithm.sh`
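If you want each backgrounded run to use a different seed rather than repeating one command, a loop variant of the script can build the commands. This is only a sketch under the command structure above; the algorithm, level, and step count are example values:

```shell
#!/bin/bash
# Build the launch command for one run: $1=algorithm, $2=level, $3=seed, $4=max steps.
make_cmd() {
  echo "python algorithms/$1/__main__.py envs.level=$2 envs.seed=$3 envs.max_steps=$4"
}

# One backgrounded run per seed, then wait for all of them to finish.
for seed in 0 1 2; do
  $(make_cmd CAMON Scout_Fire_small "$seed" 500) &
done
wait
```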
Finding Results
As the simulation runs, results such as POV + Minimap observations, chats, and scores are stored in the results/logs directory with the following structure:
└── logs/ALGORITHM_NAME/LEVEL_NAME...
├── Agent_1/
│ ├── Minimap/ # Minimap Screen Captures
│ ├── POV/ # POV Screen Captures
│ └── chats.txt # Chat history
│
├── Agent_2/
│ └── ...
├── Agent_3/
│ └── ...
│
├── Server_Accumulative/ # Team Accumulative Minimap
├── Server_Map/ # Ground Truth Map
└── data.csv # Score + API Calls, Input + Output tokens
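To inspect data.csv programmatically, a minimal sketch using Python's csv module (the exact column names for score, API calls, and token counts are not specified here, so check the header of your own file first):

```python
import csv

def summarize_run(csv_path):
    """Read a run's data.csv and return its rows as dictionaries.

    Sketch only: the actual column names depend on the run, so
    inspect the file's header before relying on specific keys.
    """
    with open(csv_path, newline="") as f:
        return list(csv.DictReader(f))

# Example (hypothetical path):
# rows = summarize_run("results/logs/CAMON/LEVEL_NAME/data.csv")
```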
When the run completes, a video is rendered from all of the Minimap and POV captures.