Running an Algorithm

This guide explains how to run algorithms in CREW Wildfire: how to execute them, configure their parameters, and find your results.

Prerequisites

Before running any algorithm, ensure you have:

  1. Completed the CREW Wildfire installation
  2. Verified that the Docker container is running (see the check after this list)
  3. Navigated to the CREW Wildfire directory and activated the crew conda environment:
    cd crew-algorithms/crew_algorithms/wildfire_alg
    conda activate crew
    
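To confirm the container is up before launching a run, you can list running containers with the standard Docker CLI:

    docker ps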

Basic Command Structure

To run an algorithm, use the following command structure:

python algorithms/ALGORITHM_NAME/__main__.py envs.level=LEVEL_NAME envs.seed=SEED envs.max_steps=STEPS

Where:

  - ALGORITHM_NAME: The algorithm implementation to use (e.g., CAMON, COELA, HMAS_2)
  - LEVEL_NAME: The preset level to run (see Preset Levels)
  - SEED: Random seed for reproducibility
  - STEPS: Maximum number of steps for the simulation

You may also add no_graphics=True to skip rendering and save computation. Note that this disables POV and minimap observations, leaving only the encoded minimaps and ground-truth data.
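
For example, to run CAMON headless on the Cut_Trees_Sparse_small level (the same run as the first example below, with rendering disabled):

python algorithms/CAMON/__main__.py envs.level=Cut_Trees_Sparse_small envs.seed=483 envs.max_steps=20 no_graphics=True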

Available Algorithms

CREW Wildfire provides several state-of-the-art multi-agent algorithms:

  1. CAMON: Cooperative Agents for Multi-Object Navigation
  2. COELA: Cooperative Embodied Learning Agents with LLM-based Communication
  3. Embodied: Embodied LLM Agents Learn to Cooperate in Organized Teams
  4. HMAS_2: Hybrid Multi-agent System v2

Choosing a Level

CREW Wildfire offers various preset levels for different scenarios:

  - Tree cutting missions
  - Fire scouting and detection
  - Firefighter transport operations
  - Civilian rescue missions
  - Fire suppression tasks
  - Full environment scenarios

For a complete list of available levels and their details, see Preset Levels. To create custom levels, refer to Creating Custom Levels.

Running Examples

  1. Run CAMON on the Cut_Trees_Sparse_small level:

    python algorithms/CAMON/__main__.py envs.level=Cut_Trees_Sparse_small envs.seed=483 envs.max_steps=20
    

  2. Run COELA on the Scout_Fire_small level:

    python algorithms/COELA/__main__.py envs.level=Scout_Fire_small envs.seed=42 envs.max_steps=50
    

  3. Run Embodied on the Suppress_Fire_Extinguish level:

    python algorithms/Embodied/__main__.py envs.level=Suppress_Fire_Extinguish envs.seed=42 envs.max_steps=200
    

  4. Run HMAS-2 on the full environment:

    python algorithms/HMAS_2/__main__.py envs.level=Full_Game envs.seed=100 envs.max_steps=300
    

Running Asynchronously

For long-running or parallel simulations, you can run the algorithm in the background by creating a shell script:

  1. Create a file named run_algorithm.sh:

    #!/bin/bash
    # Each line launches one run in the background; vary the parameters per run as needed.
    python algorithms/ALGORITHM_NAME/__main__.py envs.level=LEVEL_NAME envs.seed=SEED envs.max_steps=STEPS &
    python algorithms/ALGORITHM_NAME/__main__.py envs.level=LEVEL_NAME envs.seed=SEED envs.max_steps=STEPS &
    python algorithms/ALGORITHM_NAME/__main__.py envs.level=LEVEL_NAME envs.seed=SEED envs.max_steps=STEPS &
    ...
    # Wait for all background runs to finish before exiting.
    wait
    

  2. Make it executable:

    chmod +x run_algorithm.sh
    

  3. Run it:

    ./run_algorithm.sh
    
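If you want several parallel runs that differ only in their seed, a loop variant of the same script works too. The sketch below reuses the COELA scouting example from above; the seed values are arbitrary placeholders:

    #!/bin/bash
    # Launch one background run per seed, then wait for all of them to finish.
    for SEED in 42 100 483; do
        python algorithms/COELA/__main__.py envs.level=Scout_Fire_small envs.seed=$SEED envs.max_steps=50 &
    done
    wait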

Finding Results

As the simulation runs, results such as POV and minimap observations, chat histories, and scores are stored in the results/logs directory with the following structure:

└── logs/ALGORITHM_NAME/LEVEL_NAME...
    ├── Agent_1/
    │   ├── Minimap/     # Minimap screen captures
    │   ├── POV/         # POV screen captures
    │   └── chats.txt    # Chat history
    ├── Agent_2/
    │   └── ...
    ├── Agent_3/
    │   └── ...
    ├── Server_Accumulative/  # Team accumulative minimap
    ├── Server_Map/           # Ground truth map
    └── data.csv              # Score, API calls, input/output tokens
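
For example, after the CAMON run from the examples above, you can inspect an agent's outputs from the shell. The exact run directory name depends on your level and settings, so the glob below is illustrative:

    # List Agent_1's captures and read its chat history (illustrative paths)
    ls results/logs/CAMON/Cut_Trees_Sparse_small*/Agent_1/
    cat results/logs/CAMON/Cut_Trees_Sparse_small*/Agent_1/chats.txt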

When the run completes, a video is rendered from all the Minimap and POV captures:

demo