
Following code upstream and changing strategy

Nicole Portas 1 month ago
parent
commit
2487bbb520
3 changed files with 697 additions and 130 deletions
  1. README.md (+138 −23)
  2. builder/builder.py (+540 −88)
  3. docker-compose.yml (+19 −19)

+ 138 - 23
README.md

@@ -1,34 +1,149 @@
-# ArduPilot Custom Firmware Builder (EqualMass Edition)
+# ArduPilot Custom Firmware Builder
 
-## 🛠EqualMass Customization Layer (Added Feb 2026)
+## Table of Contents
+1. [Overview](#overview)
+2. [Live Versions](#live-versions)
+3. [Running Locally Using Docker](#running-locally-using-docker)
+4. [Running Locally Without Docker on Ubuntu](#running-locally-without-docker-on-ubuntu)
+5. [Directory Structure](#directory-structure)
+6. [Acknowledgements](#acknowledgements)
 
-This repository has been customized to support a persistent, automated **Patch Management System**. These modifications allow for custom source code injection and regional data integration while maintaining a production-grade infrastructure on Debian 13.
+## Overview
+The ArduPilot Custom Firmware Builder is a web-based application designed to generate downloadable customized ArduPilot firmware, tailored to user specifications. This tool facilitates the customization and building of firmware by allowing users to select the options that best fit their needs, thus providing a streamlined interface for creating ArduPilot firmware.
 
-### 📁 Modified & New Files
-* **`docker-compose.yml`**: Updated to include the `overlay-manager` sidecar service and shared volumes for persistent code injection.
-* **`builder/builder.py`**: Modified to detect custom source files in the `/srv` directory. The engine now automatically merges these overlays into the ArduPilot source tree before the Waf compiler begins the build.
-* **`web/templates/index.html`**: The main dashboard now features a stylized "Patch Manager" bridge button in the navigation bar to allow seamless switching between the builder and the sidecar.
-* **`overlay_manager/` (New Service)**: 
-    * `main.py`: A FastAPI-based engine designed to manage (Upload/Edit/Delete) custom source files within the Docker environment.
-    * `templates/index.html`: A custom UI for managing patches, matching the ArduPilot ecosystem aesthetic.
-    * `static/logo.png`: Localized branding assets for the sidecar interface.
+## Live Versions
+- **Stable Version:** The stable version of the ArduPilot Custom Firmware Builder can be accessed at [custom.ardupilot.org](https://custom.ardupilot.org).
+- **Beta Version:** We maintain a beta version available at [custom-beta.ardupilot.org](https://custom-beta.ardupilot.org) where newly developed features are tested before they are rolled out in the stable version.
 
-### 🌐 Network & Routing Architecture
-The system is unified behind an **Nginx Reverse Proxy** to provide a professional, single-domain experience:
-* **Main Builder UI**: `https://ardupilot.equalmass.com/` (Proxied to port `11080`).
-* **Patch Manager**: `https://ardupilot.equalmass.com/patch-manager/` (Proxied to port `11081`).
-* **SSL Termination**: Managed via Nginx for the `equalmass.com` domain.
+## Running Locally Using Docker
+To minimize setup overhead and enhance ease of use, running this application in Docker containers is highly recommended. Follow the instructions below to run the application locally using Docker:
 
+1. **Install Docker and Docker Compose:** Make sure Docker and Docker Compose are installed on your machine. For installation instructions, visit the [Docker website](https://docs.docker.com/engine/install).
+   
+2. **Clone the Repository:**
+   ```bash
+   git clone https://github.com/ardupilot/CustomBuild.git
+   cd CustomBuild
+   ```
 
----
+3. **Configure Environment Variables:**
+   Copy `./examples/.env.sample` to `.env` in the root of the cloned repository and configure the necessary parameters within it.
 
-## ArduPilot Custom Firmware Builder (Original Documentation)
+   ```bash
+   cp ./examples/.env.sample .env
+   ```
 
-This is the web application for ArduPilot's custom firmware builder. It allows users to build custom ArduPilot firmware by selecting the features they want to include.
+4. **Build and Start the Docker Containers:**
+   - To build and start the application, run:
+     ```bash
+     sudo docker compose up --build
+     ```
+   - If you want to run the application with the last built image, simply execute:
+     ```bash
+     sudo docker compose up
+     ```
 
-### 🛠️ Development
+   Use the `-d` flag to run the application in daemon mode:
+   ```bash
+   sudo docker compose up -d
+   ```
 
-You can run the application using Docker and Docker Compose:
+   **Note:** When starting the application for the first time, it takes some time to initialize the ArduPilot Git repositories at the backend. This process also involves populating the list of available versions and releases using the GitHub API, so please be patient.
 
-```bash
-docker-compose up --build
+5. **Access the Web Interface:** 
+   The application binds to port 11080 on your host machine by default. Open your web browser and go to `http://localhost:11080` to interact with the web interface. To change the port, set the `WEB_PORT` environment variable in the .env file mentioned in the _Configure Environment Variables_ section.
+
+6. **Stopping the Application:**
+   To stop the application, you can use the following command:
+   ```bash
+   sudo docker compose down
+   ```
+   This will stop and remove the containers, but it will not delete any built images or volumes, preserving your data for future use.
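+
+   When the application is running in daemon mode (started with the `-d` flag), you can follow the container logs with:
+   ```bash
+   sudo docker compose logs -f
+   ```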
+
+## Running Locally Without Docker on Ubuntu
+To run the ArduPilot Custom Firmware Builder locally without Docker, ensure you have an environment capable of building ArduPilot. Refer to the [ArduPilot Environment Setup Guide](https://ardupilot.org/dev/docs/building-setup-linux.html) if necessary.
+
+1. **Clone the Custom-Build Repository:**
+   ```bash
+   git clone https://github.com/ardupilot/CustomBuild.git
+   cd CustomBuild
+   ```
+2. **Create and use a virtual environment:**
+   ```bash
+   python3 -m venv path/to/virtual/env
+   source path/to/virtual/env/bin/activate
+   ```
+
+   If the python venv module is not installed, run:
+   ```bash
+   sudo apt install python3-venv
+   ```
+
+   To deactivate the virtual environment, run:
+   ```bash
+   deactivate
+   ```
+
+3. **Install Dependencies:**
+   ```bash
+   pip install -r web/requirements.txt -r builder/requirements.txt
+   ```
+
+   If pip is not installed, run:
+   ```bash
+   sudo apt install python3-pip
+   ```
+
+4. **Install and Run Redis:**
+   Use your package manager to install Redis:
+   ```bash
+   sudo apt install redis-server
+   ```
+   Ensure the Redis server is running:
+   ```bash
+   sudo systemctl status redis-server
+   ```
+
+5. **Execute the Application:**
+   - For a development environment with auto-reload, run:
+     ```bash
+     python3 web/main.py
+     ```
+     To change the port, use the `--port` argument:
+     ```bash
+     python3 web/main.py --port 9000
+     ```
+   - For a production environment, use:
+     ```bash
+     uvicorn web.main:app --host 0.0.0.0 --port 8080
+     ```
+
+    During the coding and testing phases, use the development environment to easily debug and make changes with auto-reload enabled. When deploying the app for end users, use the production environment to ensure better performance, scalability, and security.
+
+    The application will automatically set up the required base directory at `./base` upon first execution. You may customize this path by setting the `CBS_BASEDIR` environment variable.
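+
+    For example, to point the development server at a custom base directory (the path shown is only an example):
+    ```bash
+    CBS_BASEDIR=/srv/custombuild-base python3 web/main.py
+    ```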
+
+6. **Access the Web Interface:**
+
+   Once the application is running, you can access the interface in your web browser at http://localhost:8080.
+   
+   The default port is 8080, or the value of the `WEB_PORT` environment variable if set. You can override this by passing the `--port` argument when running the application directly (e.g., `python3 web/main.py --port 9000`) or when using uvicorn (e.g., `uvicorn web.main:app --port 5000`). Refer to the [uvicorn documentation](https://www.uvicorn.org/) for additional configuration options.
+
+## Directory Structure
+The default directory structure is established as follows:
+```
+/home/<username>
+└── CustomBuild
+    └── base
+        ├── ardupilot            (used by the web component)
+        ├── artifacts
+        ├── configs
+        │   └── remotes.json     (auto-generated, see examples/remotes.json.sample)
+        ├── secrets
+        │   └── reload_token     (optional)
+        └── tmp
+            └── ardupilot        (used by the builder component)
+```
+The build artifacts are organized under the `base/artifacts` subdirectory.
+
+## Acknowledgements
+This project includes many valuable contributions made during the Google Summer of Code 2021. For more information, please see the [GSOC 2021 Blog Post](https://discuss.ardupilot.org/t/gsoc-2021-custom-firmware-builder/74946).

+ 540 - 88
builder/builder.py

@@ -1,104 +1,556 @@
+import ap_git
+from build_manager import (
+    BuildManager as bm,
+)
+import subprocess
 import os
 import shutil
-import subprocess
 import logging
-import time
-from build_manager import BuildManager, BuildState
+import tarfile
+from metadata_manager import (
+    APSourceMetadataFetcher as apfetch,
+    RemoteInfo,
+    VehiclesManager as vehm
+)
+from pathlib import Path
+
+CBS_BUILD_TIMEOUT_SEC = int(os.getenv('CBS_BUILD_TIMEOUT_SEC', 900))
+
 
 class Builder:
-    def __init__(self, build_id: str = None, workdir: str = None, **kwargs):
+    """
+    Processes build requests, performs builds, and ships build artifacts
+    to the destination directory shared by BuildManager.
+    """
+
+    def __init__(self, workdir: str, source_repo: ap_git.GitRepo) -> None:
+        """
+        Initialises the Builder class.
+
+        Parameters:
+            workdir (str): Workspace for the builder.
+            source_repo (ap_git.GitRepo): Ardupilot repository to be used for
+                                          retrieving source for doing builds.
+
+        Raises:
+            RuntimeError: If BuildManager, APSourceMetadataFetcher, or
+            VehiclesManager is not initialised.
+        """
+        if bm.get_singleton() is None:
+            raise RuntimeError(
+                "BuildManager should be initialised first."
+            )
+        if apfetch.get_singleton() is None:
+            raise RuntimeError(
+                "APSourceMetadataFetcher should be initialised first."
+            )
+        if vehm.get_singleton() is None:
+            raise RuntimeError(
+                "VehiclesManager should be initialised first."
+            )
+
+        self.__workdir_parent = workdir
+        self.__master_repo = source_repo
         self.logger = logging.getLogger(__name__)
-        self.bm = BuildManager.get_singleton()
-        
-        # --- MODIFICATION START ---
-        # Reason: Upstream (web app) and worker container need a consistent 
-        # base directory. We default to /base where the ArduPilot source lives.
-        self.workdir = workdir or os.environ.get("CBS_BASEDIR", "/base")
-        # --- MODIFICATION END ---
-        
-        self.build_id = build_id
-        if self.build_id:
-            self._setup_paths()
-
-    def _setup_paths(self):
-        self.artifacts_dir = self.bm.get_build_artifacts_dir_path(self.build_id)
-        self.log_file = self.bm.get_build_log_path(self.build_id)
-        self.info = self.bm.get_build_info(self.build_id)
-
-    def __run_cmd(self, cmd, cwd, log_handle):
-        # --- MODIFICATION START ---
-        # Reason: Using python3 explicitly to run 'waf' is more robust than 
-        # relying on the execution bit (+x) inside a Docker volume.
-        process = subprocess.Popen(
-            cmd, cwd=cwd, stdout=log_handle, stderr=subprocess.STDOUT, text=True
+        self.__shutdown_requested = False
+
+    def __log_build_info(self, build_id: str) -> None:
+        """
+        Logs the build information to the build log.
+
+        Parameters:
+            build_id (str): Unique identifier for the build.
+        """
+        build_info = bm.get_singleton().get_build_info(build_id)
+        logpath = bm.get_singleton().get_build_log_path(build_id)
+        with open(logpath, "a") as build_log:
+            build_log.write(f"Vehicle ID: {build_info.vehicle_id}\n"
+                            f"Board: {build_info.board}\n"
+                            f"Remote URL: {build_info.remote_info.url}\n"
+                            f"git-sha: {build_info.git_hash}\n"
+                            "---\n"
+                            "Selected Features:\n")
+            for d in build_info.selected_features:
+                build_log.write(f"{d}\n")
+            build_log.write("---\n")
+
+    def __generate_extrahwdef(self, build_id: str) -> None:
+        """
+        Generates the extra hardware definition file (`extra_hwdef.dat`) for
+        the build.
+
+        Parameters:
+            build_id (str): Unique identifier for the build.
+
+        Raises:
+            RuntimeError: If the parent directory for putting `extra_hwdef.dat`
+            does not exist.
+        """
+        # Log to build log
+        logpath = bm.get_singleton().get_build_log_path(build_id)
+        with open(logpath, "a") as build_log:
+            build_log.write("Generating extrahwdef file...\n")
+
+        path = self.__get_path_to_extra_hwdef(build_id)
+        self.logger.debug(
+            f"Path to extra_hwdef for build id {build_id}: {path}"
         )
-        process.wait()
-        if process.returncode != 0:
-            raise subprocess.CalledProcessError(process.returncode, cmd)
-        # --- MODIFICATION END ---
+        if not os.path.exists(os.path.dirname(path)):
+            raise RuntimeError(
+                f"Create parent directory '{os.path.dirname(path)}' "
+                "before writing extra_hwdef.dat"
+            )
 
-    def build(self, build_id: str = None):
-        if build_id:
-            self.build_id = build_id
-            self._setup_paths()
+        build_info = bm.get_singleton().get_build_info(build_id)
+        selected_features = build_info.selected_features
+        self.logger.debug(
+            f"Selected features for {build_id}: {selected_features}"
+        )
+        all_features = apfetch.get_singleton().get_build_options_at_commit(
+            remote=build_info.remote_info.name,
+            commit_ref=build_info.git_hash,
+        )
+        all_defines = {
+            feature.define
+            for feature in all_features
+        }
+        enabled_defines = selected_features.intersection(all_defines)
+        disabled_defines = all_defines.difference(enabled_defines)
+        self.logger.info(f"Enabled defines for {build_id}: {enabled_defines}")
+        self.logger.info(f"Disabled defines for {build_id}: {disabled_defines}")
 
-        if not self.build_id:
-            raise ValueError("No build_id provided to Builder.")
+        with open(self.__get_path_to_extra_hwdef(build_id), "w") as f:
+            # Undefine all defines at the beginning
+            for define in all_defines:
+                f.write(f"undef {define}\n")
+            # Enable selected defines
+            for define in enabled_defines:
+                f.write(f"define {define} 1\n")
+            # Disable the remaining defines
+            for define in disabled_defines:
+                f.write(f"define {define} 0\n")
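+
+        # Illustration (hypothetical defines): if all_defines were
+        # {"AP_FOO_ENABLED", "AP_BAR_ENABLED"} and only AP_FOO_ENABLED was
+        # selected, the generated extra_hwdef.dat would contain:
+        #   undef AP_FOO_ENABLED
+        #   undef AP_BAR_ENABLED
+        #   define AP_FOO_ENABLED 1
+        #   define AP_BAR_ENABLED 0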
 
+    def __ensure_remote_added(self, remote_info: RemoteInfo) -> None:
+        """
+        Ensures that the remote repository is correctly added to the
+        master repository.
+
+        Parameters:
+            remote_info (RemoteInfo): Information about the remote repository.
+        """
         try:
-            self.logger.info(f"[{self.build_id}] Starting build process...")
-            os.makedirs(self.artifacts_dir, exist_ok=True)
+            self.__master_repo.remote_add(
+                remote=remote_info.name,
+                url=remote_info.url,
+            )
+            self.logger.info(
+                f"Added remote {remote_info.name} to master repo."
+            )
+        except ap_git.DuplicateRemoteError:
+            self.logger.debug(
+                f"Remote {remote_info.name} already exists. "
+                f"Setting URL to {remote_info.url}."
+            )
+            # Update the URL if the remote already exists
+            self.__master_repo.remote_set_url(
+                remote=remote_info.name,
+                url=remote_info.url,
+            )
+            self.logger.info(
+                f"Updated remote URL to {remote_info.url} "
+                f"for remote {remote_info.name}."
+            )
+
+    def __provision_build_source(self, build_id: str) -> None:
+        """
+        Provisions the source code for a specific build.
+
+        Parameters:
+            build_id (str): Unique identifier for the build.
+        """
+        # Log to build log
+        logpath = bm.get_singleton().get_build_log_path(build_id)
+        with open(logpath, "a") as build_log:
+            build_log.write("Cloning build source...\n")
+
+        build_info = bm.get_singleton().get_build_info(build_id)
+        self.logger.info(
+            f"Ensuring {build_info.remote_info.name} is added to master repo."
+        )
+        self.__ensure_remote_added(build_info.remote_info)
+
+        self.logger.info(
+            f"Cloning build source for {build_id} from master repo."
+        )
+
+        ap_git.GitRepo.shallow_clone_at_commit_from_local(
+            source=self.__master_repo.get_local_path(),
+            remote=build_info.remote_info.name,
+            commit_ref=build_info.git_hash,
+            dest=self.__get_path_to_build_src(build_id),
+        )
+
+    def __create_build_artifacts_dir(self, build_id: str) -> None:
+        """
+        Creates the output directory to store build artifacts.
+
+        Parameters:
+            build_id (str): Unique identifier for the build.
+        """
+        p = Path(bm.get_singleton().get_build_artifacts_dir_path(build_id))
+        self.logger.info(f"Creating directory at {p}.")
+        try:
+            Path.mkdir(p, parents=True)
+        except FileExistsError:
+            shutil.rmtree(p)
+            Path.mkdir(p)
+
+    def __create_build_workdir(self, build_id: str) -> None:
+        """
+        Creates the working directory for the build.
+
+        Parameters:
+            build_id (str): Unique identifier for the build.
+        """
+        p = Path(self.__get_path_to_build_dir(build_id))
+        self.logger.info(f"Creating directory at {p}.")
+        try:
+            Path.mkdir(p, parents=True)
+        except FileExistsError:
+            shutil.rmtree(p)
+            Path.mkdir(p)
+
+    def __generate_archive(self, build_id: str) -> None:
+        """
+        Generates a gzipped tar archive of the build artifacts
+        (binaries, build log, and extra_hwdef.dat).
+
+        Parameters:
+            build_id (str): Unique identifier for the build.
+        """
+        build_info = bm.get_singleton().get_build_info(build_id)
+        archive_path = bm.get_singleton().get_build_archive_path(build_id)
+
+        files_to_include = []
+
+        # include binaries
+        bin_path = os.path.join(
+            self.__get_path_to_build_dir(build_id),
+            build_info.board,
+            "bin"
+        )
+
+        # Ensure bin_path exists
+        Path(bin_path).mkdir(parents=True, exist_ok=True)
+
+        bin_list = os.listdir(bin_path)
+        self.logger.debug(f"bin_path: {bin_path}")
+        self.logger.debug(f"bin_list: {bin_list}")
+        for file in bin_list:
+            file_path_abs = os.path.abspath(
+                os.path.join(bin_path, file)
+            )
+            files_to_include.append(file_path_abs)
+
+        # include log
+        log_path_abs = os.path.abspath(
+            bm.get_singleton().get_build_log_path(build_id)
+        )
+        files_to_include.append(log_path_abs)
+
+        # include extra_hwdef.dat
+        extra_hwdef_path_abs = os.path.abspath(
+            self.__get_path_to_extra_hwdef(build_id)
+        )
+        files_to_include.append(extra_hwdef_path_abs)
+
+        # create archive
+        with tarfile.open(archive_path, "w:gz") as tar:
+            for file in files_to_include:
+                arcname = f"{build_id}/{os.path.basename(file)}"
+                self.logger.debug(f"Added {file} as {arcname}")
+                tar.add(file, arcname=arcname)
+        self.logger.info(f"Generated {archive_path}.")
+
+    def __clean_up_build_workdir(self, build_id: str) -> None:
+        """
+        Removes the temporary build directory, including the source tree
+        and any applied custom overlays.
+        """
+        logpath = bm.get_singleton().get_build_log_path(build_id)
+        cleanup_msg = (
+            f"Cleaning up build workspace for {build_id} "
+            "(removing source tree and applied custom overlays)..."
+        )
+
+        self.logger.info(cleanup_msg)
+        with open(logpath, "a") as build_log:
+            build_log.write(f"{cleanup_msg}\n")
+            build_log.flush()
             
-            # --- MODIFICATION START ---
-            # Reason: We must ensure we are in the directory containing 'waf'.
-            # If the path is wrong, the build fails immediately.
-            repo = self.workdir
-            if not os.path.exists(os.path.join(repo, "waf")):
-                self.logger.error(f"[{self.build_id}] waf not found in {repo}")
-                raise FileNotFoundError(f"Cannot find waf in {repo}")
-
-            with open(self.log_file, "a") as log:
-                log.write(f"Starting build: {self.info.vehicle_id} on {self.info.board}\n")
-                
-                # --- EQUALMASS OVERLAY INJECTION ---
-                # Reason: This is where your custom files from the sidecar manager 
-                # are merged into the ArduPilot source before the compiler starts.
-                overlay_dir = "/app/overlay"
-                if os.path.exists(overlay_dir) and os.listdir(overlay_dir):
-                    self.logger.info(f"[{self.build_id}] Custom overlay found. Injecting...")
-                    shutil.copytree(overlay_dir, repo, dirs_exist_ok=True)
-                else:
-                    self.logger.info(f"[{self.build_id}] No overlay files. Building vanilla.")
-                
-                # Running waf via python3 for better Docker compatibility
-                self.logger.info(f"[{self.build_id}] Running waf configure...")
-                self.__run_cmd(["python3", "waf", "configure", "--board", self.info.board], repo, log)
+        shutil.rmtree(self.__get_path_to_build_dir(build_id))
+
+    def __process_build(self, build_id: str) -> None:
+        """
+        Processes a new build: prepares the source code and extra_hwdef
+        file, runs the build, archives the artifacts, and cleans up the
+        workspace.
+
+        Parameters:
+            build_id (str): Unique identifier for the build.
+        """
+        self.__create_build_workdir(build_id)
+        self.__create_build_artifacts_dir(build_id)
+        self.__log_build_info(build_id)
+        self.__provision_build_source(build_id)
+        self.__generate_extrahwdef(build_id)
+        self.__build(build_id)
+        self.__generate_archive(build_id)
+        self.__clean_up_build_workdir(build_id)
+
+    def __get_path_to_build_dir(self, build_id: str) -> str:
+        """
+        Returns the path to the temporary workspace for a build.
+        This directory contains the source code and extra_hwdef.dat file.
+
+        Parameters:
+            build_id (str): Unique identifier for the build.
+
+        Returns:
+            str: Path to the build directory.
+        """
+        return os.path.join(self.__workdir_parent, build_id)
+
+    def __get_path_to_extra_hwdef(self, build_id: str) -> str:
+        """
+        Returns the path to the extra_hwdef definition file for a build.
+
+        Parameters:
+            build_id (str): Unique identifier for the build.
+
+        Returns:
+            str: Path to the extra hardware definition file.
+        """
+        return os.path.join(
+            self.__get_path_to_build_dir(build_id),
+            "extra_hwdef.dat",
+        )
+
+    def __get_path_to_build_src(self, build_id: str) -> str:
+        """
+        Returns the path to the source code for a build.
+
+        Parameters:
+            build_id (str): Unique identifier for the build.
+
+        Returns:
+            str: Path to the build source directory.
+        """
+        return os.path.join(
+            self.__get_path_to_build_dir(build_id),
+            "build_src"
+        )
+
+    # =========================================================================
+    # MODIFICATION START: 1. Added verbose custom overlay method
+    # =========================================================================
+    def __apply_custom_overlays(self, build_id: str) -> None:
+        """
+        Applies custom file tree overlays directly to the cloned build 
+        source before compilation begins, with verbose file-level logging.
+
+        Parameters:
+            build_id (str): Unique identifier for the build.
+        """
+        overlay_dir = os.path.abspath("custom_overlays")
+        build_src_dir = self.__get_path_to_build_src(build_id)
+        logpath = bm.get_singleton().get_build_log_path(build_id)
+
+        # Check if the directory exists and has contents
+        has_overlays = False
+        if os.path.exists(overlay_dir) and os.path.isdir(overlay_dir):
+            if any(os.scandir(overlay_dir)):
+                has_overlays = True
+
+        if not has_overlays:
+            msg = f"No files found in {overlay_dir}. Compiling a vanilla version of ArduPilot."
+            self.logger.info(msg)
+            with open(logpath, "a") as build_log:
+                build_log.write(f"{msg}\n")
+                build_log.flush()
+            return
+
+        init_msg = f"Scanning and applying custom overlays from {overlay_dir} to {build_src_dir}..."
+        self.logger.info(init_msg)
+        with open(logpath, "a") as build_log:
+            build_log.write(f"{init_msg}\n")
+            build_log.flush()
+
+        def verbose_copy(src, dst):
+            """Custom copy function to log each individual file being patched."""
+            rel_path = os.path.relpath(src, overlay_dir)
+            copy_msg = f"  -> Patching file: {rel_path}"
+
+            # Log to the console at debug level to avoid spamming it
+            self.logger.debug(copy_msg)
+
+            # Record every patched file in the persistent build log
+            with open(logpath, "a") as build_log:
+                build_log.write(f"{copy_msg}\n")
                 
-                self.logger.info(f"[{self.build_id}] Running waf build...")
-                self.__run_cmd(["python3", "waf", self.info.vehicle_id], repo, log)
-                # --- MODIFICATION END ---
+            return shutil.copy2(src, dst)
 
-            self.bm.update_build_progress_state(self.build_id, BuildState.SUCCESS)
+        try:
+            # dirs_exist_ok=True allows merging into an existing tree
+            shutil.copytree(
+                overlay_dir,
+                build_src_dir,
+                dirs_exist_ok=True,
+                copy_function=verbose_copy,
+            )
+            
+            success_msg = "Custom overlays applied successfully."
+            self.logger.info(success_msg)
+            with open(logpath, "a") as build_log:
+                build_log.write(f"{success_msg}\n")
+                build_log.flush()
+                
         except Exception as e:
-            self.logger.error(f"[{self.build_id}] Build failed: {e}")
-            self.bm.update_build_progress_state(self.build_id, BuildState.FAILURE)
-
-    # --- MODIFICATION START ---
-    # Reason: The builder container needs this loop to stay alive and poll 
-    # Redis for jobs. The web app also checks for this method during startup.
-    def run(self):
-        """Main worker heartbeat loop."""
-        self.logger.info("Worker online. Waiting for builds from Redis...")
-        
-        while True:
+            error_msg = f"Failed to apply custom overlays: {e}"
+            self.logger.error(error_msg)
+            with open(logpath, "a") as build_log:
+                build_log.write(f"{error_msg}\n")
+                build_log.flush()
+            raise
+    # =========================================================================
+    # MODIFICATION END: 1. Added verbose custom overlay method
+    # =========================================================================
+
+    def __build(self, build_id: str) -> None:
+        """
+        Executes the actual build process for a build.
+        This should be called after preparing build source code and
+        extra_hwdef file.
+
+        Parameters:
+            build_id (str): Unique identifier for the build.
+
+        Raises:
+            RuntimeError: If source directory or extra hardware definition
+            file does not exist.
+        """
+        if not os.path.exists(self.__get_path_to_build_dir(build_id)):
+            raise RuntimeError("Create the build directory before building.")
+        if not os.path.exists(self.__get_path_to_build_src(build_id)):
+            raise RuntimeError("Cannot build without source code.")
+        if not os.path.exists(self.__get_path_to_extra_hwdef(build_id)):
+            raise RuntimeError("Cannot build without extra_hwdef.dat file.")
+
+        build_info = bm.get_singleton().get_build_info(build_id)
+        source_repo = ap_git.GitRepo(self.__get_path_to_build_src(build_id))
+
+        # Checkout the specific commit and ensure submodules are updated
+        source_repo.checkout_remote_commit_ref(
+            remote=build_info.remote_info.name,
+            commit_ref=build_info.git_hash,
+            force=True,
+            hard_reset=True,
+            clean_working_tree=True,
+        )
+        source_repo.submodule_update(init=True, recursive=True, force=True)
+
+        # Apply custom overlays after git checkout/submodules, but before waf configure
+        self.__apply_custom_overlays(build_id)
+
+        logpath = bm.get_singleton().get_build_log_path(build_id)
+        with open(logpath, "a") as build_log:
+            # Get vehicle object
+            vehicle = vehm.get_singleton().get_vehicle_by_id(
+                build_info.vehicle_id
+            )
+
+            # Log initial configuration
+            build_log.write(
+                "Setting vehicle to: "
+                f"{vehicle.name.capitalize()}\n"
+            )
+            build_log.flush()
+
             try:
-                build_id = self.bm.get_next_build_id()
-                if build_id:
-                    self.logger.info(f"Job received: {build_id}")
-                    self.build(build_id=build_id)
-                
-                time.sleep(2) # Prevent CPU spiking
-            except Exception as e:
-                self.logger.error(f"Error in worker loop: {e}")
-                time.sleep(5)
-    # --- MODIFICATION END ---
+                # Run the build steps
+                self.logger.info("Running waf configure")
+                build_log.write("Running waf configure\n")
+                build_log.flush()
+                subprocess.run(
+                    [
+                        "python3",
+                        "./waf",
+                        "configure",
+                        "--board",
+                        build_info.board,
+                        "--out",
+                        self.__get_path_to_build_dir(build_id),
+                        "--extra-hwdef",
+                        self.__get_path_to_extra_hwdef(build_id),
+                    ],
+                    cwd=self.__get_path_to_build_src(build_id),
+                    stdout=build_log,
+                    stderr=build_log,
+                    shell=False,
+                    timeout=CBS_BUILD_TIMEOUT_SEC,
+                )
+
+                self.logger.info("Running clean")
+                build_log.write("Running clean\n")
+                build_log.flush()
+                subprocess.run(
+                    ["python3", "./waf", "clean"],
+                    cwd=self.__get_path_to_build_src(build_id),
+                    stdout=build_log,
+                    stderr=build_log,
+                    shell=False,
+                    timeout=CBS_BUILD_TIMEOUT_SEC,
+                )
+
+                self.logger.info("Running build")
+                build_log.write("Running build\n")
+                build_log.flush()
+                build_command = vehicle.waf_build_command
+                subprocess.run(
+                    ["python3", "./waf", build_command],
+                    cwd=self.__get_path_to_build_src(build_id),
+                    stdout=build_log,
+                    stderr=build_log,
+                    shell=False,
+                    timeout=CBS_BUILD_TIMEOUT_SEC,
+                )
+                build_log.write("done build\n")
+                build_log.flush()
+            except subprocess.TimeoutExpired:
+                self.logger.error(
+                    f"Build {build_id} timed out after "
+                    f"{CBS_BUILD_TIMEOUT_SEC} seconds."
+                )
+                build_log.write(
+                    f"Build timed out after {CBS_BUILD_TIMEOUT_SEC} seconds.\n"
+                )
+                build_log.flush()
+
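One caveat in the hunk above: the `subprocess.run` calls for `waf configure`, `waf clean`, and the build don't pass `check=True`, so a failed configure won't raise and the later steps still run. As a hedged sketch only (the `run_step` helper and its signature are illustrative, not part of this commit), a stricter wrapper could look like:

```python
import subprocess

def run_step(argv, cwd, log, timeout):
    """Run one build step, stream its output to the build log, and
    raise on a nonzero exit so later steps never run against a
    half-configured source tree."""
    result = subprocess.run(
        argv, cwd=cwd, stdout=log, stderr=log, timeout=timeout
    )
    result.check_returncode()  # raises CalledProcessError on failure
    return result
```

The caller would then catch `subprocess.CalledProcessError` alongside the existing `TimeoutExpired` handler and log the failure the same way.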
+    def shutdown(self) -> None:
+        """
+        Request graceful shutdown of the builder.
+        """
+        self.logger.info("Shutdown requested")
+        self.__shutdown_requested = True
+
+    def run(self) -> None:
+        """
+        Continuously processes builds in the queue until shutdown is requested.
+        Completes any build that has been popped from the queue before
+        checking shutdown status.
+        """
+        self.logger.info("Builder started and waiting for builds...")
+        while not self.__shutdown_requested:
+            build_to_process = bm.get_singleton().get_next_build_id(
+                timeout=5
+            )
+            if build_to_process is None:
+                # Timeout occurred, no build available
+                # Loop will check shutdown flag and continue or exit
+                continue
+
+            # We got a build from queue, process it regardless of shutdown
+            # This ensures we complete any work we've taken responsibility for
+            self.logger.info(f"Processing build {build_to_process}")
+            self.__process_build(build_id=build_to_process)
+
+        self.logger.info("Builder shutting down gracefully")
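The `__apply_custom_overlays(build_id)` call is referenced above but its body falls outside this hunk. As a hedged sketch only: assuming overlays are plain files mounted read-only (e.g. under `/app/custom_overlays`, per the builder service in the compose file) that should shadow files at the same relative paths in the checked-out source tree, a helper could look like this (`apply_custom_overlays`, its signature, and return value are illustrative, not the commit's actual code):

```python
import os
import shutil

def apply_custom_overlays(overlay_dir: str, src_dir: str) -> list:
    """Copy every file under overlay_dir into src_dir, preserving
    relative paths, and return the relative paths that were applied."""
    applied = []
    for root, _dirs, files in os.walk(overlay_dir):
        for name in files:
            src = os.path.join(root, name)
            rel = os.path.relpath(src, overlay_dir)
            dst = os.path.join(src_dir, rel)
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.copy2(src, dst)  # overwrite the upstream file
            applied.append(rel)
    return sorted(applied)
```

Running this between `submodule_update` and `waf configure`, as the hunk does, ensures a fresh checkout is patched before any build artifacts are produced.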

+ 19 - 19
docker-compose.yml

@@ -1,44 +1,44 @@
 services:
   redis:
-    image: redis:7.4.2-alpine
-    restart: always
-    volumes:
-      - ./redis_data:/data:rw
-    command: redis-server
-
-  app:
+    image: redis:7.2.4
+    ports:
+      - "127.0.0.1:6379:6379"
+  web:
     build:
       context: .
       dockerfile: ./web/Dockerfile
     environment:
       CBS_REDIS_HOST: redis
       CBS_REDIS_PORT: 6379
-      CBS_BASEDIR: /workdir
+      CBS_BASEDIR: /base
+      CBS_LOG_LEVEL: ${CBS_LOG_LEVEL:-INFO}
       CBS_ENABLE_INBUILT_BUILDER: 0
+      CBS_GITHUB_ACCESS_TOKEN: ${CBS_GITHUB_ACCESS_TOKEN}
+      CBS_REMOTES_RELOAD_TOKEN: ${CBS_REMOTES_RELOAD_TOKEN}
+      PYTHONPATH: /app
+      CBS_BUILD_TIMEOUT_SEC: ${CBS_BUILD_TIMEOUT_SEC:-900}
     volumes:
-      - ./base/ardupilot:/workdir:rw
-      - ./build_archive:/app/build_archive:rw
-      - ./custom_overlays:/app/patches:rw
-      - ./custom_overlays:/app/overlay:rw 
+      - ./base:/base:rw
     depends_on:
       - redis
     ports:
-      - "0.0.0.0:11080:8080"
-
+      - "0.0.0.0:${WEB_PORT:-8080}:8080"
   builder:
     build:
       context: .
       dockerfile: ./builder/Dockerfile
     restart: always
+    stop_grace_period: 5m
     environment:
       CBS_REDIS_HOST: redis
       CBS_REDIS_PORT: 6379
-      CBS_BASEDIR: /workdir
+      CBS_BASEDIR: /base
+      CBS_LOG_LEVEL: ${CBS_LOG_LEVEL:-INFO}
+      PYTHONPATH: /app
+      CBS_BUILD_TIMEOUT_SEC: ${CBS_BUILD_TIMEOUT_SEC:-900}
     volumes:
-      - ./base/ardupilot:/workdir:rw
-      - ./custom_overlays:/app/patches:ro
-      - ./custom_overlays:/app/overlay:ro
-      - ./build_archive:/app/build_archive:rw
+      - ./base:/base:rw
+      - ./custom_overlays:/app/custom_overlays:ro
     depends_on:
       - redis
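The `${VAR:-default}` references above (`CBS_LOG_LEVEL`, `CBS_BUILD_TIMEOUT_SEC`, `WEB_PORT`) are resolved by Docker Compose from the host environment or a `.env` file before the containers start, and the fallback semantics match POSIX shell parameter expansion. A quick illustration in a plain shell (the `1800` value is an arbitrary example, not a recommended timeout):

```shell
# Unset -> the default after :- is used
unset CBS_BUILD_TIMEOUT_SEC
echo "${CBS_BUILD_TIMEOUT_SEC:-900}"

# Set -> the environment value wins over the default
CBS_BUILD_TIMEOUT_SEC=1800
echo "${CBS_BUILD_TIMEOUT_SEC:-900}"
```

Note that `CBS_GITHUB_ACCESS_TOKEN` and `CBS_REMOTES_RELOAD_TOKEN` have no `:-` fallback, so they expand to empty strings if left unset.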