Ros2_main_dev_container

main_ros_docker_setup

I’ve simply copied the README from my corresponding Git repository here. Suggestions are welcome!

Main docker setup we use for our robots. As it’s now primarily designed to
be used on the robots themselves, it contains all the packages needed for
the robots to work properly.

Getting started:

This repository provides a Docker image (mamut_ros) pre-configured for ROS2 Humble
development. It includes all the necessary tools and libraries to build and run ROS2
packages.

Prerequisites

  • Docker installed on your system. You can follow the official Docker installation guide for your
    operating system.
  • Basic understanding of ROS2 concepts.
  • If not already done, Docker needs to be set up for non-root use. Create the docker group with:

sudo groupadd docker

then add your user to the docker group:

sudo usermod -aG docker $USER

After this, log out and back in for the group change to take effect.

  • Please make the scripts executable, e.g. with chmod +x ./*.sh inside the folder.

Building the Docker Image

The Docker image is built using the provided Dockerfile. You can build it using the following
command inside the folder:

./buildScript.sh

This command builds the image with the tag mamut_ros based on the Dockerfile in the current
directory.

Running the Container (Using docker run)

To start the container for the first time and gain access to it, use the following command
inside the folder:

./entrypoint.sh

Later, you can enter the container by finding its name with:

docker ps

Under NAMES you’ll see the name of your container, and under IMAGE the
Docker image it was created from. Then you can enter the container:

docker exec -it $NAME bash

The Files in Detail:

Dockerfile

This Dockerfile creates a ready-to-use development environment with ROS2 Humble
installed and configured. You can modify it to fit your specific needs, such as
installing additional ROS2 packages or cloning your own ROS projects into the
source directories.

Setting Up

  • It defines two arguments:
    • ROS_DISTRO: Defaults to “humble”, specifying the ROS2 distribution.
    • DEBIAN_FRONTEND: Sets the package installer to non-interactive mode.
  • The base image is set to the official ROS image for the chosen ROS_DISTRO.
  • It keeps the DISPLAY environment variable from the host machine (useful for GUI applications).
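Based on that description, the top of the Dockerfile is presumably close to this sketch (the exact lines in the repository may differ):

```dockerfile
# Build argument selecting the ROS2 distribution (defaults to Humble);
# declared before FROM so it can parameterize the base image
ARG ROS_DISTRO=humble
# Official ROS base image for that distribution
FROM ros:${ROS_DISTRO}
# Non-interactive apt so builds don't block on prompts
ARG DEBIAN_FRONTEND=noninteractive
# Forward the host's DISPLAY for GUI applications (rviz2, rqt, ...)
ENV DISPLAY=${DISPLAY}
```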

Installing ROS2 and Dependencies

  • It updates the package lists and installs core ROS2 packages for the
    desktop environment (ros-${ROS_DISTRO}-desktop-full).
  • It removes unnecessary package lists to minimize image size.
  • It updates and upgrades the system to ensure the latest packages are installed.
  • Then it installs various development tools and libraries needed for building ROS2 packages:
    • Build essentials (build-essential)
    • Security certificates (ca-certificates)
    • Download tools (curl, wget)
    • Version control system (git)
    • Security tools (gnupg2)
    • File system tools (fuse, libfuse2)
    • Programming language development tools (libclang-dev, python3-pip, etc.)
  • Additionally, it installs specific ROS2 packages like navigation, robot localization, bag recording tools, parameter tools, etc.
  • Finally, it uses pip to install Python libraries useful for ROS2 development like ros2-numpy, colcon-meson,
    colcon-gradle, etc.
    It also installs tools like pytest for testing and retrieves some libraries directly from GitHub using git+https URLs.
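A condensed sketch of what such an installation layer could look like (the package names are taken from the list above; the grouping into one layer is an assumption):

```dockerfile
# Desktop ROS2 packages plus common build tooling in one layer,
# cleaning the apt lists afterwards to keep the image small
RUN apt-get update && apt-get install -y \
        ros-${ROS_DISTRO}-desktop-full \
        build-essential ca-certificates curl wget git gnupg2 \
        fuse libfuse2 libclang-dev python3-pip \
    && rm -rf /var/lib/apt/lists/*
```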

Setting Up Environment:

  • It adds the Cargo bin directory to the system path (PATH) for accessing Rust tools.
  • It configures shell environment by adding several lines to the user’s .bashrc file:
    • Sets up autocompletion for ROS2 and colcon commands.
    • Defines aliases for sourcing ROS environment setups from different workspaces (install/setup.bash, /opt/ros/humble/setup.bash).
    • Sets up environment for user-defined workspaces (/ros_ws, /microros_ws, /ros_debug_ws).
  • It installs additional Rust development tools using cargo.
  • It creates empty source directories for potential ROS workspaces (/ros_ws/src, etc.)
  • Finally, it clones some essential ROS2 related repositories from GitHub into the default source directory (/ros_default/src).
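The environment setup could be sketched as follows; the alias name and exact lines are assumptions based on the description above, not copied from the repository:

```dockerfile
# Put cargo-installed Rust tools on the PATH
ENV PATH="/root/.cargo/bin:${PATH}"
# Shell conveniences: source ROS2 on login, plus a workspace-sourcing alias,
# and create empty source directories for the workspaces
RUN echo 'source /opt/ros/${ROS_DISTRO}/setup.bash' >> /root/.bashrc \
    && echo 'alias src_ws="source /ros_ws/install/setup.bash"' >> /root/.bashrc \
    && mkdir -p /ros_ws/src /microros_ws/src /ros_debug_ws/src /ros_default/src
```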

Building ROS Workspace

This section demonstrates how to build a ROS workspace but might be commented out in the original Dockerfile.

  • It sources the ROS setup script (. /opt/ros/humble/setup.sh).
  • It uses the colcon build command to build a ROS workspace located in /ros_ws. The options specify build and install locations,
    and base paths for searching for packages.
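Run inside the container, that step might look like this sketch (the paths come from the description above; the specific colcon flags are typical options, not confirmed from the repository):

```shell
# Source the ROS2 environment, then build the workspace in /ros_ws
. /opt/ros/humble/setup.sh
colcon build \
    --base-paths /ros_ws/src \
    --build-base /ros_ws/build \
    --install-base /ros_ws/install
```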

compose.yaml

This compose.yaml file defines a single service named my-container that uses the
“mamut_ros” Docker image. It grants the container elevated privileges (be cautious!),
shares the host network, and allows interactive use. It also mounts your ROS workspace
directories and the X11 Unix socket so everything works properly within the container.

Important Note: Similar to the docker run command, this configuration uses privileged: true.
This can be a security risk, so avoid it unless absolutely necessary and understand the
implications.

Version

The compose file declares version: "3.9", which specifies the Docker Compose file-format
version this configuration adheres to.

Services

services:: This section defines the services that make up your application. Here, you only have
one service named my-container.

Service Configuration

  • my-container: This block defines the configuration for the my-container service.
    • image: mamut_ros:Dockerfile: This specifies the Docker image to use for this service.
      It should be the same image you built earlier (mamut_ros:Dockerfile).
    • privileged: true: Similar to the docker run command, this grants the container elevated
      privileges. Use this with caution as it’s a security risk.
    • network_mode: "host": This sets the container’s network mode to the host’s network namespace.
      The container will share the network configuration of the host machine.
    • stdin_open: true: This keeps the standard input (stdin) of the container open, allowing you to
      interact with the container’s processes.
    • tty: true: This allocates a pseudo-terminal (TTY) for the container, which is useful for
      interactive use.
    • environment: DISPLAY: $DISPLAY: This exposes the DISPLAY environment variable from the host
      machine to the container, crucial for graphical applications.
    • volumes:: This section defines volume mounts for the container:
      • - ~/ros2_ws/:/ros_ws: This mounts your ROS workspace directory on the host (~/ros2_ws)
        to the /ros_ws directory inside the container. Similar volume mounts are specified for
        microros_ws and ros2_debug directories.
      • - /tmp/.X11-unix:/tmp/.X11-unix: This maps the X11 Unix socket on the host (/tmp/.X11-unix)
        to the same location within the container for graphical applications.
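Putting the options above together, the compose.yaml likely looks roughly like this (reconstructed from the description, not copied verbatim from the repository):

```yaml
version: "3.9"
services:
  my-container:
    image: mamut_ros:Dockerfile   # image built by buildScript.sh
    privileged: true              # caution: elevated privileges
    network_mode: "host"          # share the host's network namespace
    stdin_open: true              # keep stdin open (like -i)
    tty: true                     # allocate a pseudo-TTY (like -t)
    environment:
      DISPLAY: $DISPLAY           # forward the host display for GUI apps
    volumes:
      - ~/ros2_ws/:/ros_ws
      - /tmp/.X11-unix:/tmp/.X11-unix
```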

buildScript.sh

This script creates directories for potential ROS workspaces on the host machine and then builds
a Docker image named “mamut_ros” using the Dockerfile in the current directory. It’s important
to note that the script itself doesn’t build anything within the container. You would likely
need a separate script or manual commands to build ROS workspaces after starting the container
built by this script.

  • The script then uses the docker build command to build a Docker image. Let’s break down the
    options used:
    • -t "mamut_ros:Dockerfile": This tags the resulting image as mamut_ros:Dockerfile.
      Note that the part after the colon is the image tag (here literally “Dockerfile”),
      not the name of the Dockerfile; the Dockerfile itself is found in the build context.
    • --cpuset-cpus 0-3: This option restricts the build process to CPU cores 0 to 3 on
      the host machine. You might want to adjust this depending on your needs and available
      resources.
    • .: The final dot (.) specifies the context for the build. In this case, it means the
      Dockerfile and all its dependencies in the current directory will be used to build the
      image.
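Based on that breakdown, the script is presumably close to this sketch (the exact list of host directories it creates is an assumption drawn from the mount descriptions elsewhere in this README):

```shell
#!/bin/bash
# Create host-side workspace directories that will later be mounted
mkdir -p ~/ros2_ws/src ~/microros_ws ~/ros2_debug
# Build the image, pinning the build to CPU cores 0-3
docker build -t "mamut_ros:Dockerfile" --cpuset-cpus 0-3 .
```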

entrypoint.sh

This script first grants X11 access and then runs a single docker run command that uses
the image built earlier (mamut_ros:Dockerfile). Let’s break down the options used:

  • xhost local:root: This command, likely executed on the host machine before running the
    container, allows the container to access the X11 server running on the host. This is necessary
    for graphical applications within the container to display on your screen.
  • docker run: This initiates a new Docker container.
  • -it: This option provides an interactive terminal within the container (-i) and allocates a
    pseudo-TTY (-t).
  • --privileged: This grants the container extensive privileges, which should be used with caution
    as it can be a security risk. It’s generally not recommended for production use.
  • --net host: This option sets the container’s network mode to the host’s network namespace.
    Essentially, the container will share the network configuration of the host machine.
  • --ipc host: Similar to the network mode, this option sets the container’s IPC (Inter-Process
    Communication) mode to the host’s namespace. The container shares the host’s shared-memory
    segments and semaphores, which ROS2 middleware can use for fast local communication.
  • -e DISPLAY=$DISPLAY: This exposes the environment variable DISPLAY from the host machine to the
    container. This is crucial for graphical applications within the container to function properly.
  • -v ~/ros2_ws:/ros_ws: This mounts the host directory ~/ros2_ws (which the buildscript.sh created)
    to the container’s directory /ros_ws. This allows you to share your ROS workspace directory between
    the host and container. Similar volume mounts are specified for microros_ws and ros2_debug
    directories.
  • -v ~/ros2_ws/src:/ros_ws/src: This is an additional volume mount that specifically maps the src
    subdirectory within your ros2_ws on the host to the /ros_ws/src directory inside the container.
    This ensures the source code for your ROS packages is accessible within the container.
  • -v /tmp/.X11-unix:/tmp/.X11-unix: This volume mount maps the X11 Unix socket on the host
    (/tmp/.X11-unix) to the same location within the container. This allows graphical applications
    inside the container to communicate with the X11 server on the host for displaying graphics.
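Combining all the options above, entrypoint.sh is presumably close to this sketch:

```shell
# Allow local root (i.e. the container) to talk to the host's X server
xhost local:root
# Start the container with host networking/IPC and the workspaces mounted
docker run -it --privileged --net host --ipc host \
    -e DISPLAY=$DISPLAY \
    -v ~/ros2_ws:/ros_ws \
    -v ~/ros2_ws/src:/ros_ws/src \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    mamut_ros:Dockerfile
```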