For starters, Noetic is the only ROS 1 release to officially support Python 3!
See the Noetic Migration Guide for breaking changes, and the changelogs on individual packages to see what new features they have.
What's in Noetic?
369 packages are included in this initial release of Noetic, compared to 2709 currently in ROS Kinetic and 1939 currently in ROS Melodic. navigation and ros_control have been released to Noetic, while MoveIt has not.
Also, 32-bit ARM (armhf) packages are available on Ubuntu Focal, and 64-bit ARM (aarch64) packages are available for Ubuntu Focal and Debian Buster up to ros-noetic-desktop.
What if the packages I need aren't available?
This is just the initial release!
Packages can be added to ROS Noetic until it reaches End-of-Life.
It's a Long Term Support (LTS) release, meaning it will be supported until May 2025.
The implicit latest tag still points to melodic and will soon be bumped to the ROS 2 release Foxy.
Build tools (rosdep, compilers, git, rosinstall, etc.) are now in ros:noetic-ros-base and are no longer part of the ros-core image.
Reduced image size:
All apt calls now use --no-install-recommends to install only required dependencies (see the sketch after this list)
The ros-core image is now free of libboost-all-dev. We're continuing the effort to remove unused dependencies, and we encourage anyone who comes across packages depending on Boost to help narrow down the dependency list.
These improvements resulted in a 28% image size reduction between the melodic and noetic ros-core images. Big thanks to @mikaelarguedas!
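For reference, the install pattern now used in the Dockerfiles looks roughly like this (package list shortened to a single entry):

# skip apt's "recommended" extras and clean the package lists afterwards
RUN apt-get update && apt-get install -y --no-install-recommends \
    ros-noetic-ros-core \
    && rm -rf /var/lib/apt/lists/*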
We historically just piggybacked on the release announcement, but it's true that this specific release saw more change than the usual "Docker images are out as usual".
I am planning to post a follow-up on [RFC] Restricting the size of ROS docker images, which outlined the envisioned changes for the ROS/Gazebo images. If you believe it's better as a new topic, I can do that instead and link to the previous thread.
Please do post the update about the Docker images in a separate post.
The changes to ros-core seem to have large repercussions, as several CI setups have depended on those images containing the build tools (perhaps this was not a 'wise' thing to do, but it is as it is right now).
Totally agree. It's also worth pointing out that the build tools and rosdep cache were removed from the kinetic and melodic images as well, so CI builds start failing there too. This is what I had to do in my Dockerfiles:
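Roughly (a minimal sketch of the relevant additions; exact package names depend on the distro, e.g. python3-rosdep on noetic):

+# restore the build tools and rosdep cache that are no longer in the base image
+RUN apt-get update && apt-get install -y --no-install-recommends \
+        build-essential \
+        cmake \
+        python-rosdep \
+    && rm -rf /var/lib/apt/lists/* \
+    && rosdep update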
(The diff above could probably be reduced a bit; I'm not 100% sure whether I need build-essential and cmake explicitly or whether rosdep would pull them in anyway.)
I'm still in favor of these changes, though; it's just that people should be notified.
@Martin_Guenther, any insight as to why the ros-core tag was chosen for the build stage vs. only using it for your runtime stage? We are currently pushing rosdep to better support multi-stage builds by enabling it to export a list of runtime dependencies to install, for smaller deployment images. E.g.:
ARG ROS_TAG=noetic
ARG OVERLAY_WS=/opt/overlay_ws

FROM ros:$ROS_TAG as builder
ARG OVERLAY_WS
WORKDIR $OVERLAY_WS

# copy only package.xml first, so dependency installation is cached
# independently of source changes
COPY ./package.xml ./src/packagename/
RUN . /opt/ros/$ROS_DISTRO/setup.sh && \
    apt-get update && rosdep install -q -y \
      --from-paths src \
      --ignore-src \
    && rm -rf /var/lib/apt/lists/*

# record the commands needed to install the runtime (exec) dependencies,
# to be replayed later in the runtime stage
# NOTE: listing all exec dependencies like this is not implemented in rosdep yet
RUN . /opt/ros/$ROS_DISTRO/setup.sh && \
    rosdep install --simulate \
      --from-paths src \
      --ignore-src \
      --reinstall \
      --dependency-types exec_depend > exec_install.sh

# now copy the full source tree and build the install space
COPY ./ ./src/packagename
RUN . /opt/ros/$ROS_DISTRO/setup.sh && \
    catkin_make install

FROM ros:$ROS_TAG-ros-core as runner
ARG OVERLAY_WS
WORKDIR $OVERLAY_WS

# replay the recorded runtime dependency installation in the slim image
COPY --from=builder $OVERLAY_WS/exec_install.sh ./
RUN apt-get update && sh ./exec_install.sh \
    && rm -rf /var/lib/apt/lists/*

# bring over only the built install space and source it in the entrypoint
COPY --from=builder $OVERLAY_WS/install ./install
ENV OVERLAY_WS $OVERLAY_WS
RUN sed --in-place \
    's|^source .*|source "$OVERLAY_WS/install/setup.bash"|' \
    /ros_entrypoint.sh
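Building and running such an image would then look something like this (the image tag and launch file are just placeholders):

docker build --build-arg ROS_TAG=noetic --tag packagename:noetic .
docker run --rm packagename:noetic roslaunch packagename example.launch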
Would it be possible in future releases to have a docker image available that's like noetic-freeze or <distro>-freeze, prior to the initial release but after the freeze? It would be really helpful for the awkward period of time between new distros, when we need to bootstrap CI for a new master or devel branch. Then, once official images are out, we can move to those.
For some repos, we either just failed to have CI during that period, built from nightly images that could very well include commits 'past' the new release, or had to maintain a hacky, manually built work-around image. I think for this round of noetic/foxy releases, I had repos in all 3 states.
Would it be possible in future releases to have a docker image available that's like noetic-freeze
I think that type of temporary tag would have to be self-hosted somewhere, as it wouldn't be something we could push upstream into the official library.
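(Nothing stops anyone from baking and pushing such a tag to their own registry, of course; the registry name here is a placeholder:)

docker build -t myregistry/ros:noetic-freeze .
docker push myregistry/ros:noetic-freeze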
For some repos, we either just failed to have CI during that period
This should get easier with the upcoming rolling release distro, a nice middle ground between the volatility of nightly and the lag of the latest stable release. We are hoping to host the rolling release as a docker library tag as well.
One issue we faced in providing earlier images was the absence of released / synced packages.
For previous ROS releases there was an 'alpha' and a 'beta' period during which all of the packages up to desktop(_full) were released and synced to main periodically.
This allowed us to build and host, or even submit to the official library, some 'early images'.
For this set of ROS distros that was not possible because, for various reasons, the beta phase was shortened to non-existent (Noetic packages didn't hit the main repo until release day, and for Foxy the packages that images like ros_core are based on are still unreleased to this date).
In the future, with the new release schedule, the absence of new ROS 1 releases, and the rolling release for ROS 2, early availability of the packages should be more likely.
At that point, once the API freeze is done, we could roll out images with the debs built after the freeze.
We also discussed, a couple of days ago, providing a dedicated ros:rolling-* set of tags that would track the rolling release.
We'll need to sort out a couple of issues to make sure these images stay up-to-date, but it should be possible.
An alternative is to use the CI provided by the ROS buildfarm. It has access to all released packages and is always up-to-date. This is a good way to get PR / source testing against the main targeted platform during these early phases (and in general).
Wow, that is really cool, and I'll keep this in mind. You're 3 steps ahead of me, though. I only use the Dockerfile I posted in the CI for rospy_message_converter and others, and its only purpose is to check that all dependencies are properly listed in the package.xml, build the package, and run the tests. I don't really care about the runner image size. That's why I prefer to use ros-core for the build stage (so that only minimal dependencies are preinstalled, which forces my package to install its own build dependencies).
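In other words, the essential part is just something like this (a simplified sketch; rosdep fails the build if a dependency is missing from the package.xml):

FROM ros:noetic-ros-core
COPY . /ws/src/rospy_message_converter
# install build tools plus whatever the package.xml declares;
# a missing declaration makes the subsequent build fail
RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential python3-rosdep \
    && rosdep init && rosdep update \
    && rosdep install -y --from-paths /ws/src --ignore-src \
    && rm -rf /var/lib/apt/lists/*
RUN . /opt/ros/noetic/setup.sh && cd /ws && catkin_make run_tests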
Is there a way to check the status of Python 3 support in released Noetic packages?
I tried to use the dynamic_reconfigure scripts today and they failed because of some Python 2 shebang lines. While my issue is dynamic_reconfigure-specific, this general shebang problem seems to be happening in other packages as well (1, 2, 3, 4, 5), so I was wondering whether there is a more general way to show or check that released packages have been ported to Python 3 (using some metrics for the common porting issues, which could be 'all installed programs can be rosrun'd', 'the code can be parsed by a Python 3 interpreter', or similar).
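(In the meantime, a crude local heuristic for the shebang issue specifically could be something like this, assuming a default /opt/ros/noetic install:)

# list installed scripts whose shebang still points at "python" or "python2",
# neither of which exists on a stock Focal system
grep -rIlE '^#!.*python(2(\.[0-9])?)? *$' /opt/ros/noetic/bin /opt/ros/noetic/lib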
We are discussing the idea of adding something like osrf/ros:<distro>-devel that would roughly mimic what we've been doing with osrf/ros2:devel, but be distro-specific: the common build tools, repos, and dev environment preconfigured, but with no ros distro packages installed:
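A rough shape for such an image could be (just an illustration, not a committed design):

FROM ubuntu:focal
# common dev tooling preconfigured, but no ros-<distro>-* packages installed;
# the ROS apt repositories and rosdep sources would be set up here as well
RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential cmake git \
        python3-rosdep python3-vcstool \
    && rm -rf /var/lib/apt/lists/*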