ROS 1 on ARM64

Buckaroo Banzai: Adventures in ROS ARM64 Land

We were all excited by the prospect of getting our ROS 1 robots working with the newer releases of small-form-factor, GPU-enabled ARM computers like the Raspberry Pi 4 and the Jetson Nano. The small size and considerable power of these systems mean that difficult problems like visual SLAM, obstacle avoidance, and facial recognition should be attainable on even small robots. Unfortunately, building these systems is not simply a matter of burning an SD image and running sudo apt install.

Part 1: The RPi 4

The first problem you encounter is which OS to install if you want to run the RPi 4 in 64-bit mode. There are several to choose from, and so far none of them support the raspicam MMAL interface. I’ve tried several, and ubuntu-mate-20.04.1-beta2-desktop-arm64+raspi.img.xz seemed to work the best. This, along with a Noetic full desktop install, mostly worked: the raspicam could be accessed via usb_cam, but cheese wouldn’t work, and raspistill could not be built. As of this writing there is no ARM64 Userland, and several attempts to build it via script have failed, even with the Raspbian version. I hear there is a new Ubuntu Unity release for the Pi 4, but after three different OS attempts, I am reluctant to bother.
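As a quick sanity check before involving ROS, you can ask v4l2 what the OS actually sees (a minimal sketch; /dev/video0 is an assumption):

# list video devices, then the formats the first one offers
v4l2-ctl --list-devices
v4l2-ctl -d /dev/video0 --list-formats-ext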

Part 2: The Jetson Nano 4GB

The stable release via JetPack is still Ubuntu 18.04, so Melodic (based on Python 2.7) is the ROS version. This causes many headaches, since all the AI tooling (CUDA, YOLO) is based on Python 3 and OpenCV 4.
Once you get YOLO and OpenCV 4 working, ROS Melodic’s cv_bridge and vision_opencv fail. I am not sure when a stable release on Ubuntu 20.04 will be finalized, or whether it will fix these issues. Also, although the raspicam is recognized by the Jetson (following the online JetsonHacks tutorial), the usb_cam launch fails with a missing V4L error 25 message.
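A workaround the community has used for the cv_bridge problem is to rebuild it from source against Python 3 in an overlay workspace (a sketch only; the python3.6 paths are assumptions for stock Ubuntu 18.04 on aarch64, and I haven’t verified it on every JetPack release):

# build Melodic's cv_bridge in an overlay workspace, linked against Python 3
sudo apt install python3-pip python3-dev python-catkin-tools
mkdir -p ~/cv_bridge_ws/src
cd ~/cv_bridge_ws/src
git clone -b melodic https://github.com/ros-perception/vision_opencv.git
cd ~/cv_bridge_ws
catkin config --cmake-args -DPYTHON_EXECUTABLE=/usr/bin/python3 -DPYTHON_INCLUDE_DIR=/usr/include/python3.6m -DPYTHON_LIBRARY=/usr/lib/aarch64-linux-gnu/libpython3.6m.so
catkin build cv_bridge
source devel/setup.bash --extend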

The woods are lovely, dark and deep,
but I have miles to go before I sleep,
miles to go before I sleep


It’s not raspicam-cheap, but the Intel RealSense D435 depth camera works on the configuration you named, running Ubuntu MATE 20.04 with Noetic and MAVROS. I wish I could have run that stuff on Raspbian instead, but it’s a great economical platform. The only thing I wanted to get running on that platform that I haven’t managed so far is QGroundControl on the RPi 4B. There were some issues with compiling Qt5 on that platform, which meant I couldn’t run MAVROS through QGroundControl on a companion computer, but it wasn’t due to ROS in any way.

@Aaron_Sims I used the RealSense 435i on an RPi 4. This script https://github.com/IntelRealSense/librealsense/blob/master/scripts/libuvc_installation.sh worked for me using Ubuntu 18.04 LTS desktop and ROS Melodic. Note: you must use a USB 3.0 connector for the D435i to work properly.
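For anyone following along, the steps amount to roughly this (the raw URL is just the raw form of the blob link above):

# download and run Intel's libuvc-backend install script
wget https://raw.githubusercontent.com/IntelRealSense/librealsense/master/scripts/libuvc_installation.sh
chmod +x libuvc_installation.sh
./libuvc_installation.sh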

@Aaron_Sims I also managed to get the D435i to compile on Raspbian using these instructions:
https://github.com/IntelRealSense/librealsense/blob/master/doc/installation_raspbian.md.
The only problem is that you have to use the 32-bit Raspbian image, as one of the pre-built libraries that it links against is 32-bit.

Hi Alan, thanks for your post! I also made attempts at ROS 1 on the RPi 4 in the past, and I am now trying again with an RPi 4 head - rosserial - Arduino motor control setup. Thanks for the hints!

I didn’t even try Ubuntu 18.04, since I was using a Raspberry Pi 4B. That’s good to know. Are you on a 4B?

@Aaron_Sims Yes, I used an RPi 4B. As there is no official 18.04 support, I used this image
https://github.com/TheRemote/Ubuntu-Server-raspi4-unofficial/releases/download/v27/ubuntu-18.04.3-preinstalled-desktop-arm64+raspi4.img.xz
for Ubuntu 18.04 LTS. I noticed that there is a later release that might be better to use (not tested by me).

I installed the image like this:

/usr/bin/xzcat ubuntu-18.04.3-preinstalled-desktop-arm64+raspi4.img.xz | sudo dd of=/dev/mmcblk0 bs=32M status=progress oflag=sync
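(If you try this, confirm the target device first; /dev/mmcblk0 fits a built-in SD slot, but a USB card reader will typically show up as /dev/sdX:)

lsblk   # identify the SD card's block device before running dd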

Do you rely on raspistill working, or would just streaming images to ROS be the final goal? Using image_publisher to stream images from a raspicam (HQ Camera) has been working fine for me with an RPi 4B + Ubuntu 20.04 (arm64) + Noetic.
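For reference, the invocation is roughly this (a sketch; image_publisher opens its argument with OpenCV, so whether a /dev/video* path works depends on your OpenCV build):

# publish frames from the camera to an image topic
rosrun image_publisher image_publisher /dev/video0
# verify the topic is up (the exact topic name may vary by version)
rostopic list | grep image_raw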

I did try ROS on an RPi 4 + Ubuntu 20.04 + Noetic and it works well for me.


Thanks, that is good to know 🙂 Also, now I understand what ROS Discourse is for.

Aaron

Thanks for this tip 🙂 This is exactly where I am in my project; I was going to research this today, and you posted it, so I appreciate it. I wanted to set up video streaming and the PX4 obstacle avoidance libraries, and you just saved me some time on the video streaming.

Aaron

Can you stream image_publisher output over UDP, via h264 or something like that?

It looks like image_publisher basically reads images from a file and publishes them to a ROS image topic. This is probably useful for cameras that write their output to file but are hard to interface with programmatically. Image topics usually carry raw YUV images or compressed JPEG images over ROS, not some other UDP / RTP / RTMP sort of protocol. Downstream nodes seem mostly able to accept raw YUV and compressed JPEG.

For the RPi, the usb_cam package is able to read the raspicam. The raspicam is not USB, but it is accessible via Video4Linux2 (v4l2). usb_cam can read from v4l2 cameras, and hence it can read the raspicam.

This does come with some drawbacks, though. The raspicam can normally emit motion vectors (used for video encoding and potentially robot motion tracking, etc.). The raspicam also supports ISO adjustment and some other similar features. These don’t seem to be available through v4l2 / usb_cam. If you just need the images, then usb_cam is probably sufficient; but if you need the extended controls or motion vectors, then usb_cam / v4l2 is not enough. The sketch below shows the basic v4l2 route.
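In case it saves someone a search (a sketch; the module name, device index, and format are assumptions and vary by kernel):

# expose the raspicam as a v4l2 device, then read it with usb_cam
sudo modprobe bcm2835-v4l2
rosrun usb_cam usb_cam_node _video_device:=/dev/video0 _pixel_format:=yuyv _image_width:=640 _image_height:=480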

RPi 4B + Ubuntu 20.04 + ROS Noetic works okay; that is what we usually use.

@CCM - the Intel RealSense D435 is intended for obstacle avoidance, as well as point-cloud mapping and IR in no-light conditions. It would be a complete waste to add another camera when there is already a functional camera onboard. I’d like to stream the D435 RGB camera, or any visual topic I choose from a ROS topic, via h264, h265, or RTSP with subsecond latency for a close-to-live view.
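(For comparison, going straight from the v4l2 device and bypassing ROS topics entirely, a low-latency pipeline can be sketched with gstreamer; the device path, host, and bitrate here are all assumptions:)

# h264 over RTP/UDP directly from a v4l2 device (no ROS in the loop)
gst-launch-1.0 v4l2src device=/dev/video2 ! videoconvert ! x264enc tune=zerolatency speed-preset=ultrafast bitrate=1500 ! rtph264pay config-interval=1 pt=96 ! udpsink host=192.168.1.50 port=5600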

@AndyBlight - I compiled on Ubuntu 20.04 on a Raspberry Pi. Unfortunately it didn’t work out of the box, and I found a non-standard way to compile it. I don’t recall how I did it, though, and I don’t think I compiled against a Raspberry Pi 32-bit library. If I recall correctly, I found a forum link where somebody suggested a back-door way to compile it, and I followed some special instructions. Sorry I don’t have more information.

Aaron


I thought there was a gstreamer node, but I can’t find it. I did find gscam, but it does the inverse of what you want: gscam captures directly from a camera and outputs to an image topic, along the lines of the sketch below. What you want is image topic to h264, etc.
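(gscam usage, roughly; the pipeline string is an assumption and needs adjusting per camera:)

# gscam: gstreamer pipeline in, ROS image topic out
export GSCAM_CONFIG="v4l2src device=/dev/video0 ! videoconvert"
rosrun gscam gscam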

Our software does that: image topic to a streamed, low-latency live view. (https://bthere.ai. You can also email me at stuart@bthere.ai.) Our software can pull directly from a device or from image topics. In the latter case we encode to h264 and stream to our web console. We don’t currently support h265 or RTSP, but we do provide a low-latency live view.

In general, pulling frames off an image topic and encoding to h264 will consume CPU and add a bit of latency. It’s still very feasible to get subsecond latency - just not quite as low as pulling h264 directly off a device (when supported). We’d be happy to assist if you’d like to try our software. It’s free for one robot at moderate usage levels.

Stuart

Using Yocto, I could run ROS 1 on an RPi 3 in 32-bit and 64-bit mode two years ago, and it ran smoothly.
I ran ROS natively and in containers, at the time with Kinetic. Some tweaks were needed on the system and in some libraries, like cartographer, to get performance similar to a PC.

The only issue I had at the time was with the UVC library or kernel module (can’t remember which) in 64-bit mode. Switching cameras fixed the issue, though.

Hi @Alan_Federman

we have just added a lot of ARM builds of ROS Noetic to our collection of conda packages. With these, it’s as simple as installing Ubuntu binaries for ROS.

The fastest way to install the packages is using micromamba:

wget -qO- https://micromamba.snakepit.net/api/micromamba/linux-aarch64/latest | tar -xvj bin/micromamba

./bin/micromamba shell init -s bash -p ~/micromamba
source ~/.bashrc

micromamba activate
micromamba create -n noetic -c robostack -c conda-forge ros-noetic-ros-base
micromamba activate noetic

This will give you a good base install of ROS packages. You can find the available packages here: http://anaconda.org/robostack/
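More packages can be added to the same environment later, e.g. (assuming the package is among the published builds; check the link above):

micromamba install -n noetic -c robostack -c conda-forge ros-noetic-cv-bridge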

Please come over to https://gitter.im/RoboStack/Lobby to chat with us if packages that you need are missing.

The benefit is that you can also install machine-learning libraries from anaconda or conda-forge, and use different Linux OSes.

Also, we have packages for Windows and OS X 🙂 and they can be installed using the same or similar commands.

Hey @Aaron_Sims, that’s exactly what I’d like to do: streaming h264 color frames (and saving them to rosbags) and streaming LZ4 (or anything lossless) depth frames and saving them as well. I’m reading that MMAL and OMX are required for hardware h264 encoding on the Raspberry Pi 4. Did you manage to achieve something like that?