
ROS 1 on ARM64

@Aaron_Sims I also managed to get the D435i to compile on Raspbian using this script:
https://github.com/IntelRealSense/librealsense/blob/master/doc/installation_raspbian.md.
The only problem is that you have to use the 32-bit Raspbian image, because one of the pre-built libraries it links against is 32-bit.

Hi Alan, thanks for your post! I also experimented with ROS 1 on a Raspi 4 in the past, and I'm now trying again for a Raspi 4 head - rosserial - Arduino motor control setup. Thanks for the hints!

I didn't even try Ubuntu 18.04, since I was using a Raspberry Pi 4B. That's good to know. Are you on a 4B?

@Aaron_Sims Yes, I used a RPi 4B. As there is no official 18.04 support, I used this image
https://github.com/TheRemote/Ubuntu-Server-raspi4-unofficial/releases/download/v27/ubuntu-18.04.3-preinstalled-desktop-arm64+raspi4.img.xz
for Ubuntu 18.04 LTS. I noticed that there is a later release that might be better to use (not tested by me).

I installed the image like this:

/usr/bin/xzcat ubuntu-18.04.3-preinstalled-desktop-arm64+raspi4.img.xz | sudo dd of=/dev/mmcblk0 bs=32M status=progress oflag=sync
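Before running dd, it's worth double-checking the target device name, since it varies (mmcblk0 is typical for the Pi's built-in SD slot; a USB card reader on another machine will show up differently):

lsblk   # the SD card usually appears as mmcblk0, with partitions mmcblk0p1, mmcblk0p2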

Do you rely on raspistill to work, or would just streaming images to ROS be the final goal? Using the image_publisher to stream images from a raspicam (HQ Camera) has been working fine for me with RPI4B + Ubuntu 20.04 (arm64) + noetic.
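For reference, a minimal way to try it (the input path is just a placeholder, and the topic name follows the default node name):

# image_publisher uses OpenCV's VideoCapture under the hood, so as far as I can
# tell a still image, a video file, or a device path like /dev/video0 all work:
rosrun image_publisher image_publisher ~/test.jpg
# In another shell, confirm frames are flowing:
rostopic hz /image_publisher/image_raw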

I did try ROS on an RPi 4 + Ubuntu 20.04 + Noetic and it works well for me.


Thanks, that is good to know 🙂 Also, now I understand what ROS Discourse is for.

Aaron

Thanks for this tip 🙂 This is exactly where I am in my project; I was going to research this today, and you posted it, so I appreciate it. I wanted to set up video streaming and the PX4 obstacle avoidance libraries, and you just saved me some time on the video streaming.

Aaron

Can you stream the image_publisher output over UDP, via H.264 or something like that?

It looks like image_publisher basically reads images from a file and publishes them to a ROS image topic. This is probably useful for cameras that write their output to file but are hard to interface with programmatically. Image topics usually carry raw YUV images or compressed JPEG images over ROS, not some other UDP / RTP / RTMP sort of protocol. Downstream nodes mostly seem able to accept raw YUV and compressed JPEG.

For the RPi, the usb_cam package is able to read the raspicam. The raspicam is not USB, but it is accessible via Video4Linux2 (V4L2). usb_cam can read from V4L2 cameras, and hence it can read the raspicam.

This does come with some drawbacks, though. The raspicam can normally emit motion vectors (used for video encoding and potentially for robot motion tracking, etc.), and it also supports ISO adjustment and some other similar features. These don't seem to be available through V4L2 / usb_cam. If you just need the images, then usb_cam is probably sufficient; but if you need the extended controls or motion vectors, then usb_cam / V4L2 is not.
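If it helps, a minimal invocation looks something like this (the parameter values are illustrative, and /dev/video0 assumes the V4L2 driver has exposed the raspicam there; the topic name follows the default node name):

# Start the camera node with private parameters:
rosrun usb_cam usb_cam_node _video_device:=/dev/video0 \
  _image_width:=640 _image_height:=480 _pixel_format:=yuyv
# Images should then appear under the node's namespace:
rostopic hz /usb_cam/image_raw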

Raspi 4B + Ubuntu 20.04 + ROS Noetic works okay; that is what we usually use.

@CCM - the Intel RealSense D435 is intended for obstacle avoidance, as well as point cloud mapping and IR for no-light conditions. It would be a complete waste to add another camera when there is already a functional camera operating onboard. I'd like to stream the D435 RGB camera, or any visual topic I choose, from a ROS topic via H.264, H.265, or RTSP with subsecond latency, for a close-to-live view.

@AndyBlight - I compiled it on Ubuntu 20.04 on a Raspberry Pi. Unfortunately it didn't work out of the box, and I ended up using a non-standard way to compile it. I don't recall exactly how I did it, though I don't think I compiled against a Raspberry Pi 32-bit library. If I recall correctly, I found a forum link where somebody suggested a back-door way to compile it, and I followed those special instructions. Sorry I don't have more information.

Aaron


I thought there was a GStreamer node, but I can't find it. I did find gscam, but it does the inverse of what you want: gscam captures directly from a camera and outputs to an image topic. What you want is image topic to H.264, etc.
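For a sense of the target, this is roughly the pipeline plain GStreamer would run outside of ROS (host, port, and encoder settings here are placeholders); the missing piece is feeding it from an image topic instead of v4l2src:

# Software H.264 over RTP/UDP with gst-launch:
gst-launch-1.0 v4l2src device=/dev/video0 ! videoconvert \
  ! x264enc tune=zerolatency bitrate=1000 \
  ! rtph264pay config-interval=1 pt=96 \
  ! udpsink host=192.168.1.10 port=5000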

Our software does exactly that: image topic to a streamed, low-latency live view. (https://bthere.ai - you can also email me at stuart@bthere.ai.) It can pull directly from the device or from image topics; in the latter case we encode to H.264 and stream to our web console. We don't currently support H.265 or RTSP, but we do provide a low-latency live view.

In general, pulling frames off an image topic and encoding to h264 will consume CPU and add a bit of latency. It’s still very feasible to get subsecond latency - just not quite as low as with pulling h264 directly off a device (when supported). We’d be happy to assist if you’d like to try our software. It’s free for one robot at moderate usage levels.

Stuart

Using Yocto, I could run ROS 1 on an RPi 3 in both 32-bit and 64-bit mode two years ago, and it ran smoothly.
I ran ROS natively and in containers, at the time with Kinetic. Some tweaks were needed on the system and in some libraries, like Cartographer, to get performance similar to a PC.

The only issue I had at the time was with the UVC library or kernel module (can't remember which) in 64-bit. Switching cameras fixed the issue, though.

Hi @Alan_Federman

we have just added a lot of ARM builds of ROS Noetic to our collection of conda packages. With these, it's as simple as installing Ubuntu binaries for ROS.

The fastest way to install the packages is using micromamba:

wget -qO- https://micromamba.snakepit.net/api/micromamba/linux-aarch64/latest | tar -xvj bin/micromamba

./bin/micromamba shell init -s bash -p ~/micromamba
source ~/.bashrc

micromamba activate
micromamba create -n noetic -c robostack -c conda-forge ros-noetic-ros-base
micromamba activate noetic

This will give you a good base install of ROS packages. You can find the available packages here: http://anaconda.org/robostack/
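As a quick sanity check after activating the environment (I believe rosversion ships with ros-noetic-ros-base, but adjust if not):

rosversion -d   # should print "noetic"
roscore         # the ROS master should start without errors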

Please come over to https://gitter.im/RoboStack/Lobby to chat with us if any packages you need are missing.

The benefit is that you can also install machine learning libraries from Anaconda or conda-forge, and use different Linux distributions.

We also have packages for Windows and macOS 🙂 and they can be installed using the same or similar commands.

Hey @Aaron_Sims, that's exactly what I'd like to do: stream H.264 color frames (and save them to rosbags) and stream LZ4 (or anything lossless) depth frames and save them as well. I'm reading that MMAL and OMX are required for hardware H.264 encoding on the Raspberry Pi 4. Did you manage to achieve something like that?
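An untested sketch of one possibly MMAL-free route: the Pi 4 kernel exposes a V4L2 stateful encoder that GStreamer's v4l2h264enc element can drive (host and port are placeholders):

# Hardware H.264 via the V4L2 encoder, no MMAL/OMX:
gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw,width=640,height=480 \
  ! v4l2h264enc ! h264parse ! rtph264pay config-interval=1 \
  ! udpsink host=192.168.1.10 port=5000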

If you need a bit of a workaround: I managed to get picamera working under Docker on a Pi running the 64-bit OS. I'd like to get a camera node running in a Docker container talking to ROS on the host, but I've not got around to it yet. The same idea should let you run code that needs picamera and MMAL inside a container, though.

neaveeng/simple-streameye-streamer on Docker Hub
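A rough sketch of running it (the device flag is an assumption about what MMAL needs inside the container, not taken from that repo):

# picamera/MMAL talk to the GPU through the VideoCore device node:
docker run --rm -it --device /dev/vchiq neaveeng/simple-streameye-streamer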

The picam will work on Ubuntu 20.04 / Noetic / RPi / arm64 using V4L2 and usb_cam. What won't work is MMAL, which is required for the Ubiquity Robotics raspicam_node.
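A quick way to see what the V4L2 stack is exposing (v4l2-ctl comes from the v4l-utils package):

v4l2-ctl --list-devices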

@Alan_Federman did you have a chance to try the RoboStack packages? Our Noetic support is really good these days: RoboStack: Using ROS alongside the Conda and Jupyter Ecosystems on any Linux, macOS, Windows & ARM