Introducing ROSboard: Web-based visualizations for ROS1 and ROS2

Hi everyone,

I’m the author of ROSshow (https://github.com/dheera/rosshow), which lets you visualize ROS topics inside a terminal with Unicode/ASCII art.

I’m introducing ROSboard (https://github.com/dheera/rosboard), which simply runs on your robot as a ROS node and serves up live-streamed visualizations at https://your-robot-ip:8888/

This has been a long-running project of mine (I started working on it before WebViz existed), but I’ve been looking to pick it up again due to various inadequacies in WebViz.

A couple of the most important things I’m hoping to achieve with this:

  • ROS1/ROS2 compatible – it should work in both ROS versions! Tested on noetic, foxy, and galactic; it should also work on kinetic and melodic as long as you pip3 install rospkg. It makes use of my library “rospy2”, which allows the same code to run on both ROS1 and ROS2 (see the sketch after this list).

  • Mobile-friendly – one of my preferred ways of debugging robots (especially outdoor ones) is to walk around with the robot and a phone.

  • Easily extensible – creating a custom visualization involves adding just ONE .js file and referencing it in the main .js file.
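
Here’s a minimal sketch of the kind of thing rospy2 enables – rospy-style code that should run unchanged on both ROS versions (the node and topic names here are just for illustration):

    # A minimal sketch, assuming rospy2 is installed (pip3 install rospy2).
    # On ROS2, rospy2 exposes a rospy-compatible API on top of rclpy; on
    # ROS1 the same calls are just the regular rospy API.
    import rospy2 as rospy
    from std_msgs.msg import String

    rospy.init_node("chatter_demo")  # hypothetical node name
    pub = rospy.Publisher("/chatter", String, queue_size=10)
    rate = rospy.Rate(1)  # publish at 1 Hz
    while not rospy.is_shutdown():
        pub.publish(String(data="hello from either ROS version"))
        rate.sleep()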

Roadmap for things I hope to do in the future. Collaboration and suggestions welcome! I’d also love to hear more about what the community finds lacking in the current state of local visualization tools. This is a FOSS project, BSD licensed.

  • (DONE) visualizations for OccupancyGrid, LaserScan, PointCloud2, DiagnosticArray, Shape
  • (DONE) diagnostics aggregation
  • (DONE) visualizations for Odometry, Point, Point32, Pose, PoseStamped, PoseWithCovariance, PoseWithCovarianceStamped
  • (near future) Path, Imu, MagneticField, Trajectory
  • (near future) time series plots of diagnostic data and of topics with multiple fields
  • (near future) rosbag v1 support
  • (near future) throttling options
  • (future) TF tree, URDF visualization, etc.
  • (future) rosbag v2 support
  • (future) bandwidth detection and automatic throttling
  • (future) publishers from the web browser
48 Likes

Looking good. How far off is the PointCloud2 integration?

Shouldn’t be too hard, honestly. I’m just playing with a few different WebGL-based frameworks for it, and also exploring the literature on lossy compression of point clouds.

2 Likes

This is super slick! I am really excited by all of the new ROS user interface stacks that have been released lately. Making ROS 2 more accessible to a general audience is incredibly important right now.

4 Likes

What framework are you using for ROSboard? If you are using React, feel free to check out react-ros and react-ros-three. Neither is necessarily “feature complete”, but they are the beginning of something that I think could be extremely useful to people (like you).

2 Likes

This is very cool! I’m not currently using React; it’s vanilla JS + jQuery at the moment – I know jQuery is kind of “old”, but it was very intuitive compared to React. I’m not inherently opposed to learning React or migrating this to some better framework, but I want to avoid this project becoming dependent on something huge like npm / NodeJS, which isn’t usually already installed on robots. I’d like it to be self-contained except for a couple of minimal pip-installable dependencies, so that it’s easy to install and run on a RPi or other limited hardware. (I’m not even opposed to going the other direction and making it all vanilla JS without jQuery.)

I see you did write a PointCloud viewer – I’ll check it out! I did see you used three.js, which seems rather heavy (640 kB minified, I think) and has lots of features not necessary for ROS data visualization. I was looking at litegl.js (175 kB) and some other alternatives. I may just end up settling for three.js because it is well-maintained, but it really is quite heavy…

4 Likes

Most definitely agreed! react-ros and react-ros-three are meant mostly for web-hosted sites, with a robot connecting via rosbridge… not really for installing on the robot itself. So it’s probably a different use case here… sorry, I probably shouldn’t have recommended those.

1 Like

No problem at all!! I’m glad to know about them; if I were making a hosted website I would definitely look into learning React at this point. I’ll keep them in mind for future projects 🙂

1 Like

This is stunning, and the ideal software for my custom controller! Is it possible to add controls to the UI too? Buttons that send messages, for example?

1 Like

Yes, it should be possible. I haven’t created a publisher framework yet, but it should be relatively straightforward to add to the websocket protocol; see the hypothetical sketch below. An on-screen joystick that sends /cmd_vel, and maybe a way to click on images and send the clicked point as feedback, would probably be at the top of my list. I didn’t quite get how you plan to use your hardware with this, though – does your controller present itself as a standard joystick that would be visible through e.g. the HTML5 Gamepad API?
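
None of this exists yet, but just to make it concrete, a browser-to-server publish message could look something like the following (the operation and field names are entirely hypothetical, not an actual ROSboard protocol):

    # Hypothetical sketch of a publish operation that could be added to the
    # websocket protocol. Nothing here is implemented; the message shape and
    # field names are invented for illustration only.
    import json

    publish_msg = json.dumps({
        "op": "publish",                      # hypothetical op name
        "topic": "/cmd_vel",
        "type": "geometry_msgs/Twist",
        "msg": {
            "linear":  {"x": 0.2, "y": 0.0, "z": 0.0},
            "angular": {"x": 0.0, "y": 0.0, "z": 0.5},
        },
    })
    # The server-side node would deserialize this and hand it to a
    # rospy/rclpy publisher for the named topic.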

The controller itself is a Raspberry Pi with a touchscreen; the joysticks and switches are running on a Teensy that’s acting as a custom joystick. I already have a node running that sends Joy messages to the robot over wifi, so it wouldn’t hook into your UI directly. What I don’t yet have is a UI for it; I can stream video to the browser easily enough, but I’m not far enough into my journey with ROS to get to grips with UI stuff yet. I’m a Unity dev by trade, though, so I’ll be playing around with ROS and that before long.

My robot is essentially a 35cm-tall clone of Johnny Five. I can control everything manually with the joysticks, but it’s more than a bit clunky; I’d like to be able to send messages to move the arms to preset positions or routines. I’m working on an action server for that at the minute.

I’ve not done JavaScript in a good long while, but I have played around with WebSockets in the past for a previous non-ROS robot. I’ll have a look at the code to see if I can get my head around it. 🙂

Quick demo of the robot – I’ll be doing a longer overview/what-went-wrong-with-PiWars video for it shortly, if you’re interested in learning more.

5 Likes

Hi @dheera ,

Thanks for the demo of ROS Board at the Web Tools WG meeting today. Looks really nice and easy to use!

You mentioned that you built it (in part) due to “various inadequacies in WebViz”.

Can you elaborate what those are? I’m trying to figure out under which circumstances one should use one vs. the other.

Thanks!

1 Like

Hi @chfritz, here are some of my main reasons for wanting to build something different:

  • ROS2 support
  • Mobile-friendliness – I wanted this to be easy to use on a phone or tablet, so I can walk next to the robot while visualizing topics, and light-weight enough on the UI side to not bog down a phone browser
  • Instant gratification of clicking a topic and seeing stuff – the split-layout thing is nice if you have a pre-defined layout you want to keep re-using, but less nice if you just want to start visualizing ASAP; I often found that in WebViz it took a lot more clicks and fumbling (e.g. creating the wrong type of panel for the wrong topic) before gratification
  • Can be run on a robot as a node and just sit there, ready to use at any time by going to robot-ip:8888 (and therefore built with static files and a websocket backend; it doesn’t depend on a node/npm-dependent JS framework like React)
  • Easy extensibility – it wasn’t clear how to add a new custom visualization in WebViz; the codebase looked really hard to edit and extend, especially for most roboticists, who wouldn’t know React. I wanted adding a custom visualization to be a matter of adding just one file.
  • Can be put into the build farms so it’s possible to just apt-get it and then rosrun / ros2 run it in the future
  • The rosbridge-suite is great, but it’s not easy to convince people to install a non-standard mod of it to achieve things like server-side lossy compression and automatic bandwidth detection/throttling, particularly for live-streaming use cases.
  • Live viewing of system info such as dmesg, top, CPU/GPU loads, etc.

This isn’t to knock WebViz; I think it’s a great tool as well, especially when you have a single visualization layout you want to set up in advance and keep reusing over and over on a pile of .bag files. Many times, though, I just want to visualize something quickly, like “what happens to the motor current if I put a load on top of the robot”, or “does the lidar get affected if I shine a light into it”, or “does the CPU usage shoot up if I unplug this camera”, and for those sorts of things I really want to be right next to the robot with a visualization in one hand and a tool in the other. So my focus really was on live-streaming data for the most part.

Native support for dropping v1 or v2 .bag files into ROSboard is also on my to-do list (likely with some help from the MIT-licensed rosbag.js and rosbag2 libraries from WebViz and Foxglove, respectively). Replaying bags was less of a motivation for this tool, but I’d nevertheless like to support it as well.

4 Likes

Hi @horto,
I have added an experimental PointCloud2 integration in the ‘dev’ branch. It would be great if you could help test it on any bags/equipment you have. I’ve only tested it with Velodyne data so far; I’ll be trying a few other sensors over the next few days.

I ended up going with litegl.js for now, since for the sake of visualizing ROS topics it seemed more than enough; three.js was just way more JS bloat, with all kinds of texture/raytracing stuff that isn’t needed for this project. I was able to stream Velodyne clouds to my phone, although admittedly I have a very recent phone (a Pixel 5).

I did a compression hack on the PointCloud2 data where I map all float32 values to uint16 between the min and max values in the dataset; see the sketch below. For a 100m-range Velodyne that gives ~3mm precision, which is more than you can actually visualize in the browser. Also, if there are more than 64K points, it randomly subsamples them and displays a warning indicating that it is doing this. (When I add rosbag support, I imagine I won’t employ these lossy compression tactics, but for streaming live data they’re quite necessary.)
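
The idea is roughly this (a minimal numpy sketch of the quantization step, not the exact code in the repo; the function and variable names are mine):

    import numpy as np

    def quantize_f32_to_u16(points):
        """Map an (N, 3) float32 array of XYZ points to uint16 over the
        dataset's per-axis [min, max] range. A cloud spanning ~200 m
        (a +/-100 m Velodyne) gives 200 m / 65535 steps, roughly 3 mm."""
        pmin = points.min(axis=0)
        pmax = points.max(axis=0)
        span = np.maximum(pmax - pmin, 1e-9)   # guard against zero range
        q = np.round((points - pmin) / span * 65535).astype(np.uint16)
        # Ship q plus (pmin, pmax); the client reconstructs with
        # points ~= pmin + (q / 65535.0) * (pmax - pmin).
        return q, pmin, pmax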

What’s still sorely needed is to properly detect a lagging, low-bandwidth client and throttle messages on the server side accordingly; those kinds of problems show up especially for things like PointCloud2 messages. I’ll be trying to attack that in the next few days.

All:
The ‘main’ branch now supports LaserScan, along with many other UI improvements, including a tree view of topics, better sub/unsub/reconnection logic, etc.

1 Like

@dheera I’ve got a Robosense RS-16 lidar… I have created a ROS2 driver for it.

@dheera Have you tried using a voxel grid filter to downsample the point cloud and reduce the number of points? Downsampling a PointCloud using a VoxelGrid filter — Point Cloud Library 0.0 documentation

@horto Oh right, I should have thought of that. Thanks! I think it’s possible to implement that efficiently in numpy; let me give it a shot, maybe something like the sketch below.
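
A first, untested sketch of what I mean (note it keeps one point per occupied voxel rather than computing the centroid the way PCL’s VoxelGrid does):

    import numpy as np

    def voxel_downsample(points, voxel_size=0.1):
        """Downsample an (N, 3) float array by keeping one point per
        voxel of side `voxel_size` (in meters)."""
        # Quantize each point to its integer voxel index.
        idx = np.floor(points / voxel_size).astype(np.int64)
        # np.unique over rows keeps the first point seen in each voxel.
        _, keep = np.unique(idx, axis=0, return_index=True)
        return points[keep]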

@dheera Have a look at the algorithm in this code here. It’s written in C++ but easy to follow… https://github.com/ucanbizon/downsampling-point-cloud/blob/master/downsample.cpp