This has been a long-running project of mine (I started working on it before WebViz), but I’ve been looking to pick it up again due to various inadequacies I found in WebViz.
A couple of the most important things I’m hoping to achieve with this:
ROS1/ROS2 compatible – it should work in both ROS versions! Tested in noetic, foxy, and galactic; it should also work in kinetic and melodic as long as you pip3 install rospkg. By the way, it makes use of my library “rospy2”, which allows the same code to work in ROS1 and ROS2 (see the sketch after this list).
Mobile-friendly – one of my preferred ways of debugging (especially outdoor) robots is to walk around with the robot and a phone.
Easily extensible – creating a custom visualization involves adding just ONE .js file and referencing it in the main .js file.
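To illustrate the ROS1/ROS2 point above, here’s a minimal sketch of a node written against rospy2 – the topic and message are made up for the example, but the same code should run on both ROS versions:

```python
# Minimal sketch: the same publisher runs on ROS1 and ROS2.
# The topic name and message type are chosen purely for illustration.
import rospy2 as rospy  # on ROS1 this can simply be `import rospy`
from std_msgs.msg import String

rospy.init_node("demo_node")
pub = rospy.Publisher("/chatter", String, queue_size=10)
rate = rospy.Rate(1)  # 1 Hz

while not rospy.is_shutdown():
    pub.publish(String(data="hello from either ROS version"))
    rate.sleep()
```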
Roadmap for things I hope to do in the future. Collaboration and suggestions welcome! I’d also love to hear more about what the community finds lacking in the current state of local visualization tools. This is a FOSS project, BSD licensed.
(DONE) visualizations for OccupancyGrid, LaserScan, PointCloud2, DiagnosticArray, Shape
(DONE) diagnostics aggregation
(DONE) visualizations for Odometry, Point, Point32, Pose, PoseStamped, PoseWithCovariance, PoseWithCovarianceStamped
Shouldn’t be too hard, honestly; I’m just playing with a few different WebGL-based frameworks for it, and also exploring the literature on lossy compression of point clouds.
This is super slick! I am really excited by all of the new ROS user interface stacks that have been released lately. Making ROS 2 more accessible to a general audience is incredibly important right now.
What framework are you using for ROSboard? If you are using React, feel free to check out react-ros and react-ros-three. Both are not necessarily “feature complete”, but they are the beginning of something that I think could be extremely useful to people (like you).
This is very cool! I’m not currently using React; it’s vanilla JS + jQuery at the moment – I know jQuery is kind of “old” but it was very intuitive compared to React. I’m not inherently opposed to learning React or migrating this to some better framework – but I want to avoid this project becoming dependent on something huge like npm / NodeJS, which isn’t usually already installed on robots. I want it to stay self-contained except for a couple of minimal pip-installable dependencies, so that it’s easy to install and run on a RPi or other limited hardware. (I’m not even opposed to going the other direction and making it all vanilla JS without jQuery.)
I see you did write a PointCloud viewer, I’ll check it out! I did see you used three.js, which seems rather heavy (640 kB minified, I think) and has lots of features not necessary for ROS data visualization. I was looking at litegl.js (175 kB) and some other alternatives. I may still end up settling on three.js because it is well-maintained, but it really is quite heavy …
Most definitely agreed! react-ros and react-ros-three are meant mostly for web-hosted sites with a robot connecting via rosbridge… not really installed on the robot itself. So probably a different use case here… sorry, I probably shouldn’t have recommended those.
No problem at all!! I’m glad to know about them; if I were making a hosted website I would definitely look into learning React at this point. I’ll keep them in mind for future projects.
This is stunning and the ideal software for my custom controller! Is it possible to add controls to the UI too? Buttons that send messages, for example?
Yes, it should be possible. I haven’t created a publisher framework, but it should be relatively straightforward to add to the websocket protocol (a hypothetical sketch is below). An on-screen joystick to send /cmd_vel, and maybe a way to click on images and send the clicked point as feedback, would probably be at the top of my list. I didn’t quite get how you plan to use your hardware with this, though – does your controller present itself as a standard joystick that would be visible through e.g. the HTML5 Gamepad API?
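Just to sketch the idea (hypothetical only – none of this exists in ROSboard yet, and the op/field names are invented): the browser could send a small JSON “publish” op over the existing websocket, and the server would lazily create a ROS publisher for it:

```python
# Hypothetical sketch, not ROSboard's actual protocol: a tornado websocket
# handler that turns a JSON "publish" op from the browser into a ROS
# Twist message. The op and field names are made up for illustration.
import json

import tornado.websocket
import rospy2 as rospy
from geometry_msgs.msg import Twist

class PublishHandler(tornado.websocket.WebSocketHandler):
    publishers = {}  # topic name -> rospy.Publisher, created on first use

    def on_message(self, raw_message):
        data = json.loads(raw_message)
        if data.get("op") != "publish" or data.get("topic") != "/cmd_vel":
            return
        if "/cmd_vel" not in self.publishers:
            self.publishers["/cmd_vel"] = rospy.Publisher(
                "/cmd_vel", Twist, queue_size=1)
        twist = Twist()
        twist.linear.x = data["msg"]["linear_x"]    # m/s
        twist.angular.z = data["msg"]["angular_z"]  # rad/s
        self.publishers["/cmd_vel"].publish(twist)
```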
The controller itself is a Raspberry Pi with a touchscreen; the joysticks and switches are running on a Teensy that’s acting as a custom joystick. I’ve a node running already that sends Joy messages to the robot over wifi, so it wouldn’t hook into your UI directly. What I don’t yet have is a UI for it. I can stream video to the browser easily enough, but I’m not far enough into my journey with ROS to get to grips with UI stuff yet. I’m a Unity dev by trade though, so I’ll be playing around with ROS and that before long.
My robot is essentially a 35cm tall clone of Johnny Five, I can control everything manually with the joysticks but it’s more than a bit clunky, I’d like to be able to send messages to move the arms to preset positions or routines. I’m working on an action server for that at the minute.
I’ve not done JavaScript in a good long while but I have played around with WebSockets in the past for a previous non-ROS robot. I’ll have a look at the code to see if I can get my head around it.
Quick demo of the robot, I’ll be doing a longer overview/what went wrong with PiWars video for it shortly if you’re interested in learning more.
Hi @chfritz, here are some of my main reasons for wanting to build something different:
ROS2 support
Mobile-friendliness – I wanted this to be easy to use on a phone or tablet while walking next to the robot and visualizing topics, and lightweight enough on the UI side to not bog down a phone browser
Instant gratification of clicking a topic and seeing stuff – the split layout thing is nice if you have a pre-defined layout you want to keep re-using, but less nice if you just want to get visualizing ASAP. I often found that in WebViz it took a lot more clicks and fumbling (e.g. creating the wrong type of panel for the wrong topic) before gratification
Can be run on a robot as a node and just sit there, ready to use at any time by going to robot-ip:8888 (and therefore it’s built with static files and a websocket backend, and doesn’t depend on a node/npm-dependent JS framework like React)
Easy extensibility – It wasn’t clear how to add a new custom visualization in WebViz; the codebase looked really hard to edit and add onto, especially for most roboticists, who wouldn’t know React. I wanted it to be a matter of adding just one file to add a custom visualization.
Can be put into the build farms so it’s possible to just apt-get it and then rosrun / ros2 run it in the future
The rosbridge-suite is great, but it’s not easy to convince people to install a non-standard mod of it to achieve things like server-side lossy compression and automatic bandwidth detection and throttling, particularly for live streaming use cases.
Live viewing of system info such as dmesg, top, CPU/GPU loads, etc.
This isn’t to knock on WebViz; I think it’s a great tool as well, especially when you have a single visualization layout you want to set up in advance and keep reusing over and over again on a pile of .bag files. Many times, though, I want to just visualize something quickly, like “what happens to the motor current if I put a load on top of the robot” or “does the lidar get affected if I shine a light into it” or “does the CPU usage shoot up if I unplug this camera”, and for those sorts of things I really want to be right next to the robot with a visualization in one hand and a tool in the other. So my focus really was on live streaming data for the most part.
Native support for dropping v1 or v2 .bag files into ROSboard is also on my to-do list (likely with some help from the MIT-licensed rosbag.js and rosbag2 libraries from WebViz and Foxglove, respectively). Replaying bags was less my motivation for this tool, but I’d nevertheless like to support it as well.
Hi @horto,
I have added an experimental PointCloud2 integration in the ‘dev’ branch. It would be great if you could help test it on any bags/equipment you have. I’ve only tested it with Velodyne data so far; I’ll be trying a few other sensors over the next few days.
I ended up going with litegl.js for now, since for the sake of visualizing ROS topics it seemed more than enough; three.js was just way more JS bloat, with all kinds of texture/raytracing features that aren’t needed for this project. I was able to stream Velodyne clouds to my phone, although admittedly I do have a very recent (Pixel 5) phone.
I did a compression hack on the PointCloud data where I map all float32 values to uint16 between the min and max values in the dataset. For a 100 m range Velodyne that gives ~3 mm precision, which is finer than what you can actually visualize in the browser. Also, if there are more than 64K points, it will randomly subsample the points and display a warning indicating that it is doing this. (I imagine when I add rosbag support I won’t be employing all these lossy compression tactics, but for streaming live data it’s quite necessary.)
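The quantization step is roughly like this (a simplified sketch of the idea, not the exact code in the repo):

```python
import numpy as np

MAX_POINTS = 65536  # beyond this, randomly subsample and warn the client

def compress_cloud(xyz):
    """Lossy-compress an (N, 3) float32 point array as described above."""
    # Randomly subsample if the cloud is too large to render smoothly.
    if xyz.shape[0] > MAX_POINTS:
        idx = np.random.choice(xyz.shape[0], MAX_POINTS, replace=False)
        xyz = xyz[idx]

    # Map each axis from [min, max] onto the full uint16 range.
    vmin = xyz.min(axis=0)
    vrange = xyz.max(axis=0) - vmin
    vrange[vrange == 0] = 1.0  # avoid division by zero on degenerate axes
    quantized = ((xyz - vmin) / vrange * 65535.0).astype(np.uint16)

    # A ~100 m range sensor spans ~200 m, and 200 m / 65535 levels is
    # about 3 mm -- finer than anything visible in a browser viewer.
    return quantized, vmin, vrange  # client: xyz ~ q / 65535 * vrange + vmin
```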
What’s still sorely needed is to properly detect a lagging, low-bandwidth client and throttle messages on the server side accordingly; those kinds of problems show up especially for things like PointCloud2 messages. I’ll be trying to attack that in the next few days.
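One approach I’m considering (just a sketch of the idea, nothing implemented yet): have the browser ack each frame it finishes rendering, and skip sending new frames to any client with too many unacknowledged frames in flight:

```python
# Sketch of one possible server-side throttling scheme (not implemented):
# the browser acks every frame it finishes processing, and the server
# drops new frames for clients that have fallen too far behind.

MAX_FRAMES_IN_FLIGHT = 2  # tolerate a little latency before dropping

class ClientThrottle:
    def __init__(self):
        self.frames_in_flight = 0

    def try_send(self, websocket, payload):
        """Send payload unless the client is lagging; return True if sent."""
        if self.frames_in_flight >= MAX_FRAMES_IN_FLIGHT:
            return False  # client hasn't kept up; drop this frame
        self.frames_in_flight += 1
        websocket.write_message(payload, binary=True)  # tornado websocket API
        return True

    def on_client_ack(self):
        self.frames_in_flight = max(0, self.frames_in_flight - 1)
```

A slow client then naturally degrades to a lower frame rate instead of building up a multi-second backlog in the socket buffer.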