I see this being useful in cases where gscam makes it hard to produce multiple topics from one GStreamer pipeline, or where applications that support GStreamer plugins need to access ROS topics (OBS Studio, gst-launch, etc.).
It cross-compiles for the Raspberry Pi Zero W, and can be used to make OBS Studio (live-streaming software) subscribe to a ROS image topic.
The pipeline package hosts a GStreamer pipeline generated from a config yaml at launch, and exposes any GStreamer element properties it finds as sensibly named and typed ROS parameters that can be updated at runtime. This works on the rpicamsrc element to adjust H264 bit-rate and shutter speed on the fly.
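For example, once the pipeline is up you can retune the camera from the command line; the node and parameter names below are illustrative and depend on your config yaml:

ros2 param set /gst_pipeline rpicamsrc_bitrate 2000000
ros2 param set /gst_pipeline rpicamsrc_shutter_speed 10000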
This code is not production ready, but it's at a point where I'm looking for criticism.
Specifically, I'd like to open a discussion about how ROS clocks can or should be mapped to GStreamer clocks; this gets really twisty when sim time and accelerated playback get involved.
Hi. Nice project.
Are there any plans to port it to ROS1?
Can it be used to get video from a ROS image topic, encode it in H264, and send it back to a ROS video topic (e.g. kinesis_video_msgs/KinesisVideoFrame)?
I'm not planning on porting to ROS1, but I'd be happy to maintain a port.
I have been thinking about transporting H264 over DDS, but the easiest way seems to be through an element that deals with raw byte streams like GStreamer's udpsink, emitting std_msgs/String or similar.
A byte/string sink would let people use pocketsphinx easily for voice commands, and it would allow DDS to carry arbitrary compressed data.
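As a rough sketch of the idea, a pipeline feeding such an element might look like the line below; rosbytesink and its ros-topic property are invented names for the proposed element, nothing like it exists in the package yet:

videotestsrc ! x264enc tune=zerolatency ! h264parse ! rosbytesink ros-topic=/h264_bytes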
At the moment, I'm running H264 over UDP, using RTP to frame the data (all constructed from launch files):

rpicamsrc bitrate=10000000 preview=0 ! video/x-h264,width=640,height=480,framerate=10/1,profile=high ! h264parse ! rtph264pay config-interval=1 pt=96 ! udpsink host=<target_ip> port=<target_port>

and receiving with:

udpsrc port=<target_port> caps="application/x-rtp, media=video, encoding-name=H264, payload=96" ! rtph264depay ! avdec_h264 ! autovideosink
In this case, there's no reason you can't replace autovideosink with a rosimagesink and watch it in rqt; a byte src & sink would let DDS carry the compressed data the whole way.
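Something like this on the receiving end; I'm assuming rosimagesink accepts raw video after a videoconvert, so check its actual caps and properties with gst-inspect-1.0 rosimagesink:

udpsrc port=<target_port> caps="application/x-rtp, media=video, encoding-name=H264, payload=96" ! rtph264depay ! avdec_h264 ! videoconvert ! rosimagesink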
I'm not sure how I'd design an API to cope with things like KinesisVideoFrame; it's too specialised to be a dependency, but so much of the logic would be shared with the existing elements that it should be a sub-class, if not a plugin.
I'm already planning a significant refactor; I'll see how far I can prune the message-specific code so you can sub-class from something.
The Kinesis video encoder is perfectly OK for most cases, except on NVIDIA Jetson/Xavier with the hardware encoder, which is currently supported only through GStreamer.
Also, a GStreamer solution can give more flexibility.
Actually, it's not GStreamer-only. GStreamer uses the h264omx encoder to do the encoding, and the Kinesis encoder has a ROS param that lets you select an ffmpeg encoder. You should be able to set that to h264_omx and take advantage of hardware encoding.
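For what it's worth, selecting that encoder in plain ffmpeg looks like the line below; whether h264_omx actually exists depends on how ffmpeg was built for the platform, which turns out to be the catch:

ffmpeg -f lavfi -i testsrc=duration=5:size=640x480:rate=30 -c:v h264_omx out.mp4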
You can select another encoder, but again, hardware H264 encoding is not supported with ffmpeg on the NVIDIA Jetson; only decoding is.
To do hardware encoding, you need to use either GStreamer or the NVIDIA Multimedia API.
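For reference, a minimal Jetson-side sketch using NVIDIA's GStreamer plugins; element names vary by JetPack release (newer ones ship nvv4l2h264enc, older ones omxh264enc):

gst-launch-1.0 videotestsrc num-buffers=300 ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! nvv4l2h264enc bitrate=8000000 ! h264parse ! qtmux ! filesink location=test.mp4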