Coordinating different companies to reach a common view on a complex matter is always hard, and in this case there are multiple design options leading to different tradeoffs.
Our submission (RTI, Twin Oaks & eProsima) already aligns the views of three different DDS vendors, and we will certainly try to incorporate ideas from your submission.
Our submission tries to accomplish the following:
Propose a model familiar to the end user, making use of the DDS object model and specifications: serialization (CDR), representation (XML-DDS), and some ideas from the WS-DDS mapping
Neither the protocol nor the API is designed to save every possible bit; the aim is something robust, flexible, and easy to use.
We coded a PoC of our submission, and during the presentation you asked about numbers. At that point, with no optimizations at all and in debug mode, we answered 43 KB. I asked my team to optimize the code a little, and here are the numbers we have now for the client:
Total Memory Use: 8 KB
We could squeeze that down even a little more. We are releasing this as open source (Apache 2) so anyone can review the results.
But again, we are not aiming to be as small as possible. We are covering all the requirements of the DDS-XRCE RFP and testing our solution on what we consider typical microcontrollers today.
Regarding the process at the OMG meeting, we (RTI, Twin Oaks & eProsima) didn't want to pit the two specifications against each other and choose one of them; we wanted time to incorporate ideas from your submission, and that is why we now have an extended deadline.
Let me join in the fray. I’m the other author of the PrismTech submission and the one who built our tiny prototype. I wasn’t present at the OMG meetings, and I won’t waste any words on what may or may not have happened there.
Firstly, I don’t think a contest of bytes of RAM adds real value to the discussion, although of course it is an honourable contest in itself. I am surprised that you, @Jaime_Martin_Losa, had never even had a proper look at the memory use of your implementation, given the purpose of the exercise in the first place; but if it is 8 kB now then it is much better already, if still 7 kB overweight. In any case, memory use is determined more by the implementation than by the protocol messages.
The precise overhead on the wire is of more interest, as this is fundamental to the protocol. BLE gives you 20 bytes to play with, and a difference of a few bytes of header adds up in that context. Furthermore, as Angelo pointed out, we have customers to whom 8 bytes is too many already. Yet even that is not of such great interest to me in this discussion.
What really matters in my opinion is a difference in philosophy. The two proposals suggest very different views of what one would ideally want to accomplish.
The RTI/TwinOaks/eProsima proposal is limited to providing a means for performing DDS operations remotely, and I don’t see how it can do anything other than that. In a sense, it is just a hand-crafted alternative to CORBA with a lower overhead. (Simply using CORBA actually “just works” if the DDS implementation is faithful to the IDL interface mappings, even if it is ugly.) To me, this route is a pragmatic way of going about satisfying the RFP, but at the same time, an uninteresting one. (Sorry @Jaime_Martin_Losa and others …)
We chose to design a compact protocol that can support what amounts to performing DDS operations remotely, but doesn’t limit itself to that. Instead, it also supports a DDS-like peer-to-peer network with vastly lower overhead, and, in many ways, a level of flexibility in specifying what data is of interest (through URIs and selections) that DDS doesn’t natively support.
All of that would be of little value if it doesn’t perform well or doesn’t scale well. Just as code size and memory use are mostly determined by the implementation, maximum sustainable performance is also determined more by the implementation than by the details of the protocol headers. Size-wise, my prototype can run as a client on an Arduino Uno (8-bit CPU, 2kB RAM). A small test application using the same implementation configured as a peer easily sends ~700k 8-byte msgs/s from one RPi3 to 3 others (CPU is << 100%, network load ~75% of Fast Ethernet, so I really should investigate why it isn’t faster), and goes another order of magnitude faster when run over local loopback on my MBP. That’s better than your typical DDSI implementation. Is this relevant? That depends on whether you have high-rate, tiny messages …
Now my test application doesn’t implement all of DDS — not even close — and this is another significant reason why it can do this with only a few kB of code and RAM. At the same time, this is, I believe, where it gets interesting for ROS2.
As ROS2 has its own middleware abstraction layer that uses only a fraction of the DDS feature set, putting ROS2 directly on our protocol would get you a smaller and a faster system. Smaller and faster usually allows doing more interesting things, even if I can’t say what exactly those interesting things will be.
Disclaimer: I can’t run ROS 2 over it today; there’s more work to be done on my prototype before it supports all that is required. And I wish none of you had to take my word for the data I mentioned, but that is not something that is in my power to solve today.
It looks like I started something of a minor storm moments before starting a holiday…
I’m thankful that the DDS vendors, PrismTech, RTI, Twin Oaks and eProsima, are all engaged enough in the ROS community to be present on the Discourse board. It is encouraging to future adopters of ROS 2.
I wasn’t implying religious belief. It’s simply an expression to describe someone making an assertion. I fully agree that PrismTech’s submission has much smaller messages than the RTI/Twin Oaks/eProsima submission, based on the two presentations alone.
While this is true, the other submission has managed to define an object model as well, and in addition kept it close to the existing DDS one.
It is quite common to use flags. TCP is full of them. My concern is that the protocol is, in my opinion, undoubtedly complex and, based solely on the presentations, more complex than the other submission. Complexity and size are often a balance and in this situation we appear to have one submission at each end of the balance.
With the sorts of overhead you are achieving, I’m not surprised performance is amazing. I’d still like to see numbers, though.
As was stated elsewhere in this thread, eProsima’s implementation wasn’t optimised. Since you said you are trying to bring yours to market already and eProsima claimed theirs was a tech demo, I’m not surprised yours is more optimised and thus smaller. Of course, the numbers are definitely in your favour for RAM usage. But I’m curious how much program memory each implementation requires, too. This is often the limiting factor on embedded microprocessors rather than the RAM usage.
I didn’t catch that even once during the presentation. Next time, put such an important motivating factor in your slides. RTI and co were much better at motivating their design decisions, and that put a positive spin on their submission.
You probably also should have put that requirement in the RFP, if it’s that important. The other submission cannot aim for a requirement they are not aware of.
Since the submissions have not gone to the AB yet, as far as I know, then there should not be any official AB reviews, which suggests to me that RTI asked for unofficial reviews from AB members. This may be why they are not public?
In that case I am very interested in seeing what these AB members wrote.
That may have been RTI’s reason, but the reason the rest of us present voted no is that we still have two vastly different submissions with no apparent readiness to work towards a single one. PrismTech even behaved in their presentation as if they expect RTI, Twin Oaks and eProsima to throw away their submission and go with PrismTech’s.
“Cutting down”, not “cutting out”. I meant that you are trying to reduce the size of the messages on the wire as much as possible, at the expense of possibly needing more complex parsing code. I didn’t mean to say that you are cutting out features. It was clear from the presentation that PrismTech supports more features and has more flexibility than the other submission. But there are trade-offs involved.
Neither of these are required by the RFP. The RFP heavily directs the submitter towards the style of architecture that RTI/Twin Oaks/eProsima provided.
Yes, this is something that I was disappointed about. Hearing RTI’s presentation talk about TCP/IP only seemed to rule out using it on things like Zigbee. But on the other hand, perhaps it’s readily adaptable?
Well, it is DDS-XRCE, is it not?
It was very useful. I wish I had had this information during the presentation. I still have not had time to read the submissions in detail and unfortunately will not be able to do so before November, but fortunately we now have until February next year to try and resolve this situation.
And, as @vmayoral said, having code available would make a difference to how well we can judge things like implementation complexity.
While this is true, we are operating at a standardisation organisation, not a rubber stamp provider. There are interested parties beyond just the implementers. We would prefer not to just hold a vote on which submission to go forward with. We would prefer the submitters to actually work together and produce a single submission that combines the best of both without any technological compromises (yes, I’m aware how hard that is).
Or, you could provide the answers to those questions, along with the equivalent answers for your submission, so we can see and compare the data to back up your claims.
This is the strongest impression I got from the presentation, as I said in my own notes. The data model is similar to DDS, which makes adoption by existing DDS users easier, and the ability to implement the protocol in a relatively simple piece of code (which makes it easier to verify and certify) was considered as important as saving bytes on the wire. I’m not sure where the correct balance is between these two requirements, but the PrismTech implementation really gave the impression of being at one extreme.
This is something that I think is really relevant but that PrismTech have not addressed at all, and RTI/Twin Oaks/eProsima have not addressed enough. What are the typical microcontrollers in use today? What are the target environments for this protocol to be used in?
The RFP explicitly says this:
Both submissions fit within both the RAM usage limit (with much room to spare) and the protocol overhead limit.
More specifically, the actual mandatory requirement is:
Again, this should have been in the RFP if it is so important. That would have saved a lot of trouble. All we got was an evaluation criterion:
This is a miserably small set of criteria for a complex design space. Even you, @eboasson, say that protocol overhead is not as important as the design philosophy.
I agree that this is fundamentally different between the two proposals, and that it is the root cause of the failure to reconcile them.
The RFP not-so-subtly pushes submitters in this direction. You cannot fault them for taking it at face value.
I will read both submissions and when I do, I will report back with more technically-informed comments.
And you see this as a positive aspect? Our model is simpler and more user friendly. For instance, how many people can digest DDS partitions? That said, we have a well defined mapping between XRCE resources and DDS topics.
Are you an OMG member? If so I’ll forward you the reviews. Both submissions went to the AB and the reviews were posted on both email@example.com and firstname.lastname@example.org. If you have access to those mailing lists you’ll be able to see them. I also recommend you take a look at those.
It was impossible, as the other vendors did not want to agree on such a low bound. The 24 bytes was the least we could agree on. This is why there is an evaluation on wire efficiency. These matters were discussed at length, but again, I don’t think you attended those meetings, thus you are missing part of the history and the context. In any case, all of those documents are in the OMG archives, so if interested you can reconstruct it. Just search for the presentations I did on XRCE over almost a year, starting from 2015!
Yes, that is correct; since the very beginning we have been trying to do a joint submission. They’ve refused with futile arguments, if you ask me. We have put lots of effort into trying to join forces, but that has not been reciprocated. A pity that you were not at the Bruxelles meeting, otherwise you would have had a taste of it.
Again, you did not attend the endless arguments we had during the RFP drafting. RTI does not want peer-to-peer in XRCE because they fear it could become a substitute for DDSI-RTPS. Again, this is not something I am inferring, but something that was openly debated during the RFP drafting. We don’t have any issue with that, as we think that having a more efficient protocol than DDSI-RTPS for some use cases would be extremely useful.
For me that completely disqualifies the submission, as in LoWPAN environments nobody can afford to use TCP/IP…
I am glad that this helped clarify the situation; please don’t hesitate to ask any other questions. Concerning the code availability, we are working on that. I’ll keep you posted.
I wanted to let you know that we have just released, under Apache 2, a peer-to-peer implementation of our zenoh protocol called Zeno-He (Zenoh Helium). This implementation fits in about 1 KB of RAM and has 4 bytes of wire overhead. It is not only incredibly resource efficient but also blazing fast, delivering very high point-to-point throughput and low latency.
We will be releasing a brokering system by the end of the year, and we would be glad to help integrate zenoh as one of the protocols supported by ROS 2. This could allow bringing ROS 2 to microcontrollers!
N.B. For those of you who are familiar with XRCE: zenoh is the protocol we are proposing for standardisation. But as the standard is not finalised yet, we will keep referring to it as zenoh.
Thank you for publishing the Zeno-He library so we can all begin to interact with it. It is especially useful for the ROS 2 user community to be aware of the effort since it implements the ATLab XRCE proposal.
I know I would be extremely interested in someone benchmarking Zeno-He and the proposed eProsima XRCE implementation, and perhaps I can find time to do that.
I’ve finally found the time to read through both submissions (and just in time, too, with the OMG meeting next week!). Here are the notes I took for myself as I was reading through them, along with some thoughts at the end.
Client-server protocol to allow a resource-constrained device to interact with a DDS domain via a gateway (the “Agent” server).
Use of the client-server (broker) architecture is what allows the low resource usage.
The specification defines a simplified object model that acts as a facade to the standard DDS object model, enabling lower resource use to access a DDS domain.
Most DDS configuration is assumed to be doable on the agent (DDS side), so configuration options on the XRCE side are limited. This contributes to the simplified object model.
Access control, access rights, and managing disconnected clients are new features (over base DDS?) included in the facade object model.
Management of disconnected devices is handled using a session concept that persists across connections between the client and the server (e.g. when the client goes to sleep).
A pull mode is available for clients that do not want data coming in randomly. The client can query the object model on the server rather than changes being updated and pushed out in real-time.
The specification can be used for anything from extremely simple, pre-configured clients up to fully capable DDS devices (why these cannot just use DDS is not made clear).
The object model is resource-based: DDS-XRCE types, DataWriters, DataReaders and so on are represented as resources with a name, properties and behaviour.
Resource implementation is outside the scope of this document.
Resources may be shared or dedicated. e.g. Multiple clients might share a single DataWriter on the server.
Clients can only talk to each other via the DDS domain. i.e. Client 1 -> Server -> DDS domain -> Server -> Client 2. Multiple servers may also be involved.
Data can be sent as a single sample, a sequence of samples, either of these with metadata, or packaged data.
References to objects on the server can be made using a name (but it must be pre-defined?), an XML string, or a binary XCDR-serialised reference (although this is not available for all object types).
Clients can choose a QoS profile that is pre-defined on the server using a named reference. Or they can provide a QoS profile via DDS-XML that they wish the server to use for them. A combination of these is also possible.
All operations on the server are authenticated, and require a ClientKey. This is also used to identify clients.
Obviously authentication could be as broad as “anyone welcome”.
Creation and configuration of the ClientKey is out of scope (not great for interoperability).
Although the specification calls for authentication, it may be easy for a developer who is not careful and uses credentials widely to create clients that step on each other, messing with each other’s objects on the server.
In many ways this specification feels like a remote control for DDS, rather than a low-resource protocol and middleware in its own right.
The protocol is targeted at networks with a minimum of 40 Kbps of bandwidth, so you can give up on your 14.4 Kbps modem now.
A design goal of the protocol was that a complete implementation require “less than 100 KB of code”.
Clients absolutely cannot operate on their own; they must have the server available to function. No peer-to-peer communication is possible.
No vendor-neutral API is proposed.
The transport requirements are fairly strict. Fortunately most transports these days provide them. The requirements include:
Must be able to deliver messages of 64 bytes.
Message integrity must be guaranteed (but not reliability; messages may be dropped).
Must provide transport level security.
The protocol consists of a session, which carries one or more message streams with independent reliability settings. Each stream consists of ordered messages with sequence numbers so dropped messages can be detected and message order can be restored if the transport changes it.
The reliability setting of a stream is determined by the stream ID, rather than being a separate flag header or something like that. Streams with an ID in a certain range have a certain type of reliability. (Effectively the first bit of the session ID is a flag for reliable or not.)
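As an aside, this ID-range scheme is cheap to implement: the receiver classifies a stream with a single comparison and no extra flag byte on the wire. The boundary value below is my own invention for illustration, not the value mandated by the submission:

```python
# Illustrative sketch: reliability inferred from the stream ID itself.
# The boundary (0x80, i.e. the high bit of the ID) is an assumption for
# illustration, not taken from the DDS-XRCE submission.
RELIABLE_THRESHOLD = 0x80


def is_reliable(stream_id: int) -> bool:
    """Classify a one-byte stream ID: IDs with the high bit set are
    treated as reliable, the rest as best-effort."""
    if not 0 <= stream_id <= 0xFF:
        raise ValueError("stream ID must fit in one byte")
    return stream_id >= RELIABLE_THRESHOLD


# No separate flag is needed in the header:
assert not is_reliable(0x01)  # best-effort stream
assert is_reliable(0x81)      # reliable stream
```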
Each message contains one or more sub-messages.
This structure reduces some resource usage, e.g. a single header can apply to many sub-messages, or a single message can operate on multiple resources on the server.
The payloads of most submessages are XCDR-encoded binary data.
The payload can be up to 32 KB.
Message overhead is between 8 and 12 bytes, with an additional 4 bytes for every additional sub-message.
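Taken together with the earlier note that one header can cover many sub-messages, these figures mean batching amortises the fixed cost. A quick sanity check using the numbers quoted in these notes:

```python
def message_overhead(num_submessages: int, base: int = 8) -> int:
    """Total header overhead for one message, using the figures quoted
    in the notes above: a base header of 8-12 bytes plus 4 bytes for
    every additional sub-message."""
    if num_submessages < 1:
        raise ValueError("a message carries at least one sub-message")
    return base + 4 * (num_submessages - 1)


# Batching ten sub-messages costs 44 bytes of overhead in total,
# i.e. 4.4 bytes each, versus 8-12 bytes each when sent separately.
assert message_overhead(1) == 8
assert message_overhead(10) == 44
```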
The interaction model is purposely simple, allowing for pre-configuration to replace DDS’s discovery, etc. It is possible to rapidly initiate a session and begin writing data, assuming the server is available, configured correctly and connected to the DDS domain.
A fairly well-thought-out heartbeat system is available to maintain reliable communication.
The discussion of overhead should have also considered low-overhead transports such as IEEE 802.15.4-based transports. TCP may be an average case, a good case, or a bad case for relative overhead but because no data is provided it is hard to say. (My own brief research suggests that TCP is not a good choice for evaluation.) Message overhead should be compared to the commonly expected payload size rather than the transport size, since the transport used is up to the implementer.
Some of the arguments against reducing overhead are not strong. Reducing the number of possible stream IDs (and thus the number of possible streams) is arguably not a problem; how many streams is a small device likely to need in the common use cases? 256 streams seems like a lot for a device when the common example of a DDS-XRCE device given is “a temperature sensor”. Needing 8 bits for the sub-message type to allow future evolution of the protocol smells like aligning things on an 8-bit boundary; dropping 4 bits would certainly leave only two slots for new sub-message types, but dropping 3 would leave 18 and dropping 2 would leave 50.
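For what it’s worth, the slot counts in my note above imply 14 sub-message types currently defined (an inference from the figures, not a number I checked against the spec). The arithmetic is then just a power of two minus the defined types:

```python
DEFINED_TYPES = 14  # inferred from the slot counts quoted above, not verified


def free_slots(type_field_bits: int) -> int:
    """How many sub-message type codes would remain unused for future
    protocol evolution, given the width of the type field in bits."""
    return 2 ** type_field_bits - DEFINED_TYPES


assert free_slots(4) == 2    # dropping 4 of the 8 bits
assert free_slots(5) == 18   # dropping 3 bits
assert free_slots(6) == 50   # dropping 2 bits
assert free_slots(8) == 242  # the full byte, as specified
```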
Ultimately the message overhead discussion comes down to knowing what the use case is. Does an extra byte here or there matter that much? For ROS, possibly not.
Sample message sizes:
30 bytes to initiate a session.
13 bytes to request to read a single sample of data, followed by 15 bytes reply for the (4 byte) sample.
23 bytes to request multiple samples.
47 bytes to receive a sequence of two 4-byte samples with meta-data (12 bytes per sample).
Although XML is syntactically more exact, a more compact and easier-to-process representation such as JSON could have been used instead. But, as noted, there is an existing DDS-XML specification, so reusing it makes sense.
The demonstration implementation requires a microcontroller with 256 KB of RAM and running an operating system (NuttX). No demo with an OS-less microcontroller is mentioned. You won’t be running this on an Arduino.
The protocol is small and simple. It would be easy to implement (they state less than 2000 lines of code). It provides access to the entirety of DDS capabilities, which may be important for ROS, but it does so at the expense (in hardware and run-time costs) of needing a gateway server.
Despite appearing to be the more complex protocol during the presentations in September, the specification itself is half the length. Fewer diagrams?
This submission is much more formalised than the other.
The three main goals of this submission are extremely low footprint (an Arduino Uno is cited), extremely efficient wire protocol (overhead of just a few bytes), and supporting devices that regularly sleep.
This submission pays no attention to the API. It is only interested in the wire protocol.
Discovery is supported, and is also a separate compliance point so vendors don’t have to implement it if their target platform is too small.
Static configuration is possible.
Resources are used to represent information to be exchanged, with properties of these available. Resources are identified by a URI; the properties are always accessed via a /property postfix to the URI.
Reliable is the default setting.
Durable and transient resources are also available.
A query syntax that allows filtering resources is provided. For example, all resources where a data member(?) is above a given value. This is equivalent to the DDS filter expression topic subscription.
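To make the idea concrete, here is a toy selection over URI-identified resources. The `member>value` query syntax and the resource layout are invented for illustration; the submission defines its own syntax, which I have not reproduced here:

```python
import operator

# Hypothetical sketch: select resources whose named data member
# satisfies a comparison, similar in spirit to a DDS content filter.
OPS = {">": operator.gt, "<": operator.lt, "=": operator.eq}


def select(resources: dict, query: str) -> list:
    """Return URIs of resources matching e.g. "temperature>25"."""
    for sym, op in OPS.items():
        if sym in query:
            member, raw = query.split(sym, 1)
            value = float(raw)
            return [uri for uri, data in resources.items()
                    if member in data and op(data[member], value)]
    raise ValueError("unsupported query")


rooms = {
    "/building/room1": {"temperature": 28.5},
    "/building/room2": {"temperature": 19.0},
}
assert select(rooms, "temperature>25") == ["/building/room1"]
```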
This submission uses an interaction model fundamentally similar to DDS, with DDS-XRCE participants reading and writing data in a data space.
An implementation can use a set of brokers, or a pure peer-to-peer infrastructure, or a mixture. XRCE clients can exist and function without any kind of special server.
The message header is a single byte, with 5 bits for message ID. This allows up to 32 message types.
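A 5-bit ID in a one-byte header leaves 3 bits over. The packing below is only a sketch of the general technique; where the ID sits within the byte and what the remaining bits mean are my assumptions, not details from the submission:

```python
# Sketch of a one-byte header carrying a 5-bit message ID plus 3
# leftover bits (treated here as generic flags). Bit placement is an
# assumption for illustration, not taken from the zenoh/XRCE spec.

def pack_header(msg_id: int, flags: int = 0) -> int:
    if not 0 <= msg_id < 32:
        raise ValueError("message ID must fit in 5 bits")
    if not 0 <= flags < 8:
        raise ValueError("flags must fit in 3 bits")
    return (flags << 5) | msg_id


def unpack_header(byte: int) -> tuple:
    """Split a header byte back into (msg_id, flags)."""
    return byte & 0x1F, byte >> 5


assert unpack_header(pack_header(17, flags=0b101)) == (17, 0b101)
```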
Messages may be decorated with additional markers.
Variable length encoding is used for things like message length and integers.
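For readers unfamiliar with the technique, a common variable-length scheme uses 7 data bits per byte plus a continuation bit (LEB128-style, as in Protocol Buffers). The submission specifies its own encoding; this sketch only illustrates the general idea and why small values get small encodings:

```python
# Generic base-128 varint sketch (unsigned): 7 payload bits per byte,
# high bit set on all but the last byte. Not the exact encoding from
# the submission, just an illustration of the technique.

def encode_varint(n: int) -> bytes:
    if n < 0:
        raise ValueError("unsigned values only in this sketch")
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)  # continuation bit: more bytes follow
        else:
            out.append(byte)
            return bytes(out)


def decode_varint(data: bytes) -> int:
    result = shift = 0
    for byte in data:
        result |= (byte & 0x7F) << shift
        shift += 7
        if not byte & 0x80:
            return result
    raise ValueError("truncated varint")


# Values under 128 cost one byte; a message length of 300 costs two.
assert encode_varint(127) == b"\x7f"
assert len(encode_varint(300)) == 2
assert decode_varint(encode_varint(300)) == 300
```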
Sequences and strings also have an encoding specified; XCDR is apparently not used.
Message payload may be any size within the limits of the transport.
Following discovery (or startup for a static configuration), a session is established between every pair of XRCE applications talking to each other. Part of opening a session includes ensuring that both sides can handle the same range of sequence sizes to avoid sequence number roll-over problems.
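The roll-over concern is the classic one: with a fixed-width counter, "newer than" has to be computed modulo the counter size, and that only works if both sides agree the window is less than half the number space (RFC 1982-style serial arithmetic). This sketch illustrates the problem being avoided, not the negotiation mechanism the submission actually defines:

```python
# Serial-number comparison for a fixed-width sequence counter,
# tolerating roll-over. 16-bit width chosen arbitrarily for the sketch.
BITS = 16
MOD = 1 << BITS
HALF = 1 << (BITS - 1)


def is_newer(a: int, b: int) -> bool:
    """True if sequence number a is newer than b, assuming the two are
    less than half the number space apart (the agreement the session
    handshake is there to guarantee)."""
    diff = (a - b) % MOD
    return diff != 0 and diff < HALF


# 0 is newer than 65535: the counter has simply wrapped around.
assert is_newer(0, MOD - 1)
assert not is_newer(MOD - 1, 0)
assert is_newer(100, 50)
```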
Sessions are kept alive as long as a message is exchanged during the specified lease period. There is a keepalive message that can be used when nothing else is sent. Both sides must actively maintain the session.
Sessions can exist across multiple transports, so it is possible to have multiple connections at the transport level using different transports and merge them into a single session, allowing the best transport at the time to be used (e.g. UDP for best-effort data and TCP for reliable data).
Multiple sessions cannot exist on the same connection because sessions are uniquely identified by the locator (i.e. address of the client). However since multiple readers and writers can exist within a single session this is not a significant limitation.
Authentication is included in the protocol, but the details are left up to the implementation.
After establishing a session, resources can be created using special messages. An atomic approach is supported, with all resources being requested and then a final commit message being sent to actually trigger their creation.
Data samples can be sent singly, in a stream, or in batches.
Data can be pulled or pushed.
Data fragmentation is supported allowing samples of arbitrary size.
There is a message available for round-trip latency estimation.
It is not clear how sleep cycles combine with the peer-to-peer operation mode. If one client sleeps, then wakes up and asks for data from another client (which it couldn’t receive earlier due to being asleep) but the publisher of that data is asleep, the system will deadlock.
Sample message sizes:
3 bytes for discovery probe.
4 bytes plus data size for a data sample.
The PrismTech submission is undoubtedly more complex, but it is also undoubtedly more powerful - although how much more depends on your use case. Most significantly, it supports discovery, and DDS-XRCE applications do not need a server running to communicate even amongst themselves. The RTIandCo submission, on the other hand, is simpler but does not support any form of P2P communication, requiring a server to always exist even if you only have DDS-XRCE applications. Both would require some kind of gateway (which is explicitly present in the RTIandCo submission) to talk to DDS-RTPS, but while the PrismTech one would require the data to be unpacked and repacked, the RTIandCo one probably would not because it uses XCDR for DDS-XRCE.
The PrismTech submission is superior for tiny-scale devices. There are many examples of these in use today, such as sensor motes. But for the ROS use case, are such tiny devices relevant? Regarding which is more suitable for ROS, this is not a straightforward question. PrismTech’s submission is more suited to implementing ROS on top of as a standalone rmw implementation because it would not require that a server always be present. On the other hand, it lacks a lot of the QoS capability of DDS, which the RTIandCo submission supports. But the RTIandCo submission is more like rosserial, rather than the fully decentralised communications middleware that the PrismTech submission is. This doesn’t mean that an rmw could not be built on top, but it would not be as straightforward to use, requiring additional functionality in roslaunch.
Based on the presentation, I got the impression that the PrismTech submission was very complex with many branching paths in processing a message, and the RTIandCo submission is relatively simple. Reading the specifications made clear that the RTIandCo submission is simple: it’s a simple protocol for a single task (proxying data between a DDS domain and a device). It would be easy to implement, but has drawbacks like needing a server for it to work at all. On the other hand, reading the PrismTech submission made clear that their protocol is not that complex. It’s not as simple as RTIandCo’s, but it’s straightforward, well thought out, and clearly designed for very small scale devices. Its decentralised nature would make it easier to use in a system where it is the only protocol in use, but if you want to mix RTPS and XRCE then you would need a gateway, and the gateway would necessarily be less efficient than that in the RTIandCo proposal. However, it would also be much less of a single point of failure.
A relevant question is, given that the PrismTech submission doesn’t support aspects of DDS like QoS (except for reliability), what is the benefit (aside from overhead) compared with using a subset of RTPS?
Thanks for the detailed comparison and the kind words
If I may give some more context to a few of our choices:
Our proposal deliberately only specifies the encoding for the message headers, and nothing for the payload. The reason is that we want the protocol to be as widely applicable as possible, and mandating a single payload encoding would work against that. Obviously one needs to agree at some point what that encoding should be, but it could be negotiated or configured. As you noted, when interoperating with DDS, XCDR would be a sensible choice, but it is not necessarily the only sensible one: for example, OpenSplice has nicely integrated support for Google Protocol Buffers, and so deciding to send GPB encoded data could also be a good choice.
Regarding QoS, the intent is that the protocol is limited to those settings that matter at the protocol level, and durability and reliability are the only ones of the DDS QoS for which this is the case — e.g., history and deadline are really handled locally. All these other QoS can be specified as properties, so that the requested-offered model of DDS can be maintained in the bridging to DDS.
Peer-to-peer and sleep cycles pose a bit of a problem indeed, and we haven’t really paid much attention to the combination. Still, there are many examples of gossip protocols that do just that by adjusting their cycles to stay in sync, and you could build an implementation of this protocol in peer-to-peer mode that does the same thing. Whether that would be worth the bother is anyone’s guess.
We are really interested in doing RMW directly on top of our protocol, but we haven’t gotten around to it yet. I guess assuming an extremely restricted environment like my implementation does makes everything just a little harder …
For all those who are interested, we have continued working on our protocol specification, and the current version is now included in our repository.
@gbiggs, thanks for taking the time to read and analyse both submissions. I wish more OMG attendees would follow your example
There are a few observations I’d like to make in addition to those Erik has just made.
One of the main reasons for our focus on wire efficiency was to properly support transports that have a very small MTU or a very low byte rate. One example is BLE (Bluetooth Low Energy), which on most devices fixes the MTU at 20 bytes (and it sometimes cannot be negotiated upward). The other example is LoRa, over which at most about 400 bytes can be sent per hour! So it is not that we are obsessed with efficiency; it is really a matter of relevance and applicability.
I think that to understand our proposal you have to look at the mechanism we provide to implement the DDS mapping and DDS-like behaviour. In essence, our properties mechanism allows us to map DDS QoS, and for an XRCE implementation targeting DDS interop, arguably QoS like deadline, transport priority, etc. can all be implemented. It is in fact worth remarking that if you look at the DDSI-RTPS specification, it does not specify how QoS are implemented, aside from reliability. Durability, group coherency and all the others are implemented at the DCPS level. The same is true for XRCE. Do you start to see the analogy now?
As you can see from http://zenoh.io, our code is less than 2000 lines and can run OS-less. So if we compare protocol complexity: if, as you mention, RTI’s implementation is also around 2000 lines of code, then perhaps our protocol is not so much more complicated to implement. In fact, I’d argue that for those interested in implementing only the client-to-broker protocol, the complexity is similar.
In any case, thanks again for your thorough analysis… And if you have any questions or curiosity about why certain things are the way they are in XRCE, please don’t hesitate to ask.
P.S. Did you notice that declarations can be atomic? I guess with DDS you have experienced the challenge of needing a series of entities to be declared and having partial failures… Well, that was one of the things we wanted to prevent.
P.P.S. Notice that resource IDs can be arbitrarily small or big, and that there is no constraint that one resource ID identifies exactly one resource, meaning that multiple IDs can be associated with the same resource… That has nice implications too…
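One way to picture resource ID aliasing (again my own illustration, with invented IDs and topic names, not taken from the specification) is a simple lookup table in which several compact IDs name the same resource, so a node on a narrow link can use a 1-byte alias while another uses a longer one:

```python
# Illustrative sketch: multiple compact IDs aliasing one resource.
# IDs and topic names are hypothetical.

id_table = {
    0x07: "/robot/odometry",     # 1-byte alias for a constrained link
    0x1F2E: "/robot/odometry",   # 2-byte alias used by another node
    0x10: "/robot/battery_state",
}

def resolve(resource_id: int) -> str:
    """Map a wire-level resource ID back to the resource it names."""
    return id_table[resource_id]

print(resolve(0x07) == resolve(0x1F2E))  # -> True: two IDs, one resource
```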
I wanted to share with the ROS community the proposal we made yesterday afternoon to other DDS vendors, the OMG MARS Taskforce and the XRCE evaluation team on a possible way forward.
Before articulating the proposal let me give some context.
As a result of the XRCE standardisation process, we need to select one of the proposals. @gbiggs provides above a good, independent analysis of the two proposals, with @eboasson and @kydos clarifying a few points. If you have not read those yet, I suggest you do so before reading further.
With reference to @gbiggs’ analysis, we have one proposal (ours) that is perceived as slightly more complex but that supports peer-to-peer as well as client-to-broker and is more suited to constrained environments. The other (RTI and co.) appears to be simpler but only supports client-to-broker and carries more overhead.
If, along with this information, we take into account that we (ADLINK):
have made available our XRCE implementation as Open Source under Apache 2 as part of the project zenoh.io, and
are going to release a C++ broker by the end of the year (we already have a Swift and a Scala broker implemented – some folks have seen these in action at various demonstrations), and
are committed to make zenoh.io the XRCE reference implementation, both in terms of standards as well as quality
Now that the context is given, I can state the proposal I made to the other vendors, the task force & co:
Adopt our proposal and join forces, around the newly established open source project (zenoh), to accelerate the establishment of the XRCE standard in constrained environments.
The advantages of my proposal are several:
End-users such as the ROS community would get access to an implementation of the standard much more quickly – essentially now.
Other DDS vendors could immediately gain constrained-device connectivity simply by integrating their DDS implementation with the zenoh.io broker.
We would have an open source implementation of XRCE supported by all DDS vendors, which means no interoperability issues, faster evolution, and faster adoption.
We would have a protocol that can do peer-to-peer as well as brokered communication, which is good for some use cases – most notably in robotics.
We would have a protocol that could be deployed down to the sensors. Imagine for a moment having ROS-enabled sensors talking XRCE via low-power protocols or anything else that suits them.
As the one protocol everyone uses and supports would be open source, we would facilitate adoption immensely.
Collaboration can take us much further than competition. What has made humans excel is our ability to collaborate, not so much our ability to compete… So why not in this case?
I am looking forward to hearing comments from the ROS community. Please speak up.
Micro-RTPS is the base for the micro-ROS project (eProsima, BOSCH, Acutronic Robotics, PIAP and the FIWARE Foundation), a project to extend ROS 2 to microcontrollers following the ROS 2 principles.
We will be presenting the project at the industrial ROS conference next Tuesday, Dec 12. See here:
@kydos (Angelo): We have not only a complete Open Source implementation, but a joint submission with the main DDS/RTPS providers (RTI, Twin Oaks, eProsima), and an ongoing project with some of the main ROS contributors: micro-ROS. What I was planning is to take some of the good ideas in your submission and incorporate them into the joint submission, always following the OMG process we have to follow to create a new standard. Let’s organize the necessary meetings to get you on board.
@Jaime_Martin_Losa, you are just following in our footsteps. Just check the dates on the repositories, the number of supporters, the quantity of contributions, etc… Then the real question is why we should select the protocol that does less and takes more resources… I don’t find that a good technical argument.
You may think it is a question of ego, but I’d argue you should ask yourself the same. Our proposal is more general, more wire efficient and memory efficient. Thus technically, a rational thinker would join ours.
But again, ego and politics spoil rational thinking. It is not too late for you to make the right choice.
We have been working on this for more than a year now. We showed prototypes even before you published your alternative, not only at the OMG but within the ROS ecosystem, with some success cases already, and now we have the first alpha of a complete product: code, examples, comprehensive docs, videos, etc.
Three different DDS providers are working in our direction, and you already have several assessments, here and at the OMG, pointing out the pitfalls of your submission, so please consider the possibility that you could be wrong, or partially wrong.
Now, the process for me is clear: The OMG Evaluation team has asked for more information regarding our submissions. Please adhere to the process.
@Jaime_Martin_Losa, you may have been working on this for a year. But you are very well aware that we demonstrated prototypes ages ago – as an example, look for the Huawei Europe Connect… In any case, for those interested in the actual history, the Internet is fairly good at keeping track of it.
I’d be happy to hear from you what the pitfalls of our submission are. So far, all the points raised, including those from the evaluation team, have come from either not reading the whole document or assuming a restrictive interpretation.
But if you have a real comment, you are welcome. I’d be happy to have a technical discussion.
I’ll state it again and wait for you to prove otherwise, with objective and provable facts: our submission does more than yours and is more wire efficient!
Please, if you feel the need to reply to this, do so only on technical matters.
@kydos, it seems to me that you are the one bringing up non-technical matters here regarding who did what first. It is also you who is making unsubstantiated statements about the relative capabilities and performance.
You may not like the points raised by the evaluation team, but claiming that they are “not reading all the document” or that their interpretation is “restrictive” is hardly an objective statement. Moreover, it is disrespectful of the effort the independent evaluation team has put into the review and feedback.
I do agree that it does not make sense to have this kind of discussion here; it is not a technical discussion, as you stated. The right forum for the technical discussion is the OMG evaluation team and task force.
Please stop trying to externalize and politicize the process.
Please refer to the content below for a peek at a preliminary architecture for the micro-ROS European project that @Jaime_Martin_Losa brought up above (completely inspired by the work the OSRF is doing with ROS 2):