OK, so I like the following target use cases:
- consumer robots (Ingo)
- warehouse logistics robots (David)
- mobile manipulation robots (Victor)
- autonomous driving robots/cars (Dejan, Geoff)
They are real and will keep us focused. The requirements of these use cases obviously differ, but there can be a lot of commonality in the hardware implementation. I’d propose that we agree on a model HW architecture that can usefully span all four of those domains.
All of these systems will basically consist of:
a) uC attached to sensors (I assume that this is always required for any sensor)
b) uC attached to actuators, or elsewhere in the network (e.g. for bridging one network type to another)
c) Networking infrastructure
d) Main host computer
For real-time system purposes we only really care about b, c, and d. So I’d propose we agree on what model HW we’d work towards, something like:
b) uC: STM32 running NuttX
c) 1) AVB Ethernet and 2) point-to-point serial comms (could be any underlying physical layer; which physical layer is used is not relevant to real time)
d) 1) x86 and 2) ARM
I’d suggest that we initially target the elements that definitely span all 4 domains, namely b and c2. I respect @iluetkeb’s desire not to get into d in the first instance. We’ll probably get more mileage by sorting out b and c2 first, but the way I see the world, eventually we are going to need real time in d, even for relatively low-cost applications like warehouse logistics.
Not only the transport layer but everything above it as well: from the data link layer (OSI layer 2) up to the ROS 2 rcl (OSI layer 7), passing through all the intermediate layers, including the network and transport layers (which we typically refer to as the networking stack), the communication middleware (e.g. DDS), etc.
The network infrastructure is key. I picked AVB because it is widely used in automotive and widely available, and because most of its OSI layers are already real-time capable. In particular, you have to have bandwidth guarantees (possible in AVB and implied in any point-to-point protocol); without them I don’t see how we can build an RT system. If you have a sensor that transmits directly onto a broadcast network (e.g. plain vanilla Ethernet) without its own individualized bandwidth allocation, I don’t see how you can make that system RT, unless you can also guarantee that it is the only thing transmitting on the parts of the network it is using.
I put serial in there because, well, you are always bound to have some kind of serial comms going on somewhere.
This doesn’t prevent us from expanding to other connectivity choices or uCs later on; it just means that we’ll design with this stuff in mind and can provide a template for building hardware for eventual testing.
@davecrawley the problem here is that if the sensor does not speak DDS (and hence we cannot use DDS’ data model flow), then we are actually taking a use-case-specific approach (e.g. is this sensor connected over AVB Ethernet or CAN, which RTOS are we running, do we have a regular network stack or TSN, …). I am not opposed to jumping on this, but let’s first decide whether we do use-case-specific or generic work.
Sure! Whatever sensor you hook up will have to connect to a uC that speaks DDS and connects to our RT middleware fabric. I’d propose we create/define a standard setup for that uC, but not really get into the connection between the uC and the sensor; we have to assume that whatever that connection is, it is deterministic. Most of the sensors I use hook directly into a uC that I control anyway.

It only gets messy when you want to connect a sensor that, out of the box, connects to a shared communications fabric and doesn’t speak DDS or respect determinism, for example an Ethernet LIDAR. But there is no way around it: such a sensor injects data with non-deterministic timing into a shared and finite communications fabric, and so it cannot be deterministic as long as that data commingles with other non-deterministic traffic. You have to bring it into our real-time middleware layer before it commingles with any other non-real-time data, which means either re-programming whatever uC is on the sensor or using a bridge. That bridge will have mostly the same setup as the standard uC discussed above, so I think we just define/agree on one standard uC setup.
AVB has the property that it can handle RT and non-RT data streams simultaneously. Obviously, we still have to figure out how our middleware will talk to it, and make sure it allocates the right amount of bandwidth (for a sensor, for example) to ensure the guaranteed quality of service.