What are the benefits and challenges of using event cameras (neuromorphic cameras) in aerial robotics applications? I am working with the Event Camera Community Group (GitHub - Event-Camera-Working-Group-ECWG/community: Community Goals, Project Scope, Milestones, and Meetings), and we are interested in the community’s feedback on practical use cases and potential concerns around adoption.
To give an example applicable to the ROS aerial community, we have demoed the Prophesee Metavision active event markers code, which uses blinking active markers to compute pose, as shown in the video below. We are looking for more ideas and applications for event cameras specifically in aerial robotics.
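For those unfamiliar with the approach, here is a minimal, hypothetical sketch of the active-marker idea (this is not the Metavision demo code). It assumes events have already been clustered by the LEDs' known blink frequencies into one 2D centroid per LED; pose then follows from standard PnP. The marker geometry, point ordering, and all values are illustrative.

```python
# Hypothetical sketch of active-marker pose estimation with an event camera.
# Assumes upstream code has clustered events by blink frequency into one
# pixel centroid per LED. Not taken from the Metavision demo.
import numpy as np
import cv2

# Illustrative 3D positions (meters) of four active LEDs on the marker.
MARKER_POINTS_3D = np.array([
    [0.00, 0.00, 0.0],
    [0.10, 0.00, 0.0],
    [0.10, 0.10, 0.0],
    [0.00, 0.10, 0.0],
], dtype=np.float64)

def pose_from_marker_centroids(centroids_2d, camera_matrix, dist_coeffs):
    """Estimate the marker pose in the camera frame from LED centroids.

    centroids_2d: (4, 2) pixel coordinates, ordered to match MARKER_POINTS_3D.
    Returns (rvec, tvec) on success, or None if PnP fails.
    """
    ok, rvec, tvec = cv2.solvePnP(
        MARKER_POINTS_3D,
        np.asarray(centroids_2d, dtype=np.float64),
        camera_matrix,
        dist_coeffs,
    )
    return (rvec, tvec) if ok else None
```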
For clarity, we do not have an Event Camera Working Group. The term “working group” has a very specific meaning within the OSRA. The website you linked to would be more accurately termed an Event Camera Community Group. See questions 40 and 41 in the OSRA FAQ.
Thank you for letting me know. I was unaware of the difference in terminology, and the Open Source Robotics Alliance FAQ you shared will help me learn more about the community and its protocols. I have updated my question to change “Working” to “Community” and will request an update to the linked GitHub page.
Hi!
I am a researcher working on aerial robotics and event cameras. I think event cameras (ECs) have two clear applications:
- Using them as payload sensors: event cameras have a higher dynamic range than traditional cameras, so you can use them in situations where HDR is required. I wrote a paper on orthomapping with ECs last year, showing that they are helpful in harsh sunlight or in low-light conditions. High-resolution event cameras are preferred for this application, since the extra spatial resolution translates directly into map detail. If you are interested, we compiled a high-resolution event camera dataset two years ago with sequences recorded on a UAV flying at low altitude.
- Using them as autonomy sensors: here, the high temporal resolution of event cameras is helpful for obstacle avoidance and state estimation. In general, you would prefer low-resolution ECs, which bound the number of generated events and therefore the processing load. In the past, I did some work on avoidance and also on catching objects using event cameras. There is also quite impressive work on feature tracking (a minimal event-representation sketch follows this list). Also, HNU recently released a dataset on state estimation for high-speed maneuvers.
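As a concrete example of an event representation often used for feature tracking, here is a minimal time-surface (Surface of Active Events) sketch in numpy. This is a generic, widely used representation, not code from any of the works mentioned above; the decay constant `tau` and the event tuple layout are assumptions.

```python
# Minimal time-surface sketch: each pixel stores its most recent event
# timestamp, exponentially decayed toward the query time t_now.
import numpy as np

def time_surface(events, width, height, t_now, tau=0.05):
    """Build an exponentially decayed time surface from a batch of events.

    events: iterable of (x, y, t, polarity) with integer pixel coordinates
            and timestamps t <= t_now (seconds).
    tau:    decay constant in seconds; recent events dominate the surface.
    Returns an (height, width) float array in [0, 1].
    """
    last_ts = np.full((height, width), -np.inf)
    for x, y, t, _p in events:
        last_ts[y, x] = t  # keep the most recent timestamp per pixel
    # exp of -inf is 0, so pixels that never fired stay at 0.
    return np.exp((last_ts - t_now) / tau)
```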
The main challenge of event cameras mounted on UAVs is the high number of ego-motion events, i.e., events generated by the static background as the camera moves. For example, in this sequence, all the pedestrians appear clearly at the beginning; as soon as the camera starts moving, a lot of background events are generated. Separating background events from those caused by independently moving objects is still challenging.
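One common family of approaches is motion compensation: warp each event to a reference time using the camera rotation measured by an IMU, so that background events sharpen into crisp edges while independently moving objects remain blurred and can be segmented. Below is a hedged numpy sketch of the rotational warp only; the sign conventions of the flow field depend on your axis definitions, and `omega`, `K`, and the pure-rotation assumption are illustrative, not from a specific paper.

```python
# Sketch of IMU-based rotational motion compensation of events.
# Assumes a short time window in which camera motion is pure rotation.
import numpy as np

def compensate_rotation(xs, ys, ts, t_ref, omega, K):
    """Warp event pixel coordinates to a common reference time t_ref.

    xs, ys, ts: per-event pixel coordinates and timestamps (numpy arrays).
    omega:      (wx, wy, wz) angular velocity in rad/s from the IMU.
    K:          3x3 camera intrinsics matrix.
    Returns warped (x, y) pixel coordinates as float arrays.
    """
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    # Normalized image coordinates.
    xn = (xs - cx) / fx
    yn = (ys - cy) / fy
    wx, wy, wz = omega
    # Standard rotation-induced image motion field (sign conventions vary
    # with the chosen camera axes).
    u = xn * yn * wx - (1.0 + xn**2) * wy + yn * wz
    v = (1.0 + yn**2) * wx - xn * yn * wy - xn * wz
    dt = ts - t_ref
    # First-order warp: move each event back along the flow to time t_ref.
    return (xn - u * dt) * fx + cx, (yn - v * dt) * fy + cy
```

After this warp, an image of event counts or per-pixel mean timestamps should look sharp wherever the rotation model explains the events, and blurry or outlier-heavy wherever independently moving objects are present.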
Also, the data rate of an event camera depends strongly on the scene, and it can vary from a few thousand to millions of events per second (see Fig. 3). This is a clear distinction from regular cameras, where the amount of data to process is dictated by the frame rate.
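A practical consequence is that fixed-duration "event frames" give wildly varying compute per frame. A simple alternative, sketched below under an assumed array layout, is to process events in fixed-count batches, so per-batch compute stays bounded while the batch duration adapts to the scene.

```python
# Sketch of fixed-count batching: bounded work per batch, adaptive duration.
def fixed_count_batches(events, batch_size=20_000):
    """Yield batches of batch_size events (the last one may be smaller).

    events: an (N, 4) array or list of (x, y, t, polarity) events.
    With a fixed event count, the time span of each batch adapts to the
    event rate: milliseconds in fast, textured scenes, much longer when
    the scene is quiet.
    """
    for start in range(0, len(events), batch_size):
        yield events[start:start + batch_size]
```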
A good source of information on ECs is this survey paper, as is the Workshop on Event-based Vision at CVPR.
I hope this helps, and I am glad to share more thoughts on particular applications.