Edge AI Applications for ROS on Motorized Robot

Hello,
I am building a motorized robot and wanted to discuss various (or completely new) ideas for applications using Edge AI.
The robot has a four-wheel toy-car base equipped with a low-resolution camera and a depth sensor. It also has an IMU and a digital mic.

Below is the list of applications:
1. Tile/mat type detection (IMU and depth sensor data)
2. Recognizing left and right directions (camera data)
3. Uphill and downhill detection (IMU or motor-current data)
4. Audio speech recognition
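For item 3, a classical (non-AI) baseline is worth having before training anything: on a slope, gravity projects onto the accelerometer's forward axis, so pitch can be read directly from the IMU. A minimal sketch, assuming the IMU's forward axis is `ax` and vertical axis is `az`; the 3-degree threshold is an arbitrary placeholder:

```python
import math

def slope_from_accel(ax, az, threshold_deg=3.0):
    """Classify uphill/downhill from a steady accelerometer reading.
    ax: forward-axis acceleration (m/s^2), az: vertical axis.
    On a flat floor ax is ~0 and az is ~9.81."""
    pitch_deg = math.degrees(math.atan2(ax, az))
    if pitch_deg > threshold_deg:
        return "uphill"
    if pitch_deg < -threshold_deg:
        return "downhill"
    return "flat"
```

Note this only holds when the robot is not accelerating; a rise in motor current on the climb is a complementary cue, as your list already suggests.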
Is there any other class of application I can target? Feel free to comment.
Regards,
R

1 Like

Your list is certainly a good start, but it may be skipping over a prioritized list of mobile-robot sensing problems.

Example - I could tell you that you could use your AI-enabled mobile bot with IMU, depth sensor, and camera to “learn” the robot’s effective wheel diameter and wheel base, but you would not have a clue why I am suggesting this application of AI.

Most of my mobile robots suffer from robot reality: they cannot reliably and accurately know where they are in relation to their most desired location - the dock they need to reach to recharge their battery.

They also suffer from varying wheel contact with the floor, which makes the effective wheel base vary as the wheels turn, preventing accurate tracking regardless of how accurately the PID tries. They also suffer from dust on the floor causing wheel slippage, which further limits position estimation from the encoders.

Additionally they suffer from deep grout joints and a rough floor tile surface.

Limited IMU accuracy (~5%) complicates and limits heading accuracy - your recognition of left and right perturbations may allow more accurate heading estimates.
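One common way to combine a drifting IMU heading with an occasional absolute visual fix (an AprilTag or a recognized landmark, say) is a complementary filter. A minimal sketch - the blend factor `alpha` here is an illustrative value, not a tuned one:

```python
import math

def fuse_heading(gyro_heading, visual_heading, alpha=0.98):
    """Complementary filter: nudge a drift-prone integrated-gyro heading
    toward an absolute visual heading fix. All angles in radians."""
    # wrap the error to [-pi, pi] so fusion works across the +/-pi boundary
    err = math.atan2(math.sin(visual_heading - gyro_heading),
                     math.cos(visual_heading - gyro_heading))
    return gyro_heading + (1.0 - alpha) * err
```

Run on every visual fix, this keeps the fast gyro signal while slowly bleeding off its accumulated drift.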

In fact I have come to the conclusion that other than battery voltage and current, my robots could probably get along without proprioceptive sensors (encoders, IMU). They do much better with AprilTags, LIDAR, and ultrasonic ranger for “seeing” what LIDAR cannot (black trash cans, the highly reflective dishwasher except at 90-degrees, the black UPS, the black filing cabinet, my black floor standing computer, the black chair legs, and obstacles above and below the LIDAR plane).

Another very high priority mobile robot problem that is often not handled in home-built robots: stalling. When a robot gets wedged beneath a kitchen cabinet, or stuck between a dining room chair leg and the table leg (e.g. when backing up to avoid a forward detected obstacle), the robot may spin its wheels leaving black marks on the floor until the battery dies, or if the wheels do not spin the motors may draw excess current causing power loss to the processor. Not having 360-degree 3D obstacle sensing may be a robot’s second-highest-priority problem (after not being able to find and mate with its dock).
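The wedged/stalled condition described above can be caught in software before the battery dies or the motors brown out the processor: high motor current with little or no encoder motion is the classic signature. A minimal sketch - the current threshold, tick threshold, and window size are illustrative guesses, not tuned values:

```python
from collections import deque

class StallDetector:
    """Declare a stall when motor current stays high over a short window
    while the encoders barely move."""
    def __init__(self, current_limit_a=1.2, min_ticks=3, window=10):
        self.current_limit_a = current_limit_a
        self.min_ticks = min_ticks
        self.samples = deque(maxlen=window)

    def update(self, motor_current_a, ticks_delta):
        """Call once per control cycle; returns True when stalled."""
        self.samples.append((motor_current_a, ticks_delta))
        if len(self.samples) < self.samples.maxlen:
            return False  # not enough history yet
        high_current = all(c > self.current_limit_a for c, _ in self.samples)
        no_motion = sum(abs(t) for _, t in self.samples) < self.min_ticks
        return high_current and no_motion
```

On detection, the safe reaction is usually to cut motor power immediately and escalate (back off, alert, or wait for rescue).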

I absolutely would love to be able to use vSLAM with stereo depth clouds, but between RMW issues and RTAB-Map taking 100% of my Raspberry Pi 5 when the data does get through, my robots have to settle for going very cautiously around the house and using an AprilTag for fine navigation back to the dock, assisted by physical wheel guides for the exact electrical mating to the dock.

I believe the most reported problem with the amazing Amazon Astro robot (which had nearly unlimited AI, a wide wheelbase with large-diameter non-slip wheels, and visual and depth mapping) was a periodic “got lost, didn’t return to base.”

You mentioned Speech Reco: I will say that my most “personable” robot CARL (Cute And Real Lovable) runs two speech reco engines - Nyumaya listening for “Hey Carl” that takes less than 25% of his Raspberry Pi 3B+ processor, and Vosk-API running a grammar to allow commanding and information requests that briefly takes another 25% of CPU only when analyzing the speech after the “Hey Carl” trigger.
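The two-stage pattern described above - a cheap always-on wake-word listener that only then hands audio to a command grammar - can be illustrated with a plain-Python sketch. This is not Nyumaya's or Vosk's actual API; the wake phrase, command phrases, and class name are all hypothetical placeholders:

```python
# Maps phrases the grammar accepts to motor commands (hypothetical names).
COMMANDS = {
    "go forward": "FWD",
    "go backward": "BWD",
    "turn left": "LEFT",
    "turn right": "RIGHT",
    "stop": "STOP",
}

class WakeWordGate:
    """Only pass recognized speech to the command grammar after the
    wake phrase has been heard; disarm after one command."""
    def __init__(self, wake_phrase="hey carl"):
        self.wake_phrase = wake_phrase
        self.armed = False

    def handle(self, text):
        text = text.lower().strip()
        if not self.armed:
            if self.wake_phrase in text:
                self.armed = True
            return None
        self.armed = False           # one command per wake-up
        return COMMANDS.get(text)    # None if outside the grammar
```

The point of the gate is exactly the CPU budget described above: the expensive grammar decode only runs in the brief window after the trigger.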

Carl is 7 years old and my only non-ROS robot.

Another home mobile robot priority may be to “Stay Off The Carpets!” - know why? I had to send one of my robots back because it left marks on the carpets.

1 Like

Hello @RobotDreams,
Thanks for sharing your thoughts. These sound like real problems you have faced.
Finding the “robot’s effective wheel diameter and wheelbase” using AI seems like a really interesting problem.

> but you would not have a clue why I am suggesting this application of AI.

Of course, no clue :blush:
But how can finding these solve the problems you mentioned?
For speech reco I don’t have many commands, just the basic ones to move forward, backward, right, left, and stop (except I have to run after it to say these :))
But thanks for steering the discussion with your experience and thoughts!
Best !

1 Like

My robots publish an estimated pose using encoder data: x clicks of a wheel means the wheel turned so many degrees, and the wheel diameter determines the distance traveled by that wheel, assuming there is no slip and the surface is flat. Heading is estimated from x clicks of the left wheel, y clicks of the right wheel, and the wheel base - ah yes - my robots are differential drive with only two wheels.

Four-wheel robots with steering can use encoders on the rear wheels to estimate distance and heading, but often do not use encoders at all, in which case knowing the wheel diameter and wheel base is of no use.
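The differential-drive dead reckoning described above can be sketched as follows. The tick resolution, wheel diameter, and wheel-base defaults are made-up placeholders, not measurements from any particular robot:

```python
import math

def update_pose(x, y, theta, ticks_l, ticks_r,
                ticks_per_rev=360, wheel_diameter=0.065, wheel_base=0.15):
    """Advance a differential-drive pose estimate from encoder ticks.
    Valid only if there is no wheel slip and the floor is flat."""
    dist_per_tick = math.pi * wheel_diameter / ticks_per_rev
    d_left = ticks_l * dist_per_tick
    d_right = ticks_r * dist_per_tick
    d_center = (d_left + d_right) / 2.0        # distance of robot center
    d_theta = (d_right - d_left) / wheel_base  # heading change (radians)
    # integrate along the arc midpoint for a better small-step approximation
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return x, y, theta + d_theta
```

One full revolution of both wheels moves the robot straight ahead by pi times the wheel diameter, and any error in the assumed diameter or wheel base scales directly into the pose estimate - which is why “learning” those two constants is worth the trouble.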

1 Like

Great project! Beyond your listed ideas, consider adding real-time object detection (Edge AI with camera) to avoid obstacles or follow people. Environmental sound classification (using the mic) can detect alarms or spoken commands. Depth + vision can enable edge-based SLAM for indoor navigation. You can also explore anomaly detection (via IMU and motor behavior) for fault prediction. Gesture recognition via audio/IMU is another niche. Finally, integrate emotion or intent detection using voice for interactive behaviors. These enhance autonomy, safety, and HCI.

2 Likes