Your list is certainly a good start, but perhaps it skips prioritizing mobile robot sensing needs.
Example - I could tell you that you could use your AI-enabled mobile bot with IMU, depth sensor, and camera to “learn” the robot’s effective wheel diameter and wheelbase, but you would not have a clue why I am suggesting this application of AI.
Most of my mobile robots suffer from robot reality: they cannot reliably and accurately know where they are in relation to their most desired location - the dock they need to recharge their battery.
They also suffer from varying wheel contact with the floor, which makes the effective wheelbase vary as the wheels turn and prevents accurate tracking no matter how accurately the PID tries. They suffer from dust on the floor causing wheel slippage, which further limits position estimation from the encoders.
Additionally, they suffer from deep grout joints and rough floor tile surfaces.
Limited IMU accuracy (~5%) complicates and limits heading estimation - your idea of recognizing left and right perturbations may allow more accurate heading estimates.
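To make the “learn your wheel parameters” idea concrete, here is a minimal sketch (not code from any of my robots) of fitting effective wheel diameter and wheelbase by least squares. It assumes you logged encoder ticks for a few straight runs and in-place turns, plus an external measurement of each run (depth-sensor distance to a wall, camera- or tag-derived heading change); TICKS_PER_REV and the function names are placeholders.

```python
import numpy as np

TICKS_PER_REV = 1440   # assumption: substitute your encoder resolution

def estimate_wheel_diameter(tick_sums, measured_dists):
    """Straight runs: dist = pi*D * (ticks_L + ticks_R) / (2*TICKS_PER_REV).

    tick_sums:      ticks_L + ticks_R for each straight run
    measured_dists: externally measured distance (m) for each run
    """
    revs = np.asarray(tick_sums, dtype=float) / (2.0 * TICKS_PER_REV)
    # Least-squares fit of measured = pi * D * revs
    D, *_ = np.linalg.lstsq(np.pi * revs[:, None],
                            np.asarray(measured_dists, dtype=float),
                            rcond=None)
    return float(D[0])

def estimate_wheelbase(tick_diffs, measured_dthetas, wheel_diameter):
    """In-place turns: dtheta = (arc_R - arc_L) / wheelbase.

    tick_diffs:       ticks_R - ticks_L for each turn
    measured_dthetas: externally measured heading change (rad) per turn
    """
    arc_diffs = (np.asarray(tick_diffs, dtype=float)
                 * np.pi * wheel_diameter / TICKS_PER_REV)
    dthetas = np.asarray(measured_dthetas, dtype=float)
    # Least-squares fit of arc_diff = wheelbase * dtheta
    b, *_ = np.linalg.lstsq(dthetas[:, None], arc_diffs, rcond=None)
    return float(b[0])
```

With honest values for the diameter and wheelbase plugged back into the odometry math, the encoders at least start from the right constants, even though slippage will still degrade them.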
In fact, I have come to the conclusion that, other than battery voltage and current, my robots could probably get along without proprioceptive sensors (encoders, IMU). They do much better with AprilTags, LIDAR, and an ultrasonic ranger for “seeing” what LIDAR cannot (black trash cans, the highly reflective dishwasher except at 90 degrees, the black UPS, the black filing cabinet, my black floor-standing computer, the black chair legs, and obstacles above and below the LIDAR plane).
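For what it’s worth, the “sonar sees what LIDAR can’t” fusion can be as simple as trusting whichever sensor reports the nearer obstacle inside the sonar’s cone. A sketch, assuming a 30-degree beam width and made-up names:

```python
import math

SONAR_CONE_DEG = 30.0   # assumed beam width of the ultrasonic ranger

def nearest_forward_obstacle(lidar_angles, lidar_ranges, sonar_range):
    """lidar_angles in radians (0 = straight ahead), all ranges in metres."""
    half_cone = math.radians(SONAR_CONE_DEG) / 2.0
    in_cone = [r for a, r in zip(lidar_angles, lidar_ranges)
               if abs(a) <= half_cone and r > 0.0]   # drop invalid 0 returns
    lidar_min = min(in_cone, default=float("inf"))
    # A black trash can may be invisible to LIDAR but not to sonar,
    # so take the more pessimistic (nearer) of the two readings.
    return min(lidar_min, sonar_range)
```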
Another very high priority mobile robot problem that is often not handled in home-built robots: stalling. When a robot gets wedged beneath a kitchen cabinet, or stuck between a dining room chair leg and the table leg (e.g., when backing up to avoid an obstacle detected ahead), it may spin its wheels, leaving black marks on the floor until the battery dies; or, if the wheels do not spin, the motors may draw excess current, causing power loss to the processor. Not having 360-degree 3D obstacle sensing may be a robot’s second-highest-priority problem (after not being able to find and mate with its dock).
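Stall detection itself does not need anything fancy: flag a stall when the wheels are commanded to move but the encoders barely do, or when motor current spikes, and the condition persists past a holdoff. A sketch with placeholder thresholds and interface:

```python
import time

STALL_CURRENT_A   = 1.5    # assumed per-motor stall current
MIN_TICKS_PER_SEC = 5      # assumed: slower than this while driving = stuck
STALL_HOLDOFF_S   = 0.5    # must persist this long before reacting

class StallMonitor:
    def __init__(self):
        self._since = None

    def update(self, cmd_speed, ticks_per_sec, motor_current_a):
        """Call every control cycle; returns True once a stall is confirmed."""
        stalled_now = (abs(cmd_speed) > 0.0 and
                       (abs(ticks_per_sec) < MIN_TICKS_PER_SEC or
                        motor_current_a > STALL_CURRENT_A))
        if not stalled_now:
            self._since = None
            return False
        if self._since is None:
            self._since = time.monotonic()
        return time.monotonic() - self._since >= STALL_HOLDOFF_S

# e.g. in the control loop:
#   if monitor.update(cmd_speed, ticks_per_sec, amps):
#       stop_motors()   # hypothetical motor cutoff
```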
I absolutely would love to be able to use vSLAM with stereo depth clouds, but between RMW issues and RTABmap taking 100% of my Raspberry Pi 5 when the data does get through, my robot has to settle for going very cautiously around the house and using an AprilTag for fine navigation back to the dock, assisted by physical wheel guides for the exact electrical mating to the dock.
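For anyone curious, the AprilTag fine approach can be a simple proportional controller on the tag pose in the camera frame. A minimal sketch (not my exact code), assuming the pupil_apriltags Python detector, a tag36h11 tag on the dock, and placeholder intrinsics, tag size, and gains:

```python
from pupil_apriltags import Detector

CAMERA_PARAMS = (600.0, 600.0, 320.0, 240.0)   # fx, fy, cx, cy - assumed
TAG_SIZE_M = 0.08                              # assumed printed tag size
K_STEER = 1.5                                  # proportional steering gain

detector = Detector(families="tag36h11")

def dock_cmd(gray_frame):
    """Return (forward m/s, turn rad/s) toward the dock tag, or None.

    gray_frame: grayscale uint8 image from the camera.
    """
    dets = detector.detect(gray_frame, estimate_tag_pose=True,
                           camera_params=CAMERA_PARAMS, tag_size=TAG_SIZE_M)
    if not dets:
        return None                   # tag not visible - fall back to search
    t = dets[0].pose_t.flatten()      # tag position in camera frame (metres)
    lateral, distance = float(t[0]), float(t[2])
    # Steer to center the tag; negative because +x is to the camera's right
    turn = -K_STEER * (lateral / max(distance, 0.05))
    forward = 0.1 if distance > 0.15 else 0.0   # creep in, stop near the dock
    return forward, turn
```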
I believe the most reported problem with the amazing Amazon Astro robot (which had nearly unlimited AI, a wide wheelbase with large-diameter non-slip wheels, and visual and depth mapping) was the periodic “got lost, didn’t return to base.”
You mentioned speech reco: I will say that my most “personable” robot, CARL (Cute And Real Lovable), runs two speech reco engines - Nyumaya listening for “Hey Carl,” which takes less than 25% of his Raspberry Pi 3B+ processor, and Vosk-API running a grammar to allow commanding and information requests, which briefly takes another 25% of CPU, but only when analyzing the speech after the “Hey Carl” trigger.
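If anyone wants to try the grammar approach, the Vosk side is only a few lines. A sketch assuming a small Vosk model on disk and a 16 kHz mono mic stream; the wake-word stage (Nyumaya on Carl) just decides when you start feeding audio, and the example phrases here are placeholders:

```python
import json
from vosk import Model, KaldiRecognizer

# Grammar of accepted phrases; "[unk]" catches everything else
COMMANDS = '["go to the dock", "battery status", "what time is it", "[unk]"]'

model = Model("model")                          # path to a small Vosk model
rec = KaldiRecognizer(model, 16000, COMMANDS)   # grammar-restricted reco

def recognize_command(pcm_chunks):
    """Feed 16-bit mono 16 kHz PCM chunks captured after the wake word."""
    for chunk in pcm_chunks:
        if rec.AcceptWaveform(chunk):           # end of utterance detected
            break
    return json.loads(rec.FinalResult()).get("text", "")
```

Restricting the recognizer to a grammar is what keeps the CPU hit brief and the accuracy usable on a Pi-class processor.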
Carl is 7 years old and is my only non-ROS robot.
Another home mobile robot priority may be to “Stay Off The Carpets!” - Know why? I had to send one of my robots back because it left marks on the carpets.