
Autoware without vector maps

I was intrigued to know the influence the vector map has on the overall framework, so I did a simple test in the quick-start examples by removing the lines that load the CSV files in my_map.launch. Since the vehicle can localize itself using only the point cloud map, my hypothesis was that, without the vector map information, the vehicle would fail to stop at traffic lights and would not follow the lanes. But to my surprise, the vehicle was still able to stop at traffic lights.

So, my guess is that a point-cluster classification algorithm is being used to detect the traffic lights. Hence, this raises the question: which parts of the algorithms/framework would be affected if vector maps aren't available?

Looking forward to a discussion on this topic!


@Vigneshwar_Elango are you referring to the ROSBag demo example within Autoware? If so, you don't even need the point cluster classification running to see the "vehicle stopping", since the demo is just replaying the rosbag data, not actually sending commands to a vehicle.


@sgermanserrano thanks for the response. Indeed, I am using the demo bag. I just noticed that the twist and steering topics are being published from the bag file. So in any case, the absence of vector maps will affect the waypoint calculations. I would be happy if you could let me know what influence the vector map has on the overall framework, and whether there is some way to avoid using the proprietary vector maps and use another open-source map format instead?

@Vigneshwar_Elango the participants of the Autoware Maps WG would be able to provide further insight into the matter.

@sgermanserrano Thanks again.

Looking forward to the discussion with the group members.

@Vigneshwar_Elango The point cloud map is only used for localisation; all other map information comes from the vector map: traffic light positions, reference lanes, stop lines, etc.
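To make that division of roles concrete, here is a minimal sketch of how semantic features live in the vector map as plain CSV rows referencing point IDs, while the point cloud carries no such semantics. The column names and values here are simplified assumptions for illustration, not the exact Autoware vector map schema:

```python
import csv
import io

# Hypothetical, simplified stop-line table: each row ties a line segment
# (startpid/endpid point IDs) to the traffic signal (signid) it obeys.
# This is NOT the exact Autoware schema, just an illustrative stand-in.
STOPLINE_CSV = """id,lid,startpid,endpid,signid
1,10,100,101,5
2,11,102,103,6
"""

def load_stoplines(text):
    """Parse stop-line rows into a dict keyed by integer id."""
    return {int(row["id"]): row for row in csv.DictReader(io.StringIO(text))}

stoplines = load_stoplines(STOPLINE_CSV)
# Without rows like these, the planner has no notion of where to stop,
# even though localisation against the point cloud still works fine.
print(stoplines[1]["signid"])  # -> 5
```

The point being: delete these CSVs and localisation is untouched, but anything that needs "where is the stop line / lane / signal" loses its input.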

However, Autoware.AI 1.13 was released with most HD map features implemented for both the Vector Map (proprietary) and Lanelet2 (open-source) map formats, and at run time you can choose which one to use (if you have maps in your format of choice). If your only concern is the proprietary nature of Vector Maps, then consider using Lanelet2.


Hi @simon-t4, thanks for the pointers. I also have a follow-up question relating to the same topic we are discussing.

Currently, I'm exploring/playing with the sample data provided by Autoware to better understand its architecture.

I referred to the following demo:

We can see that moriyama_path.txt provides the waypoints that the ego vehicle must follow. These waypoints are defined only for a portion of the map, and they start some distance away from the vehicle's initial position.

My understanding is that A* can be used to navigate to the closest waypoint, and from there the waypoint-following algorithm can take over, provided waypoint information is available. But what happens if the vehicle enters a region where no waypoint information is available, as in the case of the sample data? From the demo, I can see that the vehicle is able to navigate in the absence of waypoints. I understand that localization can still be achieved, but I cannot understand how the vehicle proceeds to move without waypoint information.
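For reference, the A* step mentioned above can be sketched over a toy waypoint graph. This is a generic, minimal A* with a straight-line-distance heuristic, not Autoware's implementation; the node names and coordinates are invented for the example:

```python
import heapq
import math

# Toy waypoint graph: node -> (x, y) position; edges are bidirectional.
coords = {"A": (0, 0), "B": (1, 0), "C": (1, 1), "D": (2, 1)}
edges = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}

def dist(a, b):
    """Straight-line distance between two waypoints (also the heuristic)."""
    (x1, y1), (x2, y2) = coords[a], coords[b]
    return math.hypot(x2 - x1, y2 - y1)

def astar(start, goal):
    """Return the waypoint sequence from start to goal, or None."""
    # Frontier entries: (f = g + heuristic, g = cost so far, node, path)
    frontier = [(dist(start, goal), 0.0, start, [start])]
    seen = set()
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt in edges[node]:
            if nxt not in seen:
                g2 = g + dist(node, nxt)
                heapq.heappush(frontier, (g2 + dist(nxt, goal), g2, nxt, path + [nxt]))
    return None

print(astar("A", "D"))  # -> ['A', 'B', 'C', 'D']
```

The key point for the question above is that A* needs a graph to search over; if no waypoint/lane information exists for a region, there is simply nothing to plan against there.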

So how exactly does the demo work? As @sgermanserrano mentioned, are we just seeing the rosbag's position information being published to rviz? If that is the case, how is the information that is computed in real time in the simulation environment being used?


The demo is replaying the sensor data and state information captured along a real-world journey. Because it is captured data, the demo can only play back the captured car motion (otherwise the sensor data would not be consistent). In real time, the demo is localising the vehicle (matching LiDAR data against the point cloud map) and computing a control based on the current position and reference path: the white circular arc drawn at the car's location shows the curvature generated by the pure pursuit algorithm.

Of course, these computed controls cannot be executed because the sensor data was captured beforehand, and you see the car moving only because the localisation process keeps updating the car's position.
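For anyone curious about the arc mentioned above: pure pursuit fits a circular arc from the car's pose to a lookahead point on the reference path, using the standard relation kappa = 2 * y_local / L^2 (lateral offset of the target in the vehicle frame over squared lookahead distance). A minimal sketch, not Autoware's actual code:

```python
import math

def pure_pursuit_curvature(x, y, yaw, tx, ty):
    """Curvature of the arc joining the car's pose to a lookahead target.

    Standard pure pursuit relation: kappa = 2 * y_local / L^2, where
    y_local is the target's lateral offset in the vehicle frame and L
    is the straight-line distance to the target.
    """
    dx, dy = tx - x, ty - y
    # Rotate the target displacement into the vehicle frame.
    y_local = -math.sin(yaw) * dx + math.cos(yaw) * dy
    L2 = dx * dx + dy * dy
    return 2.0 * y_local / L2

# Car at the origin heading along +x, target 10 m ahead and 1 m to the left:
kappa = pure_pursuit_curvature(0.0, 0.0, 0.0, 10.0, 1.0)
print(round(1.0 / kappa, 1))  # turning radius in metres -> 50.5
```

This is exactly the quantity the demo can compute and visualise in real time, even though the replayed car never actually steers along it.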


@simon-t4 Thanks for the detailed explanation. :slight_smile:

Just to make sure it’s clear: there is no simulator involved in that demo at all. It is only replaying data from the bag file and running the various algorithms on it.


May I also ask a follow-up question? We can see that the Autoware framework is designed to be modular, so there are certain situations where not all of the *.csv files that can be generated by Tier4's tools may be available.

For instance, in this implementation, we can see that the author says some features cannot be converted, but that most of the important ones, such as lanes, stop lines, and points, which are necessary for waypoint creation and waypoint following, can be replicated.

So, my guess is that the Autoware framework has been developed in such a way that not all of the files generated by Tier4's tools are necessary for waypoint creation and following, making it more modular!? If that is the case, which are the "must-have" *.csv files for the "core planning" module?

Any leads would be much appreciated! :slight_smile:

