Navigation for precision farming in open fields

We have experience with the ROS navigation stack and with Navigation 2. Together with some developers of agricultural robots, we are looking into precision navigation in open fields. Experiments with RTK GPS showed that the precision was not sufficient. Moreover, typical 2D-lidar-based approaches do not work in the fields. Eventually, the robots must be able to navigate along small (visible) tracks in the fields.

The robots we are looking at have 3D cameras and 2D lidars. I have seen frameworks from the automotive sector, such as Autoware and NVIDIA DRIVE, that have nice processing pipelines for such data. Therefore, I was wondering whether it would be a good idea to integrate functionality from such frameworks for navigation in agriculture.

Do others have experience with precision navigation in open fields and want to share their experiences with us?

4 Likes

What kind of precision did you get with RTK GPS? Were you far from the base?

We observed errors up to 20 cm. The base was about 20-40 meters away at the side of the field.

Hi there, there are several types of RTK correction. Which one are you using, fixed or floating? Is the connection constant? All of this is also very sensitive to the quality of the antennas. I use the SwiftNav products and they look very solid. Bye…
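For reference, whether a receiver is in RTK fixed or RTK float mode can be read directly from the fix-quality field of the NMEA GGA sentence (4 = RTK fixed, 5 = RTK float). A minimal sketch in plain Python; the sample sentence below is made up for illustration:

```python
# Map of NMEA GGA fix-quality codes (field 6) to human-readable modes.
FIX_QUALITY = {
    "0": "invalid",
    "1": "GPS (single)",
    "2": "DGPS",
    "4": "RTK fixed",   # centimetre-level; carrier ambiguities resolved
    "5": "RTK float",   # decimetre-level; ambiguities not yet resolved
}

def gga_fix_quality(sentence: str) -> str:
    """Return the fix mode reported by a $GxGGA sentence."""
    fields = sentence.split(",")
    if not fields[0].endswith("GGA"):
        raise ValueError("not a GGA sentence")
    return FIX_QUALITY.get(fields[6], "unknown")

# Example GGA sentence (made-up position, quality field set to 4 = RTK fixed).
sample = "$GNGGA,123519,4807.038,N,01131.000,E,4,12,0.9,545.4,M,46.9,M,1.2,0000*47"
print(gga_fix_quality(sample))  # RTK fixed
```

Logging this field alongside position data makes it easy to see how often the receiver drops from fixed back to float during a run.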

2 Likes

If the question is purely about localization, the best low-cost GPS we have found is the F9P. It is able to hold a position to within 2.5 cm.

Check out this product from Ardusimple https://www.ardusimple.com/new-product-launch-simplertk2b-v3-zed-f9p-and-zed-f9r/

We have also found that using a survey grade antenna helps with occlusion. https://www.ebay.com/itm/BEITIAN-NEW-3V-18V-CORS-RTK-GNSS-Survey-Antenna-High-precision-gain-ZED-F9P-GPS/223658578365?hash=item34131711bd:g:A7kAAOSwjZFdd2I6

I have had a difficult time using lasers in an open field. There aren’t enough returns to get an accurate position estimate. Also, using an absolute positioning system like RTK GPS prevents accumulated position error.

Is your goal to navigate based on these visible tracks and not necessarily based on absolute position?

3 Likes

You might want to reach out to Université Laval; they do large-scale outdoor mapping: https://norlab.ulaval.ca/publications/penality-icp/ .

Thank you for these pointers.

For the short term, we need to adjust to visible tracks and visible rows of crops. This is because the current rows are not planted automatically, so they do not match the planned absolute positions of the rows according to the map/plan. Eventually, we expect that robots will also sow and plant the crops. In that situation, the absolute position of the rows is known. I assume 2.5 cm would eventually be enough for following the tracks.
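As an illustration of the row-following idea (this is not code from any of the packages mentioned in this thread), a very naive way to get a steering signal from a camera is to reduce a binary plant/soil mask to a column profile and take the strongest column as the row centre. A sketch with NumPy, using a synthetic mask:

```python
import numpy as np

def row_offset(plant_mask: np.ndarray) -> float:
    """Estimate the lateral offset (in pixels) of the dominant crop row
    from the image centre, given a binary plant/soil mask.

    Naive approach: sum the mask column-wise, smooth the profile, and
    take the column with the strongest response as the row centre.
    """
    profile = plant_mask.sum(axis=0).astype(float)
    # Simple box-filter smoothing to suppress single-pixel noise.
    kernel = np.ones(9) / 9.0
    profile = np.convolve(profile, kernel, mode="same")
    row_centre = int(np.argmax(profile))
    image_centre = plant_mask.shape[1] / 2.0
    return row_centre - image_centre  # negative: steer left; positive: steer right

# Synthetic 100x200 mask with a vegetation band over columns 125-135.
mask = np.zeros((100, 200), dtype=np.uint8)
mask[:, 125:136] = 1
print(row_offset(mask))  # 29.0: positive, so the row sits right of centre
```

A real implementation would first derive the mask from colour (e.g. an excess-green index) and track multiple rows, but the offset-from-centre idea is the core of visual servoing along a row.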

1 Like

Did you think about putting a robotic total station in the field? The robotic ones can track prisms and provide you with millimeter precision. But that, of course, requires direct visibility of the tractors at all times…

But even with some occlusions it could be an interesting problem: whenever there is direct visibility, the total station tells the robot where it is, and when visual contact is lost, the robot guides the total station to find it again :slight_smile: (providing it with a pose estimated from the onboard mapping or row-following algorithm).

We have thought about a total station for experimental settings, e.g. for validating another solution. Is there a total station approach that is aimed at mobile robots? For smaller fields, an approach with “normal” high-definition cameras might also be a solution. However, ideally we would not use external infrastructure that has to be installed and maintained.

Interesting. Currently, I am designing a seeding robot for my senior design project, and the problem we are facing is achieving precise localization in an open field.

Can you provide more information about the GPS error rate and the correction approach, since most GPS receivers have a high error rate? Also, did you experiment with any other lower-cost GPS receivers?

Thank you.

1 Like

How accurate of a position do you need? How much do you want to spend?

We are able to get 2.5 cm with the F9P from ArduSimple and a survey-grade antenna in the open. You will need correction data, either from a service or by providing the corrections yourself with a base station.
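For anyone setting up corrections over the internet, casters usually speak NTRIP, which is a thin layer over HTTP: the client sends a single request, the caster answers and then streams raw RTCM3 bytes, and those bytes are forwarded unchanged to the receiver's correction input (e.g. the F9P's UART). A minimal sketch of an NTRIP v1 request; the mountpoint and credentials are placeholders:

```python
import base64

def ntrip_request(mountpoint: str, user: str, password: str) -> bytes:
    """Build a minimal NTRIP v1 request for an RTCM correction stream.

    A caster typically replies with "ICY 200 OK\r\n" followed by raw
    RTCM3 bytes. The mountpoint and credentials here are placeholders;
    real values come from your correction service or base station.
    """
    credentials = base64.b64encode(f"{user}:{password}".encode()).decode()
    return (
        f"GET /{mountpoint} HTTP/1.0\r\n"
        "User-Agent: NTRIP minimal-example\r\n"
        f"Authorization: Basic {credentials}\r\n"
        "\r\n"
    ).encode()

print(ntrip_request("MYBASE", "user", "pass").decode())
```

In practice you would open a TCP socket to the caster, send these bytes, check the status line, and then pipe everything that follows to the receiver's serial port.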

Precision and cost are exponentially related.

3 Likes

I have no idea how you’d combine RTK with the visual crop-row approach, but it seems interesting. There are almost always crops in the fields: GitHub - PRBonn/visual-crop-row-navigation (visual-servoing based navigation for monitoring row-crop fields).

2 Likes

@samuk I went through the documentation of the package and it seems promising. But it can only help with moving within a row; we won’t be able to track the global position of the robot in case it wants to go home or something. To my understanding, it could be used with some other global localisation module. Even if that module has low resolution, the two together can do the job.

2 Likes

Thanks Robin, I think we need at least the Nav2 waypoint follower to complement it.

I’m not sure if the TEB local planner adds some value.

Then I think we’d need RTK for location

There is a ROS2 package for ZED-F9P

There is some Python RTK stuff from the Acorn project that might be helpful

1 Like

Yes, this is exactly what I was referring to. But I have one doubt:
how far apart do these waypoints need to be in order for the Agribot planner to start execution? I mean, if we have multiple rows in the robot’s field of view, which one will the controller choose? Is there a selection criterion, or do we assume that there will only be one row at a time?
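One possible selection criterion, purely as a sketch: use the coarse global pose to pick the row whose entry point is nearest to the robot, then hand control over to the visual row follower. The coordinates below are made up for illustration:

```python
import math

def pick_row(robot_xy, row_entries):
    """Pick the row entry point closest to the robot.

    robot_xy: (x, y) in a local metric frame (e.g. from coarse GPS).
    row_entries: list of (x, y) entry points, one per row.
    Returns the index of the nearest row.
    """
    def dist(p):
        return math.hypot(p[0] - robot_xy[0], p[1] - robot_xy[1])
    return min(range(len(row_entries)), key=lambda i: dist(row_entries[i]))

# Three parallel rows, 0.75 m apart; the robot sits just right of the middle row.
rows = [(0.0, 0.0), (0.75, 0.0), (1.5, 0.0)]
print(pick_row((0.9, -1.0), rows))  # 1: the middle row is nearest
```

Even decimetre-level GPS would be enough to disambiguate rows spaced well apart, which matches the idea above of pairing a low-resolution global module with precise in-row visual servoing.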

1 Like

Have a look at the ROS1 Agribot docs for how they handle turning.

I’m not sure how that would translate to ROS 2; there might be some clues here.

2 Likes

Can you elaborate a bit more about your plans, @Robin_Tomar and @samuk?

We have been thinking about defining “zones” in the field (based on RTK localisation) and, depending on the zone, choosing the correct controller (and maybe a planner). So, the “turning action” zone in the picture above would be the headland, where we can freely navigate. By using such zones as condition nodes in the behavior tree, we can switch between visual row following and GPS. For the overall planning, I was thinking about a higher-level planner that takes into account not only the layout of the field, but also the tasks that must be performed. However, we have not worked out these ideas yet.
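The zone idea could be prototyped with a plain point-in-polygon test before wiring it into behavior-tree condition nodes. A sketch in Python; the zone shape and controller names are hypothetical:

```python
def point_in_polygon(pt, polygon):
    """Ray-casting point-in-polygon test (polygon as a list of (x, y))."""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count edges crossed by a horizontal ray to the right of the point.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def select_controller(position, headland_polygon):
    """Switch controllers by zone: free GPS navigation on the headland,
    visual row following inside the crop area (names are illustrative)."""
    if point_in_polygon(position, headland_polygon):
        return "gps_waypoint_controller"
    return "visual_row_following_controller"

# Hypothetical headland strip at the top of a 20 m x 30 m field.
headland = [(0.0, 25.0), (20.0, 25.0), (20.0, 30.0), (0.0, 30.0)]
print(select_controller((10.0, 27.0), headland))  # gps_waypoint_controller
print(select_controller((10.0, 12.0), headland))  # visual_row_following_controller
```

In a behavior tree, the polygon test would sit in a condition node, with the RTK pose as input and the two controllers as alternative subtrees.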

2 Likes

@Wilco That sounds like a good approach. It’s a bit beyond my capabilities to contribute meaningfully, but I will certainly follow your developments.

Found this quite old video on how Naio do it:

A FarmOSmap > GeoJSON > PolygonStamped > boustrophedon_planner approach might be interesting for establishing field patterns?

Then feed the waypoints to Nav2: How to send a goal path to Nav2. cc @harsha-vk
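As a rough illustration of the field-pattern step (this is plain geometry, not the boustrophedon_planner API), a lawnmower path over a rectangular field in a local metric frame could look like:

```python
def boustrophedon(width, length, spacing):
    """Generate a back-and-forth coverage path over a rectangular field.

    width, length: field dimensions in metres (local frame, origin at a corner).
    spacing: distance between passes (e.g. the implement working width).
    Returns a list of (x, y) waypoints, alternating direction each pass.
    """
    waypoints = []
    x = 0.0
    going_up = True
    while x <= width + 1e-9:
        if going_up:
            waypoints += [(x, 0.0), (x, length)]
        else:
            waypoints += [(x, length), (x, 0.0)]
        going_up = not going_up
        x += spacing
    return waypoints

# A 3 m wide, 10 m long field covered with 1.5 m passes: 3 passes, 6 waypoints.
path = boustrophedon(3.0, 10.0, 1.5)
print(path)
```

Each (x, y) pair would then become a PoseStamped in the field frame before being handed to the Nav2 waypoint follower.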