Navigation for precision farming in open fields

I think you must mention the source of images and material you share that are not yours!

You can take a look at our newly released project code (native Python) here:
visual-multi-crop-row-navigation

Apologies, I assumed it was obvious since we were discussing your link, which I'd provided two comments previously. That post is too old to allow editing now.

To be clear, the image comes from GitHub - PRBonn/visual-crop-row-navigation: This is a visual-servoing based robot navigation framework tailored for navigating in row-crop fields. It uses the images from two on-board cameras and exploits the regular crop-row structure present in the fields for navigation, without performing explicit localization or mapping. It allows the robot to follow the crop-rows accurately and handles the switch to the next row seamlessly within the same framework.
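
For anyone curious how row following without explicit mapping can work, here is a minimal sketch of the idea, assuming OpenCV and NumPy: segment vegetation with an excess-green index, fit a line through the detected row pixels, and turn the row's lateral offset into a steering error. This is not the PRBonn implementation, just an illustration; the function name and thresholds are made up.

```python
# Minimal sketch of the crop-row-following idea: segment vegetation,
# fit a line to the dominant row, and steer to keep it centred.
# NOT the PRBonn implementation, just an illustration of the concept.
import cv2
import numpy as np

def row_steering_error(bgr_image):
    """Return a normalised lateral offset (-1..1) of the crop row, or None."""
    b, g, r = cv2.split(bgr_image.astype(np.float32))
    # Excess-green index highlights plants against soil.
    exg = 2 * g - r - b
    mask = (exg > 20).astype(np.uint8) * 255
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    ys, xs = np.nonzero(mask)
    if len(xs) < 100:          # not enough vegetation pixels to trust a fit
        return None
    # Fit x = m*y + c through the vegetation pixels: the row centreline.
    m, c = np.polyfit(ys, xs, 1)
    h, w = mask.shape
    row_x_at_bottom = m * h + c
    # Offset of the row from the image centre, normalised to [-1, 1].
    return (row_x_at_bottom - w / 2) / (w / 2)
```

The sign of that error can feed a simple proportional steering controller; the actual framework above does considerably more, including the seamless switch to the next row, which this sketch ignores.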

They have a new video too:

Hi @samuk, I'm a bit late with the answer, but I hope it's useful to someone. I've recently released a repo on coverage path planning (you can use it to create the field patterns): GitHub - Fields2Cover/Fields2Cover: Robust and efficient coverage paths for autonomous agricultural vehicles

Right now I'm working on a ROS bridge, but there's nothing I can show yet. For now, you can use the project in C++17.
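
The actual API is C++17 and documented in the repo, but to give a flavour of the kind of pattern a coverage planner produces, here is a toy boustrophedon ("lawn-mower") generator in Python. All names are mine, and this only shows the core idea; real planners like Fields2Cover also handle headlands, obstacles, and the vehicle's turning kinematics.

```python
# Toy boustrophedon ("lawn-mower") coverage pattern for a rectangular
# field. Only the core idea: parallel swaths traversed in alternating
# directions so the whole area is covered.

def boustrophedon(width_m, height_m, swath_m):
    """Yield (x, y) waypoints covering a width x height rectangle."""
    path = []
    x, direction = swath_m / 2, 1
    while x < width_m:
        y0, y1 = (0.0, height_m) if direction > 0 else (height_m, 0.0)
        path.append((x, y0))   # enter the swath
        path.append((x, y1))   # traverse to the far end
        x += swath_m
        direction *= -1        # reverse direction for the next swath
    return path

if __name__ == "__main__":
    for wp in boustrophedon(10.0, 20.0, 2.5):
        print(wp)
```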

Nice, thanks.

Might be of interest to GitHub - ClemensElflein/OpenMower: Let's upgrade cheap off-the-shelf robotic mowers to modern, smart RTK GPS based lawn mowing robots! too. I think they're using a repurposed 3D-printing slicer at the moment.

Thank you very much for your recommendation. I've talked with them, and they are interested in at least giving it a try.

We did a ROS 2 port of @Alireza_Ahmadi's visual crop row navigation here: GitHub - Agroecology-Lab/visual-multi-crop-row-navigation at ROS2

We don’t have it fully tested yet, but help hacking on it would be welcome.

Hi, I don't know if this is relevant here; feel free to move it to another thread if need be. I did my internship in embedded ML for precision viticulture applications, and one of the aims was to generate a map of the entire vineyard, with the vines tracked and identified. This was primarily done using computer vision and machine learning, but I guess the core concepts are domain independent.
One big problem I saw is that, since we were using VSLAM as our primary method of localisation, the unevenness of the terrain proved to be a big challenge. We had to apply an EKF and eventually ended up fusing the camera's IMU data with the GNSS receiver data to obtain a better estimate of where we were.
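
To illustrate the fusion idea, here is a stripped-down 2D Kalman filter: propagate the state from IMU acceleration, then correct it with GNSS position fixes. Our real VSLAM + IMU + GNSS EKF was considerably more involved, and all the names and noise values below are illustrative, not taken from that system.

```python
# Stripped-down 2D Kalman filter illustrating the fusion idea:
# predict with IMU acceleration, correct with GNSS position fixes.
import numpy as np

class GnssImuFuser:
    def __init__(self):
        self.x = np.zeros(4)                         # state [px, py, vx, vy]
        self.P = np.eye(4)                           # state covariance
        self.Q = np.diag([0.01, 0.01, 0.1, 0.1])     # process noise (assumed)
        self.R = np.diag([0.5, 0.5])                 # GNSS noise, m^2 (assumed)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]])            # GNSS observes position only

    def predict(self, accel_xy, dt):
        """Propagate position and velocity using IMU acceleration over dt."""
        F = np.array([[1, 0, dt, 0],
                      [0, 1, 0, dt],
                      [0, 0, 1,  0],
                      [0, 0, 0,  1]])
        self.x = F @ self.x
        self.x[2:] += np.asarray(accel_xy) * dt
        self.P = F @ self.P @ F.T + self.Q

    def update_gnss(self, pos_xy):
        """Correct the state with a GNSS position fix."""
        z = np.asarray(pos_xy)
        y = z - self.H @ self.x                      # innovation
        S = self.H @ self.P @ self.H.T + self.R      # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)     # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```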

Hi @sampreets3, which VSLAM were you using: ORB-SLAM or something else? I was also a little confused about why we need to do mapping if we already have a path (the crop rows) to follow. Would you mind explaining a bit? Thanks!

Hi @Guanbin_Huang, sorry for the late reply. Well, we ended up using ORB-SLAM eventually. The vine tracking was part of a different service offered by the company I was working for at the time. Initially the two services were developed separately, but they used the same camera feed to do both the vine tracking and the localisation. Since we were moving over very uneven terrain, we found that SLAM lost tracking at many points. To help improve the estimation, we had to bring in external sensor data.

Hope this helps!
