
Precision Agriculture Simulation with a Quadcopter in Gazebo

Greetings everyone,

For a while I have been thinking about developing a project related to agriculture. Because of the intense labor required during fruit picking (which gets worse with taller trees), I thought it would be a good idea to develop an autonomous drone that finds and picks fruits with a small arm attached to its base (or to any other convenient location). Since I can’t afford the hardware at the moment, simulation seems like a good start.

So, the software stack will roughly comprise:

  • Exploration
  • Tree/Fruit Detection
  • Fruit Manipulation

I started with the Gazebo simulator. However, it is very difficult to find high-resolution fruit-tree models on the web. I created my own tree with Blender, but it is far from realistic. So I’m looking for any contribution on this subject.

There is a lot to tell about the project, since quite a bit of work has gone into it, but to cut to the chase, here is the link to the repository. The readme is a bit outdated, but it will be updated soon. I will try to post about the project here on a regular basis.

I’m looking for contributors for any part of the project. All contributions are highly encouraged.


I would start by just gathering photos of trees, and trying to detect apples on them.

That is definitely a stage, but later in the project. Currently, I’m looking for ways to do detection in simulation, with at least mildly realistic trees.

You can see version 1 of exploration in this video.

Currently, the volume of interest is specified with six constants: XMIN, XMAX, YMIN, YMAX, ZMIN, ZMAX. In the future, it would be good to support specifying it at run time via graphical tools (e.g. hand-drawing through RViz).
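As a rough illustration, the six constants define an axis-aligned box. A minimal sketch (the class and method names are mine, not the repository’s):

```python
from dataclasses import dataclass


@dataclass
class VolumeOfInterest:
    """Axis-aligned exploration box given by the six constants."""
    xmin: float
    xmax: float
    ymin: float
    ymax: float
    zmin: float
    zmax: float

    def contains(self, x, y, z):
        """True if the point lies inside (or on the boundary of) the box."""
        return (self.xmin <= x <= self.xmax and
                self.ymin <= y <= self.ymax and
                self.zmin <= z <= self.zmax)

    def volume(self):
        """Total volume, e.g. the denominator of the explored-percentage metric."""
        return ((self.xmax - self.xmin) *
                (self.ymax - self.ymin) *
                (self.zmax - self.zmin))


voi = VolumeOfInterest(-5, 5, -5, 5, 0, 4)
```

A run-time graphical tool would then only need to construct one of these objects from the user’s drawn region instead of from compile-time constants.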

I’m using MoveIt! to exploit its motion planning pipeline, in which the planners take the current Octomap into consideration when properly integrated into the system. MoveIt! also supports the Execute Path Service specification as a plugin (i.e. a controller), but currently I only use a SimpleActionClient, reading the waypoints into a vector of Poses and executing them sequentially. Dynamic collision checking and replanning are done in a callback attached to the action client, which checks the validity of the path through the isPathValid() method of PlanningScene during goal execution. The PlanningScene is fetched via the /get_planning_scene service provided by move_group.
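Stripped of ROS specifics, the monitoring logic above can be sketched in plain Python. This is a hypothetical sketch, not the repository’s code: the three callables stand in for PlanningScene::isPathValid(), the SimpleActionClient goal call, and the MoveIt! planner, respectively.

```python
def execute_with_monitoring(waypoints, is_path_valid, execute_waypoint, replan):
    """Execute waypoints sequentially, replanning when the remainder goes stale.

    is_path_valid(path)  -- stand-in for PlanningScene::isPathValid()
    execute_waypoint(wp) -- stand-in for sending one goal via the action client
    replan(start)        -- stand-in for the planner; returns fresh waypoints

    A real implementation would bound the number of replans instead of
    retrying forever.
    """
    i = 0
    while i < len(waypoints):
        remaining = waypoints[i:]
        if not is_path_valid(remaining):
            # The latched path is no longer valid against the latest
            # planning scene (e.g. a new obstacle in the Octomap): replan
            # from the current waypoint and start over on the new path.
            waypoints = replan(waypoints[i])
            i = 0
            continue
        execute_waypoint(waypoints[i])
        i += 1
```

The key point this mirrors from the text: validity is re-checked against the *latest* scene on every step, not just once at planning time.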

Also, I want to list some issues that need to be resolved in version 2.

  • Orientation fixation of the quadcopter before motion. Since the Kinect (or any other stereo camera) does not have a 360-degree FOV, the velocity vector and the orientation of the camera should be aligned. Otherwise, the drone might not be looking directly ahead; it then fails to notice the obstacle in front of it and assumes the latched path is still valid when it isn’t.

  • Implementation of a velocity controller, or direct use of existing alternatives, if any. With this, motions will be much smoother compared to position control.

  • Implementation of a frontier approach that determines the next goal from extracted frontiers. Currently, the goals are hardcoded in a way that traverses the faces and corners of the rectangular volume.
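The orientation fix in the first item boils down to pointing the camera along the direction of travel. A minimal sketch of the idea (my own helper, assuming a forward-facing camera and yaw measured from the x-axis):

```python
import math


def yaw_towards(p_from, p_to):
    """Yaw angle (rad) that aligns a forward-facing camera with the motion.

    p_from, p_to: (x, y) positions; the drone at p_from about to fly to
    p_to should first rotate to this yaw, so the limited-FOV depth camera
    actually sees the space it is flying into.
    """
    dx = p_to[0] - p_from[0]
    dy = p_to[1] - p_from[1]
    return math.atan2(dy, dx)
```

Executing this rotation before each translation step is what prevents the “latched path looks valid but the obstacle was never observed” failure described above.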

I haven’t yet recorded version 2 of the exploration stack.

Most of the to-dos from version 1 are now implemented:

  • Orientation fixation is now handled perfectly.
  • The velocity controller produces motion commands from the whole trajectory instead of from separate waypoints.
  • Two frontier approaches: Closest Valid Frontier (CVF) and Farthest Valid Frontier (FVF). Both have problems of their own. The first tends to get trapped in a sub-region of the volume of interest; even though previously explored frontiers and their proximity are checked to avoid this, the problem persists. The latter in fact suffers from a similar issue, which I call the “Return to Home Problem”: since it searches for the farthest frontier, it generally flies back to the starting region and then pushes into the truly unknown field only a little. To choose between them, I ran several experiments:

All experiments run for exactly 10 simulation minutes, and the metric is the explored percentage of the VOI’s volume. There are two different configurations. The first is the fast-forward option, in which candidate frontiers at the same distance are eliminated in one pass. The other is the minimum separation (uniqueness range) required between candidate and registered frontiers.

    Uniqueness range 1.0      No fast forward      Fast forward
    CVF                       18.55%               19.46%
    FVF                       10.41%               22.50%

    Uniqueness range 2.0      Fast forward
    CVF                       20.709%
    FVF                       20.314%

Fast forwarding definitely improves exploration. However, naively increasing the uniqueness range does not have the same effect on FVF. To be honest, neither method has a decisive advantage over the other. Much more intelligent algorithms and approaches are required beyond this point; version 3 of the exploration will focus on this.
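The two selection rules and the two configurations above can be sketched in plain Python. This is a minimal sketch with hypothetical names; the real system works on frontiers extracted from the OctoMap, while here frontiers are just coordinate tuples.

```python
import math


def valid_candidates(frontiers, registered, uniqueness_range):
    """Uniqueness range: drop candidates too close to a registered frontier."""
    return [f for f in frontiers
            if all(math.dist(f, r) > uniqueness_range for r in registered)]


def fast_forward(pose, candidates, tol=1e-6):
    """Fast forward: collapse candidates at (nearly) the same distance
    from the drone to a single representative in one pass."""
    kept, seen = [], []
    for f in candidates:
        d = math.dist(pose, f)
        if all(abs(d - s) > tol for s in seen):
            kept.append(f)
            seen.append(d)
    return kept


def closest_valid_frontier(pose, candidates):
    """CVF: tends to get trapped in a sub-region."""
    return min(candidates, key=lambda f: math.dist(pose, f), default=None)


def farthest_valid_frontier(pose, candidates):
    """FVF: tends to hop back toward the start ('Return to Home Problem')."""
    return max(candidates, key=lambda f: math.dist(pose, f), default=None)
```

Seen this way, the experiment table compares exactly two knobs: the `uniqueness_range` threshold and whether `fast_forward` is applied before selection.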

You can see version 3 of exploration in this video:

The problem diagnosed as the “Return to Home Problem” is mitigated by adding randomization to the system, so that both distant and close frontiers of any degree get considered. A fully randomized decision-making process naturally boosted performance; with this trick alone, the exploration rate reached 27–28%. Then another problem was diagnosed: the frontiers were so granular that almost identical frontiers were treated as different, and the drone visited the same places repeatedly. To resolve this, a grid approach was adopted in which a cell may be visited only once. This increased the exploration rate considerably; for example, in the 25x25 case the exploration rate reached 32%. In fact, as the grid is made coarser, the exploration rate keeps increasing. Of course, there is a threshold beyond which the system completely fails to discover new frontiers; consider the 1x1 case (a single cell, so only one frontier can ever be visited) to see why.
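The visited-once grid can be sketched as a small filter in front of the frontier selector. Again a hypothetical sketch in 2-D with names of my own; the actual system works on the OctoMap volume.

```python
class GridFilter:
    """Reject frontiers whose grid cell was already visited once."""

    def __init__(self, voi_min, voi_max, n):
        # voi_min / voi_max: (x, y) corners of the volume of interest;
        # n: the grid is n x n cells (e.g. 25x25, 15x15, 13x13).
        self.voi_min, self.voi_max, self.n = voi_min, voi_max, n
        self.visited = set()

    def cell_of(self, point):
        """Map a point in the volume of interest onto its grid cell."""
        ix = int((point[0] - self.voi_min[0]) /
                 (self.voi_max[0] - self.voi_min[0]) * self.n)
        iy = int((point[1] - self.voi_min[1]) /
                 (self.voi_max[1] - self.voi_min[1]) * self.n)
        # Clamp points on the far boundary into the last cell.
        return (min(ix, self.n - 1), min(iy, self.n - 1))

    def accept(self, frontier):
        """True the first time any frontier lands in this cell, else False."""
        cell = self.cell_of(frontier)
        if cell in self.visited:
            return False
        self.visited.add(cell)
        return True
```

Coarser grids (smaller `n`) merge more near-duplicate frontiers per cell, which matches the observed trend of 25x25 → 13x13 improving the exploration rate, until the degenerate 1x1 case where everything collapses into one cell.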

In total, the 15x15, 14x14, and 13x13 cases were tested, with respective exploration rates of 35.6%, 37.4%, and 40.7%. During these experiments, I diagnosed another interesting feature of the problem: the OctoMap is not uniformly investigated in terms of frontiers, so one side of the volume ends up highly explored while the other remains unexplored. To mitigate this, and to reach higher exploration rates within 10 minutes, one could adopt a better, more advanced heuristic.
