
Perception project

Hello,
After a meeting with the ROS Europe, to get us working we have decided. To start a perception project. For this We will try to get a neural network to to recognize plants using synthetic data, as this a an active research field.
Everyone is more than welcome to join. Here is a Doodle to organize an event so we can get together and discuss/work together.

We will first try to get realistic 3D data from photogrammetry (https://alicevision.org/), then augment the data using a new framework called BlenderProc, developed by the German Aerospace Center (DLR).
https://www.youtube.com/watch?v=1AvY_iS6xQA

This post will be used for any further updates.

An approach using GANs (generative adversarial networks) will also be explored.
For those of you who speak French, here is a presentation in French about using synthetic data for quality control:

Validation data: this will be updated soon.


For those who don’t speak French: you can use the auto-translate functionality to get subtitles in many languages. Just click the settings button in the lower right-hand corner of the video and select a language that works for you.



Hello, thanks for launching the discussion! I am currently working on a project to improve wheat head detection at Arvalis and INRAe (France) with 3D data, so I would definitely like to join the discussion :)


Hi EtienneDavid,

Have you seen the Kaggle competition on wheat head detection? https://www.kaggle.com/c/global-wheat-detection It’s not 3D, though.

Peiyuan Liao has published his amazing work here: https://github.com/liaopeiyuan/TransferDet

We organized the competition jointly with the University of Saskatchewan :)


Hello,

I wasn’t expecting this much feedback!
I’m glad to read you all. A lot of things were done differently than usual: it was very short notice and I chose times outside of working hours, yet to my surprise both worked out :).

The event will probably take place on Friday at 10 PM CET (22h GMT+1).

I will upload a Jitsi link here.

I’m looking forward to having a discussion about this and, hopefully, getting something out of it!

For the agenda:

We will introduce ourselves, e.g. whether we work in the agRobotics field, etc.

Then we will discuss what we are expecting, and maybe whether there is a standard benchmark that should be targeted.
I will present the little progress I have made.

If you have any more ideas, please feel free to add them.

See you very soon.


Hello,
A friendly reminder that the meeting will be held this evening.
Here is a link to the meeting (we will use Zoom for now, as we haven’t tested Jitsi enough to be sure it’s reliable).

See you soon !
Best regards,
Ilias

Links to papers that came up during the meeting:

From Amy Tabb

Paper: A Photogrammetry-based Framework to Facilitate Image-based Modeling and Automatic Camera Tracking:

Blender Photogrammetry Plugin:


Hello,
This is a summary of the last meeting that took place on the 4th of December 2020.
In my opinion, even though not everyone who registered on the Doodle showed up, the meeting was a success and gave rise to nice discussions and conclusions.
Thanks to the participants.

We have subdivided the project into multiple smaller parts:

I. Validation data:

The first subject we discussed was agreeing on a testing dataset. We chose one developed by the University of Bonn, as it is already segmented. The goal would be to produce synthetic data such that a network trained on it performs as well as one trained on real-life data.

II. Synthetic data generation:

There are multiple ways to get the synthetic data; we will focus on two:
1. Photogrammetry (using Meshroom or other tools like Blender), i.e. a 3D scan from pictures. A 3D model of a small, simple plant (with a sample video) is available on the GitHub account of the project, linked further down.
2. Modelling leaves (from a picture) and then, using a mathematical model, “programming” a 3D model of the plant.
Here is a link to a paper using this technique: https://arxiv.org/abs/1612.03019
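As a toy illustration of that second route: bracketed L-systems are a common mathematical model for “programming” plant structure. This sketch uses a classic textbook rule set, not the model from the linked paper.

```python
# Toy bracketed L-system for plant skeletons. In the resulting string,
# F = grow a segment, [ and ] = push/pop a branch, + / - = turn.
# The rule below is a classic textbook example, not the linked paper's model.
def expand(axiom, rules, iterations):
    """Rewrite `axiom` `iterations` times using the production `rules`."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(c, c) for c in s)
    return s

rules = {"F": "F[+F]F[-F]F"}
print(expand("F", rules, 1))       # F[+F]F[-F]F
print(len(expand("F", rules, 2)))  # 61 symbols
```

A turtle-graphics interpreter over such strings yields a branching 2D or 3D skeleton onto which leaf meshes can be attached.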

III. Data augmentation:

Synthetic data is usually too noiseless to be usable as is, so we need to augment it.
For this, we have decided to focus on BlenderProc, developed by the DLR:
https://github.com/DLR-RM/BlenderProc.
We also have to test whether we need to simulate a soil background or whether we should completely randomize the background.
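As a rough sketch of what such augmentation involves (generic NumPy, not BlenderProc’s actual pipeline), one can inject sensor-like noise and, as the randomized-background option above, swap everything outside the plant mask for random texture:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img, noise_std=8.0):
    """Add Gaussian sensor-like noise and a random global brightness
    shift to a uint8 RGB image (a generic sketch, not BlenderProc)."""
    out = img.astype(np.float32)
    out += rng.normal(0.0, noise_std, img.shape)  # per-pixel sensor noise
    out += rng.uniform(-20.0, 20.0)               # global brightness shift
    return np.clip(out, 0, 255).astype(np.uint8)

def randomize_background(img, plant_mask):
    """Replace pixels outside the plant mask with random texture,
    i.e. the 'completely randomize the background' option."""
    bg = rng.integers(0, 256, img.shape, dtype=np.uint8)
    return np.where(plant_mask[..., None], img, bg)

# Tiny stand-in for a synthetic render: flat grey image, square "plant".
synthetic = np.full((64, 64, 3), 120, dtype=np.uint8)
plant_mask = np.zeros((64, 64), dtype=bool)
plant_mask[16:48, 16:48] = True
augmented = randomize_background(augment(synthetic), plant_mask)
print(augmented.shape, augmented.dtype)
```

The segmentation labels stay exact because the plant pixels are never moved, which is the main appeal of augmenting synthetic data this way.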

IV. Miscellaneous:
Another idea: have a neural network detect only leaves (rather than whole plants), and then have another machine-learning algorithm cluster the detected leaves into plants.
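A minimal sketch of that grouping step, assuming leaf detections are reduced to 2D centroids (the distance threshold here is made up; in practice an off-the-shelf algorithm such as DBSCAN would do this):

```python
import math

def cluster_leaves(centroids, max_dist=1.0):
    """Greedy single-linkage grouping: leaf centroids closer than
    `max_dist` (directly or through a chain) join the same plant."""
    labels = [-1] * len(centroids)
    next_label = 0
    for i in range(len(centroids)):
        if labels[i] != -1:
            continue  # already assigned to a plant
        labels[i] = next_label
        stack = [i]
        while stack:
            j = stack.pop()
            xj, yj = centroids[j]
            for k, (xk, yk) in enumerate(centroids):
                if labels[k] == -1 and math.hypot(xj - xk, yj - yk) <= max_dist:
                    labels[k] = next_label
                    stack.append(k)
        next_label += 1
    return labels

# Four detected leaves forming two well-separated plants.
leaves = [(0.0, 0.0), (0.5, 0.2), (5.0, 5.0), (5.3, 4.8)]
print(cluster_leaves(leaves))  # [0, 0, 1, 1]
```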

Here is a link to the GitHub project. You will find a Kanban board there; feel free to pick up a task or add your own:

Please, if you work on this project, share your results, whether positive or negative; you will still help the community and probably get feedback.

V. Conclusion:

Ideas for improvement for the next meetings:

  • Change the meeting time.
  • Notify the participants further in advance.
  • Switch to Jitsi or BigBlueButton.
  • Ask whether a recording is necessary or these summaries are sufficient.

Thank you again to all the participants for attending the meeting and posting links in the thread. Hope to read updates from you soon!
Have a nice week.
Ilias


3dmcap is an application for three-dimensional reconstruction of objects from digital images.


Author: Thiago T. Santos, Embrapa Agricultural Informatics

Since we now seem to be collecting SfM tools in this thread: there’s also this person on YouTube who did an overview of different tools, with snap packages for some of them (from 2017).

Hello,
I wish you all a happy new year. After this winter break I hope we will progress fast :).
Thank you @droter and @j-es for the links; they give us better insight into what else can be used.
On my side, I advanced a bit on BlenderProc, which is, to no one’s surprise, functional!
I made two video recordings of me going through the software. Please give me feedback on this: should I continue doing it, or is it useless? If enough people are interested, I could do the stream live :).

Here are the two video parts:

I hope that by the end of the week I will have enough synthetically generated data !
Best regards,
Ilias


Hi @ilias,
Sorry for zoning out of the project for a while; how are things on the synthetic data generation end? Should we have another meeting to catch up on the current state of the work? Let me know.

Hello !
It’s been a while for me as well. I have to admit that, with a regular job and being alone on this project, I didn’t do much; it’s not that fun to do things alone :/.
I didn’t drop the project, though. As soon as I get to work on it more, I will update you!
Unless someone makes progress on it first!


This topic was one of the motivations for our Auto(Bot)ware :). We will focus our efforts on this initiative :)
