TY - GEN
T1 - Dynamic Object Removal from Unpaired Images for Agricultural Autonomous Robots
AU - Akada, Hiroyasu
AU - Takahashi, Masaki
N1 - Funding Information:
Acknowledgement. This work was supported by Core Research for Evolutional Science and Technology (CREST) of the Japan Science and Technology Agency (JST) [grant number JPMJCR19A1].
Publisher Copyright:
© 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
PY - 2022
Y1 - 2022
N2 - Recently, the demand for agricultural autonomous robots has been increasing. Using vision-based environmental recognition, such robots can follow farmers to support their work, such as conveying the harvest. However, a major issue is that dynamic objects (including humans) often enter the images on which the robots rely for environmental recognition. These dynamic objects considerably degrade image-recognition performance, which can result in collisions with crops or ridges while the robots follow the worker. To address this occlusion issue, generative adversarial network (GAN) solutions can be adopted, as their generative capability allows them to reconstruct the area behind dynamic objects. However, existing GAN methods generally presuppose paired image datasets to train their networks, which are difficult to prepare. Therefore, a method based on unpaired image datasets is desirable in real-world environments such as a farm. For this purpose, we propose a new approach that integrates a state-of-the-art neural network architecture, CycleGAN, with Mask R-CNN. Our system is trained on a human-tracking dataset collected by an agricultural autonomous robot on a farm. We evaluate the performance of our system both qualitatively and quantitatively on the task of human removal from images.
AB - Recently, the demand for agricultural autonomous robots has been increasing. Using vision-based environmental recognition, such robots can follow farmers to support their work, such as conveying the harvest. However, a major issue is that dynamic objects (including humans) often enter the images on which the robots rely for environmental recognition. These dynamic objects considerably degrade image-recognition performance, which can result in collisions with crops or ridges while the robots follow the worker. To address this occlusion issue, generative adversarial network (GAN) solutions can be adopted, as their generative capability allows them to reconstruct the area behind dynamic objects. However, existing GAN methods generally presuppose paired image datasets to train their networks, which are difficult to prepare. Therefore, a method based on unpaired image datasets is desirable in real-world environments such as a farm. For this purpose, we propose a new approach that integrates a state-of-the-art neural network architecture, CycleGAN, with Mask R-CNN. Our system is trained on a human-tracking dataset collected by an agricultural autonomous robot on a farm. We evaluate the performance of our system both qualitatively and quantitatively on the task of human removal from images.
KW - Agricultural autonomous robot
KW - CycleGAN
KW - Dynamic object removal
KW - Generative adversarial network
UR - http://www.scopus.com/inward/record.url?scp=85128742521&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85128742521&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-95892-3_48
DO - 10.1007/978-3-030-95892-3_48
M3 - Conference contribution
AN - SCOPUS:85128742521
SN - 9783030958916
T3 - Lecture Notes in Networks and Systems
SP - 641
EP - 653
BT - Intelligent Autonomous Systems 16 - Proceedings of the 16th International Conference IAS-16
A2 - Ang Jr, Marcelo H.
A2 - Asama, Hajime
A2 - Lin, Wei
A2 - Foong, Shaohui
PB - Springer Science and Business Media Deutschland GmbH
T2 - 16th International Conference on Intelligent Autonomous Systems, IAS-16 2020
Y2 - 22 June 2021 through 25 June 2021
ER -