TY - GEN
T1 - Task-oriented function detection based on operational tasks
AU - Ishikawa, Yuchi
AU - Ishikawa, Haruya
AU - Akizuki, Shuichi
AU - Yamazaki, Masaki
AU - Taniguchi, Yasuhiro
AU - Aoki, Yoshimitsu
N1 - Funding Information:
*This work was supported by any organization 1Department of Electronics and Electrical Engineering, Keio University, 3-14-1 Hiyoshi, Kohoku Ward, Yokohama, Kanagawa, Japan {yishikawa, hishikawa}@aoki-medialab.jp, aoki@elec.keio.ac.jp 2 Department of Mechanical System Engineering, Chukyo University, 101-2 Yagotohonmachi, Showa Ward, Nagoya, Aichi, Japan s-akizuki@sist.chukyo-u.ac.jp 3 Honda R&D Co., Ltd., 1-4-1, Chuo Wako, Saitama, Japan Masaki Yamazaki@n.f.rd.honda.co.jp, Yasuhiro Taniguchi@n.w.rd.honda.co.jp
Publisher Copyright:
© 2019 IEEE.
PY - 2019/12
Y1 - 2019/12
N2 - We propose a novel representation for the functions of an object, named Task-oriented Function, which improves upon the idea of Affordance in the field of robotics vision. We also propose a convolutional neural network to detect task-oriented functions. This network takes an operational task as well as an RGB image as input and assigns each pixel an appropriate label for every task. Task-oriented functions make it possible to describe various ways to use an object because the outputs of the network differ depending on the operational task. We introduce a new dataset for task-oriented function detection, which contains about 1200 RGB images and 6000 pixel-level annotations assuming five tasks. Our proposed method reached 0.80 mean IoU on our dataset.
AB - We propose a novel representation for the functions of an object, named Task-oriented Function, which improves upon the idea of Affordance in the field of robotics vision. We also propose a convolutional neural network to detect task-oriented functions. This network takes an operational task as well as an RGB image as input and assigns each pixel an appropriate label for every task. Task-oriented functions make it possible to describe various ways to use an object because the outputs of the network differ depending on the operational task. We introduce a new dataset for task-oriented function detection, which contains about 1200 RGB images and 6000 pixel-level annotations assuming five tasks. Our proposed method reached 0.80 mean IoU on our dataset.
UR - http://www.scopus.com/inward/record.url?scp=85084280598&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85084280598&partnerID=8YFLogxK
U2 - 10.1109/ICAR46387.2019.8981633
DO - 10.1109/ICAR46387.2019.8981633
M3 - Conference contribution
AN - SCOPUS:85084280598
T3 - 2019 19th International Conference on Advanced Robotics, ICAR 2019
SP - 635
EP - 640
BT - 2019 19th International Conference on Advanced Robotics, ICAR 2019
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 19th International Conference on Advanced Robotics, ICAR 2019
Y2 - 2 December 2019 through 6 December 2019
ER -