TY - JOUR
T1 - Deployment strategies for crowdsourcing text creation
AU - Borromeo, Ria Mae
AU - Laurent, Thomas
AU - Toyama, Motomichi
AU - Alsayasneh, Maha
AU - Amer-Yahia, Sihem
AU - Leroy, Vincent
N1 - Publisher Copyright:
© 2017 Elsevier Ltd
PY - 2017/11
Y1 - 2017/11
N2 - Automatically generating high-quality text for tasks such as translation, summarization, and narrative writing is difficult, as these tasks require creativity, which only humans currently exhibit. However, crowdsourcing such tasks remains a challenge, as they are tedious for humans and can require expert knowledge. We therefore explore deployment strategies for crowdsourcing text creation tasks to improve the effectiveness of the crowdsourcing process. We assess effectiveness in terms of the quality of the output text, the cost of deploying the task, and the latency in obtaining the output. We formalize a deployment strategy in crowdsourcing along three dimensions: work structure, workforce organization, and work style. Work structure can be either simultaneous or sequential, workforce organization either independent or collaborative, and work style either human-only or a combination of machine and human intelligence. We implement these strategies for translation, summarization, and narrative writing tasks by designing a semi-automatic tool that uses the Amazon Mechanical Turk API, and we experiment with them under different input settings such as text length, number of sources, and topic popularity. We report our findings on the effectiveness of each strategy and provide recommendations to guide requesters in selecting the best strategy when deploying text creation tasks.
AB - Automatically generating high-quality text for tasks such as translation, summarization, and narrative writing is difficult, as these tasks require creativity, which only humans currently exhibit. However, crowdsourcing such tasks remains a challenge, as they are tedious for humans and can require expert knowledge. We therefore explore deployment strategies for crowdsourcing text creation tasks to improve the effectiveness of the crowdsourcing process. We assess effectiveness in terms of the quality of the output text, the cost of deploying the task, and the latency in obtaining the output. We formalize a deployment strategy in crowdsourcing along three dimensions: work structure, workforce organization, and work style. Work structure can be either simultaneous or sequential, workforce organization either independent or collaborative, and work style either human-only or a combination of machine and human intelligence. We implement these strategies for translation, summarization, and narrative writing tasks by designing a semi-automatic tool that uses the Amazon Mechanical Turk API, and we experiment with them under different input settings such as text length, number of sources, and topic popularity. We report our findings on the effectiveness of each strategy and provide recommendations to guide requesters in selecting the best strategy when deploying text creation tasks.
KW - Crowdsourcing
KW - Deployment strategies
KW - Text creation
UR - http://www.scopus.com/inward/record.url?scp=85026832860&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85026832860&partnerID=8YFLogxK
U2 - 10.1016/j.is.2017.06.007
DO - 10.1016/j.is.2017.06.007
M3 - Article
AN - SCOPUS:85026832860
SN - 0306-4379
VL - 71
SP - 103
EP - 110
JO - Information Systems
JF - Information Systems
ER -