Abstract: We explore how microwork platforms manage difficult tasks in paid crowdsourcing environments. We argue that as human computation becomes more prevalent, notably in the context of big data ecosystems, microwork platforms may have to evolve and take a more managerial stance in order to give online workers the right incentives to handle difficult tasks. We illustrate this first through a name disambiguation experiment on Amazon Mechanical Turk (AMT), a well-known microwork platform, and second through a direct analysis of the dynamics of task execution in a dataset of real microwork projects on AMT. We discuss the emergence of more specialised microwork platforms as an attempt to facilitate better management of difficult tasks in the context of paid crowdsourcing.