Comparing Pineapples with Lilikois: An Experimental Analysis of the Effects of Idea Similarity on Evaluation Performance in Innovation Contests

Abstract

Identifying promising ideas in large innovation contests is challenging. Evaluators perform poorly when selecting the best ideas from large idea pools because their information-processing capabilities are limited. It therefore seems reasonable to let crowds evaluate subsets of ideas, distributing the effort among the many. One meaningful approach to subset creation is to group ideas into subsets according to their similarity. However, it is unclear whether evaluation based on subsets of similar ideas outperforms evaluation based on subsets of random ideas. We conduct an experiment with 66 crowd workers to explore the effects of idea similarity on evaluation performance and cognitive demand. Our study contributes to the understanding of idea selection by providing empirical evidence that crowd workers presented with subsets of similar ideas experience lower cognitive effort and achieve higher elimination accuracy than crowd workers presented with subsets of random ideas. Implications for research and practice are discussed.
