The number of alternatives in a choice task is a design dimension that has been found to affect respondents' choices significantly. The preference matching effect suggests that offering more alternatives increases the likelihood that respondents will find an attribute combination that matches their preferences better than they would in a choice task with fewer alternatives. However, increasing the number of alternatives can at the same time increase task complexity, as more comparisons are required from respondents.
As it is not clear from the current literature whether the positive or negative effects of increasing the number of alternatives in a choice task predominate, we examine whether it is indeed beneficial to offer more alternatives than is usually done. Compared to previous studies, we apply a broader approach, using split samples to compare five choice task formats that differ only with respect to the number of alternatives, which ranges across treatments from two to six, always including a status quo (SQ) alternative. The survey is concerned with the good environmental status of the Baltic Sea, and the number of attributes, including cost, is six in all split samples: water clarity, fish, biodiversity, coastal protection, litter and cost. Respondents were randomly assigned to one of the five treatments, and each faced eight choice tasks.
To investigate whether preference matching occurs or whether the choice task formats lead to preference changes, we present descriptive statistics on the observed choice patterns, estimate basic multinomial logit (MNL) and random parameters logit (RPL) models, apply likelihood-ratio tests of the equality of the preference parameters, and finally simulate responses for the formats with more than three alternatives, using the two-alternative format as a reference. Overall, we find that the preferences elicited through the different choice task formats differ significantly. Adding more alternatives appears to result in different stated preferences, indicating that the format of the choice task has a clear effect on people's choices.
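The likelihood-ratio test of parameter equality across treatments can be illustrated with a short sketch. This is a minimal, hypothetical example of the pooling-style test logic only: the log-likelihood values and degrees of freedom below are illustrative placeholders, not results from the study, and the function name `lr_test` is an assumption, not part of any particular estimation package.

```python
# Hedged sketch of a likelihood-ratio test for equality of preference
# parameters across two choice task formats: a pooled (restricted) model
# vs. separate (unrestricted) models per treatment.
# All numeric values here are illustrative placeholders.
from scipy.stats import chi2

def lr_test(ll_pooled, ll_split, df):
    """LR statistic: -2 * (LL_restricted - sum of unrestricted LLs),
    compared against a chi-squared distribution with df restrictions."""
    lr = -2.0 * (ll_pooled - sum(ll_split))
    p_value = chi2.sf(lr, df)
    return lr, p_value

# Example: pooled model over two treatments vs. two separate models,
# with 6 preference parameters restricted to be equal.
lr, p = lr_test(ll_pooled=-1850.0, ll_split=[-910.0, -925.0], df=6)
print(f"LR = {lr:.2f}, p = {p:.4f}")
```

A small p-value would reject the equality of the preference parameters across the two formats, which is the pattern the study reports for its treatments.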
Cognitive neuroscientists sometimes apply formal models to investigate how the brain implements cognitive processes. These models describe behavioural data in terms of underlying latent variables linked to hypothesized cognitive processes. A goal of model-based cognitive neuroscience is to link these variables to brain measurements, which can advance progress in both cognitive and neuroscientific research. However, the details of and the philosophical approach to this linking problem can vary greatly. We propose a continuum of approaches which differ in the degree to which the linking hypotheses are tight, quantitative, and explicit. We describe this continuum using four points along it, which we dub "qualitative structural", "qualitative predictive", "quantitative predictive", and "single model" linking approaches. We further illustrate the different linking approaches with examples from three research fields: decision making, reinforcement learning, and symbolic reasoning.