Honing, H. (2007). Preferring the best fitting, least flexible, and most surprising prediction: Towards a Bayesian approach to model selection in music cognition. Proceedings of the Society for Music Perception and Cognition (SMPC).
Abstract
While for most scientists the limitations of evaluating a computational model by showing a
good fit with the empirical data (think of percentage correct, percentage of variance accounted
for, or minimized error) are clear-cut, a recent discussion (cf. Honing, 2006) shows that this
widespread method is still (or again) at the center of scientific debate. In the current paper, a
Bayesian approach to model selection in music cognition is proposed that tries to capture
the common intuition that a model's validity should increase when it correctly predicts an
unlikely event, rather than when it correctly predicts something that was expected anyway.
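This intuition follows directly from Bayes' rule: the posterior boost a model receives from a correct prediction grows as the predicted event becomes less likely under competing models. The sketch below illustrates this with two hypothetical models and made-up likelihood values; none of the numbers are drawn from the paper.

```python
# Illustrative Bayesian update: two models, M1 and M2, with equal priors.
# M1 concentrates its prediction on a specific, a-priori unlikely outcome;
# M2 spreads its probability mass broadly ("anything could happen").
prior = {"M1": 0.5, "M2": 0.5}

# Likelihood of the observed data under each model (hypothetical values):
# the sharp model M1 assigned the observed event much more probability.
likelihood = {"M1": 0.30, "M2": 0.05}

# Bayes' rule: P(M | D) = P(D | M) * P(M) / P(D)
evidence = sum(likelihood[m] * prior[m] for m in prior)
posterior = {m: likelihood[m] * prior[m] / evidence for m in prior}

print(posterior)  # M1's posterior rises well above its prior of 0.5
```

Had both models assigned the event similar likelihoods (i.e., the event was expected under either model), the update would leave the priors nearly unchanged, which is the sense in which a correctly predicted surprising event is more diagnostic than a correctly predicted expected one.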
One of the strengths of the computational modeling approach to music cognition is that,
while a computational model may be designed and fine-tuned to explain one particular
phenomenon, it can, in principle, also say something about a related cognitive phenomenon.
For example, it was shown in Honing (2006) that a model designed to capture categorization
in rhythm perception can also be used to make predictions about the perception of ritardandi
in music: i.e., how much slowing down (or speeding up) still allows for an appropriate
categorization of the performed rhythm. Interestingly, this was not what the model was
designed to predict. Nevertheless, the model's predictions about the possible shapes of final
ritardandi turned out to be relatively surprising. In general, we would like to argue that the
amount of surprise in a model's predictions is more relevant to a model's validity than a
mere good fit with the data it was designed to fit.
In order to give some structure to the notion of what a surprising prediction of a model of
music cognition might be, a distinction will be made between possible, plausible, and predicted
observations, using a recent case study (Honing, 2006). These three notions will be used as a
starting point to define three hypotheses: H-possible, H-plausible, and H-predicted, each
describing a surface (or intersection) of the predictions made by a model in a Bayesian
framework (cf. Sadakata, Desain & Honing, 2006). As an example, a first, admittedly crude,
attempt to define a measure of surprise is to select the model that minimizes the intersection
of H-predicted with H-plausible, while preferring the H-predicted that is least smooth. As
such, we will prefer a model that 1) fits the empirical data well (best fit), 2) makes limited-
range predictions (least flexible), and 3) makes non-smooth, unexpected predictions (most
surprising).
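One way to make this three-part preference concrete is to treat each hypothesis as a region of outcome space and score candidate models on fit, flexibility, and surprise. The sketch below is our own illustration, not the paper's formalism: the discretized outcome sets, the scoring function, and its weights are all hypothetical, and the "least smooth" criterion is not modeled here.

```python
# Hedged sketch: score a model by (1) fit with the observation, (2) the size of
# its prediction region (flexibility), and (3) how much of that region falls
# outside H-plausible (a crude proxy for surprise). Regions are sets of
# discretized outcomes; all weights are illustrative.

def score_model(h_predicted, h_plausible, observed,
                flexibility_weight=1.0, surprise_weight=1.0):
    """Higher scores are better."""
    fit = 1.0 if observed in h_predicted else 0.0       # best fit
    flexibility = len(h_predicted)                      # smaller region = less flexible
    overlap = len(h_predicted & h_plausible)            # H-predicted intersected with H-plausible
    surprise = len(h_predicted) - overlap               # predictions outside the plausible set
    return fit - flexibility_weight * flexibility + surprise_weight * surprise

# Hypothetical outcome space: ten discretized ritardando shapes, 0..9.
h_plausible = set(range(8))   # shapes listeners would find plausible
model_a = set(range(8))       # flexible model: predicts every plausible shape
model_b = {2, 9}              # narrow model, partly outside the plausible region
observed = 2                  # both models correctly cover the observation

print(score_model(model_a, h_plausible, observed))
print(score_model(model_b, h_plausible, observed))  # narrow model scores higher
```

Under this scoring, the broad model is penalized for its flexibility even though it fits, while the narrow model that also ventures outside the plausible region is rewarded, matching the stated preference for best-fitting, least flexible, and most surprising predictions.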
Honing, H. (2006). Computational modeling of music cognition: a case study on model selection.
Music Perception, 24(1), 365-376.
Sadakata, M., Desain, P., & Honing, H. (2006). The Bayesian way to relate rhythm perception and
production. Music Perception, 23(3), 267-286.