How does belief bias affect reasoning?

References

Green, D. On the equivalence of two recognition measures of short-term memory. Psychological Bulletin, 66(3).
New York: Wiley.
Gronau, Q. Bridgesampling: An R package for estimating normalizing constants.
Guyote, M. A transitive-chain theory of syllogistic reasoning. Cognitive Psychology, 13(4).
Haigh, M. Reasoning as we read: Establishing the probability of causal conditionals.
Heathcote, A. The power law repealed: The case for an exponential law of practice.
Heit, E. Traditional difference-score analyses of reasoning are flawed. Cognition, 1, 75–.
Iverson, G. The generalized area theorem in signal detection theory. In Choice, decision, and measurement: Essays in honor of R. Duncan Luce.
Johnson-Laird, P. Cambridge: Harvard University Press.
Lawrence Erlbaum Associates, Inc.
Judd, C. Treating stimuli as a random factor in social psychology: A new and comprehensive solution to a pervasive but largely ignored problem. Journal of Personality and Social Psychology, 1, 54–.
Kass, R. Bayes factors. Journal of the American Statistical Association, 90.
Katahira, K. How hierarchical models improve point estimates of model parameters at the individual level. Journal of Mathematical Psychology, 73, 37–.
Kaufmann, H. The effects of emotional value of conclusions upon distortion in syllogistic reasoning. Psychonomic Science, 7(10).
Kellen, D. Evaluating models of recognition memory using first- and second-choice responses. Journal of Mathematical Psychology, 55.
Discrete-state and continuous models of recognition memory: Testing core properties under minimal assumptions.
Elementary signal detection and threshold theory. In Wixted (Ed.).
Recognition memory models and binary-response ROCs: A comparison by minimum description length.
Further evidence for discrete-state mediation in recognition memory. Experimental Psychology, 62, 40–.
Khemlani, S. Theories of the syllogism: A meta-analysis. Psychological Bulletin, 3.
Killeen, P. Symmetric receiver operating characteristics. Journal of Mathematical Psychology, 48(6).
Kinchla, R.
Klauer, K. Hierarchical multinomial processing tree models: A latent-trait approach. Psychometrika, 75(1), 70–.
Toward a complete decision model of item and source recognition: A discrete-state approach.
The flexibility of models of recognition memory: An analysis by the minimum-description length principle. Journal of Mathematical Psychology, 55(6).
The flexibility of models of recognition memory: The case of confidence ratings. Journal of Mathematical Psychology, 67, 8–.
RT-MPTs: Process models for response-time distributions based on multinomial processing trees with applications to recognition memory. Journal of Mathematical Psychology, 82.
On belief bias in syllogistic reasoning. Psychological Review, 4.
The abstract selection task: New data and an almost comprehensive model.
Klugkist, I. The Bayes factor for inequality and about equality constrained models.
Krauth, J. Formulation and experimental verification of models in propositional reasoning. The Quarterly Journal of Experimental Psychology, 34(2).
Kruschke, J. London: Academic Press.
Kunda, Z. The case for motivated reasoning.
Lee, M. Cambridge: Cambridge University Press.
Lewandowski, D. Generating random correlation matrices based on vines and extended onion method. Journal of Multivariate Analysis, 9.
Little, R.
Lord, C. Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology, 37(11).
Macmillan, N. New York: Lawrence Erlbaum Associates.
Malmberg, K. On the form of ROCs constructed from confidence ratings.
The influence of averaging and noisy decision strategies on the recognition memory ROC.
Markovits, H. The belief-bias effect in the production and evaluation of logical conclusions.
Miller, M. Extensive individual differences in brain activations associated with episodic retrieval are reliable over time. Journal of Cognitive Neuroscience, 14(8).
Monnahan, C. Faster estimation of Bayesian models in ecology using Hamiltonian Monte Carlo.
Moran, R. Thou shalt identify! The identifiability of two high-threshold models in confidence-rating recognition and super-recognition paradigms. Journal of Mathematical Psychology, 73, 1–.
Morey, R. Problematic effects of aggregation in zROC analysis and a hierarchical modeling solution. Journal of Mathematical Psychology, 52(6).
Morley, N. Belief bias and figural bias in syllogistic reasoning.
Newell, A. Mechanisms of skill acquisition and the law of practice. In Cognitive skills and their acquisition. Hillsdale, NJ: Erlbaum.
Newstead, S. The source of belief bias effects in syllogistic reasoning. Cognition, 45(3).
Nickerson, R. Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2(2).
Nuobaraite, S. Bachelor's thesis, Plymouth University, UK.
Oakhill, J. Believability and syllogistic reasoning. Cognition, 31(2).
The effects of belief on the spontaneous production of syllogistic conclusions.
Oaksford, M. Oxford: Oxford University Press.
Oberauer, K. Reasoning with conditionals: A test of formal models of four theories. Cognitive Psychology, 53(3).
Working memory capacity and the construction of spatial mental models in comprehension and deductive reasoning. The Quarterly Journal of Experimental Psychology, 59(2).
Osth, A. Sources of interference in item and associative recognition memory. Psychological Review.
Pazzaglia, A. A critical comparison of discrete-state and continuous models of recognition memory: Implications for recognition and beyond.
Pennycook, G. Everyday consequences of analytic thinking. Current Directions in Psychological Science.
Is the cognitive reflection test a measure of both reflection and intuition? Behavior Research Methods, 48(1).
Polk, T. Deduction as verbal reasoning.
Pratte, M. Hierarchical single- and dual-process models of recognition memory. Journal of Mathematical Psychology, 55(1), 36– (Special Issue on Hierarchical Bayesian Models).
Separating mnemonic process from participant and item effects in the assessment of ROC asymmetries.
Quayle, J. Working memory, metacognitive uncertainty, and belief bias in syllogistic reasoning.
Ratcliff, R. Modeling response times for two-choice decisions. Psychological Science, 9(5).
Regenwetter, M. Transitivity of preferences. Psychological Review, 1, 42–.
Rijmen, F. A nonlinear mixed model framework for item response theory. Psychological Methods, 8(2).
Robert, C. Introducing Monte Carlo methods with R.
Roberts, M. Belief bias and relational reasoning.
Roser, M. Investigating reasoning with multiple integrated neuroscientific methods. Frontiers in Human Neuroscience, 9.
Rotello, C. When more data steer us wrong: Replications with the wrong dependent measure perpetuate erroneous conclusions.
Rottman, B. Do people reason rationally about causally related events? Markov violations, weak inferences, and failures of explaining away. Cognitive Psychology, 87, 88–.
Rouder, J. An introduction to Bayesian hierarchical models with an application in the theory of signal detection.
A hierarchical process-dissociation model. Journal of Experimental Psychology: General, 2.
Schafer, J. New York: Chapman and Hall.
Scheibehenne, B. Using Bayesian hierarchical parameter estimation to assess the generalizability of cognitive models of choice.
Schielzeth, H. Conclusions beyond support: Overconfident estimates in mixed models. Behavioral Ecology, 20(2).
Schyns, P. Dr. Angry and Mr. Smile: When categorization flexibly modifies the perception of faces in rapid visual presentations. Cognition, 69(3).
Shadish, W. Houghton Mifflin and Company.
Shynkaruk, J. Confidence and accuracy in deductive reasoning.
Simpson, A. What is the best index of detectability? Psychological Bulletin, 80(6).
Singmann, H. Behavior Research Methods, 45(2).
Frontiers in Psychology, 5.
Probabilistic conditional reasoning: Disentangling form and content with the dual-source model. Cognitive Psychology, 88, 61–.
New normative standards of conditional reasoning and the dual-source model.
Skovgaard-Olsen, N. The relevance effect and conditionals. Cognition.
Skyrms, B. Choice and chance: An introduction to inductive logic. Belmont, CA.
Smith, J. Assessing individual differences in categorical data.
Snijders, T.
Stan Development Team. Stan, Version 2.
Stanovich, K. Studies of individual differences in reasoning. Mahwah: Lawrence Erlbaum Associates.
Cambridge: MIT Press.
Starns, J. Evaluating the unequal-variance and dual-process explanations of zROC slopes with response time data and the diffusion model. Cognitive Psychology, 64(1–2), 1–.
Stupple, E. Belief-logic conflict resolution in syllogistic reasoning: Inspection-time evidence for a parallel-process model.
When logic and belief collide: Individual differences in reasoning times support a selective processing model. Journal of Cognitive Psychology, 23(8).
Thompson, V. The task-specific nature of domain-general reasoning. Cognition, 76.
Intuition, reason, and metacognition. Cognitive Psychology, 63(3).
Syllogistic reasoning time: Disconfirmation disconfirmed.
Toplak, M. The Cognitive Reflection Test as a predictor of performance on heuristics-and-biases tasks.
Tourangeau, R.
Trippas, D. Motivated reasoning and response bias: A signal detection approach. Doctoral dissertation.
The SDT model of belief bias: Complexity, time, and cognitive ability mediate the effects of believability.
When fast logic meets slow belief: Evidence for a parallel-processing model of belief bias.
Using forced choice to test belief bias in syllogistic reasoning. Cognition, 3.
Better but still biased: Analytic cognitive style and belief bias.
Van Zandt, T. ROC curves and confidence judgments in recognition memory.
Vandekerckhove, J. Model comparison and the principle of parsimony. In Busemeyer (Ed.), Oxford handbook of computational and mathematical psychology.
Hierarchical diffusion models for two-choice response times. Psychological Methods, 16(1), 44–.
Verde, M.
Wagenmakers, E. On the interpretation of removable interactions: A survey of the field 33 years after Loftus.
Wason, P. On the failure to eliminate hypotheses in a conceptual task. Quarterly Journal of Experimental Psychology, 12(3).
In Foss (Ed.), New horizons in psychology. Harmondsworth, England: Penguin.
Reasoning about a rule. Quarterly Journal of Experimental Psychology, 20(3).
Dual processes in reasoning? Cognition, 3(2).
Whitehead, A. Meta-analysis of controlled clinical trials. Chichester: Wiley.
Wickens, T. Psychological Review, 2.
Wilkins, M. The effect of changed material on ability to do formal syllogistic reasoning. Archives of Psychology.

We thank Evan Heit and Caren Rotello for providing us with raw data. Part of this work was presented at the International Conference on Thinking. Open access funding provided by Max Planck Society.

This graphical representation follows the conventions used by Lee and Wagenmakers: discrete variables are displayed as squares and continuous variables as circles; observed variables are displayed as shaded nodes, whereas unobserved variables are non-shaded; and double-bordered nodes represent variables that follow deterministically from other variables.

Hence, all estimated variables are single-bordered, round, and non-shaded. Finally, the plates display the hierarchical structure of the model, and all bold variables are non-scalar quantities such as vectors or matrices. Below the graphical model, the figure also lists the corresponding distributions; note that the second parameter of the Normal distributions is the standard deviation, not the variance. The only information missing is the exact specification of the SDT model, which is given in the corresponding equations.

We used Hamiltonian Monte Carlo methods to explore the joint posterior distribution, as implemented in Stan (Carpenter et al.). We employed this approach throughout and used non-informative priors for the correlation matrices, so-called LKJ priors with shape parameter 1 (Lewandowski et al.), which are uniform over correlation matrices.

The priors for the variances were weakly informative: half-Cauchy with location 0 and scale 4 (Gelman et al.). Most of the remaining priors were also weakly informative Cauchy priors. We analyzed the data from Study 14 (Trippas et al.). The model can be specified in a straightforward fashion using the rstanarm package; an illustrative sketch is shown below. A description of each variable is presented in Table 5. The random effects are specified between brackets.
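As a concrete illustration, a minimal sketch of such a call is given below. The variable names (resp, logic, belief, crt, id, syll), the probit link, and the exact fixed-effects structure are assumptions standing in for the columns described in Table 5; this is not the original analysis code.

library(rstanarm)

# Minimal sketch; all variable names and the exact model structure are
# illustrative assumptions rather than the original specification.
fit <- stan_glmer(
  resp ~ logic * belief * crt +          # fixed effects of Logic, Belief, CRT and their interactions
    (logic * belief | id) +              # per-participant random effects (with implied covariance matrix)
    (belief | syll),                     # per-syllogism deflection of the intercept and the Belief effect
  family = binomial(link = "probit"),    # binary endorsement data; probit link assumed here
  data = d,                              # data frame in long format, one row per trial
  chains = 4, iter = 2000                # sampler settings; adjust as needed
)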

A covariance matrix capturing these random effects is implied. Finally, the belief-by-syllogism term corresponds to a random per-syllogism deflection from the intercept and from the effect of Belief, once again together with a covariance matrix capturing these effects.

Weakly informative priors were set for all effects: normal distributions with mean 0 and standard deviations of 4 and 16 were assigned to the intercept and slope coefficients, respectively. The analysis showed that there was a credible main effect of Logic. There was also a main effect of Belief. There was no effect of the CRT on the overall endorsement rate. These main effects were qualified by several higher-order interactions.
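To make the prior specification described above concrete, the same illustrative call can be extended with rstanarm's prior arguments, and the reported effects can then be read off the posterior summaries. This is again a sketch under the assumed variable names, not the original script:

fit <- stan_glmer(
  resp ~ logic * belief * crt + (logic * belief | id) + (belief | syll),
  family = binomial(link = "probit"), data = d,
  prior_intercept = normal(0, 4, autoscale = FALSE),   # intercept: Normal(0, 4)
  prior           = normal(0, 16, autoscale = FALSE)   # slope coefficients: Normal(0, 16)
)
summary(fit, probs = c(0.025, 0.975))                  # posterior means and 95% credible intervals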

Characterizing belief bias in syllogistic reasoning: A hierarchical Bayesian meta-analysis of ROC data. Psychonomic Bulletin & Review, 25.

In this paper, we follow up on Heit and Rotello's work by describing three SDT indices designed to disentangle sensitivity and response bias by explicitly estimating the parameters of the underlying distributions of argument strength for each participant.

In the final part of this paper, we then apply these indices to a case study in which the examination of individual differences (cognitive ability) is illuminating: the effect of perceptual fluency on belief bias in syllogistic reasoning. The standard SDT indices for data resulting from binary decisions are d' (sensitivity) and c (response bias). A problem with d' and c is that they entail the assumption of equal variance between the target (valid) and nontarget (invalid) distributions, an assumption that is often violated.

Alternative indices that do not require the equal-variance assumption are d_a and c_a. In order to calculate d_a and c_a, one needs to estimate s, which represents the ratio of the standard deviations of the noise (nontarget) and signal (target) distributions, also referred to as the z-ROC slope.
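For reference, the standard unequal-variance definitions of these indices (see, e.g., Macmillan) are commonly written as

\[
d_a = \sqrt{\frac{2}{1+s^{2}}}\,\bigl[z(H) - s\,z(F)\bigr],
\qquad
c_a = \frac{-\sqrt{2}\,s}{(1+s)\sqrt{1+s^{2}}}\,\bigl[z(H) + z(F)\bigr],
\]

where H and F are the hit and false-alarm rates (here, the endorsement rates for valid and invalid arguments), z is the inverse of the standard normal distribution function, and s is the z-ROC slope. With s = 1 these expressions reduce to the familiar d' and c.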

In the ROC procedure, participants may be instructed to supplement each binary decision with a confidence rating, here on a three-point scale. Combining the binary decisions and the three-point confidence scale yields six response classes (Figure 1): for example, "valid" with high, medium, or low confidence, and "invalid" with low, medium, or high confidence.

Figure 1. Argument strength distributions of valid and invalid problems demonstrating the link between confidence ratings and response criteria.

An ROC curve plots hits against false alarms at each confidence level. Using d_a and c_a, one can turn to a set of indices that are analogous to the traditional indices used to study belief bias but which are better justified given the empirically observed nature of evidence distributions (Dube et al.). Note that to use the following formulae, s needs to be estimated three times: once for the full ROC collapsed across believability, once for the believable condition, and once for the unbelievable condition.
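One plausible way of defining the three indices, consistent with the verbal descriptions in the next paragraph (the exact formulae and sign conventions used in the original may differ), is

\[
\text{SDT-logic} = d_a^{\text{total}},
\qquad
\text{SDT-belief} = c_a^{\text{unbelievable}} - c_a^{\text{believable}},
\qquad
\text{SDT-interaction} = d_a^{\text{believable}} - d_a^{\text{unbelievable}},
\]

with each d_a and c_a computed from the corresponding ROC using the slope estimated for that ROC.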

The SDT-logic index measures overall reasoning sensitivity. The SDT-belief index is the relative difference in response bias between the unbelievable and the believable condition. Higher values indicate a greater tendency to accept believable problems. Finally, the SDT-interaction index indicates the sensitivity difference between believable and unbelievable arguments, or the belief-accuracy effect.

We return to this point in the general discussion. An increasing body of evidence (Klauer et al.) suggests that the traditional indices can be misleading. Simulations have shown that using the traditional indices puts researchers at risk of inflated Type 1 error rates, something which can be avoided by applying formal modeling techniques such as SDT (Heit and Rotello). Using formal modeling procedures for the study of belief bias can be impractical, however, because more advanced experimental designs may entail fitting and comparing models with an untenably large number of parameters.

In contrast, the SDT-indices approach described here reconciles formal SDT modeling with the more classical approach previously offered by the traditional belief bias indices. Using the SDT-indices method is straightforward. First, when conducting a standard belief bias experiment, be sure to collect confidence ratings alongside each binary validity judgment. Note that this does not necessitate the use of a three-point scale: estimates can always be recoded and collapsed afterwards.

Second, combine the validity judgments and the confidence ratings into the required number of bins (six in our case) and calculate conditional frequencies denoting how often each response is made per condition and participant. Third, use one of the available tools to fit the unequal-variance SDT (UVSDT) model to each participant's counts to estimate s_total, s_believable, and s_unbelievable.

Occasionally, for some participants, the model will not produce a reliable fit. In such cases, one option is to assume a z-ROC slope of 1 for those participants (as we do below); alternatively, an average of s across all other participants can be used. Using the formulae outlined above, calculate d_a and c_a for the total, believable, and unbelievable ROCs for each participant. We now demonstrate the use of this technique by investigating the role of perceptual disfluency on belief bias in syllogistic reasoning.
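Before turning to that demonstration, the sketch below illustrates how this final computation could look in R once per-participant hit rates, false-alarm rates, and slope estimates are available. The helper functions are hypothetical (they are not from the original paper), and the sign conventions follow the reconstruction of the indices given earlier.

# Hypothetical helpers, not the original analysis code. Rates of exactly 0 or 1
# should be corrected (e.g., with a log-linear correction) before using qnorm().
da_ca <- function(hit, fa, s) {
  zH <- qnorm(hit)
  zF <- qnorm(fa)
  d_a <- sqrt(2 / (1 + s^2)) * (zH - s * zF)
  c_a <- -sqrt(2) * s / ((1 + s) * sqrt(1 + s^2)) * (zH + zF)
  c(d_a = d_a, c_a = c_a)
}

sdt_indices <- function(total, believable, unbelievable) {
  # each argument is a list(hit = , fa = , s = ) for one participant
  tot <- da_ca(total$hit, total$fa, total$s)
  bel <- da_ca(believable$hit, believable$fa, believable$s)
  unb <- da_ca(unbelievable$hit, unbelievable$fa, unbelievable$s)
  c(SDT_logic       = unname(tot["d_a"]),
    SDT_belief      = unname(unb["c_a"] - bel["c_a"]),   # assumed sign convention
    SDT_interaction = unname(bel["d_a"] - unb["d_a"]))   # assumed sign convention
}

# Example with made-up numbers:
sdt_indices(total        = list(hit = 0.75, fa = 0.40, s = 0.80),
            believable   = list(hit = 0.80, fa = 0.55, s = 0.85),
            unbelievable = list(hit = 0.70, fa = 0.30, s = 0.75))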

Fluency, the ease of processing a stimulus, has been a focus of interest in a variety of cognitive domains. In memory, fluency has been shown to influence both response bias and sensitivity.

Some fluency-related memory illusions seem to be pure bias effects. On the other hand, perceptual disfluency (i.e., reduced ease of processing) has also been found to affect sensitivity. A similar effect of disfluency has been reported in reasoning: Alter et al. found that presenting problems in a harder-to-process format improved performance. However, their finding has been difficult to replicate (Thompson et al.). As Dube et al. have argued, conclusions based on the traditional accuracy indices can be misleading given the actual form of the evidence distributions. The studies of Alter et al. relied on such traditional measures; they also exclusively focused on the effect on accuracy, ignoring any potential effects of fluency on response bias.

This motivated our examination of the effect of disfluency on belief bias in syllogistic reasoning using the alternative SDT indices. Seventy-six undergraduate psychology students from Plymouth University (UK) participated in exchange for course credit.

The experiment was approved by the ethical committee of the Faculty of Science and Environment at Plymouth University. Following Trippas et al., we manipulated Logic (valid vs. invalid) and Belief (believable vs. unbelievable) within participants, and Fluency (fluent vs. disfluent) between participants. In the fluent condition, arguments were presented on an LCD monitor in an easy-to-read font (Courier New, 18 pt).

In the disfluent condition, arguments were presented in a difficult-to-read font (Brush Script MT, 15 pt; cf. Thompson et al.). We measured cognitive ability using the short form of Raven's Advanced Progressive Matrices (APM-SF), which has a maximum score of 12 (Arthur and Day). This is a sound instrument for assessing fluid cognitive ability in a short time frame (Chiesi et al.). Participants were tested on individual computers in small groups no larger than five. After signing a consent form, they were presented with standard deductive reasoning instructions stating:

In this experiment, we are interested in people's reasoning. For each question, you will be given some information that you should assume to be true. Please try to make use of all three confidence response categories.

After four practice trials (one of each item type), participants were presented with the 64 reasoning problems. Upon completion of the reasoning task, the APM-SF was administered, after which participants were debriefed. To investigate the impact of perceptual disfluency on reasoning sensitivity, belief bias, and the effect of beliefs on accuracy, we calculated the SDT-logic, SDT-belief, and SDT-interaction indices. For each participant, we fit the UVSDT model with seven parameters (five criteria, one mean, and one standard deviation, hereafter s) in three different ways.
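For illustration, one way a single fit of this kind could be implemented by maximum likelihood is sketched below. The function, its parameterization, and the assumed ordering of the six response categories (from high-confidence "invalid" to high-confidence "valid") are illustrative assumptions, not the routine actually used.

# Hypothetical maximum-likelihood fit of the seven-parameter UVSDT model to one
# participant's six response counts per item type (not the original fitting code).
fit_uvsdt <- function(valid_counts, invalid_counts) {
  negLL <- function(par) {
    mu    <- par[1]                            # mean of the valid (signal) distribution
    sigma <- exp(par[2])                       # SD of the valid distribution (invalid SD fixed at 1)
    crit  <- cumsum(c(par[3], exp(par[4:7])))  # five ordered response criteria
    p_inv <- diff(c(0, pnorm(crit), 1))                          # invalid-item category probabilities
    p_val <- diff(c(0, pnorm(crit, mean = mu, sd = sigma), 1))   # valid-item category probabilities
    -sum(invalid_counts * log(pmax(p_inv, 1e-12))) -
      sum(valid_counts * log(pmax(p_val, 1e-12)))
  }
  est <- optim(c(1, 0, -1, 0, 0, 0, 0), negLL, method = "BFGS")
  list(mu       = est$par[1],
       sigma    = exp(est$par[2]),
       s        = 1 / exp(est$par[2]),         # z-ROC slope: SD(invalid) / SD(valid)
       criteria = cumsum(c(est$par[3], exp(est$par[4:7]))),
       minus2LL = 2 * est$value)               # minus twice the maximized log-likelihood
}

A G-squared statistic for absolute fit can then be obtained by comparing the minimized value with that of the saturated multinomial model.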

For the SDT-logic index, the model was fit to the ROC collapsed across believability to provide an overall estimate of s_total, allowing us to calculate d_a. Next, we fit the same model separately for the believable condition and the unbelievable condition to estimate s_believable and s_unbelievable. Based on these fits, c_a(believable), c_a(unbelievable), d_a(believable), and d_a(unbelievable) were calculated.

In turn, these values were used to calculate the SDT-belief and SDT-interaction indices using the formulas outlined above.

We inspected absolute model fits in terms of G² to ensure that our analyses were not affected by artifacts produced by ill-fitting models. For participants whose fits were not reliable, we calculated the statistics assuming a z-ROC slope s of 1 (i.e., equal variances, in which case d_a reduces to d' and c_a to c). We compared the three indices with 0 using a one-sample t-test to investigate whether participants reasoned above chance, whether they showed the belief bias effect, and whether beliefs affected accuracy (see, for instance, Evans and Curtis-Holmes, for a similar approach using traditional belief bias indices).

Finally, we investigated whether fluency affected the belief-accuracy effect by analysing the SDT-interaction index. Means and standard errors can be found in Table 1. To ensure that our random assignment was successful, we compared cognitive ability between conditions using a two-sample t-test. To test whether the effect of fluency on accuracy was moderated by cognitive ability, we performed a 2 (condition: fluent vs. disfluent) × 2 (cognitive ability: higher vs. lower) analysis. Follow-up tests comparing the higher and lower ability groups revealed the following pattern.

Although SDT has taken a prominent role in the development of theoretical accounts in domains such as perception, categorization, and memory, its application to reasoning has been a fairly recent development. As has been previously argued in the context of belief bias, failure to specify assumptions about the nature of evidence can lead to interpretations of data that are misleading or incorrect (Klauer et al.). The benefit of a formal model like SDT is in its specification of assumptions.

The provision of analytic tools that allow the separation of sensitivity and response bias is an added advantage when it comes to the study of phenomena like fluency effects, which are known in the domain of memory to potentially impact both.

Our findings from the manipulation of fluency in a reasoning task differed from those of two previous studies: we found that disfluency led to a reduction in reasoning sensitivity, in contrast to the improvement reported by Alter et al. and the failures to replicate that improvement reported by Thompson et al. While we can only speculate about the extent to which differences in measurement tools may have contributed to the inconsistent findings, it is important to note that the SDT indices used here do not suffer from the theoretical shortcomings noted for the traditional indices.

The selective scrutiny account claims that people focus on the conclusion and only engage in logical processing if the conclusion is found to be unbelievable, whereas the misinterpreted necessity account claims that subjects misunderstand what is meant by logical necessity and respond on the basis of believability when indeterminate syllogisms are presented.

Experiments 1 and 2 compared the predictions of these two theories by examining whether the interaction would disappear if only determinate syllogisms were used. It did, thus providing strong support for the misinterpreted necessity explanation.


