A Misuse of IQ Scores: Using the Dual Discrepancy/Consistency Model for Identifying Specific Learning Disabilities
Abstract
1. Introduction
1.1. Identification: Unexpected Underachievement
1.2. Pattern of Strengths and Weaknesses
The child exhibits a pattern of strengths and weaknesses in performance, achievement, or both, relative to age, state-approved grade-level standards, or intellectual development, which is determined by the group to be relevant to the identification of a specific learning disability, using appropriate assessments, consistent with §§ 300.304 and 300.305. (34 C.F.R. § 300.309(a)(2)(ii))
2. Dual Discrepancy/Consistency
2.1. Level 1. Weaknesses or Deficits in One or More Areas of Academic Achievement
2.2. Level 2. Exclusionary Factors
2.3. Level 3. Weaknesses or Deficits in One or More Intelligence Attributes
A particularly salient aspect of the (DD/C) operational definition of SLD is that a weakness or deficit in one or more cognitive abilities or processes underlies difficulties in academic performance and skill development. Because research demonstrates that the relationship between the cognitive dysfunction and the manifest learning problems are causal in nature, data analysis at this level should seek to ensure that identified weaknesses or deficits on cognitive and neuropsychology tests bear an empirical relationship to those weaknesses or deficits on achievement tests identified previously.[38] (pp. 252–253)
2.4. Level 4. Pattern of Strengths and Weaknesses Meeting the Dual Discrepancy/Consistency Criteria
whether the pattern of results is marked by an empirical or ecologically valid relationship between the identified cognitive and academic weaknesses, whether the individual displays generally average ability to think and reason, whether the individual’s learning difficulty is domain specific, and whether the individual’s underachievement is unexpected.[38] (p. 254)
- The composite general ability score (g-Value) is in the average range or better. The g-Value is calculated using a proprietary algorithm in the X-BASS program.
- ≥1 norm-referenced IQ test score representing a CHC broad ability shows a normative weakness.
- There is a “statistically significant” and “rare” difference between the Facilitating Cognitive Composite (FCC) and any IQ test scores that demonstrated weaknesses (i.e., intelligence attribute weaknesses; see Note 8). The FCC is another proprietary composite score calculated by the X-BASS program.
- ≥1 norm-referenced score on an academic achievement test shows a normative weakness.
- There is a “statistically significant” and “rare” difference between the FCC and any achievement test score that demonstrated weaknesses.
- The academic weaknesses are consistent with the weaknesses in intelligence attributes (“below-average cognitive aptitude-achievement consistency”). A schematic sketch of how these criteria chain together follows this list.
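To make the decision logic above concrete, the following is a minimal Python sketch of how the Level 4 criteria chain together. It is not the X-BASS algorithm: the g-Value and the FCC are proprietary composites, so simple means of the entered scores stand in for them here, and the cutoffs (85 for a normative weakness, 90 for “average range or better”, a 12-point gap treated as “statistically significant” and “rare”) are illustrative assumptions rather than values published by the DD/C authors.

```python
from statistics import mean

# Illustrative cutoffs only; these are assumptions, not DD/C-published values.
WEAKNESS_CUTOFF = 85    # normative weakness on the IQ-scale metric (M = 100, SD = 15)
AVERAGE_CUTOFF = 90     # assumed lower bound of "average range or better"
RARE_DIFFERENCE = 12    # gap treated here as "statistically significant" and "rare"


def ddc_level4(cognitive, achievement):
    """Apply the six Level 4 criteria to dicts of {attribute: standard score}."""
    cog_weak = {k: v for k, v in cognitive.items() if v < WEAKNESS_CUTOFF}
    ach_weak = {k: v for k, v in achievement.items() if v < WEAKNESS_CUTOFF}

    g_value = mean(cognitive.values())               # stand-in for the proprietary g-Value
    strengths = [v for v in cognitive.values() if v >= WEAKNESS_CUTOFF]
    fcc = mean(strengths) if strengths else g_value  # stand-in for the proprietary FCC

    return all([
        g_value >= AVERAGE_CUTOFF,                                    # general ability average or better
        bool(cog_weak),                                               # >=1 cognitive normative weakness
        all(fcc - v >= RARE_DIFFERENCE for v in cog_weak.values()),   # FCC vs. cognitive weaknesses
        bool(ach_weak),                                               # >=1 achievement normative weakness
        all(fcc - v >= RARE_DIFFERENCE for v in ach_weak.values()),   # FCC vs. achievement weaknesses
        bool(cog_weak and ach_weak) and                               # cognitive-achievement consistency
        abs(min(cog_weak.values()) - min(ach_weak.values())) < RARE_DIFFERENCE,
    ])


# Hypothetical scores (IQ-scale metric) for illustration only.
cognitive = {"Gc": 102, "Gf": 98, "Gwm": 79, "Gs": 95, "Glr": 100}
achievement = {"Basic Reading": 81, "Math Calculation": 97}
print(ddc_level4(cognitive, achievement))  # True under these illustrative cutoffs
```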
3. Research on the Dual Discrepancy/Consistency Method
3.1. Specific Learning Disability Research Desiderata
3.2. Arguments Used to Support the Dual Discrepancy/Consistency Method
3.2.1. Anecdotes
3.2.2. Reliance on Cattell–Horn–Carroll
3.2.3. Appeal to Legal Permissiveness
The Department [of Education] does not believe that an assessment of psychological or cognitive processing should be required in determining whether a child has an SLD. There is no current evidence that such assessments are necessary or sufficient for identifying SLD. Furthermore, in many cases, these assessments have not been used to make appropriate intervention decisions. However, § 300.309(a)(2)(ii) permits, but does not require, consideration of a pattern of strengths or weaknesses, or both, relative to intellectual development, if the evaluation group considers that information relevant to an identification of SLD. In many cases, though, assessments of cognitive processes simply add to the testing burden and do not contribute to interventions (emphasis added).[29] (p. 46651)
3.2.4. Correlations Thought to Imply Causality
A particularly salient aspect of the [DD/C] operational definition of SLD is that a weakness or deficit in one or more cognitive abilities or processes underlies difficulties in academic performance and skill development. Because research demonstrates that the relationship between the cognitive dysfunction and the manifest learning problems are causal in nature …data analysis at this level [Level 3] should seek to ensure that identified weaknesses or deficits on cognitive and neuropsychology tests bear an empirical relationship to those weaknesses or deficits on achievement tests identified previously.[38] (pp. 252–253, emphasis added)
3.2.5. Studies That Answer the Wrong Question
4. Problems Associated with the DD/C Method
4.1. Untenable Assumptions
4.1.1. Measurement
1. If scores from nationally-normed and standardized IQ and achievement tests are made to have the same arbitrary mean and SD values, then scores from these tests are directly comparable.
2. If the scores from nationally-normed IQ and achievement tests are standardized and put on a norm-referenced scale, then those scores have interval measurement properties.
3. Because of assumptions 1 and 2, the observed differences in norm-referenced scores between nationally-normed IQ and achievement tests map directly onto the differences in the attributes represented by the test scores.
If $a > b$ and $b > c$ (for any ordered series where $a \neq b$ and $b \neq c$), then $a > c$ [78].
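A worked illustration of why assumptions 2 and 3 do not follow from norm-referencing: IQ-scale scores are built by passing percentile ranks through the normal curve, so equal score differences correspond to very different amounts of percentile separation depending on where they fall on the scale (compare the percentile-to-IQ correspondences tabled in the Appendix). The Python sketch below uses two hypothetical 15-point score pairs; nothing in the scaling procedure shows that these equal scale differences reflect equal differences in the underlying attribute.

```python
from statistics import NormalDist

iq_scale = NormalDist(mu=100, sigma=15)  # standard IQ-scale metric

# Two score pairs separated by the same 15 IQ-scale points (hypothetical values).
for lo, hi in [(85, 100), (130, 145)]:
    pct_lo, pct_hi = (100 * iq_scale.cdf(s) for s in (lo, hi))
    print(f"IQ {lo} -> {hi}: 15 points spans "
          f"{pct_hi - pct_lo:.1f} percentile ranks ({pct_lo:.1f} -> {pct_hi:.1f})")

# IQ 85 -> 100: 15 points spans 34.1 percentile ranks (15.9 -> 50.0)
# IQ 130 -> 145: 15 points spans 2.1 percentile ranks (97.7 -> 99.9)
```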
4.1.2. Score Precision
4.1.3. Score Exchangeability
4.1.4. Functional
4.2. Experiments Not Supporting the Dual Discrepancy/Consistency Method
4.3. Cross-Battery Assessment Software System (X-BASS)
4.4. Evidence-Based Assessment
5. Conclusions
Author Contributions
Funding
Conflicts of Interest
Abbreviations
| Abbreviation | Meaning |
|---|---|
| DD/C | Dual discrepancy/consistency |
| PSW | Pattern of strengths and weaknesses |
| SLD | Specific learning disability |
| XBA | Cross-battery assessment |
Appendix A
| DD/C Method Decision | True Status: Positive (SLD) | True Status: Negative (No SLD) |
|---|---|---|
| Positive (SLD) | True Positive (Hits) | False Positive (False Alarms) |
| Negative (No SLD) | False Negative (Misses) | True Negative (Correct Rejections) |
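As a sketch of how a table like the one above feeds the classification-agreement analyses discussed in the text, the code below computes the standard diagnostic-accuracy indices from hypothetical cell counts. The counts assume 1,000 students, a 10% SLD base rate, and 80% sensitivity and specificity; all of these numbers are illustrative, not results from any cited study.

```python
def diagnostic_indices(tp, fp, fn, tn):
    """Indices implied by the 2 x 2 decision table (hits, false alarms, misses, correct rejections)."""
    return {
        "sensitivity": tp / (tp + fn),   # P(positive decision | true SLD)
        "specificity": tn / (tn + fp),   # P(negative decision | no SLD)
        "ppv": tp / (tp + fp),           # P(true SLD | positive decision)
        "npv": tn / (tn + fn),           # P(no SLD | negative decision)
        "base rate": (tp + fn) / (tp + fp + fn + tn),
    }

# Hypothetical counts: 1,000 students, 10% base rate, 80% sensitivity, 80% specificity.
counts = dict(tp=80, fn=20, fp=180, tn=720)
for name, value in diagnostic_indices(**counts).items():
    print(f"{name}: {value:.2f}")
# Even with decent sensitivity and specificity, PPV is only about 0.31 at this base
# rate: most positive decisions are false alarms.
```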
References
1. Binet, A.; Simon, T. Méthodes nouvelles pour le diagnostic du niveau intellectuel des anormaux. L’année Psychol. 1904, 11, 191–244.
2. Zenderland, L. Measuring Minds: Henry Herbert Goddard and the Origins of American Intelligence Testing; Cambridge University Press: New York, NY, USA, 1998.
3. Wechsler, D. The Measurement of Adult Intelligence; Williams & Wilkins: Baltimore, MD, USA, 1939.
4. Rapaport, D.; Gill, M.; Schafer, R. Diagnostic Psychological Testing; The Year Book Publishers: Chicago, IL, USA, 1945; Volume 1.
5. Jastak, J. Problems of psychometric scatter analysis. Psychol. Bull. 1949, 46, 177–197.
6. Frank, G. The Wechsler Enterprise; Pergamon: New York, NY, USA, 1983.
7. Cattell, R.B. The principles of experimental design and analysis in relation to theory building. In Handbook of Multivariate Experimental Psychology, 2nd ed.; Nesselroade, J.R., Cattell, R.B., Eds.; Springer: Boston, MA, USA, 1988; pp. 21–67.
8. Beaujean, A.A.; Benson, N.F. The one and the many: Enduring legacies of Spearman and Thurstone on intelligence test score interpretation. Appl. Meas. Educ. 2018, accepted.
9. Watkins, M.W. IQ subtest analysis: Clinical acumen or clinical illusion? Sci. Rev. Ment. Health Pract. 2003, 2, 118–141.
10. Matarazzo, J.D.; Prifitera, A. Subtest scatter and premorbid intelligence: Lessons from the WAIS-R standardization sample. Psychol. Assess. J. Consult. Clin. Psychol. 1989, 1, 186–191.
11. Mann, L.; Phillips, W.A. Fractional practices in special education: A critique. Except. Child. 1967, 33, 311–317.
12. Bannatyne, A. Diagnosing learning disabilities and writing remedial prescriptions. J. Learn. Disabil. 1968, 1, 242–249.
13. Kaufman, A.S. Intelligent Testing with the WISC-III; Wiley: New York, NY, USA, 1994.
14. Smith, C.B.; Watkins, M.W. Diagnostic utility of the Bannatyne WISC-III pattern. Learn. Disabil. Res. Pract. 2004, 19, 49–56.
15. Watkins, M.W.; Kush, J.C.; Glutting, J.J. Prevalence and diagnostic utility of the WISC-III SCAD profile among children with disabilities. Sch. Psychol. Q. 1997, 12, 235–248.
16. Watkins, M.W.; Kush, J.C.; Glutting, J.J. Discriminant and predictive validity of the WISC-III ACID profile among children with learning disabilities. Psychol. Sch. 1998, 34, 309–319.
17. Flanagan, D.P.; Alfonso, V.C.; Sy, M.C.; Mascolo, J.T.; McDonough, E.M.; Ortiz, S.O. Dual discrepancy/consistency operational definition of SLD: Integrating multiple data sources and multiple data-gathering methods. In Essentials of Specific Learning Disability Identification, 2nd ed.; Alfonso, V.C., Flanagan, D.P., Eds.; Wiley: Hoboken, NJ, USA, 2018; pp. 329–430.
18. Kranzler, J.H.; Benson, N.F.; Maki, K.E.; Floyd, R.G.; Eckert, T.L.; Fefer, S.A. National Survey of SLD Identification Practices in School Psychology. Manuscript in preparation, 2018.
19. Fletcher, J.M.; Lyon, G.R.; Fuchs, L.; Barnes, M. Learning Disabilities: From Identification to Intervention; Guilford: New York, NY, USA, 2007.
20. Hammill, D.D. A brief look at the learning disabilities movement in the United States. J. Learn. Disabil. 1993, 26, 295–310.
21. Zumeta, R.O.; Zirkel, P.A.; Danielson, L. Identifying specific learning disabilities: Legislation, regulation, and court decisions. Top. Lang. Disord. 2014, 34, 8–24.
22. Education of handicapped children, Assistance to states, Proposed rulemaking, 41 Fed. Available online: http://cdn.loc.gov/service/ll/fedreg/fr041/fr041230/fr041230.pdf (accessed on 1 August 2018).
23. Kirk, S.A. Behavioral diagnosis and remediation of learning disabilities. In Proceedings of the First Annual Conference on Exploration into the Problems of the Perceptually Handicapped; Fund for Perceptually Handicapped Children: Evanston, IL, USA, 1963; pp. 1–23.
24. Kavale, K.A.; Forness, S.R. What definitions of learning disability say and don’t say: A critical analysis. J. Learn. Disabil. 2000, 33, 239–256.
25. Scruggs, T.E.; Mastropieri, M.A. On babies and bathwater: Addressing the problems of identification of learning disabilities. Learn. Disabil. Q. 2002, 25, 155–168.
26. Reynolds, C.R. Critical measurement issues in learning disabilities. J. Spec. Educ. 1984, 18, 451–476.
27. Stuebing, K.K.; Fletcher, J.M.; LeDoux, J.M.; Lyon, G.R.; Shaywitz, S.E.; Shaywitz, B.A. Validity of IQ-discrepancy classifications of reading disabilities: A meta-analysis. Am. Educ. Res. J. 2002, 39, 469–518.
28. Stanovich, K.E. Discrepancy definitions of reading disability: Has intelligence led us astray? Read. Res. Q. 1991, 26, 7–29.
29. Assistance to states for the education of children with disabilities and preschool grants for children with disabilities. Available online: https://www.gpo.gov/fdsys/pkg/FR-2006-08-14/pdf/06-6656.pdf (accessed on 1 August 2018).
30. Fletcher, J.M. Classification and identification of learning disabilities. In Learning about Learning Disabilities; Wong, B., Butler, D.L., Eds.; Academic Press: Waltham, MA, USA, 2012; pp. 1–25.
31. Bradley, R.; Danielson, L.; Hallahan, D.P. (Eds.) Identification of Learning Disabilities: Research to Practice; Routledge: New York, NY, USA, 2002.
32. Texas Education Agency. Response to Intervention and Learning Disability Eligibility; Technical Report; Texas Education Agency: Austin, TX, USA, 1998.
33. Hale, J.B.; Fiorello, C.A. School Neuropsychology: A Practitioner’s Handbook; Guilford Press: New York, NY, USA, 2004.
34. Naglieri, J.A. The discrepancy/consistency approach to SLD identification using the PASS theory. In Essentials of Specific Learning Disability Identification; Flanagan, D.P., Alfonso, V.C., Eds.; Wiley: Hoboken, NJ, USA, 2011; pp. 145–172.
35. Schultz, E.K.; Stephens, T.L. Core-selective evaluation process: An efficient & comprehensive approach to identify students with SLD using the WJ IV. J. Texas Educ. Diagn. Assoc. 2015, 44, 5–12.
36. Stuebing, K.K.; Fletcher, J.M.; Branum-Martin, L.; Francis, D.J. Evaluation of the technical adequacy of three methods for identifying specific learning disabilities based on cognitive discrepancies. Sch. Psychol. Rev. 2012, 41, 3–22.
37. McGill, R.J.; Busse, R.T. When theory trumps science: A critique of the PSW model for SLD identification. Contemp. Sch. Psychol. 2016, 1, 1–9.
38. Flanagan, D.P.; Ortiz, S.O.; Alfonso, V.C. Essentials of Cross-Battery Assessment, 3rd ed.; Wiley: Hoboken, NJ, USA, 2013.
39. Schneider, W.J.; McGrew, K.S. The Cattell-Horn-Carroll model of intelligence. In Contemporary Intellectual Assessment, 4th ed.; Flanagan, D.P., McDonough, E.M., Eds.; Guilford: New York, NY, USA, in press.
40. Schrank, F.A.; McGrew, K.S.; Mather, N. Woodcock-Johnson IV; Riverside: Rolling Meadows, IL, USA, 2014.
41. Flanagan, D.P.; Ortiz, S.O.; Alfonso, V.; Mascolo, J.T. The Achievement Test Desk Reference (ATDR): Comprehensive Assessment and Learning Disabilities; Allyn & Bacon: Boston, MA, USA, 2002.
42. Flanagan, D.P.; Schneider, W.J. Cross-battery assessment? XBA PSW? A case of mistaken identity: A commentary on Kranzler and colleagues’ “Classification agreement analysis of Cross-Battery Assessment in the identification of specific learning disorders in children and youth”. Int. J. Sch. Educ. Psychol. 2016, 4, 137–145.
43. Flanagan, D.P.; Alfonso, V.C.; Schneider, W.J.; McDonough, E.M. Using WISC-V and PSW for SLD identification amidst the controversy. In Mini-Skills Session Presented at the Annual Meeting; National Association of School Psychologists: Bethesda, MD, USA, 2018.
44. Flanagan, D.P.; Alfonso, V.C. The SLD pattern of strengths and weaknesses and the WJ-IV. In Mini-Skills Session Presented at the Annual Meeting; National Association of School Psychologists: Bethesda, MD, USA, 2015.
45. Flanagan, D.P.; Ortiz, S.O.; Alfonso, V.C. Cross-Battery Assessment Software System (X-BASS) (Version 2.0). Available online: https://www.wiley.com/WileyCDA/Section/id-829676.html (accessed on 1 August 2018).
46. Cohen, J. The earth is round (p < 0.05). Am. Psychol. 1994, 49, 997–1003.
47. McFall, R.M.; Treat, T.A. Quantifying the information value of clinical assessments with signal detection theory. Annu. Rev. Psychol. 1999, 50, 215–241.
48. Choi, O.S. What neuroscience can and cannot answer. J. Am. Acad. Psychiatry Law Online 2017, 45, 278–285.
49. Hunsley, J.; Bailey, J.M. The clinical utility of the Rorschach: Unfulfilled promises and an uncertain future. Psychol. Assess. 1999, 11, 266–277.
50. Kraemer, H.C. Evaluating Medical Tests: Objective and Quantitative Guidelines; Sage: Newbury Park, CA, USA, 1992.
51. Scurfield, B.K. Multiple-event forced-choice tasks in the theory of signal detectability. J. Math. Psychol. 1996, 40, 253–269.
52. Altarac, M.; Saroha, E. Lifetime prevalence of learning disability among US children. Pediatrics 2007, 119, S77–S83.
53. Beaujean, A.A. Simulating data for clinical research: A tutorial. J. Psychoeduc. Assess. 2018, 36, 7–20.
54. Flanagan, D.P.; Fiorello, C.A.; Ortiz, S.O. Enhancing practice through application of Cattell-Horn-Carroll theory and research: A “third method” approach to specific learning disability identification. Psychol. Sch. 2010, 47, 739–760.
55. Fletcher, J.M.; Miciak, J. Comprehensive cognitive assessments are not necessary for the identification and treatment of learning disabilities. Arch. Clin. Neuropsychol. 2017, 32, 2–7.
56. Kranzler, J.H.; Floyd, R.G.; Benson, N.; Zaboski, B.; Thibodaux, L. Cross-Battery Assessment pattern of strengths and weaknesses approach to the identification of specific learning disorders: Evidence-based practice or pseudoscience? Int. J. Sch. Educ. Psychol. 2016, 4, 146–157.
57. Boccaccini, M.T.; Marcus, D.; Murrie, D.C. Allegiance effects in clinical psychology research and practice. In Psychological Science under Scrutiny: Recent Challenges and Proposed Solutions; Lilienfeld, S.O., Waldman, I.D., Eds.; Wiley: Malden, MA, USA, 2017; pp. 323–339.
58. Sax, J.K. Financial conflicts of interest in science. Ann. Health Law 2012, 21, 291–327.
59. Damer, T.E. Attacking Faulty Reasoning: A Practical Guide to Fallacy-Free Reasoning, 6th ed.; Wadsworth Cengage Learning: Belmont, CA, USA, 2009.
60. Consortium for Evidence-Based Early Intervention Practices. Response to the Learning Disabilities Association of America (LDA) White Paper on Specific Learning Disabilities (SLD) Identification. Available online: https://ldaamerica.org/wp-content/uploads/2013/10/LDA-White-Paper-on-IDEA-Evaluation-Criteria-for-SLD.pdf (accessed on 1 August 2018).
61. Yin, R.K. Case study methods. In APA Handbook of Research Methods in Psychology, Vol. 2: Research Designs: Quantitative, Qualitative, Neuropsychological, and Biological; Cooper, H., Camic, P.M., Long, D.L., Panter, A.T., Rindskopf, D., Sher, K.J., Eds.; American Psychological Association: Washington, DC, USA, 2012; pp. 141–155.
62. Bakan, D. The general and the aggregate: A methodological distinction. Percept. Motor Skills 1955, 5, 211–212.
63. Jacobs, K.E.; Flanagan, D.P.; Alfonso, V.C. Evidence-based assessment and intervention for specific learning disability in school psychology. In Handbook of Australian School Psychology: Integrating International Research, Practice, and Policy; Thielking, M., Terjesen, M.D., Eds.; Springer: Cham, Switzerland, 2017; pp. 145–171.
64. Dixon, S.G.; Eusebio, E.C.; Turton, W.J.; Wright, P.W.D.; Hale, J.B. Forest Grove School District v. T.A. Supreme Court Case: Implications for school psychology practice. J. Psychoeduc. Assess. 2010, 29, 103–113.
65. Hale, J.B.; Flanagan, D.P.; Naglieri, J.A. Alternative research-based methods for IDEA 2004 identification of children with specific learning disabilities. NASP Commun. 2008, 36, 1–17.
66. Zirkel, P.A. The Hale position for a “Third Method” for specific learning disabilities identification: A legal analysis. Learn. Disabil. Q. 2013, 36, 93–96.
67. Jensen, A.R. The g factor and the design of education. In Intelligence, Instruction, and Assessment: Theory into Practice; Sternberg, R.J., Williams, W.M., Eds.; Erlbaum: Mahwah, NJ, USA, 1998; pp. 111–131.
68. Horn, J.L. Models of intelligence. In Intelligence, Measurement, Theory and Public Policy; Linn, R.L., Ed.; University of Illinois Press: Urbana, IL, USA, 1989; pp. 29–73.
69. Watkins, M.W.; Lei, P.W.; Canivez, G.L. Psychometric intelligence and achievement: A cross-lagged panel analysis. Intelligence 2007, 35, 59–68.
70. Pearl, J. Causality: Models, Reasoning, and Inference, 2nd ed.; Cambridge University Press: New York, NY, USA, 2009.
71. Molenaar, P.C.M.; Campbell, C.G. The new person-specific paradigm in psychology. Curr. Dir. Psychol. Sci. 2009, 18, 112–117.
72. Vautier, S.; Lacot, É.; Veldhuis, M. Puzzle-solving in psychology: The neo-Galtonian vs. nomothetic research focuses. New Ideas Psychol. 2014, 33, 46–53.
73. Feifer, S.; Nader, R.G.; Flanagan, D.; Fitzer, K.; Hicks, K. Identifying specific reading disability subtypes for effective educational remediation. Learn. Disabil. Multidiscip. J. 2014, 20, 18–30.
74. Pedhazur, E.J. Multiple Regression in Behavioral Research, 3rd ed.; Harcourt Brace: Fort Worth, TX, USA, 1997.
75. Jensen, A.R. Psychometric g and achievement. In Policy Perspectives on Educational Testing; Gifford, B.R., Ed.; Kluwer Academic Publishers: New York, NY, USA, 1993; pp. 117–227.
76. Coombs, C.H.; Dawes, R.M.; Tversky, A. Mathematical Psychology: An Elementary Introduction; Prentice-Hall: Oxford, UK, 1970.
77. Michell, J. An Introduction to the Logic of Psychological Measurement; Erlbaum: Hillsdale, NJ, USA, 1990.
78. Michell, J. Quantitative science and the definition of measurement in psychology. Br. J. Psychol. 1997, 88, 355–383.
79. Peatman, J.G. On the meaning of a test score in psychological measurement. Am. J. Orthopsychiatr. 1939, 9, 23–47.
80. Frisby, C.L. General cognitive ability, learning, and instruction. In Meeting the Psychoeducational Needs of Minority Students; Wiley: Hoboken, NJ, USA, 2013.
81. Joint Committee for Guides in Metrology. JCGM 200:2012. International Vocabulary of Metrology—Basic and General Concepts and Associated Terms (VIM), 3rd ed.; Joint Committee for Guides in Metrology: Sèvres, France, 2012.
82. International Committee for Weights and Measures. The International System of Units (SI), 8th ed.; International Committee for Weights and Measures: Paris, France, 2006.
83. Deviation, n. In Oxford English Dictionary. Available online: http://www.oed.com/view/Entry/51460 (accessed on 1 August 2018).
84. Michell, J. Alfred Binet and the concept of heterogeneous orders. Front. Psychol. 2012, 3, 1–8.
85. Michell, J. The psychometricians’ fallacy: Too clever by half? Br. J. Math. Stat. Psychol. 2009, 62, 41–55.
86. Conover, W.J.; Iman, R.L. Rank transformations as a bridge between parametric and nonparametric statistics. Am. Stat. 1981, 35, 124–129.
87. Roberts, F.S. Measurement Theory: With Applications to Decision Making, Utility, and the Social Sciences; Cambridge University Press: New York, NY, USA, 1985.
88. Behrens, J.T.; Yu, C.H. Exploratory data analysis. In Handbook of Psychology, Vol. 2: Research Methods in Psychology; Schinka, J.A., Velicer, W.F., Eds.; Wiley: Hoboken, NJ, USA, 2003; pp. 33–64.
89. Shah, P.; Michal, A.; Ibrahim, A.; Rhodes, R.; Rodriguez, F. What makes everyday scientific reasoning so challenging. In Psychology of Learning and Motivation; Brian, H.R., Ed.; Academic Press: San Diego, CA, USA, 2017; Volume 66, pp. 251–299.
90. Wilcox, G.; Schroeder, M. What comes before report writing? Attending to clinical reasoning and thinking errors in school psychology. J. Psychoeduc. Assess. 2015, 33, 652–661.
91. Grove, W.M.; Meehl, P.E. Comparative efficiency of informal (subjective, impressionistic) and formal (mechanical, algorithmic) prediction procedures: The clinical-statistical controversy. Psychol. Public Policy Law 1996, 2, 293–323.
92. Dorans, N.J. Distinctions Among Classes of Linkages; Technical Report RN-11; The College Board: New York, NY, USA, 2000.
93. Johnson, H.M. Some fallacies underlying the use of psychological ‘tests’. Psychol. Rev. 1928, 35, 328–337.
94. Beaujean, A.A.; Benson, N.F. Theoretically-consistent cognitive ability test development and score interpretation. Contemp. Sch. Psychol. 2018, in press.
95. Carroll, J.B. Human Cognitive Abilities: A Survey of Factor-Analytic Studies; Cambridge University Press: New York, NY, USA, 1993.
96. Cepeda, N.J.; Blackwell, K.A.; Munakata, Y. Speed isn’t everything: Complex processing speed measures mask individual differences and developmental changes in executive control. Dev. Sci. 2013, 16, 269–286.
97. Barrett, P. What if there were no psychometrics?: Constructs, complexity, and measurement. J. Personal. Assess. 2005, 85, 134–140.
98. Borsboom, D.; Mellenbergh, G.J.; van Heerden, J. The concept of validity. Psychol. Rev. 2004, 111, 1061–1071.
99. Floyd, R.G.; Bergeron, R.; McCormack, A.C.; Anderson, J.L.; Hargrove-Owens, G.L. Are Cattell-Horn-Carroll (CHC) broad ability composite scores exchangeable across batteries? Sch. Psychol. Rev. 2005, 34, 329–357.
100. Miciak, J.; Taylor, W.P.; Denton, C.A.; Fletcher, J.M. The effect of achievement test selection on identification of learning disabilities within a patterns of strengths and weaknesses framework. Sch. Psychol. Q. 2015, 30, 321–334.
101. Hoskyn, M.; Swanson, H.L. Cognitive processing of low achievers and children with reading disabilities: A selective meta-analytic review of the published literature. Sch. Psychol. Rev. 2000, 29, 102–119.
102. Stafford, A.L. Base Rates of Cognitive and Academic Weaknesses. Ph.D. Thesis, University of South Carolina, Columbia, SC, USA, 2016.
103. Miciak, J.; Fletcher, J.M.; Stuebing, K.K.; Vaughn, S.; Tolar, T.D. Patterns of cognitive strengths and weaknesses: Identification rates, agreement, and validity for learning disabilities identification. Sch. Psychol. Q. 2014, 29, 27–37.
104. Kranzler, J.H.; Floyd, R.G.; Benson, N.; Zaboski, B.; Thibodaux, L. Classification agreement analysis of Cross-Battery Assessment in the identification of specific learning disorders in children and youth. Int. J. Sch. Educ. Psychol. 2016, 4, 124–136.
105. Kranzler, J.H.; Gilbert, K.; Robert, C.R.; Floyd, R.G.; Benson, N.F. Diagnostic utility of the XBA PSW approach to SLD identification: Replication and extension with the Woodcock-Johnson IV. Presented at the Annual Meeting of the National Association of School Psychologists, Washington, DC, USA, 13–16 February 2018.
106. Miciak, J.; Taylor, W.P.; Stuebing, K.K.; Fletcher, J.M. Simulation of LD identification accuracy using a pattern of processing strengths and weaknesses method with multiple measures. J. Psychoeduc. Assess. 2018, 36, 21–33.
107. Alfonso, V.C. How X-BASS can improve assessment for SLD identification. Presented at the Annual Meeting of the New York Association of School Psychologists, New York, NY, USA, 19 October 2017.
108. Flanagan, D.P. Case Study Applications of the WISC-V in Cross-Battery Assessment and SLD Identification Using X-BASS. [Video Webinar]. Available online: https://www.pearsonclinical.com/events/webinars/2017/case-study-applications-of-the-wisc-v-in-cross-battery-assessment-and-sld-identification-using-x-bass-092917.html (accessed on 29 September 2017).
109. McGill, R.J.; Styck, K.M.; Palomares, R.S.; Hass, M.R. Critical issues in specific learning disability identification: What we need to know about the PSW model. Learn. Disabil. Q. 2016, 39, 159–170.
110. Hunsley, J.; Mash, E.J. Evidence-based assessment. In The Oxford Handbook of Clinical Psychology; Barlow, D.H., Ed.; Oxford University Press: New York, NY, USA, 2011; pp. 76–97.
111. Watkins, M.W.; Youngstrom, E.A.; Glutting, J.J. Some cautions concerning cross-battery assessment. NASP Commun. 2002, 30, 16–20.
112. Williams, J.; Miciak, J. Adoption costs associated with processing strengths and weaknesses methods for learning disabilities identification. Sch. Psychol. Forum Res. Pract. 2018, 12, 17–29.
113. Hayes, S.C.; Nelson, R.O.; Jarrett, R.B. The treatment utility of assessment: A functional approach to evaluating assessment quality. Am. Psychol. 1987, 42, 963–974.
114. Stuebing, K.K.; Barth, A.E.; Trahan, L.H.; Reddy, R.R.; Miciak, J.; Fletcher, J.M. Are child cognitive characteristics strong predictors of responses to intervention? A meta-analysis. Rev. Educ. Res. 2015, 85, 395–429.
115. Burns, M.K.; Petersen-Brown, S.; Haegele, K.; Rodriguez, M.; Schmitt, B.; Cooper, M.; Clayton, K.; Hutcheson, S.; Conner, C.; Hosp, J.; et al. Meta-analysis of academic interventions derived from neuropsychological data. Sch. Psychol. Q. 2016, 31, 28–42.
116. Elliott, J.G.; Resing, W. Can intelligence testing inform educational intervention for children with reading disability? J. Intell. 2015, 3, 137–157.
117. Kearns, D.M.; Fuchs, D. Does cognitively focused instruction improve the academic performance of low-achieving students? Except. Child. 2013, 79, 263–290.
118. VanDerHeyden, A.M. Why do school psychologists cling to ineffective practices? Let’s do what works. Sch. Psychol. Forum Res. Pract. 2018, 12, 44–52.
119. Gambrill, E. Critical Thinking in Clinical Practice: Improving the Quality of Judgments and Decisions, 3rd ed.; Wiley: Hoboken, NJ, USA, 2012.
120. Lilienfeld, S.O.; Wood, J.M.; Garb, H.N. Why questionable psychological tests remain popular. Sci. Rev. Altern. Med. 2006, 10, 6–15.
121. Schneider, W.J.; Kaufman, A.S. Let’s not do away with comprehensive cognitive assessments just yet. Arch. Clin. Neuropsychol. 2017, 32, 8–20.
122. Misuse, n. In Oxford English Dictionary. Available online: http://www.oed.com/view/Entry/120189 (accessed on 1 August 2018).
123. Schmidt, F.L.; Oh, I.S.; Shaffer, J.A. The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 100 years of research findings. Technical Report; University of Iowa: Iowa City, IA, USA, 2016. Available online: https://ssrn.com/abstract=2853669 (accessed on 1 August 2018).
124. Kehle, T.J.; Bray, M.A. Individual differences. In The Oxford Handbook of School Psychology; Bray, M.A., Kehle, T.J., Eds.; Oxford University Press: New York, NY, USA, 2011; pp. 63–78.
1. We use the term intelligence throughout this article to refer to the class of attributes within the general domain of cognitive ability, not any particular attribute within that domain. We use the term IQ test to refer to any instrument designed to assess intelligence attributes.
2. We use the term cognitive profile broadly to refer to intra-individual score patterns on one or more IQ tests. The patterns could be created directly from the norm-referenced scores or may be ipsatized (i.e., each score expressed as a deviation from the average of the person’s scores). Likewise, the patterns could be created from subtest scores or from composite scores composed of values from a subset of the subtests (e.g., index scores on the Wechsler scales).
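For readers unfamiliar with ipsatization, a minimal sketch of the operation described in note 2, using hypothetical index scores:

```python
from statistics import mean

# Hypothetical index scores; ipsatizing re-expresses each as a deviation from the
# person's own mean, discarding the normative level of performance.
scores = {"VCI": 108, "VSI": 95, "FRI": 102, "WMI": 84, "PSI": 91}
person_mean = mean(scores.values())
ipsatized = {name: s - person_mean for name, s in scores.items()}
print(ipsatized)  # {'VCI': 12.0, 'VSI': -1.0, 'FRI': 6.0, 'WMI': -12.0, 'PSI': -5.0}
```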
3. We use Cattell’s [7] definition of experiment as a “recording of observations, quantitative or qualitative, made by defined and recorded operations and in defined conditions, followed by examination of the data, by appropriate statistical and mathematical rules” (p. 22). These studies may, or may not, involve variable manipulation.
4. Flanagan et al. [41] developed the method in 2002 as part of their guidelines for using XBA to identify SLD. Since then, they have not been consistent in their use of terms. Historically, the method has been referred to as the “Operational Definition of SLD” or the “CHC-based operational definition of SLD”, or has been used interchangeably with XBA. The term dual discrepancy/consistency (DD/C) was introduced in 2013, but there have still been inconsistencies in term usage. For example, Flanagan and colleagues named their 2013 test-analysis software the Cross-Battery Pattern of Strengths and Weaknesses Analyzer and named the latest iteration of this software the Cross-Battery Assessment Software System. Flanagan and Schneider [42] requested that the term dual discrepancy/consistency be used, so we use it throughout this article.
5. In their most recent publication, Flanagan et al. [17] (p. 401) wrote that the DD/C method now requires that clinicians categorize all IQ and achievement test scores as either a “strength” or a “weakness”.
6. Flanagan and colleagues are not consistent in their terms or definitions. We use the terms and definitions they provided in their most recent publication [17], which differ somewhat from previous publications [38] (pp. 242 & 244, Table 4.3). Moreover, they have also stated in DD/C workshops that clinicians can use whatever criteria they deem important for documenting a weakness [43].
7. Elsewhere, Flanagan and Alfonso [44] have said that, for a reading SLD, the primary weakness cannot be Gc due to the overlap with achievement. The inconsistency in what can and cannot count as a primary weakness in intelligence attributes is problematic but is not the particular focus of this article.
8. Clinicians can opt to use the Alternative Cognitive Composite (ACC) instead of the FCC. The ACC is “any cognitive composite derived from an intelligence or cognitive ability battery that is a good estimate of overall cognitive ability and, in the evaluator’s judgment, is a better estimate than the FCC” [17] (p. 413). Thus, the determination of weaknesses in attributes related to intelligence and academic achievement can be idiosyncratic to each clinician.
9. This table could be generalized to situations with >2 true statuses (e.g., dimensional models of SLD) as well as >2 outcomes (e.g., the method produces an “uncertain” diagnosis) [51].
10. There is not currently a “gold-standard” method for identifying SLD, so there is no way to establish universally someone’s true status in Table 2 when using data collected from actual individuals. Instead, it is usually defined on a study-by-study basis. With simulation studies, this is much less of a problem because individuals’ true status is part of the parameters specified in the simulation process [53].
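A minimal sketch of the simulation logic note 10 alludes to: because each simulated case is generated with a known true status, the accuracy of a decision rule can be tabulated directly. The generating model (bivariate normal cognitive and achievement scores correlated at .60, a one-SD achievement decrement for true SLD cases, a 10% base rate) and the simplified discrepancy rule are illustrative assumptions, not the procedures of any cited study.

```python
import random

random.seed(1)
N, BASE_RATE, R = 100_000, 0.10, 0.60  # illustrative parameters

tp = fp = fn = tn = 0
for _ in range(N):
    true_sld = random.random() < BASE_RATE
    # Correlated standard-normal cognitive and achievement deviates; true SLD cases
    # have their achievement shifted down by one SD.
    z_cog = random.gauss(0, 1)
    z_ach = R * z_cog + random.gauss(0, (1 - R ** 2) ** 0.5) - (1.0 if true_sld else 0.0)
    cog, ach = 100 + 15 * z_cog, 100 + 15 * z_ach

    flagged = cog >= 90 and ach < 85 and (cog - ach) >= 15  # simplified discrepancy rule
    tp += flagged and true_sld
    fp += flagged and not true_sld
    fn += (not flagged) and true_sld
    tn += (not flagged) and not true_sld

print(f"sensitivity = {tp / (tp + fn):.2f}, specificity = {tn / (tn + fp):.2f}, "
      f"PPV = {tp / (tp + fp):.2f}")
```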
11. We state that the DD/C developers profit because, to use the DD/C method, clinicians have to acquire DD/C books, attend DD/C workshops, or get “certified” in XBA, in addition to purchasing a license for the X-BASS program. Profiting from clinicians’ choices is not unique to the DD/C developers, nor is it necessarily an unethical practice.
12. Some examples of major problems with their analysis are running regressions on very small samples, not assessing whether the assumptions inherent in regression were violated, and completing regressions for each group separately [74].
13. We acknowledge that it is confusing to have an IQ scale that exists independently of IQ tests. We attempt to maximize clarity by using the term IQ scale whenever we are referring to scores from an instrument assessing some arbitrary attribute, and the term IQ test when referring to an instrument assessing some intelligence attribute.
14. The triple point of a substance is the temperature and pressure at which its gas, liquid, and solid phases coexist in equilibrium.
15. Flanagan et al. [38] (p. 399) wrote that in developing the XBA guidelines—which are used in the DD/C method—they used results from factor analytic studies for their subtest classifications. Factor analysis, however, is just a method of reducing the information in a set of correlations. The use of factor analysis cannot compensate for a lack of conceptual equivalence or of strong theory about why scores are related [97,98].
| Cattell–Horn–Carroll Broad Ability 1 | Cattell–Horn–Carroll Narrow Ability 1 | Reading Subtests 2 |
|---|---|---|
| Gc | Language development (LD) | Reading Comprehension |
| Gc | Lexical knowledge (VL) | Reading Vocabulary |
| Gc | Listening ability (LS) | Decoding Fluency |
| Ga | Phonetic coding (PC) | Phonological Processing |
| Glr | Naming facility (NA) | Word Recognition Fluency |
| Glr | Associative memory (MA) | Nonsense Word Decoding |
| Gwm | Memory span (MS) | Letter and Word Recognition |
| Gwm | Working memory capacity (WM) | Silent Reading Fluency |
| Gs | Perceptual speed (P) | |
| DD/C Method Decision | True Status: Positive (SLD) | True Status: Negative (No SLD) |
|---|---|---|
| Positive (SLD) | True Positive | False Positive |
| Negative (No SLD) | False Negative | True Negative |
| Percentile | Scaled Score Scale | IQ Scale |
|---|---|---|
| 31 | 9 | 93 |
| 30 | 8 | 92 |
| 27 | 8 | 91 |
| 26 | 8 | 90 |
| 25 | 8 | 90 |
| 24 | 8 | 89 |
| 23 | 8 | 89 |
| 22 | 8 | 88 |
| 21 | 8 | 88 |
| 20 | 7 | 87 |
© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).