Exploration server on the University of Trier

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information is therefore not validated.

Factor and Subtest Discrepancies on the Differential Ability Scales

Internal identifier: 001376 (Istex/Corpus); previous: 001375; next: 001377

Factor and Subtest Discrepancies on the Differential Ability Scales

Authors: Shoshana Y. Kahana; Eric A. Youngstrom; Joseph J. Glutting

Source:

RBID : ISTEX:4408FD3559BF8E27046191B14AB80235D7D87C5A

Abstract

Past literature has largely ignored the population frequency of multivariate factor and subtest score discrepancies. Another limitation has been that statistical models imperfectly model the clinical assessment process, whereby significant discrepancies between both factors and subtests are included in predictions about an individual’s academic achievement. The present study examined these issues using a nationally representative sample (N = 1,185) completing the Differential Ability Scales. Results indicate that approximately 80% of children in a nonreferred sample show at least one statistically significant ability discrepancy. In addition, the global estimate of cognitive ability was the most parsimonious predictor of academic achievement, whereas information about ability discrepancies did not significantly improve prediction. Findings suggest that when predicting academic achievement, there is little value in interpreting cognitive scores beyond the global ability estimate.
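
A minimal sketch (not the authors' analysis) of the block-entry regression comparison described in the abstract, run on simulated data; the variable names (gca, verbal, discrepant, achievement), the discrepancy cutoff, and the simulated effect sizes are all hypothetical assumptions:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1185                                       # sample size reported in the abstract
gca = rng.normal(100, 15, n)                   # global ability estimate (GCA)
verbal = 0.8 * gca + 20 + rng.normal(0, 9, n)  # one correlated factor score (toy)
discrepant = (np.abs(verbal - gca) > 12).astype(float)  # toy "significant discrepancy" flag
achievement = 0.6 * gca + 40 + rng.normal(0, 10, n)     # simulated achievement criterion

# Block 1: global ability alone.
m1 = sm.OLS(achievement, sm.add_constant(gca)).fit()

# Block 2: add the centered factor score, the discrepancy dummy, and their interaction.
verbal_c = verbal - 100                        # centered on the nominal population mean
X2 = sm.add_constant(np.column_stack([gca, verbal_c, discrepant, verbal_c * discrepant]))
m2 = sm.OLS(achievement, X2).fit()

# Does the discrepancy block improve prediction beyond GCA? (R-squared change F test)
f_stat, p_value, df_diff = m2.compare_f_test(m1)
print(f"R2: {m1.rsquared:.3f} -> {m2.rsquared:.3f}, "
      f"F({int(df_diff)}, {int(m2.df_resid)}) = {f_stat:.2f}, p = {p_value:.3f}")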

Url:
DOI: 10.1177/1073191102009001010

Links to Exploration step

ISTEX:4408FD3559BF8E27046191B14AB80235D7D87C5A

The document in XML format

<record>
<TEI wicri:istexFullTextTei="biblStruct">
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Factor and Subtest Discrepancies on the Differential Ability Scales</title>
<author>
<name sortKey="Kahana, Shoshana Y" sort="Kahana, Shoshana Y" uniqKey="Kahana S" first="Shoshana Y." last="Kahana">Shoshana Y. Kahana</name>
<affiliation>
<mods:affiliation></mods:affiliation>
</affiliation>
<affiliation>
<mods:affiliation>E-mail: syk4@po.cwru.edu</mods:affiliation>
</affiliation>
<affiliation>
<mods:affiliation>syk4@po.cwru.edu.</mods:affiliation>
</affiliation>
</author>
<author>
<name sortKey="Youngstrom, Eric A" sort="Youngstrom, Eric A" uniqKey="Youngstrom E" first="Eric A." last="Youngstrom">Eric A. Youngstrom</name>
<affiliation>
<mods:affiliation>Case Western Reserve University</mods:affiliation>
</affiliation>
<affiliation>
<mods:affiliation>Case Western Reserve University</mods:affiliation>
</affiliation>
</author>
<author>
<name sortKey="Glutting, Joseph J" sort="Glutting, Joseph J" uniqKey="Glutting J" first="Joseph J." last="Glutting">Joseph J. Glutting</name>
<affiliation>
<mods:affiliation>University of Delaware</mods:affiliation>
</affiliation>
<affiliation>
<mods:affiliation>University of Delaware</mods:affiliation>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">ISTEX</idno>
<idno type="RBID">ISTEX:4408FD3559BF8E27046191B14AB80235D7D87C5A</idno>
<date when="2002" year="2002">2002</date>
<idno type="doi">10.1177/1073191102009001010</idno>
<idno type="url">https://api.istex.fr/document/4408FD3559BF8E27046191B14AB80235D7D87C5A/fulltext/pdf</idno>
<idno type="wicri:Area/Istex/Corpus">001376</idno>
<idno type="wicri:explorRef" wicri:stream="Istex" wicri:step="Corpus" wicri:corpus="ISTEX">001376</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title level="a" type="main" xml:lang="en">Factor and Subtest Discrepancies on the Differential Ability Scales</title>
<author>
<name sortKey="Kahana, Shoshana Y" sort="Kahana, Shoshana Y" uniqKey="Kahana S" first="Shoshana Y." last="Kahana">Shoshana Y. Kahana</name>
<affiliation>
<mods:affiliation></mods:affiliation>
</affiliation>
<affiliation>
<mods:affiliation>E-mail: syk4@po.cwru.edu</mods:affiliation>
</affiliation>
<affiliation>
<mods:affiliation>syk4@po.cwru.edu.</mods:affiliation>
</affiliation>
</author>
<author>
<name sortKey="Youngstrom, Eric A" sort="Youngstrom, Eric A" uniqKey="Youngstrom E" first="Eric A." last="Youngstrom">Eric A. Youngstrom</name>
<affiliation>
<mods:affiliation>Case Western Reserve University</mods:affiliation>
</affiliation>
<affiliation>
<mods:affiliation>Case Western Reserve University</mods:affiliation>
</affiliation>
</author>
<author>
<name sortKey="Glutting, Joseph J" sort="Glutting, Joseph J" uniqKey="Glutting J" first="Joseph J." last="Glutting">Joseph J. Glutting</name>
<affiliation>
<mods:affiliation>University of Delaware</mods:affiliation>
</affiliation>
<affiliation>
<mods:affiliation>University of Delaware</mods:affiliation>
</affiliation>
</author>
</analytic>
<monogr></monogr>
<series>
<title level="j">Assessment</title>
<idno type="ISSN">1073-1911</idno>
<idno type="eISSN">1552-3489</idno>
<imprint>
<publisher>Sage Publications</publisher>
<pubPlace>Sage CA: Thousand Oaks, CA</pubPlace>
<date type="published" when="2002-03">2002-03</date>
<biblScope unit="volume">9</biblScope>
<biblScope unit="issue">1</biblScope>
<biblScope unit="page" from="82">82</biblScope>
<biblScope unit="page" to="93">93</biblScope>
</imprint>
<idno type="ISSN">1073-1911</idno>
</series>
<idno type="istex">4408FD3559BF8E27046191B14AB80235D7D87C5A</idno>
<idno type="DOI">10.1177/1073191102009001010</idno>
<idno type="ArticleID">10.1177_1073191102009001010</idno>
</biblStruct>
</sourceDesc>
<seriesStmt>
<idno type="ISSN">1073-1911</idno>
</seriesStmt>
</fileDesc>
<profileDesc>
<textClass></textClass>
<langUsage>
<language ident="en">en</language>
</langUsage>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">Past literature has largely ignored the population frequency of multivariate factor and subtest score discrepancies. Another limitation has been that statistical models imperfectly model the clinical assessment process, whereby significant discrepancies between both factors and subtests are included in predictions about an individual’s academic achievement. The present study examined these issues using a nationally representative sample (N = 1,185) completing the Differential Ability Scales. Results indicate that approximately 80% of children in a nonreferred sample show at least one statistically significant ability discrepancy. In addition, the global estimate of cognitive ability was the most parsimonious predictor of academic achievement, whereas information about ability discrepancies did not significantly improve prediction. Findings suggest that when predicting academic achievement, there is little value in interpreting cognitive scores beyond the global ability estimate.</div>
</front>
</TEI>
<istex>
<corpusName>sage</corpusName>
<author>
<json:item>
<name>Shoshana Y. Kahana</name>
<affiliations>
<json:null></json:null>
<json:string>E-mail: syk4@po.cwru.edu</json:string>
<json:string>syk4@po.cwru.edu.</json:string>
</affiliations>
</json:item>
<json:item>
<name>Eric A. Youngstrom</name>
<affiliations>
<json:string>Case Western Reserve University</json:string>
<json:string>Case Western Reserve University</json:string>
</affiliations>
</json:item>
<json:item>
<name>Joseph J. Glutting</name>
<affiliations>
<json:string>University of Delaware</json:string>
<json:string>University of Delaware</json:string>
</affiliations>
</json:item>
</author>
<subject>
<json:item>
<lang>
<json:string>eng</json:string>
</lang>
<value>cognitive testing</value>
</json:item>
<json:item>
<lang>
<json:string>eng</json:string>
</lang>
<value>predictions of academic achievement</value>
</json:item>
<json:item>
<lang>
<json:string>eng</json:string>
</lang>
<value>global measure of intelligence</value>
</json:item>
<json:item>
<lang>
<json:string>eng</json:string>
</lang>
<value>factor and subtest interpretation</value>
</json:item>
</subject>
<articleId>
<json:string>10.1177_1073191102009001010</json:string>
</articleId>
<language>
<json:string>eng</json:string>
</language>
<originalGenre>
<json:string>research-article</json:string>
</originalGenre>
<abstract>Past literature has largely ignored the population frequency of multivariate factor and subtest score discrepancies. Another limitation has been that statistical models imperfectly model the clinical assessment process, whereby significant discrepancies between both factors and subtests are included in predictions about an individual’s academic achievement. The present study examined these issues using a nationally representative sample (N = 1,185) completing the Differential Ability Scales. Results indicate that approximately 80% of children in a nonreferred sample show at least one statistically significant ability discrepancy. In addition, the global estimate of cognitive ability was the most parsimonious predictor of academic achievement, whereas information about ability discrepancies did not significantly improve prediction. Findings suggest that when predicting academic achievement, there is little value in interpreting cognitive scores beyond the global ability estimate.</abstract>
<qualityIndicators>
<score>6.548</score>
<pdfVersion>1.3</pdfVersion>
<pdfPageSize>612 x 792 pts (letter)</pdfPageSize>
<refBibsNative>true</refBibsNative>
<abstractCharCount>993</abstractCharCount>
<pdfWordCount>8122</pdfWordCount>
<pdfCharCount>54415</pdfCharCount>
<pdfPageCount>12</pdfPageCount>
<abstractWordCount>129</abstractWordCount>
</qualityIndicators>
<title>Factor and Subtest Discrepancies on the Differential Ability Scales</title>
<refBibs>
<json:item>
<host>
<author></author>
<title>Multiple regression: Testing and interpreting interactions</title>
</host>
</json:item>
<json:item>
<author>
<json:item>
<name>V. C. Alfonso</name>
</json:item>
<json:item>
<name>T. D. Oakland</name>
</json:item>
<json:item>
<name>R. LaRocca</name>
</json:item>
<json:item>
<name>A. Spanakos</name>
</json:item>
</author>
<host>
<volume>29</volume>
<pages>
<last>64</last>
<first>52</first>
</pages>
<author></author>
<title>School Psychology Review</title>
</host>
<title>The course on individual cognitive assessment</title>
</json:item>
<json:item>
<author>
<json:item>
<name>D. W. Beebe</name>
</json:item>
<json:item>
<name>L. J. Pfiffner</name>
</json:item>
<json:item>
<name>K. McBurnett</name>
</json:item>
</author>
<host>
<volume>12</volume>
<pages>
<last>101</last>
<first>97</first>
</pages>
<author></author>
<title>Psychological Assessment</title>
</host>
<title>Evaluation of the validity of the Wechsler Intelligence Scale for Children-Third Edition Comprehension and Picture Arrangement subtests as measures of social intelligence</title>
</json:item>
<json:item>
<host>
<author></author>
<title>G—Power: A priori, posthoc, and compromise power analyses for the Macintosh (Version 2.1.1)</title>
</host>
</json:item>
<json:item>
<author>
<json:item>
<name>S. Cahan</name>
</json:item>
</author>
<host>
<volume>4</volume>
<pages>
<last>280</last>
<first>273</first>
</pages>
<author></author>
<title>Journal of Psychoeducational Assessment</title>
</host>
<title>Significance testing of subtest score differences: The rules of the game</title>
</json:item>
<json:item>
<host>
<author></author>
<title>Psychological test usage in professional psychology: Report of the APA practice and science directorates</title>
</host>
</json:item>
<json:item>
<author>
<json:item>
<name>W. J. Camara</name>
</json:item>
<json:item>
<name>J. S. Nathan</name>
</json:item>
<json:item>
<name>A. E. Puente</name>
</json:item>
</author>
<host>
<volume>31</volume>
<pages>
<last>154</last>
<first>141</first>
</pages>
<author></author>
<title>Professional Psychology: Research and Practice</title>
</host>
<title>Psychological test usage: Implications in professional psychology</title>
</json:item>
<json:item>
<author>
<json:item>
<name>J. B. Carroll</name>
</json:item>
</author>
<host>
<volume>15</volume>
<pages>
<last>456</last>
<first>449</first>
</pages>
<author></author>
<title>School Psychology Quarterly</title>
</host>
<title>Commentary on profile analysis</title>
</json:item>
<json:item>
<author>
<json:item>
<name>J. C. Caruso</name>
</json:item>
</author>
<host>
<volume>8</volume>
<pages>
<last>166</last>
<first>155</first>
</pages>
<author></author>
<title>Assessment</title>
</host>
<title>Increasing the reliability of the fluid/crystallized difference score from the Kaufman Adolescent and Adult Intelligence Test with reliable component analysis</title>
</json:item>
<json:item>
<author>
<json:item>
<name>S. J. Ceci</name>
</json:item>
<json:item>
<name>W. M. Williams</name>
</json:item>
</author>
<host>
<volume>52</volume>
<pages>
<last>1058</last>
<first>1051</first>
</pages>
<author></author>
<title>American Psychologist</title>
</host>
<title>Schooling, intelligence, and income</title>
</json:item>
<json:item>
<host>
<author></author>
<title>Applied multiple regression/correlation analysis for the behavioral sciences</title>
</host>
</json:item>
<json:item>
<host>
<author></author>
<title>Aptitudes and instructional methods: A handbook for research on interactions</title>
</host>
</json:item>
<json:item>
<author>
<json:item>
<name>J. Donders</name>
</json:item>
</author>
<host>
<volume>8</volume>
<pages>
<last>318</last>
<first>312</first>
</pages>
<author></author>
<title>Psychological Assessment</title>
</host>
<title>Factor subtypes in the WISC-III standardization sample: Analysis of factor index scores</title>
</json:item>
<json:item>
<host>
<author></author>
<title>Problems and limitations in the use of psychological assessment in contemporary health care delivery: Report of the Board of Professional Affairs Psychological Assessment Workgroup, part II</title>
</host>
</json:item>
<json:item>
<host>
<author></author>
<title>Differential Ability Scales: Introductory and technical handbook</title>
</host>
</json:item>
<json:item>
<author>
<json:item>
<name>C. D. Elliot</name>
</json:item>
</author>
<host>
<volume>8</volume>
<pages>
<last>390</last>
<first>376</first>
</pages>
<author></author>
<title>Journal of Psychoeducational Assessment</title>
</host>
<title>The nature and structure of children’s abilities: Evidence from the Differential Ability Scales. Conference on Intelligence: Theories and practice (1990, Memphis, Tennessee)</title>
</json:item>
<json:item>
<author>
<json:item>
<name>C. D. Elliot</name>
</json:item>
</author>
<host>
<volume>8</volume>
<pages>
<last>411</last>
<first>406</first>
</pages>
<author></author>
<title>Journal of Psychoeducational Assessment</title>
</host>
<title>The nature and structure of the DAS: Questioning the test’s organizing model and use</title>
</json:item>
<json:item>
<host>
<author></author>
<title>Wide Range Intelligence Test manual</title>
</host>
</json:item>
<json:item>
<author>
<json:item>
<name>J. J. Glutting</name>
</json:item>
<json:item>
<name>E. A. McGrath</name>
</json:item>
<json:item>
<name>R. W. Kamphaus</name>
</json:item>
<json:item>
<name>P. A. McDermott</name>
</json:item>
</author>
<host>
<volume>26</volume>
<pages>
<last>115</last>
<first>85</first>
</pages>
<author></author>
<title>Journal of Special Education</title>
</host>
<title>Taxonomy and validity of subtest profiles on the Kaufman Assessment Battery for Children</title>
</json:item>
<json:item>
<author>
<json:item>
<name>J. J. Glutting</name>
</json:item>
<json:item>
<name>M. Watkins</name>
</json:item>
<json:item>
<name>E. A. Youngstrom</name>
</json:item>
<json:item>
<name> </name>
</json:item>
<json:item>
<name> </name>
</json:item>
</author>
<host>
<author></author>
<title>Handbook of psychological and educational assessment of children</title>
</host>
<title>Multifactored and cross-battery assessments: Are they worth the effort?</title>
</json:item>
<json:item>
<author>
<json:item>
<name>J. J. Glutting</name>
</json:item>
<json:item>
<name>E. A. Youngstrom</name>
</json:item>
<json:item>
<name>T. Ward</name>
</json:item>
<json:item>
<name>S. Ward</name>
</json:item>
<json:item>
<name>R. L. Hale</name>
</json:item>
</author>
<host>
<volume>9</volume>
<pages>
<last>301</last>
<first>295</first>
</pages>
<author></author>
<title>Psychological Assessment</title>
</host>
<title>Incremental efficacy of WISC-III factor scores in predicting achievement: What do they tell us?</title>
</json:item>
<json:item>
<author>
<json:item>
<name>H. Gough</name>
</json:item>
</author>
<host>
<volume>26</volume>
<pages>
<last>187</last>
<first>106</first>
</pages>
<author></author>
<title>American Psychologist</title>
</host>
<title>Some reflections on the meaning of psychodiagnosis</title>
</json:item>
<json:item>
<host>
<author></author>
<title>Foundations of intellectual assessment: The WAIS-III and other tests in clinical practice</title>
</host>
</json:item>
<json:item>
<author>
<json:item>
<name>F. M. Grossman</name>
</json:item>
<json:item>
<name>K. M. Johnson</name>
</json:item>
</author>
<host>
<volume>19</volume>
<pages>
<last>468</last>
<first>465</first>
</pages>
<author></author>
<title>Psychology in the Schools</title>
</host>
<title>WISC-R factor scores as predictors of WRAT performance: A multivariate analysis</title>
</json:item>
<json:item>
<author>
<json:item>
<name>G. Groth-Marnat</name>
</json:item>
</author>
<host>
<volume>55</volume>
<pages>
<last>824</last>
<first>813</first>
</pages>
<author></author>
<title>Journal of Clinical Psychology</title>
</host>
<title>Financial efficacy of clinical assessment: Rational guidelines and issues for future research</title>
</json:item>
<json:item>
<author>
<json:item>
<name>G. S. Hanna</name>
</json:item>
<json:item>
<name>F. O. Bradley</name>
</json:item>
<json:item>
<name>M. C. Holen</name>
</json:item>
</author>
<host>
<volume>19</volume>
<pages>
<last>376</last>
<first>370</first>
</pages>
<author></author>
<title>Journal of School Psychology</title>
</host>
<title>Estimating major sources of measurement error in individual intelligence scales: Taking our heads out of the sand</title>
</json:item>
<json:item>
<host>
<author></author>
<title>The g factor: The science of mental ability</title>
</host>
</json:item>
<json:item>
<host>
<author></author>
<title>Clinical assessment of children’s intelligence</title>
</host>
</json:item>
<json:item>
<host>
<author></author>
<title>Intelligent testing with the WISC-R</title>
</host>
</json:item>
<json:item>
<host>
<author></author>
<title>Intelligent testing with the WISC-III</title>
</host>
</json:item>
<json:item>
<host>
<author></author>
<title>Essentials of WAIS-III assessment</title>
</host>
</json:item>
<json:item>
<author>
<json:item>
<name>T. Z. Keith</name>
</json:item>
</author>
<host>
<volume>8</volume>
<pages>
<last>405</last>
<first>391</first>
</pages>
<author></author>
<title>Journal of Psychoeducational Assessment</title>
</host>
<title>Confirmatory and hierarchical confirmatory analysis of the Differential Ability Scales</title>
</json:item>
<json:item>
<author>
<json:item>
<name>B. Kleinmuntz</name>
</json:item>
</author>
<host>
<volume>107</volume>
<pages>
<last>310</last>
<first>296</first>
</pages>
<author></author>
<title>Psychological Bulletin</title>
</host>
<title>Why we still use our heads instead of formulas: Toward an integrative approach</title>
</json:item>
<json:item>
<author>
<json:item>
<name>J. D. Lipsitz</name>
</json:item>
<json:item>
<name>R. H. Dworkin</name>
</json:item>
<json:item>
<name>L. Erlenmeyer-Kimling</name>
</json:item>
</author>
<host>
<volume>5</volume>
<pages>
<last>437</last>
<first>430</first>
</pages>
<author></author>
<title>Psychological Assessment</title>
</host>
<title>Wechsler Comprehension and Picture Arrangement subtests and social adjustment</title>
</json:item>
<json:item>
<author>
<json:item>
<name>D. Lubinski</name>
</json:item>
<json:item>
<name>C. P. Benbow</name>
</json:item>
</author>
<host>
<volume>55</volume>
<pages>
<last>150</last>
<first>137</first>
</pages>
<author></author>
<title>American Psychologist</title>
</host>
<title>States of excellence</title>
</json:item>
<json:item>
<host>
<author></author>
<title>National profiles in youth psychopathology: Manual of Adjustment Scales for Children and Adolescents</title>
</host>
</json:item>
<json:item>
<author>
<json:item>
<name>P. A. McDermott</name>
</json:item>
<json:item>
<name>J. J. Glutting</name>
</json:item>
</author>
<host>
<volume>26</volume>
<pages>
<last>175</last>
<first>163</first>
</pages>
<author></author>
<title>School Psychology Review</title>
</host>
<title>Informing stylistic learning behavior, disposition, and achievement through ability subtests—or more illusions of meaning?</title>
</json:item>
<json:item>
<host>
<author></author>
<title>The intelligence test desk reference (ITDR): Gf-Gc cross-battery assessment</title>
</host>
</json:item>
<json:item>
<author>
<json:item>
<name>S. P. Mishra</name>
</json:item>
</author>
<host>
<volume>20</volume>
<pages>
<last>154</last>
<first>150</first>
</pages>
<author></author>
<title>Psychology in the Schools</title>
</host>
<title>Validity of WISC-R IQs and factor scores in predicting achievement for Mexican-American children</title>
</json:item>
<json:item>
<author>
<json:item>
<name>T. E. Moffitt</name>
</json:item>
<json:item>
<name>A. Caspi</name>
</json:item>
<json:item>
<name>A. R. Harkness</name>
</json:item>
<json:item>
<name>P. A. Silva</name>
</json:item>
</author>
<host>
<volume>14</volume>
<pages>
<last>506</last>
<first>455</first>
</pages>
<author></author>
<title>Journal of Child Psychology and Psychiatry</title>
</host>
<title>The natural history of change in intellectual performance: Who changes? How much? Is it meaningful?</title>
</json:item>
<json:item>
<author>
<json:item>
<name>J. A. Naglieri</name>
</json:item>
</author>
<host>
<volume>5</volume>
<pages>
<last>116</last>
<first>113</first>
</pages>
<author></author>
<title>Psychological Assessment</title>
</host>
<title>Pairwise and ipsative comparisons of WISC-III IQ and index scores</title>
</json:item>
<json:item>
<author>
<json:item>
<name>J. A. Naglieri</name>
</json:item>
</author>
<host>
<volume>15</volume>
<pages>
<last>433</last>
<first>419</first>
</pages>
<author></author>
<title>School Psychology Quarterly</title>
</host>
<title>Can profile analysis of ability tests work? An illustration using the PASS theory and CAS with an unselected cohort</title>
</json:item>
<json:item>
<author>
<json:item>
<name>U. Neisser</name>
</json:item>
<json:item>
<name>G. Boodoo</name>
</json:item>
<json:item>
<name>T. J. Bouchard</name>
</json:item>
<json:item>
<name>A. W. Boykin</name>
</json:item>
<json:item>
<name>N. Brody</name>
</json:item>
<json:item>
<name>S. J. Ceci</name>
</json:item>
</author>
<host>
<volume>51</volume>
<pages>
<last>101</last>
<first>77</first>
</pages>
<author></author>
<title>American Psychologist</title>
</host>
<title>Intelligence: Knowns and unknowns</title>
</json:item>
<json:item>
<author>
<json:item>
<name>S. I. Pfeiffer</name>
</json:item>
<json:item>
<name>L. A. Reddy</name>
</json:item>
<json:item>
<name>J. E. Kletzel</name>
</json:item>
<json:item>
<name>E. R. Schmelzer</name>
</json:item>
<json:item>
<name>L. A. Boyer</name>
</json:item>
</author>
<host>
<volume>15</volume>
<pages>
<last>385</last>
<first>376</first>
</pages>
<author></author>
<title>School Psychology Quarterly</title>
</host>
<title>The practitioner’s view of IQ testing and profile analysis</title>
</json:item>
<json:item>
<author>
<json:item>
<name>A. Prifitera</name>
</json:item>
<json:item>
<name>L. G. Weiss</name>
</json:item>
<json:item>
<name>D. H. Saklofske</name>
</json:item>
<json:item>
<name> </name>
</json:item>
<json:item>
<name> </name>
</json:item>
</author>
<host>
<pages>
<last>39</last>
<first>1</first>
</pages>
<author></author>
<title>WISC-III clinical use and interpretation: Scientist-practitioner perspectives</title>
</host>
<title>The WISC-III in context</title>
</json:item>
<json:item>
<author>
<json:item>
<name>D. A. Pritchard</name>
</json:item>
<json:item>
<name>R. B. Livingston</name>
</json:item>
<json:item>
<name>C. R. Reynolds</name>
</json:item>
<json:item>
<name>J. A., Jr. Moses</name>
</json:item>
</author>
<host>
<volume>15</volume>
<pages>
<last>418</last>
<first>400</first>
</pages>
<author></author>
<title>School Psychology Quarterly</title>
</host>
<title>Modal profiles for the WISC-III</title>
</json:item>
<json:item>
<host>
<author></author>
<title>Wechsler Individual Achievement Test manual</title>
</host>
</json:item>
<json:item>
<host>
<author></author>
<title>Wechsler Abbreviated Scale of Intelligence manual</title>
</host>
</json:item>
<json:item>
<author>
<json:item>
<name>C. A. Riccio</name>
</json:item>
<json:item>
<name>M. J. Cohen</name>
</json:item>
<json:item>
<name>J. Hall</name>
</json:item>
<json:item>
<name>C. M. Ross</name>
</json:item>
</author>
<host>
<volume>15</volume>
<pages>
<last>39</last>
<first>27</first>
</pages>
<author></author>
<title>Journal of Psychoeducational Assessment</title>
</host>
<title>The third and fourth factors of the WISC-III: What they don’t measure</title>
</json:item>
<json:item>
<author>
<json:item>
<name>C. A. Riccio</name>
</json:item>
<json:item>
<name>G. W. Hynd</name>
</json:item>
</author>
<host>
<volume>15</volume>
<pages>
<last>399</last>
<first>386</first>
</pages>
<author></author>
<title>School Psychology Quarterly</title>
</host>
<title>Measurable biological substrates to verbal-performance differences in Wechsler scores</title>
</json:item>
<json:item>
<host>
<author></author>
<title>Assessment of children</title>
</host>
</json:item>
<json:item>
<host>
<author></author>
<title>Assessment of children: Cognitive applications</title>
</host>
</json:item>
<json:item>
<author>
<json:item>
<name>V. L. Schwean</name>
</json:item>
<json:item>
<name>D. H. Saklofske</name>
</json:item>
<json:item>
<name>R. A. Yackulic</name>
</json:item>
<json:item>
<name>D. Quinn</name>
</json:item>
<json:item>
<name> </name>
</json:item>
<json:item>
<name> </name>
</json:item>
</author>
<host>
<pages>
<last>70</last>
<first>56</first>
</pages>
<author></author>
<title>Wechsler Intelligence Scale for Children</title>
</host>
<title>WISC-III performance of ADHD children</title>
</json:item>
<json:item>
<author>
<json:item>
<name>A. B. Silverstein</name>
</json:item>
</author>
<host>
<volume>5</volume>
<pages>
<last>74</last>
<first>72</first>
</pages>
<author></author>
<title>Psychological Assessment</title>
</host>
<title>Type I, Type II, and other types of errors in pattern analysis</title>
</json:item>
<json:item>
<author>
<json:item>
<name>H. C. Stanton</name>
</json:item>
<json:item>
<name>C. R. Reynolds</name>
</json:item>
</author>
<host>
<volume>15</volume>
<pages>
<last>448</last>
<first>434</first>
</pages>
<author></author>
<title>School Psychology Quarterly</title>
</host>
<title>Configural frequency analysis as a method of determining Wechsler profile types</title>
</json:item>
<json:item>
<host>
<author></author>
<title>Stanford-Binet Intelligence Scale—Fourth Edition</title>
</host>
</json:item>
<json:item>
<host>
<author></author>
<title>Profile of general demographic characteristics for the United States</title>
</host>
</json:item>
<json:item>
<author>
<json:item>
<name>M. W. Watkins</name>
</json:item>
</author>
<host>
<volume>15</volume>
<pages>
<last>479</last>
<first>465</first>
</pages>
<author></author>
<title>School Psychology Quarterly</title>
</host>
<title>Cognitive profile analysis: A shared professional myth</title>
</json:item>
<json:item>
<host>
<author></author>
<title>Wechsler Intelligence Scale for Children—Revised Edition</title>
</host>
</json:item>
<json:item>
<host>
<author></author>
<title>Manual for the Wechsler Intelligence Scale for Children—Third Edition</title>
</host>
</json:item>
<json:item>
<host>
<author></author>
<title>Woodcock Reading Mastery Tests—Revised: Examiner’s manual</title>
</host>
</json:item>
<json:item>
<host>
<author></author>
<title>Evidence and implications of over-factoring on commercial tests of cognitive ability</title>
</host>
</json:item>
<json:item>
<host>
<author></author>
<title>Youngstrom, E. A., & Glutting, J. J. (2001).</title>
</host>
</json:item>
<json:item>
<author>
<json:item>
<name>E. A. Youngstrom</name>
</json:item>
<json:item>
<name>J. L. Kogos</name>
</json:item>
<json:item>
<name>J. Glutting</name>
</json:item>
</author>
<host>
<volume>14</volume>
<pages>
<last>39</last>
<first>26</first>
</pages>
<author></author>
<title>School Psychology Quarterly</title>
</host>
<title>Incremental efficacy of Differential Ability Scales factor scores in predicting individual achievement criteria</title>
</json:item>
</refBibs>
<genre>
<json:string>research-article</json:string>
</genre>
<host>
<volume>9</volume>
<publisherId>
<json:string>ASM</json:string>
</publisherId>
<pages>
<last>93</last>
<first>82</first>
</pages>
<issn>
<json:string>1073-1911</json:string>
</issn>
<issue>1</issue>
<genre>
<json:string>journal</json:string>
</genre>
<language>
<json:string>unknown</json:string>
</language>
<eissn>
<json:string>1552-3489</json:string>
</eissn>
<title>Assessment</title>
</host>
<categories>
<wos>
<json:string>social science</json:string>
<json:string>psychology, clinical</json:string>
</wos>
<scienceMetrix>
<json:string>health sciences</json:string>
<json:string>psychology & cognitive sciences</json:string>
<json:string>clinical psychology</json:string>
</scienceMetrix>
</categories>
<publicationDate>2002</publicationDate>
<copyrightDate>2002</copyrightDate>
<doi>
<json:string>10.1177/1073191102009001010</json:string>
</doi>
<id>4408FD3559BF8E27046191B14AB80235D7D87C5A</id>
<score>0.015715938</score>
<fulltext>
<json:item>
<extension>pdf</extension>
<original>true</original>
<mimetype>application/pdf</mimetype>
<uri>https://api.istex.fr/document/4408FD3559BF8E27046191B14AB80235D7D87C5A/fulltext/pdf</uri>
</json:item>
<json:item>
<extension>zip</extension>
<original>false</original>
<mimetype>application/zip</mimetype>
<uri>https://api.istex.fr/document/4408FD3559BF8E27046191B14AB80235D7D87C5A/fulltext/zip</uri>
</json:item>
<istex:fulltextTEI uri="https://api.istex.fr/document/4408FD3559BF8E27046191B14AB80235D7D87C5A/fulltext/tei">
<teiHeader>
<fileDesc>
<titleStmt>
<title level="a" type="main" xml:lang="en">Factor and Subtest Discrepancies on the Differential Ability Scales</title>
<title level="a" type="sub" xml:lang="en">Examining Prevalence and Validity in PredictingAcademic Achievement</title>
</titleStmt>
<publicationStmt>
<authority>ISTEX</authority>
<publisher>Sage Publications</publisher>
<pubPlace>Sage CA: Thousand Oaks, CA</pubPlace>
<availability>
<p>SAGE</p>
</availability>
<date>2002</date>
</publicationStmt>
<sourceDesc>
<biblStruct type="inbook">
<analytic>
<title level="a" type="main" xml:lang="en">Factor and Subtest Discrepancies on the Differential Ability Scales</title>
<title level="a" type="sub" xml:lang="en">Examining Prevalence and Validity in PredictingAcademic Achievement</title>
<author xml:id="author-1">
<persName>
<forename type="first">Shoshana Y.</forename>
<surname>Kahana</surname>
</persName>
<email>syk4@po.cwru.edu</email>
<affiliation></affiliation>
<affiliation>syk4@po.cwru.edu.</affiliation>
</author>
<author xml:id="author-2">
<persName>
<forename type="first">Eric A.</forename>
<surname>Youngstrom</surname>
</persName>
<affiliation>Case Western Reserve University</affiliation>
<affiliation>Case Western Reserve University</affiliation>
</author>
<author xml:id="author-3">
<persName>
<forename type="first">Joseph J.</forename>
<surname>Glutting</surname>
</persName>
<affiliation>University of Delaware</affiliation>
<affiliation>University of Delaware</affiliation>
</author>
</analytic>
<monogr>
<title level="j">Assessment</title>
<idno type="pISSN">1073-1911</idno>
<idno type="eISSN">1552-3489</idno>
<imprint>
<publisher>Sage Publications</publisher>
<pubPlace>Sage CA: Thousand Oaks, CA</pubPlace>
<date type="published" when="2002-03"></date>
<biblScope unit="volume">9</biblScope>
<biblScope unit="issue">1</biblScope>
<biblScope unit="page" from="82">82</biblScope>
<biblScope unit="page" to="93">93</biblScope>
</imprint>
</monogr>
<idno type="istex">4408FD3559BF8E27046191B14AB80235D7D87C5A</idno>
<idno type="DOI">10.1177/1073191102009001010</idno>
<idno type="ArticleID">10.1177_1073191102009001010</idno>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<creation>
<date>2002</date>
</creation>
<langUsage>
<language ident="en">en</language>
</langUsage>
<abstract xml:lang="en">
<p>Past literature has largely ignored the population frequency of multivariate factor and subtest score discrepancies. Another limitation has been that statistical models imperfectly model the clinical assessment process, whereby significant discrepancies between both factors and subtests are included in predictions about an individual’s academic achievement. The present study examined these issues using a nationally representative sample (N = 1,185) completing the Differential Ability Scales. Results indicate that approximately 80% of children in a nonreferred sample show at least one statistically significant ability discrepancy. In addition, the global estimate of cognitive ability was the most parsimonious predictor of academic achievement, whereas information about ability discrepancies did not significantly improve prediction. Findings suggest that when predicting academic achievement, there is little value in interpreting cognitive scores beyond the global ability estimate.</p>
</abstract>
<textClass>
<keywords scheme="keyword">
<list>
<head>keywords</head>
<item>
<term>cognitive testing</term>
</item>
<item>
<term>predictions of academic achievement</term>
</item>
<item>
<term>global measure of intelligence</term>
</item>
<item>
<term>factor and subtest interpretation</term>
</item>
</list>
</keywords>
</textClass>
</profileDesc>
<revisionDesc>
<change when="2002-03">Published</change>
</revisionDesc>
</teiHeader>
</istex:fulltextTEI>
<json:item>
<extension>txt</extension>
<original>false</original>
<mimetype>text/plain</mimetype>
<uri>https://api.istex.fr/document/4408FD3559BF8E27046191B14AB80235D7D87C5A/fulltext/txt</uri>
</json:item>
</fulltext>
<metadata>
<istex:metadataXml wicri:clean="corpus sage not found" wicri:toSee="no header">
<istex:xmlDeclaration>version="1.0" encoding="UTF-8"</istex:xmlDeclaration>
<istex:docType PUBLIC="-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" URI="journalpublishing.dtd" name="istex:docType"></istex:docType>
<istex:document>
<article article-type="research-article" dtd-version="2.3" xml:lang="EN">
<front>
<journal-meta>
<journal-id journal-id-type="hwp">spasm</journal-id>
<journal-id journal-id-type="publisher-id">ASM</journal-id>
<journal-title>Assessment</journal-title>
<issn pub-type="ppub">1073-1911</issn>
<publisher>
<publisher-name>Sage Publications</publisher-name>
<publisher-loc>Sage CA: Thousand Oaks, CA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.1177/1073191102009001010</article-id>
<article-id pub-id-type="publisher-id">10.1177_1073191102009001010</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Articles</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Factor and Subtest Discrepancies on the Differential Ability Scales</article-title>
<subtitle>Examining Prevalence and Validity in Predicting Academic Achievement</subtitle>
</title-group>
<contrib-group>
<contrib contrib-type="author" xlink:type="simple">
<name name-style="western">
<surname>Kahana</surname>
<given-names>Shoshana Y.</given-names>
</name>
<aff>
<email xlink:type="simple">syk4@po.cwru.edu</email>
.</aff>
</contrib>
<contrib contrib-type="author" xlink:type="simple">
<name name-style="western">
<surname>Youngstrom</surname>
<given-names>Eric A.</given-names>
</name>
<aff>Case Western Reserve University</aff>
</contrib>
<contrib contrib-type="author" xlink:type="simple">
<name name-style="western">
<surname>Glutting</surname>
<given-names>Joseph J.</given-names>
</name>
<aff>University of Delaware</aff>
</contrib>
</contrib-group>
<pub-date pub-type="ppub">
<month>03</month>
<year>2002</year>
</pub-date>
<volume>9</volume>
<issue>1</issue>
<fpage>82</fpage>
<lpage>93</lpage>
<abstract>
<p>
<italic>Past literature has largely ignored the population frequency of multivariate factor and subtest score discrepancies. Another limitation has been that statistical models imperfectly model the clinical assessment process, whereby significant discrepancies between both factors and subtests are included in predictions about an individual’s academic achievement. The present study examined these issues using a nationally representative sample (</italic>
N
<italic>= 1,185) completing the Differential Ability Scales. Results indicate that approximately 80% of children in a nonreferred sample show at least one statistically significant ability discrepancy. In addition, the global estimate of cognitive ability was the most parsimonious predictor of academic achievement, whereas information about ability discrepancies did not significantly improve prediction. Findings suggest that when predicting academic achievement, there is little value in interpreting cognitive scores beyond the global ability estimate.</italic>
</p>
</abstract>
<kwd-group>
<kwd>cognitive testing</kwd>
<kwd>predictions of academic achievement</kwd>
<kwd>global measure of intelligence</kwd>
<kwd>factor and subtest interpretation</kwd>
</kwd-group>
<custom-meta-wrap>
<custom-meta xlink:type="simple">
<meta-name>sagemeta-type</meta-name>
<meta-value>Journal Article</meta-value>
</custom-meta>
<custom-meta xlink:type="simple">
<meta-name>search-text</meta-name>
<meta-value> ASSESSMENTKahana et al. / DISCREPANCIES ON THE DAS Factor and Subtest Discrepancies on the Differential Ability Scales Examining Prevalence and Validity in Predicting Academic Achievement Shoshana Y. Kahana Eric A. Youngstrom Case Western Reserve University Joseph J. Glutting University of Delaware Past literature has largely ignored the population frequency of multivariate factor and subtest score discrepancies. Another limitation has been that statistical models imperfectly model the clinical assessment process, whereby significant discrepancies between both fac- tors and subtests are included in predictions about an individual's academic achievement. The present study examined these issues using a nationally representative sample (N = 1,185)completingtheDifferentialAbilityScales.Resultsindicatethatapproximately80%of children in a nonreferred sample show at least one statistically significant ability discrep- ancy. In addition, the global estimate of cognitive ability was the most parsimonious predic- tor of academic achievement, whereas information about ability discrepancies did not significantly improve prediction. Findings suggest that when predicting academic achieve- ment,thereislittlevalueininterpretingcognitivescoresbeyondtheglobalabilityestimate. Keywords:cognitivetesting,predictionsofacademicachievement,globalmeasureofintelli- gence, factor and subtest interpretation The interpretation of cognitive abilities has significant implicationsfortheprognosticevaluationofanindividual. General cognitive ability predicts criteria such as scholas- tic achievement (Jensen, 1998), years of education (Jensen, 1998), and work-related success (Ceci & Williams, 1997; Jensen, 1998; Kaufman, 1994; Neisser et al., 1996). Many practicing clinicians, however, place less emphasis on a general cognitive ability construct and opt instead to inter- pret the more discrete measures of cognitive ability tests, such as factor and subtest scores, to glean information that is thought to be diagnostically useful and pertinent to treatment.The current standard of clinical practice employs the "top-down approach" to test score interpretation, a meth- odology that advocates the use of factors (Donders, 1996; Kamphaus, 1993; Kaufman, 1994; McGrew & Flanagan, 1998; Naglieri, 1993; Sattler, 1992, 2001) and subtests (Alfonso,Oakland,LaRocca,&Spanakos,2000;Gregory, 1999; Kamphaus, 1993; Kaufman, 1979, 1994; Kaufman & Lichtenberger, 1999; Prifitera, Weiss, & Saklofske, 1998; Sattler, 1992, 2001) to construct clinical formula- tions about the strengths and deficits associated with an in- dividual's performance. For example, the Processing Speed Index on the Wechsler Intelligence Scale for Children Third Edition (WISC-III) (Wechsler, 1991) may relate to Please address correspondence and reprint requests to Shoshana Y. Kahana, Case Western Reserve University, Department of Psy- chology, 11220 Bellflower Road, Cleveland, OH 44106-7123; e-mail: syk4@po.cwru.edu. Assessment, Volume 9, No. 1, March 2002 82-93 2002 Sage Publications attentional problems associated with information- processing deficits such as those manifest in attention- deficit hyperactivity disorder (Schwean, Saklofske, Yackulic, & Quinn, 1993). In addition, Lipsitz, Dworkin, and Erlenmeyer-Kimling (1993) found that the Compre- hension subtest on the WISC-Revised (WISC-R) (Wechs- ler, 1974) significantly correlated with social competence for normal participants in childhood. A recent special is- sue of School Psychology Quarterly (2000, Vol. 
15) dem- onstrates that interpretive approaches that incorporate both factor and subtest interpretation, such as the succes- sive levels method and profile analysis, are still very much practiced and actively encouraged for clinicians (Carroll, 2000; Naglieri, 2000; Pfeiffer, Reddy, Kletzel, Schmelzer, & Boyer, 2000; Pritchard, Livingston, Reynolds, & Mo- ses, 2000; Riccio & Hynd, 2000; Stanton & Reynolds, 2000; see Watkins, 2000, for commentary). This top-down approach is based on the premise that multiple abilities have more predictive power than a single general cognitive factor. Despite the potential clinical promise of differentiating among cognitive ability dimensions, the utility of these more discrete indices is dubious. In the past decade, sev- eral studies have demonstrated the questionable reliability and validity of interpreting the more specific measures of cognitive ability for predicting child functioning (Beebe, Pfiffner, & McBurnett, 2000; Glutting, Youngstrom, Ward, Ward, & Hale, 1997; McDermott & Glutting, 1997; Riccio, Cohen, Hall, & Ross, 1997; Youngstrom & Glutting, 2001; Youngstrom, Kogos, & Glutting, 1999). More specifically, proponents of the top-down approach and subtest analysis (e.g., Kamphaus, 1993; Kaufman, 1979, 1994; Sattler, 1992, 2001) have yet to prove the criterion-related and incremental validity of discrete indi- ces in predicting academic criteria beyond that offered by a global measure of cognitive ability. In practical terms, the primary standard for validity of diagnostic, score- based interpretations should be the degree to which they accurately predict future performance or prescribe a clear intervention (Glutting et al., 1997; Glutting, Watkins, & Youngstrom, in press; Gough, 1971). Current tests of ability and achievement have attained a level of precision whereby it is possible to identify reliable discrepancies between cognitive ability indices that occur at high rates in the population and, as a result, are likely to be diagnostically uninformative. Because contemporary interpretive practice encourages multiple comparisons across sets of factor and subtest scores, it is important to know the overall base rate of discrepancies (or the frequency of dis- crepancies between multiple variables) in the population. This information has not been previously reported for the Differential Ability Scales (DAS) (Elliot, 1990a). Thus,the first purpose of this article is to examine the overall fre- quency of significant ability discrepancies between factor and subtest scores from the DAS. In addition, the current literature on cognitive interpre- tations has utilized research designs that fail to accurately model the clinical assessment process. Previous investiga- tions of the incremental validity of the more discrete cog- nitive ability scores (e.g., Youngstrom et al., 1999) were limitedinthattheyusuallyselectedoremphasizedonedis- crete index (e.g., factor), as opposed to simultaneously considering several lower level indices, in predicting aca- demic achievement. In reality, however, current clinical interpretive practice typically encourages psychologists to compare many ability variables present on an IQ-test pro- tocol (e.g., possible factor-score comparisons, possible subtest-score comparisons) (Kaufman, 1994; Sattler, 1992, 2001). 
For example, assessment authorities such as Kaufman (1994) and Sattler (1992, 2001) provided inter- pretive tables that include common or shared abilities tapped by various subtests as well as what specific subtest discrepancies might indicate. More specifically pertaining to the DAS, Sattler (2001) clearly outlined the approaches to profile analysis for the DAS. In addition to evaluating the global estimate of intelligence and factor scores, the clinician is encouraged to evaluate within-factor differ- ences, evaluate differences between subtests and the mean core T-score, and compare each subtest T-score with the other subtest T-scores. It appears that the clinician is advised to interpret if not all then certainly most potential discrepancies between subtests. Whereas past studies have found that factor IQs retain no general advantage over conventional IQs in pre- dicting academic performance (Glutting et al., 1997; Youngstrometal.,1999),noresearchtodatehasexamined whether there is any predictive utility to factor and subtest discrepancies for the prediction of academic achievement on the DAS. Thus, the second purpose of this study is to model and examine the validity of current clinical assess- ment, whereby (a) both factors and subtests are used in predictingacademicachievementand,moreimportant,(b) through the use of interaction terms, to examine whether significant discrepancies between discrete indices are valid in predicting academic achievement (i.e., do they provide any incremental contributions beyond the global estimate of cognitive ability in predicting academic achievement). Many advocates of factor and subtest analysis acknowl- edge that global estimates of cognitive ability will make accurate predictions, particularly for children with evenly developed cognitive abilities (e.g., Kamphaus, 1993; Kaufman, 1994; Sattler, 1992). The question is whether children who display discrepancies across specific cogni- tive abilities will perform differently, on average, fromKahana et al. / DISCREPANCIES ON THE DAS 83 those who do not display either strengths or weaknesses in academic achievement. The DAS serves as a good instrument for this study. Re- search has consistently shown that both the general cogni- tive ability scores and the more discrete verbal ability and nonverbal reasoning conceptual abilities are measured well by the DAS (Keith, 1990). The DAS was a priori de- signed to measure multiple factors of cognitive ability in the belief that such measurement would provide clinically relevant information pertaining to diagnosis and treat- ment. In addition, the DAS contains multiple achievement tests that can serve as particularly effective criterion mea- sures for the concurrent prediction of academic achieve- ment (Elliot, 1990b, 1990c). The present investigation did not consider all possible patterns of discrepancy on the DAS. We chose to examine those discrepancies that current clinical interpretive prac- tice considers either theoretically motivated or empirically supported for predicting clinical or academic problems. Sattler (2001) provided a list of illustrative hypotheses for verbal ability and nonverbal reasoning discrepancies, ver- bal ability and spatial ability discrepancies, and for com- parisons of the Verbal, Nonverbal Reasoning, and Spatial Ability subtests. 
To choose factors that were related to spe- cific achievement criteria, we also examined the DAS manual's table of correlations (Table 9.37) between ability and achievement for all children in the standardization sample. We chose those factors that had the highest corre- lations with the achievement tests, such that nonverbal rea- soning correlated the highest with basic number skills (r = .59), verbal ability and nonverbal reasoning with spelling (bothcorrelatedr=.49),andverbalabilitywithwordread- ing (r = .59). Thus, the final choice of discrepancies to in- clude in the models predicting achievement blended clinical recommendations with existing data about ability- achievement correlations. METHOD Participants Participants were 1,185 children who completed the DAS during the national standardization of the Adjust- ment Scales for Children and Adolescents (McDermott, 1994). The sample was configured according to the 1988 to 1990 U.S. census and was stratified for age, gender, race/ethnicity, parent education, family structure, national region, and community size. Although the DAS standard- ization sample included preschoolers, the present study concentrated entirely on school-aged children, who ranged in age from 6 to 17 years, with a mean of 11.54 (SD = 3.41) years. Children from kindergarten through 12thgrade were included, with 3 participants marked as "other." There was a roughly equal representation of gender across grade, with 595 males and 590 females participating. The ethnicbreakdownofthesamplewassuchthat69.7%ofthe participants were Caucasian, 14.9% African American, 11.8% Hispanic, and 3.6% from other ethnic groups. In ac- cordance with the 1988 to 1990 U.S. census, three fourths of the sample (74.8%) came from families in which there was both a mother and father. Finally, 44.8% of the partici- pants were from a major metropolitan area, 35% from a minor metropolitan area, and 19.4% from a rural area. Procedure Two hundred and twenty-five individuals with formal training in individual assessment of cognitive ability ad- ministered the DAS and achievement subtests. Adminis- trators were either independently qualified to administer such tests or were appropriately supervised. The majority of the administrators were trained in DAS administration by project staff members at workshops conducted during the fall of 1986 and were required to submit two satisfac- tory practice cases before being permitted to conduct test- ing. Central project staff managers reviewed all case protocols as they were received to maintain adequate stan- dardizationandadministrationprocedures(Elliot,1990a). Measures Cognitive ability. The interpretive hierarchical struc- ture of the DAS begins with a global measure of intelli- gence called the General Conceptual Ability (GCA) (alpha = .95), a measure of general cognitive ability, analo- goustog,theconstructunderlyingintelligence.Ingeneral, the DAS uses a relatively small number of core subtests that have high g loadings for the calculations of the GCA score. The subtests cover a range of abilities and pro- cesses, including verbal ability and nonverbal reasoning, visual and auditory memory, language expression and comprehension, perceptual-motor skills, speed of infor- mationprocessing,andschoolachievementinessentialar- eas (Elliot, 1990a). 
The GCA shows good convergent validity with other measures of general ability, including WISC-R full scale IQ (FSIQ) (r = .84) and the Stanford- Binet Intelligence ScaleFourth Edition (Thorndike, Hagen, & Sattler, 1986) composite score (r = .88) (Elliot, 1990a). The GCA is composed of three factor scores: Verbal Ability, Nonverbal Reasoning, and Spatial Ability.1 Each factor score is defined by two subtests. The Word Defini- tions (alpha = .83) and Similarities (alpha = .79) subtests compose the Verbal Ability factor. The Nonverbal Rea- soning factor is formed by the Matrices (alpha = .82) and 84ASSESSMENT Sequential and Quantitative Reasoning (alpha = .85) subtests. The Recall of Designs (alpha = .84) and Pattern Construction (alpha = .91) subtests constitute the Spatial Ability factor. The Verbal Ability factor is a measure of complex ver- bal mental processing that includes acquired verbal con- cepts, verbal knowledge, reasoning, and a general knowledge base. It is strongly associated with other related measures, reporting a correlation of .84 with the WISC-R verbal IQ and a .72 with the WISC-R FSIQ (Elliot, 1990a). The Non- verbal Reasoning factor represents nonverbal and induc- tive reasoning and requires complex mental processing. It too shows good convergent validity with other measures, correlating .75 with the WISC-III Perceptual Organization Index (POI) and .78 with the WISC-III performance IQ (PIQ) (Elliot, 1990a). The Spatial Ability factor is related to visualization, spatial orientation, and visual-motor tasks. It shows strong associations with other performance- oriented tasks, correlating .82 with both the WISC-III PIQ and POI. The GCA and factor indices are expressed as standard scores, with population means of 100 and stan- dard deviations of 15. Achievement criteria. The DAS also consists of three individual achievement scales that were conormed with the ability subtests. They include Basic Number Skills (al- pha = .87), Spelling (alpha = .92), and Word Reading (al- pha = .92). Basic Number Skills focuses on the conceptsand skills that underlie basic competence in arithmetic cal- culation. Spelling examines the child's ability to produce correct spellings and includes a range of phonetically reg- ular and irregular words. Word Reading is an achievement test of the recognition and oral reading of single words. In general, each of these tests shows a consistently moderate to good pattern of convergent and divergent va- lidity with other individually and group administered achievement tests (r = .43-.68) (Elliot, 1990a). In addition, the three DAS achievement scales demonstrate moder- ately positive correlations with school performance, spe- cifically teacher-assigned school grades in Mathematics, Spelling, and Reading (Elliot, 1990a). The tendency of school grades to be less reliable than scores on standard- ized group achievement tests reduces the level of correla- tion between the two. Nevertheless, these correlations are higher than those obtained by other frequently used achieve- ment tests, such as the Wechsler Individual Achievement Test (Psychological Corporation, 1992) (r = .23-.46). RESULTS Descriptive Statistics Table 1 presents the descriptive statistics for all of the measures used in subsequent analyses. It is worth noting that all ability and achievement variables showed approxi- mately normal distributions (all skews between .05 and +.14, all kurtoses between .40 and +.08), and there were no univariate outliers with unusually extreme scores. 
Frequency of Discrepant Cognitive Abilities To estimate multivariate prevalence, we calculated crit- ical discrepancy scores for each of the following pairings: at the factor score level, Verbal Ability versus Nonverbal Reasoning,VerbalAbilityversusSpatialAbility,andNon- verbal Reasoning versus Spatial Ability and at the subtest level, Word minus Similarities, Design minus Pattern Construction, Matrices minus Sequential and Quantitative Reasoning.2 It is important to note that other pairings are possible (e.g., Similarities versus Matrices), but these were not included because current practice does not em- phasize direct comparison of subtests that fall on different ability factors (cf. Sattler, 1992). We then determined the percentage of participants showing discrepancies exceed- ing each critical value. As reported in Table 2, 373 partici- pants (31.5%) showed a reliable difference between Verbal Ability and Nonverbal Reasoning scores, whereas Verbal and Spatial Ability discrepancies were somewhat more common, with 474 youths (40.0%) exhibiting discrepan- cies. Finally, we calculated the cumulative number of dis-Kahana et al. / DISCREPANCIES ON THE DAS 85 TABLE 1 Descriptive Statistics for Cognitive Ability and Achievement Measures (N = 1,185) Standard VariableMeanDeviationRange Cognitive abilitya GCA100.5614.6755-145 Verbal factor100.1514.5455-140 Nonverbal Reasoning factor100.4414.9960-142 Spatial factor100.6814.4856-143 Cognitive subtestsb Matrices50.5010.0020-80 Similarities50.199.7720-80 Sequential and Quantitative Reasoning50.459.9020-79 Pattern Construction50.549.7721-80 Recall of Digits50.189.8420-80 Recall of Objects50.1410.3320-80 Recall of Designs50.879.6520-79 Achievement criteriaa Basic Number Skills100.4414.5357-145 Word Reading101.1715.0155-145 Spelling100.5014.8855-145 NOTE: GCA = General Conceptual Ability. a. Standard score metric, M = 100, SD = 15. b. T-score metric, M = 50, SD = 10. crepancies exhibited by each participant. Table 2 indicates that a staggering 80% of the nonreferred sample showed at least one discrepancy between factor or subtest scores when using a conservative 95% confidence interval ap- proach. When using the 90% confidence interval recom- mended for clinical applications (e.g., Kaufman, 1994; Sattler, 1992), 88.4% of youths exhibited at least one dis- crepancy. Incremental Value of Specific Cognitive Abilities in Predicting Achievement Multiple regression analyses tested the hypotheses that DAS factor or subtest scores would significantly improve prediction of academic achievement criteria even after controlling for GCA. These analyses are consistent with contemporary assessment practice that often incorporates informationaboutthepresenceofaclinicallyinterpretable discrepancy in underlying cognitive abilities. Particularly in situations where cognitive abilities are not evenly devel- oped, discrepancies between the more discrete indices of measurement, such as factors and subtests, will often be- come the focal point or main piece of interest in determin- ing a clinician's interpretation of the child. Authorities argue that when a child shows a substantial difference be- tween Verbal Ability and Nonverbal Reasoning, for exam- ple, then GCA would not be the optimal predictor of reading achievement (e.g., Kaufman, 1994; Sattler, 1992, 2001). Instead, the child's Verbal Ability should be the more accurate predictor because it more purely reflects the cognitive ability most involved in reading. 
Given the high frequency of statistically significant differences in the standardization sample (see Table 2), we defined clinically interpretable discrepancies as those having a bivariate prevalence rate of less than 10%. A number of writers have emphasized the differences between statistical significance and clinical rarity (Cahan, 1986; Glutting, McGrath, Kamphaus, & McDermott, 1992). Differences between scores may be statistically significant but not especially unusual or particularly meaningful in the population. The statistical significance of a discrepancy refers to the probability that the results are not merely a chance occurrence, but it does not describe how frequently a discrepancy of a given magnitude occurs in the normal population or whether the differences are so large that they are considered abnormal or rare.

Following this first set of results and employing statistical significance as a guideline, clinicians would identify some form of ability discrepancy or generate an interpretive hypothesis for close to 80% of the children in the United States. Thus, we decided that it might be more meaningful to look at those discrepancies that were both statistically significant and rare. We created two dummy codes for each participant, scored 0 if he or she did not show a rare or unusual discrepancy between factor scores (Verbal Ability-Nonverbal Reasoning or Verbal Ability-Spatial Ability) and scored 1 if he or she did. We also created dummy codes reflecting the presence or absence of rare subtest scatter between the two subtests composing the Verbal Ability factor (i.e., Word Definitions and Similarities, n = 105 with significant discrepancies, or 8.9% of the sample), the Nonverbal Reasoning factor (i.e., Matrices and Sequential and Quantitative Reasoning, n = 82, or 6.9% of the sample), and the Spatial Ability factor (i.e., Pattern Construction and Recall of Designs, n = 88, or 7.4% of the sample).

Next, we subtracted 100 from the GCA and the factor scores and 50 from each subtest score for each participant (centering them around their population means, as recommended for regression models using interaction terms). Interaction terms were created by multiplying each dummy code by its respective ability score. Regression models then tested the incremental validity of the factor scores, subtests, and discrepancy information using a block entry approach. For all three achievement criteria, the first block entered the GCA by itself. This model represents the most parsimonious approach, basing predicted academic achievement simply on general cognitive ability. The second block entered three factor-score predictors as a set, chosen on theoretical grounds for each achievement criterion. For Basic Number Skills, the second block of predictors included the centered Nonverbal Reasoning factor score, the dummy code indicating whether Nonverbal Reasoning was significantly different from Verbal Ability, and the interaction of the two. For Spelling and Word Reading achievement, the predictors were Verbal Ability, the dummy code for a significant Verbal-Nonverbal Reasoning discrepancy, and the interaction term.
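The centering, dummy-coding, and block-entry logic just described can be sketched in a few lines. The fragment below is a minimal illustration against simulated data (the DAS standardization file is not public), so the variable names, the simulated correlations, and the 9% discrepancy rate are placeholders rather than values from the study.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    rng = np.random.default_rng(0)
    n = 1185
    g = rng.normal(size=n)                                   # latent general ability
    df = pd.DataFrame({
        "gca": 100 + 15 * (0.9 * g + 0.44 * rng.normal(size=n)),
        "verbal": 100 + 15 * (0.8 * g + 0.6 * rng.normal(size=n)),
        "word_reading": 100 + 15 * (0.6 * g + 0.8 * rng.normal(size=n)),
    })
    df["gca_c"] = df["gca"] - 100                            # center at the population mean
    df["verbal_c"] = df["verbal"] - 100
    # Dummy code: 1 = rare Verbal-Nonverbal discrepancy (placeholder rule for illustration)
    df["rare_vn"] = (rng.uniform(size=n) < 0.09).astype(int)
    df["verbal_x_rare"] = df["verbal_c"] * df["rare_vn"]     # interaction term

    block1 = smf.ols("word_reading ~ gca_c", data=df).fit()  # GCA only
    block2 = smf.ols("word_reading ~ gca_c + verbal_c + rare_vn + verbal_x_rare", data=df).fit()
    print(anova_lm(block1, block2))                          # F test of the 3-df increment

The same pattern extends to the third block by adding the centered subtest scores, their discrepancy dummy code, and the corresponding interaction terms.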
The regression analyses were also repeated for each achievement criterion using Verbal and Spatial Ability discrepancies, resulting in a total of six regression models. Nonverbal Reasoning-Spatial Ability discrepancies were not examined because we are not aware of clinical guidelines for their interpretation, especially in the context of predicting academic achievement.

TABLE 2
Multivariate Frequency of Cognitive Ability Discrepancies (N = 1,185)

Number of        90% Confidence (Clinical Standard of Practice)    95% Confidence
Discrepancies    % of Sample    Cumulative %                       % of Sample    Cumulative %
6                0.2            0.2                                0.0            0.0
5                2.4            2.6                                0.7            0.7
4                11.1           13.7                               5.2            5.9
3                26.8           40.5                               19.2           25.1
2                28.2           68.7                               30.0           55.1
1                19.7           88.4                               25.1           80.3
0                11.6           100.0                              19.7           100.0

The interaction term was the most interesting component of the second block of predictors, for three reasons. Most important, the interaction operationalized the clinical view that the particular ability factor was most likely to be important for individuals with a marked strength or weakness on that ability. Conversely, previous research (e.g., Youngstrom et al., 1999) has already established that the main effect for each factor score does not provide substantial improvements in the prediction of academic criteria. Finally, the dummy codes were not expected to show significant effects because they blended children who showed weaknesses with those who showed strengths on a particular factor (whereas the interaction term included information about each child's specific performance on the factor, along with whether this performance constituted a clinically interpretable strength or weakness). The main effect and dummy code were still included in the regression model, even though they were conceptually uninteresting, because that is the recommended practice for using multiple regression to test interactions (Aiken & West, 1991; Cohen & Cohen, 1983).

The third block for the regression models entered the centered subtest scores, dummy codes for significant subtest discrepancies, and Subtest × Discrepancy interaction terms.3 The third block simultaneously entered all of the subtests and discrepancies for the factor scores used in the second block.4 For example, the regression model predicting Word Reading entered the centered scores for Word Definitions and Similarities, the dummy code for the discrepancy, and the interaction between both subtests and the discrepancy. Again, the interaction terms were the only components of conceptual interest in each block; the main effects were included only to follow established guidelines for regression analysis. The inclusion of the main effects did not substantially reduce power to detect interactions, as the main effects in even the most complicated regression models consumed only 5 degrees of freedom, leaving 1,175 degrees of freedom for model testing.

Table 3 presents the results for the regressions, grouped by achievement criterion. Regression coefficients are reported only for variables making a significant unique contribution or for the interaction terms, which are included because of their conceptual importance within the study. As the findings indicate, the GCA was the only variable to contribute significant, unique variance to the prediction of achievement in all of the various regression models.
Nonverbal Reasoning and Spatial Ability both made small but statistically significant incremental predictions for Basic Number Skills, as can be seen by looking at the tests of their respective main effects. Similarly, Verbal Ability provided a small but significant incremental improvement in predicting Word Reading and Spelling. Reported statistical probabilities are two-tailed and should be compared with a Bonferroni-adjusted critical value of p < .0083 to maintain overall alpha < .05 across the six models. Table 3 also reports the part correlations between each predictor and the achievement criterion. Squaring the part r indicates how much variance in achievement was uniquely explained by the factor score. The largest part r was .18, for the Verbal Ability factor predicting Word Reading, showing that less than 4% of the variance in any achievement criterion was uniquely accounted for by an ability score. None of the subtests provided any significant predictive increment when entered in Block 3, and none of the mean-centered factor scores continued to provide unique information once the constituent subtests were entered into the regression model. This is not surprising because the factor scores would be highly collinear with the subtests. Also as expected, the dummy-coded discrepancy variables did not provide any new information in predicting the achievement criteria (all ps > .05).

TABLE 3
Tests of Incremental Validity of Factor and Subtest Scores in Predicting Achievement Criteria for Individuals With Discrepant Cognitive Abilities

Basic Number Skills
  1. GCA only (R2 = .347***): GCA, B = .58***, part r = .59
  Verbal versus Nonverbal Reasoning
    2a. Factor discrepancies, 3 df (R2 increment = .017***): GCA, B = .36***, part r = .17; Nonverbal Reasoning, B = .25***, part r = .11; Nonverbal Reasoning × Discrepancy, B = .04, part r = .01
    3a. Subtest discrepancies, 5 df (R2 increment = .010**): GCA, B = .34***, part r = .15; Nonverbal Reasoning × Discrepancy, B = .06, part r = .02; Matrices × Discrepancy, B = .00, part r = .00; Sequential/Quantitative × Discrepancy, B = .11, part r = .02
  Verbal versus Spatial (a)
    2b. Factor discrepancies, 3 df (R2 increment = .015***): GCA, B = .77***, part r = .42; Spatial, B = .24***, part r = .12; Spatial × Discrepancy, B = .12, part r = .04

Word Reading
  1. GCA only (R2 = .356***): GCA, B = .61***, part r = .61
  Verbal versus Nonverbal Reasoning
    2a. Factor discrepancies, 3 df (R2 increment = .037***): GCA, B = .31***, part r = .16; Verbal, B = .37***, part r = .18; Verbal × Discrepancy, B = .10, part r = .03
    3a. Subtest discrepancies, 5 df (R2 increment = .011***): GCA, B = .31***, part r = .17; Verbal × Discrepancy, B = .09, part r = .03; Word Definitions × Discrepancy, B = .11, part r = .02; Similarities × Discrepancy, B = .07, part r = .02
  Verbal versus Spatial
    2b. Factor discrepancies, 3 df (R2 increment = .037***): GCA, B = .35***, part r = .19; Verbal, B = .31***, part r = .15; Verbal × Discrepancy, B = .09, part r = .03
    3b. Subtest discrepancies, 5 df (R2 increment = .012**): GCA, B = .35***, part r = .19; Verbal × Discrepancy, B = .09, part r = .03; Word Definitions × Discrepancy, B = .11, part r = .02; Similarities × Discrepancy, B = .07, part r = .02

Spelling
  1. GCA only (R2 = .273***): GCA, B = .54***, part r = .52
  Verbal versus Nonverbal Reasoning
    2a. Factor discrepancies, 3 df (R2 increment = .018***): GCA, B = .32***, part r = .17; Verbal, B = .27***, part r = .14; Verbal × Discrepancy, B = .24, part r = .07
    3a. Subtest discrepancies, 5 df (R2 increment = .012**): GCA, B = .32***, part r = .17; Verbal × Discrepancy, B = .23, part r = .06; Word Definitions × Discrepancy, B = .18, part r = .04; Similarities × Discrepancy, B = .08, part r = .02
  Verbal versus Spatial
    2b. Factor discrepancies, 3 df (R2 increment = .015***): GCA, B = .38***, part r = .20; Verbal, B = .19***, part r = .09; Verbal × Discrepancy, B = .07, part r = .03
    3b. Subtest discrepancies, 5 df (R2 increment = .013**): GCA, B = .38***, part r = .20; Verbal × Discrepancy, B = .06, part r = .02; Word Definitions × Discrepancy, B = .17, part r = .04; Similarities × Discrepancy, B = .09, part r = .02

NOTE: GCA = General Conceptual Ability. All analyses are based on N = 1,185. Final univariate tests of significance are based on 1, 1175 df. All interaction terms are reported regardless of statistical significance. Only main effects making a significant unique contribution to prediction are reported.
a. There were no subtest discrepancies that were considered appropriate to use in predicting Basic Number Skills when there was a Verbal Ability-Spatial Ability split. The subtests that were most logically connected to Basic Number Skills were subsumed in the Nonverbal Reasoning factor.
***p < .001, compare with a Bonferroni-adjusted critical value of p < .0024 to maintain overall alpha < .05 for 21 comparisons. **p < .05.

None of the Cognitive Ability × Discrepancy interactions were statistically significant when compared with a Bonferroni-adjusted critical value of p < .0024 to maintain overall alpha < .05 for 21 comparisons. Even though none met statistical criteria for interpretation, all 21 interaction terms are reported in Table 3. They represent the most direct test of the clinical hypothesis that prediction of achievement criteria should be adjusted when there are clinically significant discrepancies between the factor scores. Examination of Table 3 shows that the largest part r associated with an interaction term is .07, meaning that adjustments made on the basis of clinically significant ability discrepancies improved prediction of achievement criteria by, at best, 0.49%. These null findings are disappointing; however, they are unlikely to result from low statistical power. The tests of the interaction terms are based on 1, 1175 df, and using a two-tailed alpha of .05, statistical power was .80 to detect partial correlations of .082 or larger (Buchner, Faul, & Erdfelder, 1996).

Some researchers might take issue with the blocked hierarchical regression approach that we employed. Specifically, the high g loadings of the subtests and factors make it almost impossible to find additional unique contributions if the added variables bring in shared g as well as shared error variance. Obviously, there is a high degree of multicollinearity among the predictors as a consequence of global ability being derived from the underlying factor and subtest scores. However, in situations where variables are all highly interrelated, we would contend that more things (such as factor and subtest scores) will nearly always predict as well as, or even marginally better than, one thing (global ability). This phenomenon is precisely the rationale for why such multicollinearity is a violation of parsimony and not an asset (Glutting et al., in press). Nevertheless, we decided to examine whether the factor indices would outpredict the GCA when each was entered alone into a regression equation. As Table 4 indicates, even when entered alone, in almost all cases the GCA still outperforms the more discrete indices. Single-factor scores did not outpredict the GCA for any achievement criterion, with the exception of the Verbal Ability factor accounting for .002 more of the variance than the GCA for Word Reading.
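To make the part-correlation metric concrete: squaring a part (semipartial) correlation gives the proportion of criterion variance uniquely attributable to that predictor, so the two values highlighted above work out as simple arithmetic on the reported coefficients rather than any new analysis.

    Largest factor main effect: part r = .18, unique variance = .18^2 = .0324, or about 3.2% (hence "less than 4%").
    Largest interaction term: part r = .07, unique variance = .07^2 = .0049, or 0.49% at best.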
Alternate Tests of Clinical Interpretive Model

The regression models presented above represent a statistically conventional approach to evaluating interpretive strategy but might not exactly duplicate the logic recommended in clinical procedures. The regression approach uses the GCA to predict every child's achievement, then adjusts predictions based on one of the factor scores, and finally further adjusts scores when a significant discrepancy is present between the ability factor scores (by means of the interaction term). Similarly, information from the subtests is integrated after already controlling for the GCA, the factor score, the presence or absence of a cognitive discrepancy, and the interaction between factor and discrepancy. As Table 3 makes clear, these regression models become complicated; in the final stages they use 10 different pieces of information about the child to optimize prediction of achievement criteria.

Some authorities recommend substituting the factor score for the global cognitive ability estimate in cases where the cognitive ability scores actually are significantly disparate (e.g., Kaufman, 1994). In cases where there is reliable and statistically significant scatter among the factor scores, the regression approach would continue to utilize the GCA and augment it with additional information, whereas clinical authorities would opt to avoid interpreting the GCA and instead replace it with one of the factor scores. The logic is that (a) if there are marked differences in the cognitive abilities measured by the factor scores, then the global measure of ability is a potentially misleading aggregate, and (b) achievement criteria are likely to be predicted more accurately by a specific cognitive ability that is more directly related to the particular achievement task. For example, if a child has a GCA of 94 but a Verbal Ability factor score of 77 and a Nonverbal Reasoning score of 107, clinical authorities would generally recommend ignoring the GCA and using the Verbal Ability factor to predict reading achievement. Similarly, clinicians might consider using Nonverbal Reasoning to predict the child's likely performance in mathematics. The important point is that clinical guidelines are oriented toward the selection of the optimal piece of information rather than the combination of multiple pieces of information according to regression weights (especially not involving interaction terms). Although the use of formulas and other aids, such as signal detectability, the Bayesian approach, or decision analysis, can help to model decisions by a variety of regression approaches (Kleinmuntz, 1990), clinicians typically do not rely on such formulas and instead select one test score from several that are available and base their interpretations on that score.

To test the efficiency of actual clinical practice, we selected the optimal predictor variable for each achievement criterion. If an individual did not show any significant discrepancies among factor scores, then we used the GCA as the predictor. If the youth showed Verbal Ability strengths or weaknesses (as compared to either Nonverbal Reasoning or Spatial Ability), then we switched to using Verbal Ability as the predictor for the Word Reading and Spelling criteria. If a participant showed Nonverbal Reasoning-Verbal Ability or Spatial-Verbal Ability discrepancies, then we used Nonverbal Reasoning or Spatial Ability as the predictor of Basic Number Skills. If the youth showed discrepancies with both Nonverbal Reasoning and Spatial Ability, then we used the average of Nonverbal Reasoning and Spatial Ability as the predictor. Table 5 shows the correlation between each "clinically optimized" index and the achievement criterion. Table 5 also presents the correlation between the GCA and the same achievement criterion and the correlation between the GCA and the clinically optimized index.
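A minimal sketch of this selection rule follows. The encoding of discrepancies, the score labels, and the function name are hypothetical; the function simply mirrors the verbal guidelines above rather than any published scoring procedure.

    def clinically_optimized_predictor(scores, discrepancies, criterion):
        # scores: dict of index scores; discrepancies: set of rare factor contrasts
        if criterion in ("Word Reading", "Spelling"):
            if "Verbal-Nonverbal" in discrepancies or "Verbal-Spatial" in discrepancies:
                return scores["Verbal"]
            return scores["GCA"]
        if criterion == "Basic Number Skills":
            nv = "Verbal-Nonverbal" in discrepancies
            sp = "Verbal-Spatial" in discrepancies
            if nv and sp:
                return (scores["Nonverbal"] + scores["Spatial"]) / 2
            if nv:
                return scores["Nonverbal"]
            if sp:
                return scores["Spatial"]
            return scores["GCA"]
        return scores["GCA"]

    # The text's example child (GCA 94, Verbal 77, Nonverbal 107; the Spatial value is made up)
    child = {"GCA": 94, "Verbal": 77, "Nonverbal": 107, "Spatial": 101}
    print(clinically_optimized_predictor(child, {"Verbal-Nonverbal"}, "Word Reading"))         # 77
    print(clinically_optimized_predictor(child, {"Verbal-Nonverbal"}, "Basic Number Skills"))  # 107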
It is possible to test whether the correlations with achievement are significantly different from each other, using the t test of dependent correlations (as per Cohen & Cohen, 1983). For Word Reading and Spelling, there is no difference in the performance of the GCA versus the clinically optimized index. For Basic Number Skills, the clinically optimized index actually performs significantly worse than the GCA alone.

TABLE 4
Percentage of Variance Explained by GCA (Alone) and Factor (Alone) in Predicting Academic Achievement

Achievement Criterion     GCA     Factor
Basic Number Skills       34.7    17.6 (spatial); 32.9 (nonverbal)
Word Reading              35.6    35.8 (verbal)
Spelling                  27.3    24.6 (verbal)

NOTE: GCA = General Conceptual Ability.

DISCUSSION

Our results suggest that statistically significant discrepancies are quite frequent. The average child in the sample exhibited two "significant" discrepancies at the factor or subtest level. Nearly 4 children in 5 demonstrated at least one ability discrepancy when using a conservative 95% confidence approach, and more than 88% of children showed at least one discrepancy when employing the 90% standard recommended in many textbooks. What is particularly striking about this finding is that it comes from a representative, nonreferred sample. If the results are taken at face value, then nearly 80% of average American children would be identified as having some kind of discrepancy, classified as either a deficit or a strength. Thus, when incorporating information about discrepancies, it is important to consider the multivariate base rate in the population.

Multiple regression analyses showed that the GCA was the most parsimonious and robust predictor of all three forms of academic achievement. The global cognitive estimate continued to make a unique contribution to achievement prediction even when combined with a factor score, subtest scores, and information about the presence of discrepancies. Multiple regression analyses also failed to detect any instance where interactions between factor scores and reliable discrepancies (or subtests and discrepancies) significantly improved prediction of achievement. This finding was surprising because it represents a statistically sophisticated means of testing the clinical impression that specific abilities should outperform global ability estimates when an individual's specific abilities are not evenly developed. The failure to find a significant interaction is unlikely to be due to low statistical power, as the sample size was large enough to afford power > .90 to detect small effect sizes (e.g., r = .10). Null findings are also unlikely to be attributable to sampling bias, as the data come from the standardization sample of a published measure conforming closely to the 1988 to 1990 U.S. census data (McDermott, 1994).

In reality, clinicians do not combine multiple scores from within a test battery to best estimate a child's achievement performance but rather use interpretive guidelines to select one test score as optimal for a particular purpose. When comparing the individualized predictor approach to prediction using the GCA, the clinically optimized predictor performed either as well as the GCA or significantly worse (in the case of Basic Number Skills).
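For reference, the dependent-correlations comparison just summarized (and tabulated in Table 5, below) can be reproduced from the published correlations. The article cites Cohen and Cohen (1983) without quoting a formula, so the sketch assumes the Hotelling form of the test for two correlations that share a common variable; with N = 1,185 (df = 1,182) it recovers t values close to those reported.

    import math

    def t_dependent_r(r_xy, r_zy, r_xz, n):
        # Hotelling t for comparing r(x, y) with r(z, y) when x and z are themselves correlated; df = n - 3
        det = 1 - r_xy ** 2 - r_zy ** 2 - r_xz ** 2 + 2 * r_xy * r_zy * r_xz
        return (r_xy - r_zy) * math.sqrt((n - 3) * (1 + r_xz) / (2 * det))

    n = 1185
    # Arguments: r(GCA, criterion), r(clinical choice, criterion), r(GCA, clinical choice), from Table 5
    print(t_dependent_r(.597, .604, .848, n))   # Word Reading, about -0.55
    print(t_dependent_r(.589, .488, .850, n))   # Basic Number Skills, about 7.85
    print(t_dependent_r(.523, .504, .848, n))   # Spelling, about 1.40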
TABLE 5
Correlations Between Achievement Criteria and General Conceptual Ability or Clinically "Optimized" Index

Achievement Criterion     r(GCA)    r(Clinical Choice)    Nuisance r    t Test of Difference
Word Reading              .597      .604                  .848          0.55
Basic Number Skills       .589      .488                  .850          7.84****
Spelling                  .523      .504                  .848          1.40

NOTE: r(GCA) = correlation between each achievement criterion and General Conceptual Ability; r(Clinical Choice) = correlation between each achievement criterion and the clinically "optimized" index; nuisance r = correlation between the GCA and the respective clinically "optimized" index. All t tests are based on 1,182 df.
****p < 1.1 × 10^-14.

We would argue that "ties" should be awarded to the GCA: Reliance on the GCA is the simplest prediction model, and it concentrates clinical attention on the most reliable and well-validated score (for purposes of predicting academic achievement) from the test battery.

The failure to detect statistically significant improvements in the prediction of achievement, even when considering cases that show dramatic (i.e., 30+ point) discrepancies in underlying cognitive abilities, is surprising. Global ability estimates have drawn considerable criticism, and interpretive systems have devoted considerable energy to creating auxiliary or alternative assessment systems. What could explain the apparent robustness of the GCA and the relatively modest performance of the factor scores? Several possibilities include the following: the factor indices do not contain sufficient unique variance to provide accurate estimates of individual specific abilities as distinct from general ability (Youngstrom & Frazier, 2000); the specific cognitive abilities measured do not possess significant incremental validity for the achievement criteria studied here (although it is an empirical question whether they might demonstrate incremental validity for other criteria); statistically significant differences in ability are apparently commonplace in the population; and, finally, a substantial proportion of the apparently significant discrepancies may be either Type I errors (Silverstein, 1993) or artifacts of administrative error (Hanna, Bradley, & Holen, 1981). Another important consideration is the psychometric limitation of difference scores. Differences between correlated scores can show very low levels of reliability, even when the two tests being compared are highly reliable. This reflects the fact that difference scores possess minimal true score variance and almost entirely reflect measurement error (cf. Caruso, 2001). Relying on a single score such as the GCA, however, obviates this problem because there is no need to interpret any differences between scores.

In addition to the psychometric limitations, other compelling factors need to be considered when debating the use of more discrete indices. In scientific investigations, the law of parsimony, or having an explanation that invokes as few principles as necessary, is ideal. Interpretations that employ factor and subtest scores are not consonant with this principle unless they demonstrate marked incremental validity. In addition, the use of more discrete measures significantly lengthens the test administration process (Camara, Nathan, & Puente, 1998). Given the current public policy and managed health care milieu, there are very often severe time constraints and eroding reimbursement rates surrounding psychological assessments (Camara, Nathan, & Puente, 2000; Eisman et al., 1998; Groth-Marnat, 1999).
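The difference-score point can be made quantitative with the classical test theory expression for the reliability of a difference between two equally variable scores; this is a standard result rather than one derived in the article:

    r_D = ((r_xx + r_yy) / 2 - r_xy) / (1 - r_xy)

For two subtests that are each quite reliable (say r_xx = r_yy = .85) but correlate r_xy = .70 with each other, the difference score has reliability (.85 - .70) / (1 - .70) = .50, which illustrates how sharply reliability can drop for contrasts between highly correlated measures.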
Given these time and cost constraints, the gains from interpreting the more dubious discrete indices might not be commensurate with the expenditures involved in their use. Using more discrete indices to predict academic achievement, even in more specific content areas, leads to more complex models that provide meager dividends (e.g., Grossman & Johnson, 1982; Mishra, 1983).

There are several limitations to the generalizability of the present study. First, the findings are based solely on the DAS as a measure of cognitive ability and academic achievement. Although the DAS has good psychometric properties (e.g., good validity, high reliability), these results would benefit from further cross-validation using additional well-established measures of ability and achievement. Glutting et al. (1997) have provided some initial support for the current findings with similar results for the WISC-III sample. Second, the study emphasizes only the criteria of academic achievement. Given the reliability of factor scores and their inclusion in most top-down approaches, it might be helpful to examine whether the more discrete measures significantly or meaningfully relate to other criteria of interest besides achievement. Factor scores might become more important predictors of educational-vocational criteria as people begin to pursue more specialized educational and vocational training in young adulthood (Lubinski & Benbow, 2000). Third, it is important to recognize that all of the analyses examined here employ concurrent measures. It might be useful to explore whether factor scores provide any incremental predictive advantage beyond the GCA with longitudinal data (e.g., Moffitt, Caspi, Harkness, & Silva, 1993). Fourth, the sample is based on U.S. census information from more than a decade ago. Compared with the 1988 to 1990 U.S. census information, the 2000 census indicates some significant differences in certain demographic characteristics of the U.S. population. For example, the Caucasian share of the total population has decreased by 6.5%, whereas the Hispanic population has increased by 3.5% and currently composes more than 10% of the total population (U.S. Department of Commerce, 2000). These demographic changes may alter the generalizability of the current findings to certain ethnic groups. Finally, the results of the current study are based on a nonclinical population, and it is unclear how well they would generalize to other settings.

The present study has important clinical implications. Results strongly suggest that clinical assessments should concentrate on the most global assessment of cognitive ability when addressing referral questions pertaining to academic achievement. This accords well with other research suggesting that general intelligence is the most potent and parsimonious predictor of academic performance for K-12 students (Lubinski & Benbow, 2000). Interpretation of discrepancies between factors and subtests does not significantly help in predicting academic achievement, even in specific content areas, and results in models that are more complex, confusing, and time consuming. We are certainly not implying that a general estimate of cognitive ability is the only piece of information, or even the most important data point, in addressing the clinical needs of children.
It appears unlikely, however, that assessment of specific cognitive strengths and weaknesses on the DAS, using either factor or subtest scores, will uncover diagnostic information that could lead to more effective educational interventions. This view is consistent with the largely negative literature on aptitude-by-treatment interactions (Cronbach & Snow, 1977). If clinicians are intent on accurately predicting children's performance, then a brief and reliable assessment of general cognitive ability would appear to be sufficient. Evaluators can use an instrument that was specifically designed and normed as a brief ability test, such as the Wechsler Abbreviated Scale of Intelligence (Psychological Corporation, 1999) or the Wide Range Intelligence Test (Glutting, Adams, & Sheslow, 2000).

Examiners might also consider using a new, shorter version of the DAS that would include only three subtests. Either the Word Definitions (alpha = .83) or Similarities (alpha = .79) subtest could signify the Verbal Ability factor. These subtests are also compelling because each loads highly on g, as evidenced by correlations of r = .74 and r = .75, respectively, with the GCA. The Sequential and Quantitative Reasoning subtest could well represent the Nonverbal Reasoning factor, both because of its high internal consistency (alpha = .85) and its correlation with the GCA (r = .79). Similarly, the Pattern Construction subtest would strongly represent the Spatial Ability factor, with alpha = .91 and a correlation of r = .77 with the GCA. An important caveat is that future research will have to validate this shorter DAS form before it would be appropriate for clinicians to use. Ultimately, any abbreviated ability battery would free up time and resources for diagnostic assessment activities that are more likely to identify remediable weaknesses and to prescribe successful interventions (e.g., for referral questions involving reading difficulty, tests such as the Woodcock Reading Mastery Tests-Revised) (Woodcock, 1987).

NOTES

1. Elliot (1990a) called the three factors "clusters" in the manual, even though exploratory and confirmatory factor analyses were the empirical basis for generating the scales.

2. Discrepant scores were calculated using the geometric mean or difference score formula, where differences required for statistical significance are based on the standard errors of measurement for each index scale as well as the z score under the normal curve that is associated with the desired significance level. The formula is Difference Score = Z × √(SEma² + SEmb²).

3. See Note 2.

4. In the present study, we used interaction terms to examine if there was a differential predictive relationship of academic achievement between those individuals who demonstrated clinically significant or rare discrepancies between factor and subtest scores and those who did not. An alternative approach would be to take all participants with the discrepancy or discrepancies of interest and compare their academic scores with those of a subgroup having the same General Conceptual Abilities but no discrepancies. Supplemental analyses were conducted using this paired matching technique. Specifically, youths showing rare or clinically significant factor discrepancies (less than 5% population prevalence) were matched with controls, drawn from an epidemiological sample of 1,400, on overall cognitive ability and demographics.
Three academic achievement criteria were used (Word Reading, Number Skills, Spelling) with four groups showing ability discrepancies (Verbal Ability > Nonverbal Reasoning, Nonverbal Reasoning > Verbal Ability, Verbal Ability > Spatial Ability, Spatial Ability > Verbal Ability) and matched controls. The ns for each group ranged from 67 to 75, and t values fell between -3.406 and 1.941. Results indicate that no means showed reliable differences when compared with a Bonferroni-adjusted critical value of p < .0042 (correcting for 12 comparisons: three achievement scores with four matched groups), with the possible exception of strengths on Verbal Ability as compared to Nonverbal Reasoning being associated with modestly higher Word Reading (p < .001). These results are convergent with and serve to validate the findings of the present study.

REFERENCES

Aiken, L. S., & West, S. G. (1991). Multiple regression: Testing and interpreting interactions. Newbury Park: Sage.
Alfonso, V. C., Oakland, T. D., LaRocca, R., & Spanakos, A. (2000). The course on individual cognitive assessment. School Psychology Review, 29, 52-64.
Beebe, D. W., Pfiffner, L. J., & McBurnett, K. (2000). Evaluation of the validity of the Wechsler Intelligence Scale for Children-Third Edition Comprehension and Picture Arrangement subtests as measures of social intelligence. Psychological Assessment, 12, 97-101.
Buchner, A., Faul, F., & Erdfelder, E. (1996). G·Power: A priori, posthoc, and compromise power analyses for the Macintosh (Version 2.1.1). Trier, Germany: University of Trier.
Cahan, S. (1986). Significance testing of subtest score differences: The rules of the game. Journal of Psychoeducational Assessment, 4, 273-280.
Camara, W., Nathan, J., & Puente, A. (1998). Psychological test usage in professional psychology: Report of the APA practice and science directorates. Washington, DC: American Psychological Association.
Camara, W. J., Nathan, J. S., & Puente, A. E. (2000). Psychological test usage: Implications in professional psychology. Professional Psychology: Research and Practice, 31, 141-154.
Carroll, J. B. (2000). Commentary on profile analysis. School Psychology Quarterly, 15, 449-456.
Caruso, J. C. (2001). Increasing the reliability of the fluid/crystallized difference score from the Kaufman Adolescent and Adult Intelligence Test with reliable component analysis. Assessment, 8, 155-166.
Ceci, S. J., & Williams, W. M. (1997). Schooling, intelligence, and income. American Psychologist, 52, 1051-1058.
Cohen, J., & Cohen, P. (1983). Applied multiple regression/correlation analysis for the behavioral sciences (3rd ed.). Hillsdale, NJ: Lawrence Erlbaum.
Cronbach, L. J., & Snow, R. E. (1977). Aptitudes and instructional methods: A handbook for research on interactions. New York: Irvington.
Donders, J. (1996). Factor subtypes in the WISC-III standardization sample: Analysis of factor index scores. Psychological Assessment, 8, 312-318.
Eisman, E. J., Dies, R. R., Finn, S. E., Eyde, L. D., Kay, G. G., Kubiszyn, T. W., et al. (1998). Problems and limitations in the use of psychological assessment in contemporary health care delivery: Report of the Board of Professional Affairs Psychological Assessment Workgroup, part II. Washington, DC: American Psychological Association.
Elliot, C. D. (1990a). Differential Ability Scales: Introductory and technical handbook. San Antonio, TX: Psychological Corporation.
Elliot, C. D. (1990b). The nature and structure of children's abilities: Evidence from the Differential Ability Scales. Conference on Intelligence: Theories and practice (1990, Memphis, Tennessee). Journal of Psychoeducational Assessment, 8, 376-390.
Elliot, C. D. (1990c). The nature and structure of the DAS: Questioning the test's organizing model and use. Journal of Psychoeducational Assessment, 8, 406-411.
Glutting, J. J., Adams, W., & Sheslow, D. (2000). Wide Range Intelligence Test manual. Wilmington, DE: Wide Range.
Glutting, J. J., McGrath, E. A., Kamphaus, R. W., & McDermott, P. A. (1992). Taxonomy and validity of subtest profiles on the Kaufman Assessment Battery for Children. Journal of Special Education, 26, 85-115.
Glutting, J. J., Watkins, M., & Youngstrom, E. A. (in press). Multifactored and cross-battery assessments: Are they worth the effort? In C. R. Reynolds & R. Kamphaus (Eds.), Handbook of psychological and educational assessment of children (2nd ed.). New York: Guilford.
Glutting, J. J., Youngstrom, E. A., Ward, T., Ward, S., & Hale, R. L. (1997). Incremental efficacy of WISC-III factor scores in predicting achievement: What do they tell us? Psychological Assessment, 9, 295-301.
Gough, H. (1971). Some reflections on the meaning of psychodiagnosis. American Psychologist, 26, 106-187.
Gregory, R. J. (1999). Foundations of intellectual assessment: The WAIS-III and other tests in clinical practice. Boston: Allyn & Bacon.
Grossman, F. M., & Johnson, K. M. (1982). WISC-R factor scores as predictors of WRAT performance: A multivariate analysis. Psychology in the Schools, 19, 465-468.
Groth-Marnat, G. (1999). Financial efficacy of clinical assessment: Rational guidelines and issues for future research. Journal of Clinical Psychology, 55, 813-824.
Hanna, G. S., Bradley, F. O., & Holen, M. C. (1981). Estimating major sources of measurement error in individual intelligence scales: Taking our heads out of the sand. Journal of School Psychology, 19, 370-376.
Jensen, A. R. (1998). The g factor: The science of mental ability. Westport, CT: Praeger.
Kamphaus, R. W. (1993). Clinical assessment of children's intelligence. Boston: Allyn & Bacon.
Kaufman, A. S. (1979). Intelligent testing with the WISC-R. New York: John Wiley.
Kaufman, A. S. (1994). Intelligent testing with the WISC-III. New York: John Wiley.
Kaufman, A. S., & Lichtenberger, E. O. (1999). Essentials of WAIS-III assessment. New York: John Wiley.
Keith, T. Z. (1990). Confirmatory and hierarchical confirmatory analysis of the Differential Ability Scales. Journal of Psychoeducational Assessment, 8, 391-405.
Kleinmuntz, B. (1990). Why we still use our heads instead of formulas: Toward an integrative approach. Psychological Bulletin, 107, 296-310.
Lipsitz, J. D., Dworkin, R. H., & Erlenmeyer-Kimling, L. (1993). Wechsler Comprehension and Picture Arrangement subtests and social adjustment. Psychological Assessment, 5, 430-437.
Lubinski, D., & Benbow, C. P. (2000). States of excellence. American Psychologist, 55, 137-150.
McDermott, P. A. (1994). National profiles in youth psychopathology: Manual of Adjustment Scales for Children and Adolescents. Philadelphia, PA: Edumetric and Clinical Science.
McDermott, P. A., & Glutting, J. J. (1997). Informing stylistic learning behavior, disposition, and achievement through ability subtests--or more illusions of meaning? School Psychology Review, 26, 163-175.
McGrew, K. S., & Flanagan, D. P. (1998). The intelligence test desk reference (ITDR): Gf-Gc cross-battery assessment. Boston: Allyn & Bacon.
Mishra, S. P. (1983). Validity of WISC-R IQs and factor scores in predicting achievement for Mexican-American children. Psychology in the Schools, 20, 150-154.
Moffitt, T. E., Caspi, A., Harkness, A. R., & Silva, P. A. (1993). The natural history of change in intellectual performance: Who changes? How much? Is it meaningful? Journal of Child Psychology and Psychiatry, 14, 455-506.
Naglieri, J. A. (1993). Pairwise and ipsative comparisons of WISC-III IQ and index scores. Psychological Assessment, 5, 113-116.
Naglieri, J. A. (2000). Can profile analysis of ability tests work? An illustration using the PASS theory and CAS with an unselected cohort. School Psychology Quarterly, 15, 419-433.
Neisser, U., Boodoo, G., Bouchard, T. J., Boykin, A. W., Brody, N., Ceci, S. J., et al. (1996). Intelligence: Knowns and unknowns. American Psychologist, 51, 77-101.
Pfeiffer, S. I., Reddy, L. A., Kletzel, J. E., Schmelzer, E. R., & Boyer, L. A. (2000). The practitioner's view of IQ testing and profile analysis. School Psychology Quarterly, 15, 376-385.
Prifitera, A., Weiss, L. G., & Saklofske, D. H. (1998). The WISC-III in context. In A. Prifitera & D. H. Saklofske (Eds.), WISC-III clinical use and interpretation: Scientist-practitioner perspectives (pp. 1-39). New York: Academic Press.
Pritchard, D. A., Livingston, R. B., Reynolds, C. R., & Moses, J. A., Jr. (2000). Modal profiles for the WISC-III. School Psychology Quarterly, 15, 400-418.
Psychological Corporation. (1992). Wechsler Individual Achievement Test manual. San Antonio, TX: Author.
Psychological Corporation. (1999). Wechsler Abbreviated Scale of Intelligence manual. San Antonio, TX: Author.
Riccio, C. A., Cohen, M. J., Hall, J., & Ross, C. M. (1997). The third and fourth factors of the WISC-III: What they don't measure. Journal of Psychoeducational Assessment, 15, 27-39.
Riccio, C. A., & Hynd, G. W. (2000). Measurable biological substrates to verbal-performance differences in Wechsler scores. School Psychology Quarterly, 15, 386-399.
Sattler, J. (1992). Assessment of children (3rd ed.). San Diego, CA: Author.
Sattler, J. (2001). Assessment of children: Cognitive applications (4th ed.). San Diego, CA: Author.
Schwean, V. L., Saklofske, D. H., Yackulic, R. A., & Quinn, D. (1993). WISC-III performance of ADHD children. In B. A. Bracken & R. S. McCallum (Eds.), Wechsler Intelligence Scale for Children (3rd ed., pp. 56-70). Brandon, VT: Clinical Psychology Publishing.
Silverstein, A. B. (1993). Type I, Type II, and other types of errors in pattern analysis. Psychological Assessment, 5, 72-74.
Stanton, H. C., & Reynolds, C. R. (2000). Configural frequency analysis as a method of determining Wechsler profile types. School Psychology Quarterly, 15, 434-448.
Thorndike, R. L., Hagen, E. P., & Sattler, J. M. (1986). Stanford-Binet Intelligence Scale-Fourth Edition. Chicago: Riverside.
U.S. Department of Commerce. (2000). Profile of general demographic characteristics for the United States (Current Population Reports). Washington, DC: Bureau of the Census.
Watkins, M. W. (2000). Cognitive profile analysis: A shared professional myth. School Psychology Quarterly, 15, 465-479.
Wechsler, D. (1974). Wechsler Intelligence Scale for Children-Revised Edition. San Antonio, TX: Psychological Corporation.
Wechsler, D. (1991). Manual for the Wechsler Intelligence Scale for Children-Third Edition. San Antonio, TX: Psychological Corporation.
Woodcock, R. W. (1987). Woodcock Reading Mastery Tests-Revised: Examiner's manual. Circle Pines, MN: American Guidance Service.
Youngstrom, E., & Frazier, T. W. (2000, December). Evidence and implications of over-factoring on commercial tests of cognitive ability. Paper presented at the Annual Meeting of the International Society for Intelligence Research, Cleveland, OH.
Youngstrom, E. A., & Glutting, J. J. (2001). Individual strengths and weaknesses on factor scores from the Differential Ability Scales: Validity in predicting concurrent achievement and behavioral criteria. Manuscript submitted for publication.
Youngstrom, E. A., Kogos, J. L., & Glutting, J. (1999). Incremental efficacy of Differential Ability Scales factor scores in predicting individual achievement criteria. School Psychology Quarterly, 14, 26-39.

Shoshana Y. Kahana is currently a graduate student in the clinical psychology doctoral program at Case Western Reserve University. She holds a B.A. in Psychology from the University of Pennsylvania. Her current research interests include the interpretation of cognitive and academic achievement performance in addition to the relationship between maternal affect and ratings of child functioning.

Eric A. Youngstrom, Ph.D., is currently an assistant professor at Case Western Reserve University. His research interests are in measurement and clinical assessment as well as the appropriate interpretation of individual test data. In addition, his research also examines the use of clinical instruments and the integration of multiple sources of data to assess individuals' emotional experiences.

Joseph J. Glutting, Ph.D., is currently a professor in the School of Education at the University of Delaware. His research interests include issues related to school psychology, psychoeducational assessment, and educational measurement.
</meta-value>
</custom-meta>
</custom-meta-wrap>
</article-meta>
</front>
<back>
<notes>
<p>1. Elliot (1990a) called the three factors “clusters” in the manual, even though exploratory and confirmatory factor analyses were the empirical basis for generating the scales.</p>
<p>2. Discrepant scores were calculated using the geometric mean or difference score formula, where differences required for statistical significance are based on the standard errors of measurement for each index scale as well as the
<italic>z</italic>
score under the normal curve that is associated with the desired significance level. The formula is Difference Score = Z × √(SE<sub>ma</sub><sup>2</sup> + SE<sub>mb</sub><sup>2</sup>).</p>
<p>3. See Note 2.</p>
<p>4. In the present study, we used interaction terms to examine if there was a differential predictive relationship of academic achievement between those individuals who demonstrated clinically significant or rare discrepancies between factor and subtest scores and those who did not. An alternative approach would be to take all participants with the discrepancy or discrepancies of interest and compare their academic scores with those of a subgroup having the same General Conceptual Abilities but no discrepancies. Supplemental analyses were conducted using this paired matching technique. Specifically, youths showing rare or clinically significant factor discrepancies (less than 5% population prevalence) were matched with controls, drawn from an epidemiological sample of 1,400, on overall cognitive ability and demographics. Three academic achievement criteria were used (Word Reading, Number Skills, Spelling) with four groups showing ability discrepancies (Verbal Ability > Nonverbal Reasoning, Nonverbal Reasoning > Verbal Ability, Verbal Ability > Spatial Ability, Spatial Ability > Verbal Ability) and matched controls. The
<italic>n</italic>
s for each group ranged from 67 to 75, and
<italic>t</italic>
values fell between −3.406 and 1.941. Results indicate that no means showed reliable differences when compared with a Bonferroni-adjusted critical value of
<italic>p</italic>
< .0042 (correcting for 12 comparisons—three achievement scores with four matched groups), with the possible exception of strengths on Verbal Ability as compared to Nonverbal Reasoning being associated with modestly higher Word Reading (
<italic>p</italic>
< .001). These results are convergent with and serve to validate the findings of the present study.</p>
</notes>
<ref-list>
<ref>
<citation citation-type="book" xlink:type="simple">
<name name-style="western">
<surname>Aiken, L. S.</surname>
</name>
, &
<name name-style="western">
<surname>West, S. G.</surname>
</name>
(
<year>1991</year>
).
<source>Multiple regression: Testing and interpreting interactions</source>
.
<publisher-loc>Newbury Park</publisher-loc>
:
<publisher-name>Sage</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="journal" xlink:type="simple">
<name name-style="western">
<surname>Alfonso, V. C.</surname>
</name>
,
<name name-style="western">
<surname>Oakland, T. D.</surname>
</name>
,
<name name-style="western">
<surname>LaRocca, R.</surname>
</name>
, &
<name name-style="western">
<surname>Spanakos, A.</surname>
</name>
(
<year>2000</year>
).
<article-title>The course on individual cognitive assessment</article-title>
.
<source>School Psychology Review</source>
,
<volume>29</volume>
,
<fpage>52</fpage>
-
<lpage>64</lpage>
.</citation>
</ref>
<ref>
<citation citation-type="journal" xlink:type="simple">
<name name-style="western">
<surname>Beebe, D. W.</surname>
</name>
,
<name name-style="western">
<surname>Pfiffner, L. J.</surname>
</name>
, &
<name name-style="western">
<surname>McBurnett, K.</surname>
</name>
(
<year>2000</year>
).
<article-title>Evaluation of the validity of the Wechsler Intelligence Scale for Children-Third Edition Comprehension and Picture Arrangement subtests as measures of social intelligence</article-title>
.
<source>Psychological Assessment</source>
,
<volume>12</volume>
,
<fpage>97</fpage>
-
<lpage>101</lpage>
.</citation>
</ref>
<ref>
<citation citation-type="book" xlink:type="simple">
<name name-style="western">
<surname>Buchner, A.</surname>
</name>
,
<name name-style="western">
<surname>Faul, F.</surname>
</name>
, &
<name name-style="western">
<surname>Erdfelder, E.</surname>
</name>
(
<year>1996</year>
).
<source>G·Power: A priori, posthoc, and compromise power analyses for the Macintosh (Version 2.1.1)</source>
.
<publisher-loc>Trier, Germany</publisher-loc>
:
<publisher-name>University of Trier</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="journal" xlink:type="simple">
<name name-style="western">
<surname>Cahan, S.</surname>
</name>
(
<year>1986</year>
).
<article-title>Significance testing of subtest score differences: The rules of the game</article-title>
.
<source>Journal of Psychoeducational Assessment</source>
,
<volume>4</volume>
,
<fpage>273</fpage>
-
<lpage>280</lpage>
.</citation>
</ref>
<ref>
<citation citation-type="book" xlink:type="simple">
<name name-style="western">
<surname>Camara, W.</surname>
</name>
,
<name name-style="western">
<surname>Nathan, J.</surname>
</name>
, &
<name name-style="western">
<surname>Puente, A.</surname>
</name>
(
<year>1998</year>
).
<source>Psychological test usage in professional psychology: Report of the APA practice and science directorates</source>
.
<publisher-loc>Washington, DC</publisher-loc>
:
<publisher-name>American Psychological Association</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="journal" xlink:type="simple">
<name name-style="western">
<surname>Camara, W. J.</surname>
</name>
,
<name name-style="western">
<surname>Nathan, J. S.</surname>
</name>
, &
<name name-style="western">
<surname>Puente, A. E.</surname>
</name>
(
<year>2000</year>
).
<article-title>Psychological test usage: Implications in professional psychology</article-title>
.
<source>Professional Psychology: Research and Practice</source>
,
<volume>31</volume>
,
<fpage>141</fpage>
-
<lpage>154</lpage>
.</citation>
</ref>
<ref>
<citation citation-type="journal" xlink:type="simple">
<name name-style="western">
<surname>Carroll, J. B.</surname>
</name>
(
<year>2000</year>
).
<article-title>Commentary on profile analysis</article-title>
.
<source>School Psychology Quarterly</source>
,
<volume>15</volume>
,
<fpage>449</fpage>
-
<lpage>456</lpage>
.</citation>
</ref>
<ref>
<citation citation-type="journal" xlink:type="simple">
<name name-style="western">
<surname>Caruso, J. C.</surname>
</name>
(
<year>2001</year>
).
<article-title>Increasing the reliability of the fluid/crystallized difference score from the Kaufman Adolescent and Adult Intelligence Test with reliable component analysis</article-title>
.
<source>Assessment</source>
,
<volume>8</volume>
,
<fpage>155</fpage>
-
<lpage>166</lpage>
.</citation>
</ref>
<ref>
<citation citation-type="journal" xlink:type="simple">
<name name-style="western">
<surname>Ceci, S. J.</surname>
</name>
, &
<name name-style="western">
<surname>Williams, W. M.</surname>
</name>
(
<year>1997</year>
).
<article-title>Schooling, intelligence, and income</article-title>
.
<source>American Psychologist</source>
,
<volume>52</volume>
,
<fpage>1051</fpage>
-
<lpage>1058</lpage>
.</citation>
</ref>
<ref>
<citation citation-type="book" xlink:type="simple">
<name name-style="western">
<surname>Cohen, J.</surname>
</name>
, &
<name name-style="western">
<surname>Cohen, P.</surname>
</name>
(
<year>1983</year>
).
<source>Applied multiple regression/correlation analysis for the behavioral sciences</source>
(
<edition>3rd ed.</edition>
).
<publisher-loc>Hillsdale, NJ</publisher-loc>
:
<publisher-name>Lawrence Erlbaum</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="book" xlink:type="simple">
<name name-style="western">
<surname>Cronbach, L. J.</surname>
</name>
, &
<name name-style="western">
<surname>Snow, R. E.</surname>
</name>
(
<year>1977</year>
).
<source>Aptitudes and instructional methods: A handbook for research on interactions</source>
.
<publisher-loc>New York</publisher-loc>
:
<publisher-name>Irvington</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="journal" xlink:type="simple">
<name name-style="western">
<surname>Donders, J.</surname>
</name>
(
<year>1996</year>
).
<article-title>Factor subtypes in the WISC-III standardization sample: Analysis of factor index scores</article-title>
.
<source>Psychological Assessment</source>
,
<volume>8</volume>
,
<fpage>312</fpage>
-
<lpage>318</lpage>
.</citation>
</ref>
<ref>
<citation citation-type="book" xlink:type="simple">
<name name-style="western">
<surname>Eisman, E. J.</surname>
</name>
,
<name name-style="western">
<surname>Dies, R. R.</surname>
</name>
,
<name name-style="western">
<surname>Finn, S. E.</surname>
</name>
,
<name name-style="western">
<surname>Eyde, L. D.</surname>
</name>
,
<name name-style="western">
<surname>Kay, G. G.</surname>
</name>
,
<name name-style="western">
<surname>Kubiszyn, T. W.</surname>
</name>
, et al. (
<year>1998</year>
).
<source>Problems and limitations in the use of psychological assessment in contemporary health care delivery: Report of the Board of Professional Affairs Psychological Assessment Workgroup, part II</source>
.
<publisher-loc>Washington, DC</publisher-loc>
:
<publisher-name>American Psychological Association</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="book" xlink:type="simple">
<name name-style="western">
<surname>Elliot, C. D.</surname>
</name>
(
<year>1990a</year>
).
<source>Differential Ability Scales: Introductory and technical handbook</source>
.
<publisher-loc>San Antonio, TX</publisher-loc>
:
<publisher-name>Psychological Corporation</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="journal" xlink:type="simple">
<name name-style="western">
<surname>Elliot, C. D.</surname>
</name>
(
<year>1990b</year>
).
<article-title>The nature and structure of children’s abilities: Evidence from the Differential Ability Scales. Conference on Intelligence: Theories and practice (1990, Memphis, Tennessee)</article-title>
.
<source>Journal of Psychoeducational Assessment</source>
,
<volume>8</volume>
,
<fpage>376</fpage>
-
<lpage>390</lpage>
.</citation>
</ref>
<ref>
<citation citation-type="journal" xlink:type="simple">
<name name-style="western">
<surname>Elliot, C. D.</surname>
</name>
(
<year>1990c</year>
).
<article-title>The nature and structure of the DAS: Questioning the test’s organizing model and use</article-title>
.
<source>Journal of Psychoeducational Assessment</source>
,
<volume>8</volume>
,
<fpage>406</fpage>
-
<lpage>411</lpage>
.</citation>
</ref>
<ref>
<citation citation-type="book" xlink:type="simple">
<name name-style="western">
<surname>Glutting, J. J.</surname>
</name>
,
<name name-style="western">
<surname>Adams, W.</surname>
</name>
, &
<name name-style="western">
<surname>Sheslow, D.</surname>
</name>
(
<year>2000</year>
).
<source>Wide Range Intelligence Test manual</source>
.
<publisher-loc>Wilmington, DE</publisher-loc>
:
<publisher-name>Wide Range</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="journal" xlink:type="simple">
<name name-style="western">
<surname>Glutting, J. J.</surname>
</name>
,
<name name-style="western">
<surname>McGrath, E. A.</surname>
</name>
,
<name name-style="western">
<surname>Kamphaus, R. W.</surname>
</name>
, &
<name name-style="western">
<surname>McDermott, P. A.</surname>
</name>
(
<year>1992</year>
).
<article-title>Taxonomy and validity of subtest profiles on the Kaufman Assessment Battery for Children</article-title>
.
<source>Journal of Special Education</source>
,
<volume>26</volume>
,
<fpage>85</fpage>
-
<lpage>115</lpage>
.</citation>
</ref>
<ref>
<citation citation-type="book" xlink:type="simple">
<name name-style="western">
<surname>Glutting, J. J.</surname>
</name>
,
<name name-style="western">
<surname>Watkins, M.</surname>
</name>
, &
<name name-style="western">
<surname>Youngstrom, E. A.</surname>
</name>
(in press).
<article-title>Multifactored and cross-battery assessments: Are they worth the effort?</article-title>
In
<name name-style="western">
<surname>C. R. Reynolds</surname>
</name>
&
<name name-style="western">
<surname>R. Kamphaus</surname>
</name>
(Eds.),
<source>Handbook of psychological and educational assessment of children</source>
(
<edition>2nd ed.</edition>
).
<publisher-loc>New York</publisher-loc>
:
<publisher-name>Guilford</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="journal" xlink:type="simple">
<name name-style="western">
<surname>Glutting, J. J.</surname>
</name>
,
<name name-style="western">
<surname>Youngstrom, E. A.</surname>
</name>
,
<name name-style="western">
<surname>Ward, T.</surname>
</name>
,
<name name-style="western">
<surname>Ward, S.</surname>
</name>
, &
<name name-style="western">
<surname>Hale, R. L.</surname>
</name>
(
<year>1997</year>
).
<article-title>Incremental efficacy of WISC-III factor scores in predicting achievement: What do they tell us?</article-title>
<source>Psychological Assessment</source>
,
<volume>9</volume>
,
<fpage>295</fpage>
-
<lpage>301</lpage>
.</citation>
</ref>
<ref>
<citation citation-type="journal" xlink:type="simple">
<name name-style="western">
<surname>Gough, H.</surname>
</name>
(
<year>1971</year>
).
<article-title>Some reflections on the meaning of psychodiagnosis</article-title>
.
<source>American Psychologist</source>
,
<volume>26</volume>
,
<fpage>106</fpage>
-
<lpage>187</lpage>
.</citation>
</ref>
<ref>
<citation citation-type="book" xlink:type="simple">
<name name-style="western">
<surname>Gregory, R. J.</surname>
</name>
(
<year>1999</year>
).
<source>Foundations of intellectual assessment: The WAIS-III and other tests in clinical practice</source>
.
<publisher-loc>Boston</publisher-loc>
:
<publisher-name>Allyn & Bacon</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="journal" xlink:type="simple">
<name name-style="western">
<surname>Grossman, F. M.</surname>
</name>
, &
<name name-style="western">
<surname>Johnson, K. M.</surname>
</name>
(
<year>1982</year>
).
<article-title>WISC-R factor scores as predictors of WRAT performance: A multivariate analysis</article-title>
.
<source>Psychology in the Schools</source>
,
<volume>19</volume>
,
<fpage>465</fpage>
-
<lpage>468</lpage>
.</citation>
</ref>
<ref>
<citation citation-type="journal" xlink:type="simple">
<name name-style="western">
<surname>Groth-Marnat, G.</surname>
</name>
(
<year>1999</year>
).
<article-title>Financial efficacy of clinical assessment: Rational guidelines and issues for future research</article-title>
.
<source>Journal of Clinical Psychology</source>
,
<volume>55</volume>
,
<fpage>813</fpage>
-
<lpage>824</lpage>
.</citation>
</ref>
<ref>
<citation citation-type="journal" xlink:type="simple">
<name name-style="western">
<surname>Hanna, G. S.</surname>
</name>
,
<name name-style="western">
<surname>Bradley, F. O.</surname>
</name>
, &
<name name-style="western">
<surname>Holen, M. C.</surname>
</name>
(
<year>1981</year>
).
<article-title>Estimating major sources of measurement error in individual intelligence scales: Taking our heads out of the sand</article-title>
.
<source>Journal of School Psychology</source>
,
<volume>19</volume>
,
<fpage>370</fpage>
-
<lpage>376</lpage>
.</citation>
</ref>
<ref>
<citation citation-type="book" xlink:type="simple">
<name name-style="western">
<surname>Jensen, A. R.</surname>
</name>
(
<year>1998</year>
).
<source>The g factor: The science of mental ability</source>
.
<publisher-loc>Westport, CT</publisher-loc>
:
<publisher-name>Praeger</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="book" xlink:type="simple">
<name name-style="western">
<surname>Kamphaus, R. W.</surname>
</name>
(
<year>1993</year>
).
<source>Clinical assessment of children’s intelligence</source>
.
<publisher-loc>Boston</publisher-loc>
:
<publisher-name>Allyn & Bacon</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="book" xlink:type="simple">
<name name-style="western">
<surname>Kaufman, A. S.</surname>
</name>
(
<year>1979</year>
).
<source>Intelligent testing with the WISC-R</source>
.
<publisher-loc>New York</publisher-loc>
:
<publisher-name>John Wiley</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="book" xlink:type="simple">
<name name-style="western">
<surname>Kaufman, A. S.</surname>
</name>
(
<year>1994</year>
).
<source>Intelligent testing with the WISC-III</source>
.
<publisher-loc>New York</publisher-loc>
:
<publisher-name>John Wiley</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="book" xlink:type="simple">
<name name-style="western">
<surname>Kaufman, A. S.</surname>
</name>
, &
<name name-style="western">
<surname>Lichtenberger, E. O.</surname>
</name>
(
<year>1999</year>
).
<source>Essentials of WAIS-III assessment</source>
.
<publisher-loc>New York</publisher-loc>
:
<publisher-name>John Wiley</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="journal" xlink:type="simple">
<name name-style="western">
<surname>Keith, T. Z.</surname>
</name>
(
<year>1990</year>
).
<article-title>Confirmatory and hierarchical confirmatory analysis of the Differential Ability Scales</article-title>
.
<source>Journal of Psychoeducational Assessment</source>
,
<volume>8</volume>
,
<fpage>391</fpage>
-
<lpage>405</lpage>
.</citation>
</ref>
<ref>
<citation citation-type="journal" xlink:type="simple">
<name name-style="western">
<surname>Kleinmuntz, B.</surname>
</name>
(
<year>1990</year>
).
<article-title>Why we still use our heads instead of formulas: Toward an integrative approach</article-title>
.
<source>Psychological Bulletin</source>
,
<volume>107</volume>
,
<fpage>296</fpage>
-
<lpage>310</lpage>
.</citation>
</ref>
<ref>
<citation citation-type="journal" xlink:type="simple">
<name name-style="western">
<surname>Lipsitz, J. D.</surname>
</name>
,
<name name-style="western">
<surname>Dworkin, R. H.</surname>
</name>
, &
<name name-style="western">
<surname>Erlenmeyer-Kimling, L.</surname>
</name>
(
<year>1993</year>
).
<article-title>Wechsler Comprehension and Picture Arrangement subtests and social adjustment</article-title>
.
<source>Psychological Assessment</source>
,
<volume>5</volume>
,
<fpage>430</fpage>
-
<lpage>437</lpage>
.</citation>
</ref>
<ref>
<citation citation-type="journal" xlink:type="simple">
<name name-style="western">
<surname>Lubinski, D.</surname>
</name>
, &
<name name-style="western">
<surname>Benbow, C. P.</surname>
</name>
(
<year>2000</year>
).
<article-title>States of excellence</article-title>
.
<source>American Psychologist</source>
,
<volume>55</volume>
,
<fpage>137</fpage>
-
<lpage>150</lpage>
.</citation>
</ref>
<ref>
<citation citation-type="book" xlink:type="simple">
<name name-style="western">
<surname>McDermott, P. A.</surname>
</name>
(
<year>1994</year>
).
<source>National profiles in youth psychopathology: Manual of Adjustment Scales for Children and Adolescents</source>
.
<publisher-loc>Philadelphia, PA</publisher-loc>
:
<publisher-name>Edumetric and Clinical Science</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="journal" xlink:type="simple">
<name name-style="western">
<surname>McDermott, P. A.</surname>
</name>
, &
<name name-style="western">
<surname>Glutting, J. J.</surname>
</name>
(
<year>1997</year>
).
<article-title>Informing stylistic learning behavior, disposition, and achievement through ability subtests—or more illusions of meaning?</article-title>
<source>School Psychology Review</source>
,
<volume>26</volume>
,
<fpage>163</fpage>
-
<lpage>175</lpage>
.</citation>
</ref>
<ref>
<citation citation-type="book" xlink:type="simple">
<name name-style="western">
<surname>McGrew, K. S.</surname>
</name>
, &
<name name-style="western">
<surname>Flanagan, D. P.</surname>
</name>
(
<year>1998</year>
).
<source>The intelligence test desk reference (ITDR): Gf-Gc cross-battery assessment</source>
.
<publisher-loc>Boston</publisher-loc>
:
<publisher-name>Allyn & Bacon</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="journal" xlink:type="simple">
<name name-style="western">
<surname>Mishra, S. P.</surname>
</name>
(
<year>1983</year>
).
<article-title>Validity of WISC-R IQs and factor scores in predicting achievement for Mexican-American children</article-title>
.
<source>Psychology in the Schools</source>
,
<volume>20</volume>
,
<fpage>150</fpage>
-
<lpage>154</lpage>
.</citation>
</ref>
<ref>
<citation citation-type="journal" xlink:type="simple">
<name name-style="western">
<surname>Moffitt, T. E.</surname>
</name>
,
<name name-style="western">
<surname>Caspi, A.</surname>
</name>
,
<name name-style="western">
<surname>Harkness, A. R.</surname>
</name>
, &
<name name-style="western">
<surname>Silva, P. A.</surname>
</name>
(
<year>1993</year>
).
<article-title>The natural history of change in intellectual performance: Who changes? How much? Is it meaningful?</article-title>
<source>Journal of Child Psychology and Psychiatry</source>
,
<volume>34</volume>
,
<fpage>455</fpage>
-
<lpage>506</lpage>
.</citation>
</ref>
<ref>
<citation citation-type="journal" xlink:type="simple">
<name name-style="western">
<surname>Naglieri, J. A.</surname>
</name>
(
<year>1993</year>
).
<article-title>Pairwise and ipsative comparisons of WISC-III IQ and index scores</article-title>
.
<source>Psychological Assessment</source>
,
<volume>5</volume>
,
<fpage>113</fpage>
-
<lpage>116</lpage>
.</citation>
</ref>
<ref>
<citation citation-type="journal" xlink:type="simple">
<name name-style="western">
<surname>Naglieri, J. A.</surname>
</name>
(
<year>2000</year>
).
<article-title>Can profile analysis of ability tests work? An illustration using the PASS theory and CAS with an unselected cohort</article-title>
.
<source>School Psychology Quarterly</source>
,
<volume>15</volume>
,
<fpage>419</fpage>
-
<lpage>433</lpage>
.</citation>
</ref>
<ref>
<citation citation-type="journal" xlink:type="simple">
<name name-style="western">
<surname>Neisser, U.</surname>
</name>
,
<name name-style="western">
<surname>Boodoo, G.</surname>
</name>
,
<name name-style="western">
<surname>Bouchard, T. J.</surname>
</name>
,
<name name-style="western">
<surname>Boykin, A. W.</surname>
</name>
,
<name name-style="western">
<surname>Brody, N.</surname>
</name>
,
<name name-style="western">
<surname>Ceci, S. J.</surname>
</name>
, et al. (
<year>1996</year>
).
<article-title>Intelligence: Knowns and unknowns</article-title>
.
<source>American Psychologist</source>
,
<volume>51</volume>
,
<fpage>77</fpage>
-
<lpage>101</lpage>
.</citation>
</ref>
<ref>
<citation citation-type="journal" xlink:type="simple">
<name name-style="western">
<surname>Pfeiffer, S. I.</surname>
</name>
,
<name name-style="western">
<surname>Reddy, L. A.</surname>
</name>
,
<name name-style="western">
<surname>Kletzel, J. E.</surname>
</name>
,
<name name-style="western">
<surname>Schmelzer, E. R.</surname>
</name>
, &
<name name-style="western">
<surname>Boyer, L. A.</surname>
</name>
(
<year>2000</year>
).
<article-title>The practitioner’s view of IQ testing and profile analysis</article-title>
.
<source>School Psychology Quarterly</source>
,
<volume>15</volume>
,
<fpage>376</fpage>
-
<lpage>385</lpage>
.</citation>
</ref>
<ref>
<citation citation-type="book" xlink:type="simple">
<name name-style="western">
<surname>Prifitera, A.</surname>
</name>
,
<name name-style="western">
<surname>Weiss, L. G.</surname>
</name>
, &
<name name-style="western">
<surname>Saklofske, D. H.</surname>
</name>
(
<year>1998</year>
).
<article-title>The WISC-III in context</article-title>
. In
<name name-style="western">
<surname>A. Prifitera</surname>
</name>
&
<name name-style="western">
<surname>D. H. Saklofske</surname>
</name>
(Eds.),
<source>WISC-III clinical use and interpretation: Scientist-practitioner perspectives</source>
(pp.
<fpage>1</fpage>
-
<lpage>39</lpage>
).
<publisher-loc>New York</publisher-loc>
:
<publisher-name>Academic Press</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="journal" xlink:type="simple">
<name name-style="western">
<surname>Pritchard, D. A.</surname>
</name>
,
<name name-style="western">
<surname>Livingston, R. B.</surname>
</name>
,
<name name-style="western">
<surname>Reynolds, C. R.</surname>
</name>
, &
<name name-style="western">
<surname>Moses, J. A., Jr.</surname>
</name>
(
<year>2000</year>
).
<article-title>Modal profiles for the WISC-III</article-title>
.
<source>School Psychology Quarterly</source>
,
<volume>15</volume>
,
<fpage>400</fpage>
-
<lpage>418</lpage>
.</citation>
</ref>
<ref>
<citation citation-type="book" xlink:type="simple">
<name name-style="western">
<surname>Psychological Corporation</surname>
</name>
. (
<year>1992</year>
).
<source>Wechsler Individual Achievement Test manual</source>
.
<publisher-loc>San Antonio, TX</publisher-loc>
:
<publisher-name>Author</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="book" xlink:type="simple">
<name name-style="western">
<surname>Psychological Corporation</surname>
</name>
. (
<year>1999</year>
).
<source>Wechsler Abbreviated Scale of Intelligence manual</source>
.
<publisher-loc>San Antonio, TX</publisher-loc>
:
<publisher-name>Author</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="journal" xlink:type="simple">
<name name-style="western">
<surname>Riccio, C. A.</surname>
</name>
,
<name name-style="western">
<surname>Cohen, M. J.</surname>
</name>
,
<name name-style="western">
<surname>Hall, J.</surname>
</name>
, &
<name name-style="western">
<surname>Ross, C. M.</surname>
</name>
(
<year>1997</year>
).
<article-title>The third and fourth factors of the WISC-III: What they don’t measure</article-title>
.
<source>Journal of Psychoeducational Assessment</source>
,
<volume>15</volume>
,
<fpage>27</fpage>
-
<lpage>39</lpage>
.</citation>
</ref>
<ref>
<citation citation-type="journal" xlink:type="simple">
<name name-style="western">
<surname>Riccio, C. A.</surname>
</name>
, &
<name name-style="western">
<surname>Hynd, G. W.</surname>
</name>
(
<year>2000</year>
).
<article-title>Measurable biological substrates to verbal-performance differences in Wechsler scores</article-title>
.
<source>School Psychology Quarterly</source>
,
<volume>15</volume>
,
<fpage>386</fpage>
-
<lpage>399</lpage>
.</citation>
</ref>
<ref>
<citation citation-type="book" xlink:type="simple">
<name name-style="western">
<surname>Sattler, J.</surname>
</name>
(
<year>1992</year>
).
<source>Assessment of children</source>
(
<edition>3rd ed.</edition>
).
<publisher-loc>San Diego, CA</publisher-loc>
:
<publisher-name>Author</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="book" xlink:type="simple">
<name name-style="western">
<surname>Sattler, J.</surname>
</name>
(
<year>2001</year>
).
<source>Assessment of children: Cognitive applications</source>
(
<edition>4th ed.</edition>
).
<publisher-loc>San Diego, CA</publisher-loc>
:
<publisher-name>Author</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="book" xlink:type="simple">
<name name-style="western">
<surname>Schwean, V. L.</surname>
</name>
,
<name name-style="western">
<surname>Saklofske, D. H.</surname>
</name>
,
<name name-style="western">
<surname>Yackulic, R. A.</surname>
</name>
, &
<name name-style="western">
<surname>Quinn, D.</surname>
</name>
(
<year>1993</year>
).
<article-title>WISC-III performance of ADHD children</article-title>
. In
<name name-style="western">
<surname>B. A. Bracken</surname>
</name>
&
<name name-style="western">
<surname>R. S. McCallum</surname>
</name>
(Eds.),
<source>Wechsler Intelligence Scale for Children</source>
(
<edition>3rd ed.</edition>
, pp.
<fpage>56</fpage>
-
<lpage>70</lpage>
).
<publisher-loc>Brandon, VT</publisher-loc>
:
<publisher-name>Clinical Psychology Publishing</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="journal" xlink:type="simple">
<name name-style="western">
<surname>Silverstein, A. B.</surname>
</name>
(
<year>1993</year>
).
<article-title>Type I, Type II, and other types of errors in pattern analysis</article-title>
.
<source>Psychological Assessment</source>
,
<volume>5</volume>
,
<fpage>72</fpage>
-
<lpage>74</lpage>
.</citation>
</ref>
<ref>
<citation citation-type="journal" xlink:type="simple">
<name name-style="western">
<surname>Stanton, H. C.</surname>
</name>
, &
<name name-style="western">
<surname>Reynolds, C. R.</surname>
</name>
(
<year>2000</year>
).
<article-title>Configural frequency analysis as a method of determining Wechsler profile types</article-title>
.
<source>School Psychology Quarterly</source>
,
<volume>15</volume>
,
<fpage>434</fpage>
-
<lpage>448</lpage>
.</citation>
</ref>
<ref>
<citation citation-type="book" xlink:type="simple">
<name name-style="western">
<surname>Thorndike, R. L.</surname>
</name>
,
<name name-style="western">
<surname>Hagen, E. P.</surname>
</name>
, &
<name name-style="western">
<surname>Sattler, J. M.</surname>
</name>
(
<year>1986</year>
).
<source>Stanford-Binet Intelligence Scale—Fourth Edition</source>
.
<publisher-loc>Chicago</publisher-loc>
:
<publisher-name>Riverside</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="book" xlink:type="simple">
<name name-style="western">
<surname>U.S. Department of Commerce</surname>
</name>
. (
<year>2000</year>
).
<source>Profile of general demographic characteristics for the United States</source>
(Current Population Reports).
<publisher-loc>Washington, DC</publisher-loc>
:
<publisher-name>Bureau of the Census</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="journal" xlink:type="simple">
<name name-style="western">
<surname>Watkins, M. W.</surname>
</name>
(
<year>2000</year>
).
<article-title>Cognitive profile analysis: A shared professional myth</article-title>
.
<source>School Psychology Quarterly</source>
,
<volume>15</volume>
,
<fpage>465</fpage>
-
<lpage>479</lpage>
.</citation>
</ref>
<ref>
<citation citation-type="book" xlink:type="simple">
<name name-style="western">
<surname>Wechsler, D.</surname>
</name>
(
<year>1974</year>
).
<source>Wechsler Intelligence Scale for Children—Revised Edition</source>
.
<publisher-loc>San Antonio, TX</publisher-loc>
:
<publisher-name>Psychological Corporation</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="book" xlink:type="simple">
<name name-style="western">
<surname>Wechsler, D.</surname>
</name>
(
<year>1991</year>
).
<source>Manual for the Wechsler Intelligence Scale for Children—Third Edition</source>
.
<publisher-loc>San Antonio, TX</publisher-loc>
:
<publisher-name>Psychological Corporation</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="book" xlink:type="simple">
<name name-style="western">
<surname>Woodcock, R. W.</surname>
</name>
(
<year>1987</year>
).
<source>Woodcock Reading Mastery Tests—Revised: Examiner’s manual</source>
.
<publisher-loc>Circle Pines, MN</publisher-loc>
:
<publisher-name>American Guidance Service</publisher-name>
.</citation>
</ref>
<ref>
<citation citation-type="confproc" xlink:type="simple">
<name name-style="western">
<surname>Youngstrom, E.</surname>
</name>
, &
<name name-style="western">
<surname>Frazier, T. W.</surname>
</name>
(
<conf-date>2000, December</conf-date>
).
<article-title>Evidence and implications of over-factoring on commercial tests of cognitive ability</article-title>
. Paper presented at the
<conf-name>Annual Meeting of the International Society for Intelligence Research</conf-name>
,
<conf-loc>Cleveland, OH</conf-loc>
.</citation>
</ref>
<ref>
<citation citation-type="other" xlink:type="simple">Youngstrom, E. A., & Glutting, J. J. (2001).
<italic>Individual strengths and weaknesses on factor scores from the Differential Ability Scales: Validity in predicting concurrent achievement and behavioral criteria</italic>
. Manuscript submitted for publication.</citation>
</ref>
<ref>
<citation citation-type="journal" xlink:type="simple">
<name name-style="western">
<surname>Youngstrom, E. A.</surname>
</name>
,
<name name-style="western">
<surname>Kogos, J. L.</surname>
</name>
, &
<name name-style="western">
<surname>Glutting, J.</surname>
</name>
(
<year>1999</year>
).
<article-title>Incremental efficacy of Differential Ability Scales factor scores in predicting individual achievement criteria</article-title>
.
<source>School Psychology Quarterly</source>
,
<volume>14</volume>
,
<fpage>26</fpage>
-
<lpage>39</lpage>
.</citation>
</ref>
</ref-list>
</back>
</article>
</istex:document>
</istex:metadataXml>
<mods version="3.6">
<titleInfo lang="en">
<title>Factor and Subtest Discrepancies on the Differential Ability Scales</title>
<subTitle>Examining Prevalence and Validity in Predicting Academic Achievement</subTitle>
</titleInfo>
<titleInfo type="alternative" lang="en" contentType="CDATA">
<title>Factor and Subtest Discrepancies on the Differential Ability Scales</title>
<subTitle>Examining Prevalence and Validity in Predicting Academic Achievement</subTitle>
</titleInfo>
<name type="personal">
<namePart type="given">Shoshana Y.</namePart>
<namePart type="family">Kahana</namePart>
<affiliation>E-mail: syk4@po.cwru.edu</affiliation>
<role>
<roleTerm type="text">author</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Eric A.</namePart>
<namePart type="family">Youngstrom</namePart>
<affiliation>Case Western Reserve University</affiliation>
<role>
<roleTerm type="text">author</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Joseph J.</namePart>
<namePart type="family">Glutting</namePart>
<affiliation>University of Delaware</affiliation>
<role>
<roleTerm type="text">author</roleTerm>
</role>
</name>
<typeOfResource>text</typeOfResource>
<genre type="research-article" displayLabel="research-article"></genre>
<originInfo>
<publisher>Sage Publications</publisher>
<place>
<placeTerm type="text">Sage CA: Thousand Oaks, CA</placeTerm>
</place>
<dateIssued encoding="w3cdtf">2002-03</dateIssued>
<copyrightDate encoding="w3cdtf">2002</copyrightDate>
</originInfo>
<language>
<languageTerm type="code" authority="iso639-2b">eng</languageTerm>
<languageTerm type="code" authority="rfc3066">en</languageTerm>
</language>
<physicalDescription>
<internetMediaType>text/html</internetMediaType>
</physicalDescription>
<abstract lang="en">Past literature has largely ignored the population frequency of multivariate factor and subtest score discrepancies. Another limitation has been that statistical models imperfectly model the clinical assessment process, whereby significant discrepancies between both factors and subtests are included in predictions about an individual’s academic achievement. The present study examined these issues using a nationally representative sample (N = 1,185) completing the Differential Ability Scales. Results indicate that approximately 80% of children in a nonreferred sample show at least one statistically significant ability discrepancy. In addition, the global estimate of cognitive ability was the most parsimonious predictor of academic achievement, whereas information about ability discrepancies did not significantly improve prediction. Findings suggest that when predicting academic achievement, there is little value in interpreting cognitive scores beyond the global ability estimate.</abstract>
<subject>
<genre>keywords</genre>
<topic>cognitive testing</topic>
<topic>predictions of academic achievement</topic>
<topic>global measure of intelligence</topic>
<topic>factor and subtest interpretation</topic>
</subject>
<relatedItem type="host">
<titleInfo>
<title>Assessment</title>
</titleInfo>
<genre type="journal">journal</genre>
<identifier type="ISSN">1073-1911</identifier>
<identifier type="eISSN">1552-3489</identifier>
<identifier type="PublisherID">ASM</identifier>
<identifier type="PublisherID-hwp">spasm</identifier>
<part>
<date>2002</date>
<detail type="volume">
<caption>vol.</caption>
<number>9</number>
</detail>
<detail type="issue">
<caption>no.</caption>
<number>1</number>
</detail>
<extent unit="pages">
<start>82</start>
<end>93</end>
</extent>
</part>
</relatedItem>
<identifier type="istex">4408FD3559BF8E27046191B14AB80235D7D87C5A</identifier>
<identifier type="DOI">10.1177/1073191102009001010</identifier>
<identifier type="ArticleID">10.1177_1073191102009001010</identifier>
<recordInfo>
<recordContentSource>SAGE</recordContentSource>
</recordInfo>
</mods>
</metadata>
<serie></serie>
</istex>
</record>

To work with this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Wicri/Rhénanie/explor/UnivTrevesV1/Data/Istex/Corpus
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 001376 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Istex/Corpus/biblio.hfd -nk 001376 | SxmlIndent | more
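As an illustration only (a minimal sketch assuming the environment variables above are set and that HfdSelect and SxmlIndent are available on the PATH), the same record can be filtered with standard Unix tools; the grep pattern below simply matches the DOI and English-title elements visible in this record and is not a Dilib feature:

HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 001376 | SxmlIndent | grep -E '<idno type="doi">|<title xml:lang="en">'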

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Wicri/Rhénanie
   |area=    UnivTrevesV1
   |flux=    Istex
   |étape=   Corpus
   |type=    RBID
   |clé=     ISTEX:4408FD3559BF8E27046191B14AB80235D7D87C5A
   |texte=   Factor and Subtest Discrepancies on the Differential Ability Scales
}}

Wicri

This area was generated with Dilib version V0.6.31.
Data generation: Sat Jul 22 16:29:01 2017. Site generation: Wed Feb 28 14:55:37 2024