Article

The Internal Structure of the WISC-V in Chile: Exploratory and Confirmatory Factor Analyses of the 15 Subtests

by Marcela Rodríguez-Cancino * and Andrés Concha-Salgado
Department of Psychology, Universidad de La Frontera, Temuco 4811322, Chile
* Author to whom correspondence should be addressed.
J. Intell. 2024, 12(11), 105; https://doi.org/10.3390/jintelligence12110105
Submission received: 23 August 2024 / Revised: 3 October 2024 / Accepted: 24 October 2024 / Published: 25 October 2024
(This article belongs to the Section Contributions to the Measurement of Intelligence)

Abstract

The WISC-V is a widely used scale in clinical and educational settings in Chile. Given that its use guides critical decision-making for children and adolescents, it is essential to have evidence of its psychometric properties, including validity based on internal structure. This study analyzed the factor structure of the WISC-V through an exploratory (EFA) and confirmatory (CFA) approach in a sample of 853 children and adolescents between 6 and 16 years of age, considering age subgroups. We obtained evidence favoring the four-factor structure in the EFA, with a clearer organization in the 15–16 age group. In the confirmatory stage, the best four- and five-factor models showed factor loadings greater than 0.4, except for one subtest in the processing speed domain in the 6–8 age group. Internal consistency ranged from acceptable to good for the best two models. The results support the use of hierarchical factor structures of four and five factors, which offer specific advantages and disadvantages discussed in the article. The implications of these findings for both professional psychology and future research are discussed.

1. Introduction

Professionals who use educational or psychological tests must ensure that the assessments they perform with them have a sound scientific basis that supports the conclusions and decisions that emerge from their administration (Sireci and Benítez 2023). In the clinical or educational context, where decisions relevant to the support of children and adolescents must be made, it is essential to have rigorous information on the psychometric properties of the most commonly used tests to ensure that their use is appropriate, ethical, and fair (APA 2020; deLeyer-Tiarks et al. 2024; ITC 2013; Muñiz et al. 2015; Muñiz and Fonseca-Pedrero 2019; Vinet et al. 2023).
In this regard, the Standards for Educational and Psychological Testing (AERA et al. 2018) indicate that it is crucial to show evidence of validity when examining the psychometric properties of a test, since it demonstrates the extent to which evidence and theory support its intended application.
Internal structure validation is a crucial aspect of determining the validity of a test. It offers insights into the construct being tested, its functioning, and its structure (Leong et al. 2020). Examining the consistency between the items and the underlying dimensions of the test helps determine if the proposed interpretation of its scores is supported (Manzi et al. 2019; Sireci and Benítez 2023).
The evaluation of dimensionality (factor structure) is a way of gathering evidence about its internal structure. This process entails comparing the hypothesized factor structures with those acquired by applying suitable confirmatory statistical methods (e.g., confirmatory factor analysis). The results obtained through such an analysis must then be discussed in terms of their consistency with the theory underlying the test (Brenlla et al. 2023; Iliescu et al. 2024; Sireci 2020).
In the clinical or educational setting, one of the most widely used tests worldwide is the Wechsler Intelligence Scale for Children (WISC), developed in the United States and adapted to different cultural settings in Europe and Latin America (Forns and Amador 2017; Kaufman et al. 2016; McGill et al. 2020; Niileksela and Reynolds 2019; Rosas et al. 2022). Over more than 75 years of research, this scale has demonstrated its clinical utility for a variety of purposes, whether in the identification of an intellectual disability, specific learning disorders, clinical interventions, or neuropsychological assessment (Arango-Lasprilla et al. 2017; Brue and Wilmshurst 2016; Echavarría-Ramírez and Tirapu-Ustárroz 2021; Estrada 2022; Hebben and Milberg 2009; Wechsler 2014b).
Since its creation in 1949, the scale (see Table 1) has undergone a series of revisions and adjustments to accurately represent and accommodate the cultural and technological advancements that have occurred with each successive generation (Niileksela and Reynolds 2019; Rosas et al. 2022; Wechsler 2014b; Weiss et al. 2019). The WISC-V is the latest available version of this scale, and it includes improvements in its theoretical foundations, psychometric properties, clinical utility, and administration formats (Canivez et al. 2017; Rosas et al. 2022; Wechsler 2014b).

1.1. Factor Structure of the WISC-V

The factor structure of the WISC is one of the aspects that has been modified based on these revisions, and given that it underpins the interpretation of its scores, its psychometric exploration is of great importance to ensure its use in different contexts (Dombrowski et al. 2022). According to the Technical and Interpretative Manual of the fifth American edition of the scale (Wechsler 2014b), and as seen in Table 1, the internal structure of WISC has progressed from the understanding of a general intellectual ability model (FSIQ) composed only of a verbal IQ (VIQ) and a performance IQ (PIQ) to hierarchical factor models that include a general factor and four or five cognitive domains.
Decisions about the structure of the WISC-V were based on neurocognitive research, neurodevelopmental theories, and structural intelligence models (Forns and Amador 2017; Kahan and Salvo 2022). According to Wechsler (2014b), there is now broad agreement on hierarchical intelligence models, which identify a general intelligence factor at the top and several broad, related, but distinguishable skills at a lower level. Although there are different models, many agree that verbal comprehension, visuospatial reasoning, fluid reasoning, working memory, and processing speed should be included as essential components. According to Flanagan and Alfonso (2017), Reynolds and Keith (2017), and Wilson et al. (2023), the structure of the WISC-V is closely aligned with the Cattell-Horn-Carroll (CHC) framework, which describes a three-stratum model of intelligence, with the general intelligence factor at the top (a second-order factor, like the FSIQ), broad abilities in the middle (first-order factors, like the primary index scales), and narrow, more specific abilities at the bottom.
The latest edition of the scale (WISC-V) included changes compared to the structure of the previous version (WISC-IV), which had a hierarchical factorial configuration with a second-order factor called general intelligence (FSIQ) or g, and four first-order factors called verbal comprehension (VC), perceptual reasoning (PR), working memory (WM), and processing speed (PS). To enhance the understanding and interpretation of the measures, and based on factor analysis and theoretical foundations, the fifth version splits PR into two indices, namely visuospatial reasoning and fluid reasoning. It also included a task that measures visual working memory, especially useful when it is necessary to differentiate it from verbal auditory working memory when evaluating different clinical conditions (Kaufman et al. 2016; Rosas et al. 2022; Wechsler 2014b; Weiss et al. 2019).
Thus, the factor structure of the WISC-V includes a second-order factor called general intelligence (FSIQ) and five first-order factors called verbal comprehension (VC), visuospatial (VS), fluid reasoning (FR), working memory (WM), and processing speed (PS), configuring a hierarchical five-factor intelligence model (Flanagan and Alfonso 2017; Forns and Amador 2017; Kaufman et al. 2016; Niileksela and Reynolds 2019; Reynolds and Keith 2017; Rosas et al. 2022; Wechsler 2014b; Wilson et al. 2023).
According to Wilson et al. (2023), these five indices on the WISC-V are equivalent to the broad abilities proposed in the CHC model in the following way: VC corresponds to comprehension knowledge (Gc), VS to visual processing (Gv), FR to fluid reasoning (Gf), WM to working memory capacity (Gwm), and PS to processing speed (Gs).
This factor structure has been reported in several countries, such as France (Wechsler 2016), Spain (Wechsler 2015), Canada (Wechsler 2014a), Taiwan (Chen et al. 2015), and Germany (Wechsler 2017), as proposed by Wechsler (2014b) in the original US version. It should be noted that in a recent study, Wilson et al. (2023) explored the equivalence of the WISC-V five-factor model with the French, Spanish, and US standardization samples and found that the five-factor hierarchical model demonstrated an excellent fit in all three samples independently. In addition, these authors demonstrated strict factorial invariance between France, Spain, and the United States, supporting the generalizability of the constructs across populations speaking different languages.

1.2. Criticisms of the Factor Structure Proposed for the WISC-V

Despite the clinical, psychometric, and theoretical foundations of the WISC-V’s internal structure, it has faced criticism from multiple independent authors (not linked to the publisher) who have performed exploratory factor analyses (EFA), confirmatory factor analyses (CFA), and bifactor analyses (BFA) with data from the standardization samples of their respective countries (Canivez et al. 2018; Canivez and Watkins 2016; Fenollar-Cortés and Watkins 2019; Lecerf and Canivez 2018; Watkins et al. 2018).
Canivez and Watkins (2016) explored the internal structure of the WISC-V in the US standardization sample (n = 2200). The results of the EFA highlight the clear presence of four factors (VC, VS, WM, and PS), without a fifth latent factor (such as FR) as proposed by Wechsler (2014b). On the other hand, the authors mention that the CFA results with models that included five first-order factors are inadmissible because they present negative variances in FR, which is an improper solution. In contrast, four-factor models that merge VS and FR in a single factor showed a better fit.
These same results were found in some studies that used standardization samples, namely Watkins et al. (2018; 880 Canadian children) and Canivez et al. (2018; 415 United Kingdom children), who examined the factor structure for the 16 subtests of the WISC-V and found no support for the five-factor model proposed by Wechsler (2014b). The same conclusion was reached by Lecerf and Canivez (2018), who replicated the analysis on the French standardization sample (n = 1049) using the 15 WISC-V subtests. A similar result was found in Spain, using data from a standardized sample comprising 1008 examinees throughout the country (Fenollar-Cortés and Watkins 2019).
Likewise, analyses of the internal structure of the WISC-V conducted on large clinical samples, by Canivez et al. (2020) with 2512 participants and Dombrowski et al. (2022) with 5359, show that the hierarchical factor models composed of four first-order factors have a better fit than the five-factor model proposed by Wechsler (2014b), replicating the structure of the previous version of this scale (WISC-IV).
A recent study, which used a new strategy for analyzing the internal structure of a test known as exploratory graph analysis (EGA), explored the dimensionality of the WISC-V in the French standardization sample, suggesting the presence of three dimensions that coincide with the cognitive domains of processing speed (PS), verbal comprehension (VC), and perceptual reasoning (PR), discarding the distinction between VS and FR (Lecerf et al. 2023).
The studies discussed thus far add inconclusive evidence about the number of dimensions in the instrument. Views have recently emerged favoring structures other than the five-factor structure, such as four or three dimensions. Another source of contention is related to the hierarchical nature of the construct. Several independent authors propose that a bifactor model for the WISC-V offers a better factor solution than hierarchical (or second-order) models since g results from a direct measurement of the subtests, making it more parsimonious or simplified. In hierarchical models, g is an indirect measure or an “abstraction of abstractions” (Canivez et al. 2020, p. 289), where the five primary indices are an unnecessary intermediate layer between g and the subtests. These authors emphasize that clinical interpretation should rest solely on the FSIQ (Canivez and Watkins 2016; Reynolds and Keith 2017; Weiss et al. 2019).
In contrast to this idea, Weiss et al. (2019) argue for using the hierarchical five-factor model proposed by Wechsler (2014b) for the WISC-V. They support their argument with statistical evidence, such as the adequate fit indicators of the second-order model and the minimal improvement in fit seen with bifactor models. They also provide theoretical support, noting that interpretations of scores are based on an internationally validated theoretical model of intelligence (CHC), which is not consistent with bifactor models. This viewpoint is also shared by Keith and Reynolds (2018). Additionally, Weiss et al. (2019) emphasize clinical arguments, highlighting the consistency of relationships observed in clinical practice between measured skills and the proposed indices.
The results presented highlight the need to study different variants of the models, whether five-factor or four-factor, or hierarchical, bifactor, or oblique, to analyze the advantages and disadvantages of one over the other comparatively.

1.3. Factor Structure of the WISC-V in Chile

The WISC-V was standardized in Chile with a sample of 754 children and adolescents (Rosas et al. 2022). Its psychometric properties include an internal consistency that varies between acceptable (0.645) and excellent (0.941) reliability values for its subtests and excellent (between 0.900 and 0.968) for all its indices (see Table 2).
On the other hand, it has evidence of validity based on its internal structure and association with different variables (WISC-III and WAIS-IV). Regarding the internal structure, the scale exhibits satisfactory fit indices, enabling us to confidently assert that the hierarchical five-factor model presented by Wechsler (2014b) for the US sample is adequately replicated, in both its 10- and 15-subtest versions, in the Chilean population (Rosas et al. 2022). It should be noted that in Chile, the internal structure of the WISC-V has also been explored in a rural sample, finding an adequate level of fit for the five-factor model of 15 and 10 subtests and for the model of 7 primary subtests that make up the FSIQ (Rodríguez-Cancino et al. 2022).
Furthermore, in Chile, evidence supports the measure’s equivalence by testing the factorial invariance of the WISC-V based on urban/rural origin. This analysis revealed partial metric invariance, with discrepancies in the Analogies subtest that may point to the existence of measurement bias in that particular subtest (Rodríguez-Cancino et al. 2021).
In a more recent study, Rodríguez-Cancino and Concha-Salgado (2023) tested the invariance of two WISC-V factor models (hierarchical and oblique) in the standardization sample (n = 740) according to the gender and age group of their participants (6–8, 9–11, 12–14, 15–16). The results showed complete invariance according to sex but incomplete according to age group due to discrepancies in the subtests that are part of the fluid reasoning index (matrix reasoning and figure weights). This suggests that the items on these subtests do not measure these skills in the same way in young children as in adolescents. Based on this, considering that the equivalence of the measure according to the participants’ age was not demonstrated, the authors note that, in addition to exploring the nature of the possible measurement bias, it would be advisable to test other factor models (e.g., four-factor models), to verify whether there are better factorial structures, or different and more relevant models within each of the age ranges studied, an aspect that has not yet been studied in Chile.
Considering the myriad international evidence and the questions raised by independent authors regarding the internal structure of the WISC-V in the different cultural contexts in which it has been translated, adapted, and standardized, the present study seeks to generate psychometric evidence that guarantees its use is appropriate and fair for children and adolescents of different ages in the Chilean population.
It is important to note that when translating a test, there will always be discrepancies between the original and translated versions. Therefore, it cannot be assumed that an adapted version accurately measures the intended constructs or automatically captures the expected relationships between these proposed constructs in the same way in different groups (McGill et al. 2020; Van de Vijver 2020). This must be examined and demonstrated.
On the other hand, explaining whether the best internal configuration of the WISC-V consists of four or five factors or if it differs depending on whether the respondent is a child or an adolescent is especially critical for professionals who frequently use this instrument and must make sense of its scores (McGill et al. 2020), as this will guide the decision-making process that will directly impact the life and academic trajectory of those being assessed.

1.4. The Present Study

To provide information on these issues, the present study examined the latent factor structure of the 15 primary subtests of the Chilean version of the WISC-V by (1) conducting an exploratory factor analysis (EFA) on the total sample and by age group, (2) performing a confirmatory factor analysis (CFA) on the total sample, and (3) comparing the best-fitted models by age group.

2. Materials and Methods

2.1. Participants

The total sample included 853 participants aged 6–16 years (Mage = 10.9, SDage = 3.039), with 693 (81%) from urban and 160 (19%) from rural areas.
The children in the urban sample correspond to secondary data obtained from the Standardization Project of the WISC-V Scale in Chile. Data collection was carried out by the Center for the Development of Inclusion Technologies research team at the Pontificia Universidad Católica de Chile (CEDETi-UC) through purposive sampling based on sex (balanced) and region of the country, stratified by school administration (public, private, or mixed) as a proxy for socio-economic status (Rosas et al. 2022). The rural sample corresponds to primary data collected by the research team through non-probability purposive sampling.
In both samples, the inclusion criteria were as follows: (a) aged between 6 and 16 years and 11 months, and (b) not recently assessed on a similar scale. The exclusion criteria were a clinical diagnosis and permanent or temporary special educational needs. Detailed information on frequencies by gender and age group is given in Table 3. The age groups for this research were defined based on the previous study by Rodríguez-Cancino and Concha-Salgado (2023).

2.2. Instruments

Wechsler Intelligence Scale for Children, Fifth Edition (WISC-V)

The WISC-V is an individually administered clinical instrument that assesses the cognitive functioning of children and adolescents (Rosas and Pizarro 2018; Wechsler 2014b). The Chilean version of this scale, available since 2018, presents adequate psychometric properties in this population, with appropriate levels of reliability (see Table 2), evidence of internal structure validity (in urban and rural samples), and evidence of its relationship with other variables (Rodríguez-Cancino et al. 2022; Rosas et al. 2022).
The Chilean version of the WISC-V includes 15 subtests, ten primary and five complementary, organized into five cognitive domains (see Table 2). The scores of the subtests are expressed as scale scores (M = 10, SD = 3). The FSIQ and the indices are expressed as composite scores (M = 100, SD = 15) (Kaufman et al. 2016; Rosas and Pizarro 2018; Wechsler 2014b).
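As a minimal illustration of these two score metrics (the conversion functions and z-score input are hypothetical; in practice, the WISC-V derives scores from normed tables, not directly from z-scores):

```python
def to_scaled(z):
    """Subtest scale-score metric: M = 10, SD = 3."""
    return 10 + 3 * z

def to_composite(z):
    """FSIQ and primary index metric: M = 100, SD = 15."""
    return 100 + 15 * z

# A performance one standard deviation above the mean maps to
# a scale score of 13 and a composite score of 115.
print(to_scaled(1.0), to_composite(1.0))  # 13.0 115.0
```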

2.3. Procedure

In both the urban and rural samples, the WISC-V was administered only to children or adolescents whose parents had given prior authorization and who agreed to participate voluntarily. Each child was assessed individually at their school during regular class hours by research assistants who had passed a training course in administering and scoring the WISC-V. Each administration lasted between 60 and 90 min, depending on the child’s performance.
Regarding ethical considerations, all the procedures carried out for the data collection of the urban sample were approved by the Scientific Ethics Committee in Social Sciences, Arts, and Humanities of the Pontificia Universidad Católica de Chile. The Universidad de La Frontera Scientific Ethics Committee approved the research protocol for the rural sample. The informed consent documents provided detailed information to parents, children, and adolescents about the project’s objectives, the administration process, and the right to withdraw from participation at any time without negative consequences for the participants. These documents also detailed confidentiality safeguards, indicating that the data would only be used anonymously and for scientific and academic purposes.
To utilize the data from the urban sample in this study, permission was obtained from CEDETi-UC, the institution holding the rights to the WISC-V in Chile, which also endorsed this study.

2.4. Data Analyses

The scale scores of the 15 subtests of the Chilean WISC-V standardization sample served as the basis for the exploratory factor analyses (EFA) and confirmatory factor analyses (CFA) conducted for the study.

2.4.1. Exploratory Factor Analyses

First, we wanted to identify the number and composition of common factors (latent variables) necessary to explain the common variance of the 15 indicators analyzed. The EFAs were performed using the principal-axis factoring method and oblique rotation (Oblimin) with a Pearson correlation matrix in JASP 0.18.3 (JASP Team 2024). Horn’s parallel analysis (HPA) determined the number of factors in the total sample and age subgroups.
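Horn’s parallel analysis retains a factor only while its observed eigenvalue exceeds the corresponding eigenvalue obtained from random data of the same dimensions. A minimal sketch of this retention rule, using simulated two-factor data rather than the WISC-V subtest scores (the function name and data are illustrative, not the JASP implementation):

```python
import numpy as np

def parallel_analysis(data, n_sims=200, quantile=0.95, seed=0):
    """Horn's parallel analysis: count leading factors whose observed
    eigenvalue exceeds the chosen quantile of eigenvalues obtained
    from random normal data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    rand = np.empty((n_sims, p))
    for i in range(n_sims):
        r = rng.standard_normal((n, p))
        rand[i] = np.linalg.eigvalsh(np.corrcoef(r, rowvar=False))[::-1]
    thresholds = np.quantile(rand, quantile, axis=0)
    k = 0
    for o, t in zip(obs, thresholds):  # stop at the first failure
        if o > t:
            k += 1
        else:
            break
    return k

# Simulated data: six indicators driven by two independent latent factors.
rng = np.random.default_rng(1)
f = rng.standard_normal((500, 2))
x = np.hstack([f[:, [0]] * 0.8 + 0.4 * rng.standard_normal((500, 3)),
               f[:, [1]] * 0.8 + 0.4 * rng.standard_normal((500, 3))])
print(parallel_analysis(x))  # 2
```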

2.4.2. Confirmatory Factor Analyses

Factor models were then tested with various configurations replicating the CFA reported in the American WISC-V Technical and Interpretative Manual (Wechsler 2014b). Through decisions guided by theory and previous empirical evidence, these models tested different allocations of the subtests to various factors, as follows:
M1 = a model where all subtests load directly on a general ability factor as the only indicator responsible for the intercorrelations between subtests.
M2 = a “traditional” two-factor Wechsler model (VIQ and PIQ) that distinguishes verbal from performance, present in initial versions of the instrument.
M3 = a three-factor model combining verbal comprehension with auditory working memory on the one hand; fluid reasoning, visuospatial ability, and visual working memory on the other; and a third factor containing only processing speed.
M4 = four models of four factors each, where VC and PS (with the same subtests) are common to all four variants but with the following distinctions in the other two factors that comprise them:
M4a, as in the WISC-IV, which includes a reasoning factor (PR) that merges fluid and visuospatial reasoning and another for working memory (WM);
M4b, which, based on findings in cognitive neuroscience, combines fluid reasoning and working memory into a single factor since they share a common function of the prefrontal cortex (FR+WM) and another of visuospatial skills (VS);
M4c, the same as M4a but with cross-loadings from the arithmetic subtest in WM and PR; and
M4d, the same as M4a, but with cross-loadings from the arithmetic subtest in WM, PR, and VC.
M5 = five five-factor models, including VC, VS, FR, WM, and PS, which differ only in the location of the arithmetic subtest loading(s):
M5a, only in WM (as in the four-factor models);
M5b, only in FR;
M5c, in WM and FR;
M5d, in WM and VC; and
M5e, in WM, FR, and VC, as proposed by Wechsler (2014b).
The CFAs were implemented using the robust maximum likelihood estimator (MLR) in Mplus 8.10 (Muthén and Muthén 2020). In terms of model fit, the Tucker–Lewis index (TLI), comparative fit index (CFI), and root mean square error of approximation (RMSEA) were used. According to Browne and Cudeck (1992) and Hu and Bentler (1999), an optimal fit was defined as TLI and CFI > 0.95 and RMSEA < 0.05, and a reasonable fit as TLI and CFI > 0.90 and RMSEA < 0.08.
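These indices can be computed directly from the model and baseline (independence) chi-square statistics. A sketch using the standard formulas; the chi-square values in the example are hypothetical, chosen only to illustrate the cutoffs (the sample size matches the study’s n = 853):

```python
import math

def fit_indices(chi2, df, chi2_null, df_null, n):
    """TLI, CFI, and RMSEA from model and baseline chi-squares.
    RMSEA here uses the (n - 1) convention of Browne and Cudeck (1992);
    some software uses n instead."""
    ratio_null = chi2_null / df_null
    tli = (ratio_null - chi2 / df) / (ratio_null - 1)
    cfi = 1 - max(chi2 - df, 0) / max(chi2_null - df_null, chi2 - df, 1e-12)
    rmsea = math.sqrt(max(chi2 - df, 0) / (df * (n - 1)))
    return tli, cfi, rmsea

# Hypothetical model chi2 = 250 (df = 84), baseline chi2 = 6000 (df = 105)
tli, cfi, rmsea = fit_indices(250, 84, 6000, 105, 853)
print(round(tli, 3), round(cfi, 3), round(rmsea, 3))  # 0.965 0.972 0.048
```

By the cutoffs above, this hypothetical model would count as an optimal fit on TLI and CFI and a reasonable-to-optimal fit on RMSEA.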

2.4.3. Age-Group Model Comparison

To determine the best fit for the four- and five-factor hierarchical models according to age group (6–8, 9–11, 12–14, and 15–16), we considered AIC and BIC indices (lower values indicate better fit) and the chi-square difference test using the Satorra-Bentler scaled chi-square from the Mplus software (Statmodel 2024). We also took the model’s theoretical soundness into account. The selected best options were analyzed within the four age groups using CFA.
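Because MLR chi-square values cannot be differenced directly, the Satorra-Bentler procedure rescales the difference using each model’s scaling correction factor, following the formula documented on the Mplus website. A sketch with hypothetical values (the resulting statistic is referred to a chi-square distribution with df0 − df1 degrees of freedom):

```python
def sb_scaled_diff(T0, df0, c0, T1, df1, c1):
    """Satorra-Bentler scaled chi-square difference (Mplus formula).
    T0, df0, c0: robust chi-square, df, and scaling correction factor
    of the nested (more constrained) model; T1, df1, c1: the same
    quantities for the comparison model."""
    cd = (df0 * c0 - df1 * c1) / (df0 - df1)  # difference-test scaling factor
    trd = (T0 * c0 - T1 * c1) / cd            # scaled difference statistic
    return trd, df0 - df1

# Hypothetical comparison of a four-factor (nested) vs. five-factor model
trd, ddf = sb_scaled_diff(T0=260.0, df0=86, c0=1.10, T1=250.0, df1=84, c1=1.08)
print(round(trd, 2), ddf)  # 8.25 2
```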

2.4.4. Reliability

Internal consistency was calculated through the omega coefficient at a subscale level (ΩS) and at a general factor level (ΩGF) using the information from the CFA in JASP.
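From the standardized CFA solution, omega is the squared sum of the factor loadings divided by that quantity plus the summed residual variances. A minimal sketch with hypothetical loadings for a three-subtest index (not the study’s estimates):

```python
def omega(loadings, residual_vars):
    """McDonald's omega for a single factor from standardized estimates:
    (sum of loadings)^2 / ((sum of loadings)^2 + sum of residual variances)."""
    s = sum(loadings)
    return s * s / (s * s + sum(residual_vars))

lam = [0.80, 0.75, 0.70]            # hypothetical standardized loadings
theta = [1 - l * l for l in lam]    # residual variances of standardized indicators
print(round(omega(lam, theta), 3))  # 0.795
```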

3. Results

3.1. Exploratory Approach

In the total sample, the EFA with the 15 primary and complementary WISC-V subtests generated a distinguishable and theoretically coherent four-factor solution (VC, PR, WM, and PS), with good fit indices and adequate factor weights, replicating the model of its predecessor, the WISC-IV.
Within the four age groups, the four-factor organization remains consistent with a good fit, factor loadings, and theoretical interpretability, although with minor variations in cross-loadings and the size of factor weights. Cross-loadings > 0.3 were observed for IN and CA in G1, FW in G2, and AR in G3. Likewise, factor loadings < 0.4 were identified in FW in G1, G2, and G3, MR in G2 and G3, CA in G1, and CO in G2. Interestingly, in G4, unlike the other three groups, a clear organization of the subtests with factor weights greater than 0.4 and no cross-loadings was observed (see Table 4).

3.2. Confirmatory Approach

The CFA in the total sample made it possible to identify some models with good fit indices and factor weights, although others had poor indicators. M1, M2, and M3 had a poor fit. The four four-factor models (M4) and the five five-factor models (M5) showed adequate fit indices. However, M4a and M5a had comparative advantages by showing good fit indices, chi-square differences with other models, theoretical consistency with the CHC model, and no cross-loadings. Notably, M4a corresponds exactly to the four-factor model provided by the EFA (see Table 5).

3.3. Best Fitted Models Comparison by Age Group

Once the best fit and theoretical soundness models were identified, they were analyzed in the total sample (see Figure 1) and then within each age group (see Table 6) to verify their stability. Both M4a and M5a had adequate fit indices in all four age subgroups. All the factor loadings were greater than 0.4 in all models and age groups, except for CA in PS in G1 (6–8) in M4a and M5a. This characteristic disappears in the other age groups in both models, with the weight of CA increasing as age increases.
Regarding the magnitude of the factor loadings in the cognitive domains (first-order factors), it should be noted that in G1 and G2 in M4a and G2 in M5a, the WM domain presents loadings greater than 0.9. Something similar occurs with FR in G2 and G3 in M5a. This could indicate redundancy with g in these subgroups, an issue that diminishes in the older age group, where the domain loadings on g are more balanced.

3.4. Internal Consistency

Reliability estimates in M4a at the subscale level (ΩS) were acceptable to good, except for PS, which was questionable, although reliability at the general factor level (ΩGF) was acceptable. In M5a, subscale reliabilities were good in only two of the five first-order factors, although reliability at the general factor level was good.

4. Discussion

In Chile, the use of the WISC-V is mandatory in the educational context (Ministerio Educación Chile 2009). Therefore, examining its psychometric properties and generating evidence of its reliability and validity are essential to guarantee the quality of the assessments made with this scale. This study aimed to provide evidence of validity by examining the internal structure of the test, considering that the international literature offers both support for the model proposed by Wechsler (2014b) and the concerns raised by independent authors.
The first objective was to explore the factor structure of the WISC-V using an EFA. The 15 subtests were naturally grouped into four theoretically interpretable factors (VC, PR, WM, and PS) in the total sample, without the presence of a fifth factor (such as FR), which is consistent with reports from other countries using exploratory analyses (Canivez et al. 2018; Canivez and Watkins 2016; Fenollar-Cortés and Watkins 2019; Lecerf and Canivez 2018; Watkins et al. 2018). These authors note that the WISC-V is over-factored since there is no empirical evidence of the existence of a fifth factor.
Similarly, grouping the subtests into the four factors found in the total sample is maintained in all four groups. However, the factor solutions in G1, G2, and G3 show problems such as cross-loadings greater than 0.3 (some theoretically inconsistent, such as CA in VC) and weak factor loadings (e.g., FW weights 0.250 on F2 in G1), i.e., lower than what is conventionally accepted as adequate. It is important to point out that the factor solution in the older age group (15–16) is similar to that of the total sample: parsimonious and without problems that could cast doubt on its interpretability.
The EFA findings show cross-loadings in G1, G2, and G3, which naturally disappear in G4. Therefore, the results of this study, in relation to the structure of intelligence, could represent a pattern like the one proposed by the differentiation hypothesis (Breit et al. 2022; Juan-Espinosa et al. 2006; Zimprich and Martin 2010). This hypothesis states that cognitive skills are undifferentiated at the beginning of the life cycle and gradually break down into more specific or specialized skills. Corroborating this proposition according to age or ability level with other methodological approaches would be an interesting line of future research.
To achieve the study’s second objective, various confirmatory factor models were tested, and the results show that the hierarchical structures of four and five factors present comparatively better fit indices. The models of one, two, and three factors were discarded. This empirical evidence supports the plausibility of a four-factor structure and the five-factor one originally proposed by Wechsler (2014b) in the United States. The results of the five-factor model coincide with those reported in France, Spain, Canada, Taiwan, and Germany (Chen et al. 2015; Wechsler 2014a, 2015, 2016, 2017), indicating that the intelligence construct, measured with the WISC-V in the Chilean sample, is composed of a set of positively correlated latent abilities structured hierarchically, coinciding with the proposal of the CHC model (Flanagan and Alfonso 2017; Reynolds and Keith 2017; Wilson et al. 2023).
This study also sought to determine whether a four- or five-factor model shows a more relevant fit according to the age of the participants. To this end, the “best models” were chosen and tested by age range. They were identified based on psychometric criteria (fit indices and factor loadings) and theoretical criteria (agreement with the CHC model). In addition, considering the concerns raised by independent authors, we chose models without improper solutions or cross-loadings, placing AR only in the working memory domain.
Psychometrically, comparisons of the “best” four- and five-factor models in the Chilean sample suggest that, for all age ranges, both configurations meet the criteria currently shared by the scientific community for judging a factor solution plausible (Browne and Cudeck 1992; Hu and Bentler 1999), with minimal differences in their fit indices. Both structures show weak factor loadings in places, which can indicate that a subtest is not relevant to its intended domain. Conversely, some domain loadings are very high, which independent authors would find questionable, as they could be empirically redundant with g (Canivez et al. 2018).
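The cutoff conventions cited above can be collected into a small helper. This is a hedged sketch: the thresholds follow common readings of Hu and Bentler (1999) for CFI/TLI/SRMR and Browne and Cudeck (1992) for RMSEA, and the function name and return structure are our own illustration, not the authors' analysis code.

```python
def check_fit(cfi, tli, rmsea, srmr):
    """Check approximate-fit cutoffs commonly used when reporting CFA models.

    CFI/TLI >= .95 and SRMR <= .08 follow Hu and Bentler (1999);
    RMSEA <= .05 ("close") and <= .08 ("reasonable") follow
    Browne and Cudeck (1992).
    """
    return {
        "cfi_ok":           cfi  >= 0.95,
        "tli_ok":           tli  >= 0.95,
        "srmr_ok":          srmr <= 0.08,
        "rmsea_close":      rmsea <= 0.05,
        "rmsea_reasonable": rmsea <= 0.08,
    }

# Fit indices reported for model M4a in the total sample (Table 5):
print(check_fit(cfi=0.969, tli=0.963, rmsea=0.042, srmr=0.034))
```

Applied to the reported indices, M4a passes every cutoff, whereas the one-factor model M1 (CFI = .819, RMSEA = .099) fails them.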
Regarding the latter, the CFA shows empirical redundancies in WM for G1 and G2 in the four-factor model and for G2 in the five-factor model. The same is observed in FR for G2 in the four-factor model and for G3 in the five-factor model. No empirical redundancies were found in either model in the oldest age group (15–16). In this regard, it should be noted that evidence has shown a close (sometimes indistinguishable) relationship between WM and intelligence or executive functions, identifying WM as a central process of cognition and a good predictor of academic performance in children with typical and atypical development (Dehn 2015; Ger and Roebers 2023; Ikeda et al. 2023). A close relationship between WM and FR has also been observed in neuroimaging studies, which have shown that solving WM and FR tasks requires the activation of the same region of the prefrontal cortex (Dehn 2015).
The empirical redundancy found in this study could be explained by this close interrelation: WM appears as a central component of intelligence in younger children and coexists with FR in middle childhood. In the next age range, abstract reasoning (FR) becomes the central component until, in adolescence, the differentiated cognitive domains together make up general intellectual ability, with no single domain standing out over the others. It should be noted that FR reaches its full development in adolescence (Dehn 2015).
These results are particularly interesting for the Chilean sample, as they suggest that assessing WM is central to understanding general cognitive functioning in younger children. Moreover, Ger and Roebers (2023) report evidence that WM training can enhance fluid intelligence in children aged 7–11. Therefore, further exploration of these results in the Chilean sample, along with new lines of research on the relationship between WM components and general intelligence, could make a significant practical contribution to assessing and intervening in cognitive processes during childhood.
At the subtest level, it is interesting that in the CFAs of both the four- and five-factor models, the CA subtest shows the same trend: its factor loading increases gradually with the age of the participants. In addition, the loadings of CA are practically identical in both configurations: weak in the youngest children, moderate at intermediate ages, and highest in the oldest age range. Effective performance on the CA subtest depends on autonomous, quick, and successful decision-making and strategy selection, as well as the ability to inhibit impulsive responses (Kaufman et al. 2016). This result could indicate that the performance of the Chilean sample on this subtest reflects the progressive development of these skills, consistent with the maturational processes of childhood and adolescence. It also provides further evidence that the contribution of CA to processing speed becomes more important with age, making its interpretation more reliable in adolescence. In the other processing speed subtests and in the verbal comprehension and working memory dimensions, there is no comparable gradual increase with age. However, it is notable that the factor loadings are very similar in the four- and five-factor models, suggesting no noteworthy differences in how these domains are structured under either configuration.
Regarding whether the best internal configuration of the WISC-V for the Chilean sample comprises four or five factors, the empirical evidence of this study offers arguments for both models (fit indices, theoretical consistency) but also against them (cross-loadings or empirical redundancies). One advantage of the four-factor model is its more evenly distributed configuration of subtests: three domains with four subtests each and one with three. This contrasts with the five-factor model, which has two factors with two subtests, one with three, and two with four, as illustrated in Figure 1. On the other hand, the five-factor model shows better internal consistency for the general intelligence construct and aligns fully with the CHC theoretical proposal (Wilson et al. 2023), which is internationally recognized as a solid foundation for interpreting scores on current intelligence tests. As mentioned above, this was one of Wechsler’s (2014b) arguments for retaining the five-factor model.
The four-factor structure of the WISC-IV emerged after several attempts to integrate theory and research, incorporating the CHC theoretical model of intelligence in the interpretation of its scores for the first time (Flanagan and Kaufman 2009). For the WISC-V, new subtests or combinations of these were incorporated to have increasingly more specific indices or to provide “purer” measures of the constructs evaluated, incorporating VS and FR as two separate factors to increase their practical usefulness (Forns and Amador 2017).
Weiss et al. (2019) note that in clinical practice, there is consistency between the skills measured and the indices that make up the five-factor model, where the separation of VS and FR increases the practical usefulness of this version of the scale, as the measurement of nonverbal reasoning skills is more precise. For their part, Reynolds and Keith (2017) and Keith and Reynolds (2018) performed a series of CFA to determine the feasibility of separating VS and FR in the WISC-V and found that it was indeed psychometrically admissible. This represents an advantage of five-factor models over four-factor models.
In summary, in the Chilean sample, the four- and five-factor models are defensible at the psychometric level, presenting minimal differences in fit according to age range and similar problems. In theoretical terms, both models rest on the same framework (CHC), although the five-factor model is more closely aligned with it. Regarding clinical utility, the five-factor model measures nonverbal reasoning skills more precisely because it separates VS and FR. Considering these psychometric, theoretical, and practical aspects together, the results of this study support the use of the five-factor model to interpret WISC-V scores of children and adolescents in Chile.

Limitations and Future Research

Despite the theoretical and empirical contributions of the present study, its findings cannot be generalized to minority groups of the Chilean population or to clinical groups. Given that the scale is widely used in Chile to assess all children who require evaluation, it would be advisable to examine its psychometric evidence in these groups. In addition, we suggest that the reported findings be further investigated with factorial invariance analyses.

5. Conclusions

This is the first study in Chile to investigate the significance of factor models with various configurations for the WISC-V, using a large sample of children and adolescents from different backgrounds across four age groups. Although varied, the results lend support to using the five-factor model proposed by Wechsler (2014b). We hope these findings will enhance Chilean psychologists’ clinical and educational practice and contribute to fair and relevant evaluations.

Author Contributions

Both authors contributed substantially and equally to this research article. Conceptualization, M.R.-C. and A.C.-S.; methodology, M.R.-C. and A.C.-S.; software, A.C.-S.; validation, M.R.-C. and A.C.-S.; formal analysis, M.R.-C. and A.C.-S.; investigation, M.R.-C. and A.C.-S.; resources, M.R.-C.; data curation, M.R.-C. and A.C.-S.; writing—original draft preparation, M.R.-C. and A.C.-S.; writing—review and editing, M.R.-C. and A.C.-S.; visualization, M.R.-C. and A.C.-S.; supervision, M.R.-C. and A.C.-S.; project administration, M.R.-C.; funding acquisition, M.R.-C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Dirección de Investigación, Universidad de La Frontera, Project DIUFRO DI22-0030 and Agencia Nacional de Investigación y Desarrollo de Chile (ANID) through the Project FONDECYT Iniciación N°11230429. The APC was funded by ANID.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and was approved by the Scientific Ethical Committee, Social Sciences, Arts and Humanities, of the Pontificia Universidad Católica de Chile (Protocol code No. 15091200; date of approval: 10 December 2015) and by the Scientific Ethics Committee of the Universidad de La Frontera (record No. 052_22, date of approval: 19 May 2022; record No. 065_23, date of approval: 9 May 2023).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data are unavailable due to ethical or privacy restrictions.

Acknowledgments

We want to thank the research team of Centro de Desarrollo de Tecnologías de Inclusión de la Pontificia Universidad Católica de Chile (CEDETi-UC) for providing data from the WISC-V Standardization Project of Chile.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the study’s design, data collection, analysis, interpretation, manuscript writing, or decision to publish the results.

References

  1. American Education Research Association [AERA], American Psychological Association [APA], and National Council on Measurement in Education [NCME]. 2018. Estándares para pruebas educativas y psicológicas. Washington, DC: American Educational Research Association. [Google Scholar]
  2. American Psychological Association [APA]. 2020. APA Guidelines for Psychological Assessment and Evaluation. Washington, DC: American Psychological Association. [Google Scholar] [CrossRef]
  3. Arango-Lasprilla, Juan Carlos, Diego Rivera, and Laiene Olabarrieta-Landa. 2017. Neuropsicología Infantil. Mexico City: Manual Moderno. [Google Scholar]
  4. Breit, Moritz, Martin Brunner, Dylan Molenaar, and Franzis Preckel. 2022. Differentiation hypotheses of intelligence: A systematic review of the empirical evidence and an agenda for future research. Psychological Bulletin 148: 518–54. [Google Scholar] [CrossRef]
  5. Brenlla, María Elena, Mariana Soledad Seivane, Rocío Giselle Fernández Da Lama, and Guadalupe Germano. 2023. Pasos fundamentales para realizar adaptaciones de pruebas psicológicas. Revista de Psicología 19: 121–48. [Google Scholar] [CrossRef]
  6. Browne, Michael W., and Robert Cudeck. 1992. Alternative Ways of Assessing Model Fit. Sociological Methods & Research 21: 230–58. [Google Scholar] [CrossRef]
  7. Brue, Alan W., and Linda Wilmshurst. 2016. Essentials of Intellectual Disability Assessment and Identification. New York: Wiley. [Google Scholar]
  8. Canivez, Gary L., and Marley W. Watkins. 2016. Review of the Wechsler Intelligence Scale for Children—Fifth Edition: Critique, Commentary, and Independent Analyses in Intelligent Testing with the WISC-V. New York: Wiley. [Google Scholar]
  9. Canivez, Gary L., Marley W. Watkins, and Ryan J. McGill. 2018. Construct validity of the Wechsler Intelligence Scale for Children—Fifth UK Edition: Exploratory and confirmatory factor analyses of the 16 primary and secondary subtests. British Journal of Educational Psychology 89: 195–224. [Google Scholar] [CrossRef] [PubMed]
  10. Canivez, Gary L., Marley W. Watkins, and Stefan C. Dombrowski. 2017. Structural validity of the Wechsler Intelligence Scale for Children–Fifth Edition: Confirmatory factor analyses with the 16 primary and secondary subtests. Psychological Assessment 28: 975–86. [Google Scholar] [CrossRef] [PubMed]
  11. Canivez, Gary L., Ryan J. McGill, Stefan C. Dombrowski, Marley W. Watkins, Alison E. Pritchard, and Lisa A. Jacobson. 2020. Construct validity of the WISC-V in clinical cases: Exploratory and confirmatory factor analyses of the 10 Primary Subtests. Assessment 27: 274–96. [Google Scholar] [CrossRef]
  12. Chen, Hsinyi, Ou Zhang, Susan Engi Raiford, Jianjun Zhu, and Lawrence G. Weiss. 2015. Factor invariance between genders on the Wechsler Intelligence Scale for Children–Fifth Edition. Personality and Individual Differences 86: 1–5. [Google Scholar] [CrossRef]
  13. Dehn, Milton J. 2015. Working Memory Assessment and Intervention. New York: Wiley. [Google Scholar]
  14. deLeyer-Tiarks, Johanna M., Jacqueline M. Caemmerer, Melissa A. Bray, and Alan S. Kaufman. 2024. Assessment of Human Intelligence—The State of the Art in the 2020s. Journal of Intelligence 12: 72. [Google Scholar] [CrossRef]
  15. Dombrowski, Stefan C., Ryan J. McGill, Marley W. Watkins, Gary L. Canivez, Alison E. Pritchard, and Lisa A. Jacobson. 2022. Will the Real Theoretical Structure of the WISC-V Please Stand Up? Implications for Clinical Interpretation. Contemporary School Psychology 26: 492–503. [Google Scholar] [CrossRef]
  16. Echavarría-Ramírez, Luis M., and Javier Tirapu-Ustárroz. 2021. Exploración neuropsicológica en niños con discapacidad intelectual. Revista de Neurología 73: 66–76. [Google Scholar] [CrossRef]
  17. Estrada, M. Elena. 2022. La evaluación e intervención neuropsicológica en los trastornos del desarrollo. Seville: Punto Rojo Libros S.L. [Google Scholar]
  18. Fenollar-Cortés, Javier, and Marley W. Watkins. 2019. Construct validity of the Spanish Version of the Wechsler Intelligence Scale for Children Fifth Edition (WISC-VSpain). International Journal of School & Educational Psychology 7: 150–64. [Google Scholar] [CrossRef]
  19. Flanagan, Dawn P., and Alan S. Kaufman. 2009. Claves para la evaluación con WISC-IV, 2nd ed. Mexico City: Manual Moderno. [Google Scholar]
  20. Flanagan, Dawn P., and Vincent C. Alfonso. 2017. Essentials of WISC-V Assessment. New York: Wiley. [Google Scholar]
  21. Forns, Maria, and Juan Antonio Amador. 2017. Habilidades clínicas para aplicar, corregir e interpretar las escalas de inteligencia de Wechsler. Madrid: Pirámide. [Google Scholar]
  22. Ger, Ebru, and Claudia M. Roebers. 2023. The Relationship between Executive Functions, Working Memory, and Intelligence in Kindergarten Children. Journal of Intelligence 11: 64. [Google Scholar] [CrossRef] [PubMed]
  23. Hebben, Nancy, and William Milberg. 2009. Essentials of Neuropsychological Assessment, 2nd ed. New York: Wiley. [Google Scholar]
  24. Hu, Li-tze, and Peter M. Bentler. 1999. Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal 6: 1–55. [Google Scholar] [CrossRef]
  25. Ikeda, Yoshifumi, Yosuke Kita, Yuhei Oi, Hideyuki Okuzumi, Silvia Lanfranchi, Francesca Pulina, Irene Cristina Mammarella, Katie Allen, and David Giofrè. 2023. The Structure of Working Memory and Its Relationship with Intelligence in Japanese Children. Journal of Intelligence 11: 167. [Google Scholar] [CrossRef]
  26. Iliescu, Dragos, Dave Bartram, Pia Zeinoun, Matthias Ziegler, Paula Elosua, Stephen Sireci, Kurt F. Geisinger, Aletta Odendaal, Maria Elena Oliveri, Jon Twing, and et al. 2024. The Test Adaptation Reporting Standards (TARES): Reporting test adaptations. International Journal of Testing 24: 80–102. [Google Scholar] [CrossRef]
  27. International Test Commission [ITC]. 2013. International Test Commission Guidelines on Test Use. Available online: https://www.intestcom.org/files/guideline_test_use.pdf (accessed on 1 July 2024).
  28. JASP Team. 2024. JASP (Version 0.18.3). JASP Team. Amsterdam. Available online: https://jasp-stats.org/ (accessed on 17 July 2024).
  29. Juan-Espinosa, Manuel, Lara Cuevas, Sergio Escorial, and Luis F. García. 2006. The differentiation hypothesis and the Flynn effect. Psicothema 18: 284–87. [Google Scholar]
  30. Kahan, Evelina, and Lourdes Salvo. 2022. Evaluación de la inteligencia en niños. Actualización en WISC-V. Cambridge, UK: Ediciones Universitarias. Available online: https://www.colibri.udelar.edu.uy/jspui/handle/20.500.12008/35019 (accessed on 12 June 2024).
  31. Kaufman, Alan S., Susan Engi Raiford, and Diane L. Coalson. 2016. Intelligent Testing with the WISC-V. New York: John Wiley & Sons. [Google Scholar]
  32. Keith, Timothy Z., and Matthew R. Reynolds. 2018. Using confirmatory factor analysis to aid in understanding the constructs measured by intelligence tests. In Contemporary Intellectual Assessment: Theories, Tests and Issues, 4th ed. Edited by Dawn P. Flanagan and Erin M. McDonough. New York: Guildford Press. [Google Scholar]
  33. Lecerf, Thierry, and Gary L. Canivez. 2018. Complementary exploratory and confirmatory factor analyses of the French WISC-V: Analyses based on the standardization sample. Psychological Assessment 30: 793–808. [Google Scholar] [CrossRef]
  34. Lecerf, Thierry, Salome Döll, and Mathilde Bastien. 2023. Investigating the structure of the French WISC–V (WISC–VFR) for five age groups using psychometric network modeling. Journal of Intelligence 11: 160. [Google Scholar] [CrossRef]
  35. Leong, Frederick T. L., Dave Bartram, and Fanny M. Cheung. 2020. Manual internacional de pruebas y evaluación del ITC. Mexico City: Manual Moderno. [Google Scholar]
  36. Manzi, Jorge, María Rosa García, and Sandy Taut. 2019. Validez de las evaluaciones educacionales en Chile y Latinoamérica. Santiago: Ediciones Universidad Católica de Chile. [Google Scholar]
  37. McGill, Ryan J., Thomas J. Ward, and Gary L. Canivez. 2020. Use of translated and adapted versions of the WISC-V: Caveat emptor. School Psychology International 41: 276–94. [Google Scholar] [CrossRef]
  38. Ministerio Educación Chile. 2009. Decreto 170/Fija normas para determinar los alumnos con necesidades educativas especiales que serán beneficiarios de las subvenciones para educación especial. Available online: https://www.leychile.cl/Navegar?idNorma=1012570 (accessed on 1 July 2024).
  39. Muñiz, José, Ana Hernández, and Vicente Ponsoda. 2015. Nuevas Directrices sobre el Uso de los Test: Investigación, Control de Calidad y Seguridad. Papeles del Psicólogo 36: 161–73. [Google Scholar]
  40. Muñiz, José, and Eduardo Fonseca-Pedrero. 2019. Diez pasos para la construcción de un test. Psicothema 31: 7–16. [Google Scholar] [CrossRef] [PubMed]
  41. Muthén, Linda K., and Bengt O. Muthén. 2020. Mplus (Version 8.10). Available online: https://www.statmodel.com/index.shtml (accessed on 1 July 2024).
  42. Niileksela, Christopher R., and Matthew R. Reynolds. 2019. Enduring the tests of age and time: Wechsler constructs across versions and revisions. Intelligence 77: 2–15. [Google Scholar] [CrossRef]
  43. Reynolds, Matthew R., and Timothy Z. Keith. 2017. Multi-group and hierarchical confirmatory factor analysis of the Wechsler Intelligence Scale for Children—Fifth Edition: What does it measure? Intelligence 62: 31–47. [Google Scholar] [CrossRef]
  44. Rodríguez-Cancino, Marcela, and Andrés Concha-Salgado. 2023. WISC-V Measurement Invariance According to Sex and Age: Advancing the Understanding of Intergroup Differences in Cognitive Performance. Journal of Intelligence 11: 180. [Google Scholar] [CrossRef]
  45. Rodríguez-Cancino, Marcela, María Beatriz Vizcarra, and Andrés Concha-Salgado. 2021. ¿Se Puede Evaluar a Niños Rurales con WISC-V? Explorando la Invarianza Factorial de la Inteligencia en Chile. Revista Iberoamericana de Diagnóstico y Evaluación–e Avaliação Psicológica 60: 117–31. [Google Scholar] [CrossRef]
  46. Rodríguez-Cancino, Marcela, María Beatriz Vizcarra, and Andrés Concha-Salgado. 2022. Propiedades Psicométricas de la Escala WISC-V en Escolares Rurales Chilenos. Psykhe 31: 1–14. [Google Scholar] [CrossRef]
  47. Rosas, Ricardo, and Marcelo Pizarro. 2018. WISC-V: Manual de administración y corrección. Santiago: Pontificia Universidad Católica de Chile, Centro de Desarrollo de Tecnologías de Inclusión & Pearson. [Google Scholar]
  48. Rosas, Ricardo, Marcelo Pizarro, Olivia Grez, Valentina Navarro, Dolly Tapia, Susana Arancibia, María Teresa Muñoz-Quezada, Boris Lucero, Claudia P. Pérez-Salas, Karen Oliva, and et al. 2022. Estandarización Chilena de la Escala Wechsler de Inteligencia para Niños—Quinta Edición. Psykhe 31: 1–23. [Google Scholar] [CrossRef]
  49. Sireci, Stephen G. 2020. De-“Constructing” Test Validation. Chinese/English Journal of Educational Measurement and Evaluation 1: 1–12. [Google Scholar] [CrossRef]
  50. Sireci, Stephen G., and Isabel Benítez. 2023. Evidence for Test Validation: A Guide for Practitioners. Psicothema 3: 217–26. [Google Scholar] [CrossRef]
  51. Statmodel. 2024. Chi-Square Difference Testing Using the Satorra-Bentler Scaled Chi-Square. Available online: http://www.statmodel.com/chidiff.shtml (accessed on 25 July 2024).
  52. Van de Vijver, Fons J. R. 2020. Adaptaciones de pruebas. In Manual internacional de pruebas y evaluación del ITC. Edited by Frederick T. L. Leong, Dave Bartram and Fanny M. Cheung. Mexico City: Manual Moderno, pp. 371–85. [Google Scholar]
  53. Vinet, Eugenia V., Marcela Rodríguez-Cancino, Ailin Sandoval Domínguez, Paloma Rojas Mora, and José L. Saiz. 2023. El Empleo de Test por Psicólogos/as Chilenos/as: Un Inquietante Panorama. Psykhe 32: 1–19. [Google Scholar]
  54. Watkins, Marley W., Stefan C. Dombrowski, and Gary L. Canivez. 2018. Reliability and factorial validity of the Canadian Wechsler Intelligence Scale for Children–Fifth Edition. International Journal of School & Educational Psychology 6: 252–65. [Google Scholar] [CrossRef]
  55. Wechsler, D. 2014a. Wechsler Intelligence Scale for Children-Fifth Edition: Canadian Manual. London: Pearson Canada Assessment. [Google Scholar]
  56. Wechsler, D. 2014b. WISC-V Wechsler Intelligence Scale for Children-Fifth Edition: Technical and Interpretive Manual. London: Pearson. [Google Scholar]
  57. Wechsler, D. 2015. WISC-V. Manual Técnico y de Interpretación. London: Pearson Educación. [Google Scholar]
  58. Wechsler, D. 2016. WISC-V. Echelle d’intelligence de Wechsler pour enfants-5e Edition. Paris: Pearson France-ECPA. [Google Scholar]
  59. Wechsler, D. 2017. Wechsler Intelligence Scale for Children-Fifth Edition (WISC-V): Technisches Manual. (German Version of F. Petermann). London: Pearson. [Google Scholar]
  60. Weiss, Lawrence G., Donald H. Saklofske, James A. Holdnack, and Aurelio Prifitera. 2019. WISC-V. Clinical Use and Interpretation, 2nd ed. Amsterdam: Elsevier. [Google Scholar]
  61. Wilson, Christopher J., Stephen C. Bowden, Linda K. Byrne, Louis-Charles Vannier, Ana Hernández, and Lawrence G. Weiss. 2023. Cross-National Generalizability of WISC-V and CHC Broad Ability Constructs across France, Spain, and the US. Journal of Intelligence 11: 159. [Google Scholar] [CrossRef] [PubMed]
  62. Zimprich, Daniel, and Mike Martin. 2010. Differentiation-Dedifferentiation as a Guiding Principle for the Analysis of Lifespan Development. In Life in Old Age: Personal and Shared Responsibility in Society, Culture and Politics. Edited by Astrid Kruse. Heidelberg: AKA. Available online: https://www.zora.uzh.ch/id/eprint/40639 (accessed on 30 July 2024).
Figure 1. CFAs of the best-fitted models in the total sample.
Table 1. Versions of the WISC since its creation.

Edition | Ages | Subtests | g Level | Indexes Level
WISC (1949) | 5–15 | 12 | Full Scale IQ (FSIQ) | Verbal IQ (VIQ); Performance IQ (PIQ)
WISC-R (1974) | 6–16 | 12 | Full Scale IQ (FSIQ) | Verbal IQ (VIQ); Performance IQ (PIQ)
WISC-III (1991) | 6–16 | 13 | Full Scale IQ (FSIQ) | Verbal IQ (VIQ); Performance IQ (PIQ); Verbal comprehension (VCI); Freedom from distractibility (FDI); Perceptual organization (POI); Processing speed (PSI)
WISC-IV (2003) | 6–16 | 15 | Full Scale IQ (FSIQ) | Verbal comprehension (VCI); Working memory (WMI); Perceptual reasoning (PRI); Processing speed (PSI)
WISC-V (2014) | 6–16 | 21 | Full Scale IQ (FSIQ) | Verbal comprehension (VCI); Working memory (WMI); Fluid reasoning (FRI); Visual spatial (VSI); Processing speed (PSI)
Note: Table created by the authors.
Table 2. The classification of subtests according to the cognitive domain and reliability coefficients for the Chilean version of the WISC-V.
Cognitive Domain | Primary subtests | Complementary subtests
Verbal Comprehension (α = 0.943) | Similarities (SI; α = 0.921); Vocabulary (VO; α = 0.888) | Information (IN; α = 0.910); Comprehension (CO; α = 0.876)
Visual Spatial (α = 0.912) | Block Design (BD; α = 0.824); Visual Puzzles (VP; α = 0.903) | —
Fluid Reasoning (α = 0.945) | Matrix Reasoning (MR; α = 0.900); Figure Weights (FW; α = 0.941) | Arithmetic (AR; α = 0.900)
Working Memory (α = 0.933) | Digit Span (DS; α = 0.907); Picture Span (PS; α = 0.891) | Letter-Number Sequencing (LN; α = 0.895)
Processing Speed (α = 0.900) | Coding (CD; α = 0.898); Symbol Search (SS; α = 0.822) | Cancellation (CA; α = 0.645)
Note: Table created by the authors; α = Cronbach’s alpha.
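The Cronbach's alpha values reported above follow the standard formula α = k/(k−1) · (1 − Σ σ²_item / σ²_total). A minimal sketch of that computation (the function name and toy data are ours, purely for illustration):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Toy data: three respondents, two perfectly consistent items -> alpha = 1
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))
```

With perfectly consistent items the item variances sum to exactly half the total-score variance for k = 2, so alpha equals 1.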
Table 3. Characteristics of the sample according to sex and age group.
Age Group (in Years) | Boys f (%) | Girls f (%) | Missing f (%) | Total Sample f (%)
6–8 | 112 (50.5%) | 110 (49.5%) | 0 (0.0%) | 222 (26.0%)
9–11 | 126 (47.4%) | 139 (52.2%) | 1 (0.4%) | 266 (31.2%)
12–14 | 110 (48.7%) | 115 (50.9%) | 1 (0.4%) | 226 (26.5%)
15–16 | 64 (46.0%) | 75 (54.0%) | 0 (0.0%) | 139 (16.3%)
Total | 412 (48.3%) | 439 (51.5%) | 2 (0.2%) | 853 (100%)
Note: Table created by the authors.
Table 4. EFAs in the total sample and disaggregated by age group.
Total SampleG1 (6–8)G2 (9–11)G3 (12–14)G4 (15–16)
Sub-TestsF1F2F3F4F1F2F3F4F1F2F3F4F1F2F3F4F1F2F3F4
SI.688 .440 .743 .747 .771
VO.840 .719 .715 .860 .884
IN.653 .402 .425 .701 .719 .648
CO.701 .381 .796 .750 .727
BD .585 .539 .490 .668 .621
VP .756 .676 .740 .743 .749
MR .518 .692 .374 .350 .597
FW .334 .250 .316.298 .312 .575
AR .539 .712 .582 .319 .435 .586
DS .786 .749 .756 .782 .855
PS .432 .317 .450 .608 .469
LN .739 .740 .648 .823 .614
CD .481 .564 .497 .460 .435
SS .766 .583 .812 .748 .795
CA .505.315 .347 .540 .477 .613
Fit indices (in the column order of the table header):
Total sample: χ2 = 81.854, df = 51, p = .004; CFI = .993; TLI = .986; RMSEA [90% CI] = .027 [.015, .037]; SRMR = .015; BIC = −262.333
G1 (6–8): χ2 = 60.431, df = 51, p = .172; CFI = .990; TLI = .980; RMSEA [90% CI] = .029 [.000, .054]; SRMR = .027; BIC = −215.105
G2 (9–11): χ2 = 48.823, df = 51, p = .561; CFI = 1.000; TLI = 1.003; RMSEA [90% CI] = .000 [.000, .037]; SRMR = .020; BIC = −235.936
G3 (12–14): χ2 = 73.401, df = 51, p = .022; CFI = .981; TLI = .961; RMSEA [90% CI] = .044 [.018, .066]; SRMR = .024; BIC = −203.046
G4 (15–16): χ2 = 70.209, df = 51, p = .038; CFI = .979; TLI = .957; RMSEA [90% CI] = .052 [.013, .080]; SRMR = .029; BIC = −181.450
Note: Factor loadings smaller than 0.25 were omitted; F1 is consistent with the verbal comprehension domain, F2 with perceptual reasoning, F3 with working memory, and F4 with processing speed.
Table 5. Factorial configurations, goodness-of-fit indexes, and model comparisons of the CFA models in the total sample.
Each entry lists the model's factorial configuration, its fit indices, and (where tested) comparisons with competing models reported as Δχ2, Δdf, p.

M1 (one factor). F1: all 15 subtests. χ2 = 848.424, df = 90, p < .001; CFI = .819; TLI = .789; RMSEA [90% CI] = .099 [.093, .106]; SRMR = .064; AIC = 59,503; BIC = 59,717.

M2 (two factors). V: SI VO IN CO AR DS LN; P: BD VP MR FW PS CD SS CA. χ2 = 724.117, df = 89, p < .001; CFI = .848; TLI = .821; RMSEA [90% CI] = .091 [.085, .098]; SRMR = .060; AIC = 59,372; BIC = 59,590.

M3 (three factors). V: SI VO IN CO AR DS LN; P: BD VP MR FW PS; PS: CD SS CA. χ2 = 523.522, df = 87, p < .001; CFI = .896; TLI = .874; RMSEA [90% CI] = .077 [.070, .083]; SRMR = .049; AIC = 59,171; BIC = 59,399.

M4a *. VC: SI VO IN CO; PR: BD VP MR FW; WM: AR DS PS LN; PS: CD SS CA. χ2 = 214.571, df = 86, p < .001; CFI = .969; TLI = .963; RMSEA [90% CI] = .042 [.035, .049]; SRMR = .034; AIC = 58,860; BIC = 59,092. Comparisons: vs. M4c: 7.497, 1, .011; vs. M4d: 17.124, 2, <.001; vs. M5a: 4.773, 1, .026; vs. M5c: 7.402, 2, .029; vs. M5d: 14.857, 2, .001; vs. M5e: 16.474, 3, .001.

M4b. VC: SI VO IN CO; VS: BD VP; FR+WM: MR FW AR DS PS LN; PS: CD SS CA. χ2 = 283.097, df = 86, p < .001; CFI = .953; TLI = .943; RMSEA [90% CI] = .052 [.045, .059]; SRMR = .038; AIC = 58,931; BIC = 59,163. Comparisons: vs. M4c: 76.023, 1, <.001; vs. M4d: 85.650, 2, <.001; vs. M5a: 63.753, 1, <.001; vs. M5c: 75.928, 2, <.001; vs. M5d: 83.343, 2, <.001; vs. M5e: 85.000, 3, <.001.

M4c. VC: SI VO IN CO; PR: BD VP MR FW AR; WM: AR DS PS LN; PS: CD SS CA. χ2 = 207.074, df = 85, p < .001; CFI = .971; TLI = .964; RMSEA [90% CI] = .041 [.034, .048]; SRMR = .033; AIC = 58,853; BIC = 59,091. Comparisons: vs. M4d: 9.627, 1, .001; vs. M5c: 0.095, 1, .833; vs. M5d: 7.360, 1, .004; vs. M5e: 8.977, 2, .010.

M4d. VC: SI VO IN CO AR; PR: BD VP MR FW AR; WM: AR DS PS LN; PS: CD SS CA. χ2 = 197.447, df = 84, p < .001; CFI = .973; TLI = .966; RMSEA [90% CI] = .040 [.033, .047]; SRMR = .033; AIC = 58,846; BIC = 59,088. Comparison: vs. M5e: 0.650, 1, .394.

M5a. VC: SI VO IN CO; VS: BD VP; FR: MR FW; WM: AR DS PS LN; PS: CD SS CA. χ2 = 219.344, df = 85, p < .001; CFI = .968; TLI = .960; RMSEA [90% CI] = .043 [.036, .050]; SRMR = .036; AIC = 58,867; BIC = 59,104. Comparisons: vs. M4d: 21.897, 1, <.001; vs. M5c: 12.175, 1, .002; vs. M5d: 19.630, 1, <.001; vs. M5e: 21.247, 2, <.001.

M5b. VC: SI VO IN CO; VS: BD VP; FR: MR FW AR; WM: DS PS LN; PS: CD SS CA. χ2 = 317.222, df = 86, p < .001; CFI = .945; TLI = .933; RMSEA [90% CI] = .056 [.050, .063]; SRMR = .064; AIC = 58,965; BIC = 59,197. Comparisons: vs. M4d: 119.780, 2, <.001; vs. M5a: 97.878, 1, <.001; vs. M5c: 110.050, 2, <.001; vs. M5d: 117.510, 2, <.001; vs. M5e: 119.130, 3, <.001.

M5c. VC: SI VO IN CO; VS: BD VP; FR: MR FW AR; WM: AR DS PS LN; PS: CD SS CA. χ2 = 207.169, df = 84, p < .001; CFI = .971; TLI = .963; RMSEA [90% CI] = .041 [.034, .049]; SRMR = .035; AIC = 58,855; BIC = 59,098. Comparison: vs. M5e: 9.072, 1, .001.

M5d. VC: SI VO IN CO; VS: BD VP; FR: MR FW; WM: AR DS PS LN; PS: CD SS CA. χ2 = 199.714, df = 84, p < .001; CFI = .972; TLI = .965; RMSEA [90% CI] = .040 [.033, .047]; SRMR = .034; AIC = 58,848; BIC = 59,091. Comparison: vs. M5e: 1.617, 1, .199.

M5e. VC: SI VO IN CO AR; VS: BD VP; FR: MR FW AR; WM: AR DS PS LN; PS: CD SS CA. χ2 = 198.097, df = 83, p < .001; CFI = .973; TLI = .965; RMSEA [90% CI] = .040 [.033, .048]; SRMR = .034; AIC = 58,849; BIC = 59,096.
Note: * = M4a was tested in the Technical and Interpretive Manual by Wechsler (2014b) and coincided with the four-factor exploratory model in our data.
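The model comparisons in the table rely on scaled chi-square difference testing for robust estimators (see Statmodel 2024 in the references). Below is a minimal sketch of the two-step computation described on that page; the function name and the use of the M4b/M4c values with correction factors of 1 are our own illustration, under the assumption that with unit correction factors the test reduces to the ordinary chi-square difference.

```python
def sb_chi_square_difference(T0, d0, c0, T1, d1, c1):
    """Satorra-Bentler scaled chi-square difference test statistic.

    T0, d0, c0: scaled chi-square, degrees of freedom, and scaling
                correction factor of the nested (more restrictive) model.
    T1, d1, c1: the same quantities for the comparison model.
    Returns the scaled difference statistic and its degrees of freedom.
    """
    cd = (d0 * c0 - d1 * c1) / (d0 - d1)   # difference-test scaling factor
    TRd = (T0 * c0 - T1 * c1) / cd         # scaled difference statistic
    return TRd, d0 - d1

# With correction factors of 1, this reduces to the plain difference
# (here: Table 5 chi-squares for M4b vs. M4c).
print(sb_chi_square_difference(T0=283.097, d0=86, c0=1.0,
                               T1=207.074, d1=85, c1=1.0))
```

The resulting statistic (76.023 on 1 df) matches the Δχ2 reported for the M4b vs. M4c comparison in Table 5.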
Table 6. CFAs of the best-fitted models disaggregated by age group.
Table 6. CFAs of the best-fitted models disaggregated by age group.
| Model | Age group | F1 (factor loadings) | F2 (factor loadings) | F3 (factor loadings) | F4 (factor loadings) | F5 (factor loadings) | χ² | df | p | CFI | TLI | RMSEA [90% CI] | SRMR | AIC | BIC |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| M4a | G1 (6–8) | VC (.880): SI .678, VO .743, IN .605, CO .551 | PR (.790): BD .562, VP .727, MR .761, FW .449 | WM (.901): AR .776, DS .809, PS .557, LN .757 | PS (.594): CD .688, SS .626, CA .197 | | 120.124 | 86 | .009 | .964 | .956 | .042 [.022, .059] | .048 | 15,502 | 15,669 |
| M4a | G2 (9–11) | VC (.861): SI .831, VO .816, IN .792, CO .667 | PR (.866): BD .670, VP .711, MR .594, FW .593 | WM (.920): AR .702, DS .761, PS .590, LN .683 | PS (.458): CD .576, SS .808, CA .499 | | 105.730 | 86 | .073 | .985 | .982 | .029 [.000, .047] | .040 | 18,168 | 18,344 |
| M4a | G3 (12–14) | VC (.887): SI .880, VO .815, IN .779, CO .702 | PR (.804): BD .572, VP .690, MR .604, FW .599 | WM (.708): AR .687, DS .753, PS .686, LN .775 | PS (.556): CD .509, SS .720, CA .563 | | 155.831 | 86 | <.001 | .942 | .929 | .060 [.045, .075] | .055 | 15,381 | 15,549 |
| M4a | G4 (15–16) | VC (.774): SI .802, VO .900, IN .822, CO .736 | PR (.853): BD .696, VP .634, MR .763, FW .781 | WM (.891): AR .811, DS .806, PS .587, LN .735 | PS (.677): CD .675, SS .731, CA .629 | | 122.196 | 86 | .006 | .961 | .953 | .055 [.030, .076] | .052 | 9,570 | 9,713 |
| M5a | G1 (6–8) | VC (.859): SI .680, VO .759, IN .587, CO .549 | VS (.814): BD .575, VP .798 | FR (.846): MR .791, FW .489 | WM (.854): AR .772, DS .808, PS .560, LN .760 | PS (.583): CD .664, SS .650, CA .201 | 131.846 | 85 | <.001 | .951 | .939 | .050 [.032, .066] | .051 | 15,516 | 15,687 |
| M5a | G2 (9–11) | VC (.846): SI .833, VO .818, IN .788, CO .668 | VS (.826): BD .716, VP .746 | FR (.982): MR .583, FW .588 | WM (.910): AR .704, DS .761, PS .593, LN .677 | PS (.447): CD .571, SS .814, CA .498 | 104.521 | 85 | .074 | .986 | .982 | .029 [.000, .047] | .041 | 18,169 | 18,348 |
| M5a | G3 (12–14) | VC (.838): SI .877, VO .816, IN .780, CO .705 | VS (.724): BD .619, VP .799 | FR (.983): MR .589, FW .598 | WM (.717): AR .690, DS .752, PS .688, LN .771 | PS (.524): CD .510, SS .719, CA .563 | 151.314 | 85 | <.001 | .945 | .932 | .059 [.043, .074] | .052 | 15,377 | 15,548 |
| M5a | G4 (15–16) | VC (.758): SI .803, VO .900, IN .822, CO .734 | VS (.806): BD .823, VP .690 | FR (.859): MR .787, FW .823 | WM (.865): AR .810, DS .806, PS .589, LN .736 | PS (.678): CD .678, SS .730, CA .626 | 112.915 | 85 | .023 | .970 | .963 | .049 [.019, .071] | .052 | 9,563 | 9,709 |
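The AIC column in Table 6 also supports a direct information-criterion comparison of M4a against M5a within each age group, for example via Akaike weights (relative likelihoods exp(-ΔAIC/2), normalized to 1). The following is an illustrative sketch using the tabled AIC values, not a computation reported in the paper; the group labels and dictionary layout are ours:

```python
import math

# AIC values per age group, taken from Table 6
aic = {
    "G1 (6-8)":   {"M4a": 15_502, "M5a": 15_516},
    "G2 (9-11)":  {"M4a": 18_168, "M5a": 18_169},
    "G3 (12-14)": {"M4a": 15_381, "M5a": 15_377},
    "G4 (15-16)": {"M4a": 9_570,  "M5a": 9_563},
}

def akaike_weights(aics: dict) -> dict:
    """Akaike weights: exp(-ΔAIC/2) relative to the best model, normalized."""
    best = min(aics.values())
    rel = {m: math.exp(-(a - best) / 2) for m, a in aics.items()}
    total = sum(rel.values())
    return {m: r / total for m, r in rel.items()}

for group, models in aic.items():
    weights = akaike_weights(models)
    preferred = max(weights, key=weights.get)
    print(group, preferred, round(weights[preferred], 3))
```

On these values the four-factor M4a is clearly favored in the youngest group, while the five-factor M5a edges ahead in the two oldest groups, mirroring the age-related pattern discussed in the article.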