Review

Are People-Centered Intelligences Psychometrically Distinct from Thing-Centered Intelligences? A Meta-Analysis

Department of Psychology, University of New Hampshire, McConnell Hall, 15 Academic Way, Durham, NH 03824, USA
* Author to whom correspondence should be addressed.
Submission received: 15 August 2020 / Revised: 3 September 2021 / Accepted: 9 September 2021 / Published: 30 September 2021
(This article belongs to the Special Issue Advances in Socio-Emotional Ability Research)

Abstract

The Cattell–Horn–Carroll (CHC) or three-stratum model of intelligence envisions human intelligence as a hierarchy. General intelligence (g) is situated at the top, under which are a group of broad intelligences such as verbal ability, visuospatial processing, and quantitative knowledge that pertain to more specific areas of reasoning. Some broad intelligences are people-centered, including personal, emotional, and social intelligences; others concern reasoning about things more generally, such as visuospatial and quantitative knowledge. In the present research, we conducted a meta-analysis of 87 studies, including 2322 effect sizes, to examine the average correlation among people-centered intelligences relative to the average correlation between people-centered and thing-centered intelligences (and similar comparisons). Results clearly support the psychometric distinction between people-centered and thing-centered mental abilities. Coupled with evidence for incremental predictions from people-centered intelligences, our findings provide a secure foundation for continued research focused on people-centered mental abilities.

Intelligence researchers of the 20th century debated whether intelligence was best conceptualized as a general reasoning capacity, first proposed by Charles Spearman (1904), or a set of more-or-less distinct mental abilities that ranged from verbal skills to spatial reasoning, as suggested by L. L. Thurstone (1938). The controversy centered on the empirical discovery that a positive manifold characterized the correlations among people’s ability to solve problems across diverse areas. That is, people’s ability to solve distinct types of problems rose and fell together: if a person were high on one mental ability, they tended to be high on them all.
Psychologists of the time reasoned that if the correlations among verbal, visuospatial, and other intelligences were near r = 1.0, all mental abilities were in perfect synchrony or nearly so across people. Such a state of affairs would argue decisively for Spearman’s general intelligence. At the other extreme, had the correlations been near r = 0.00, each mental ability would have been unambiguously distinct from the others, arguing for a theory of independent, multiple intelligences. As it turned out, however, the average correlations were closer to r = 0.30 to 0.50, leading to a degree of uncertainty as to whether intelligence was unitary or multifaceted. Spearman and his many followers (Gottfredson 1997; Jensen 1998; Ree and Carretta 2002; Spearman 1904) argued eloquently for a general intelligence. However, Thurstone, Guilford, and others countered with equally compelling evidence for the existence of distinct mental abilities (Guilford 1966; Thurstone 1938).
By the 1970s, the development of confirmatory factor analysis allowed for the modeling of hierarchical relations among mental abilities (e.g., Hu and Bentler 1999; Jöreskog 1969); these techniques afforded a more nuanced understanding of the positive manifold: that there existed both a general intelligence and distinct, broad intelligences that were worthy of study. The apotheosis of this new look was the Cattell–Horn–Carroll (CHC), or three-stratum, model of intelligence (Carroll 1993; Flanagan and Dixon 2014; McGrew 2009). The three strata refer to the fact that g, general intelligence, is enshrined at the top of a hierarchy—a chief executive officer, of sorts—under which are situated a set of 10 to 15 broad intelligences. Each broad intelligence, in turn, is measured by specific tasks, represented at the lowest, third tier of the model.
A number of these second-stratum broad intelligences concern reasoning with particular classes of symbols such as words or spatial images. For example, comprehension knowledge divides into vocabulary knowledge, sentence comprehension, and word fluency at the third stratum (Schipolowski et al. 2014); visuospatial ability divides into paper-folding and mental-rotation tasks; and quantitative intelligence into tests of arithmetic. Also present in the second stratum are more foundational, process-based intelligences such as short-term memory, long-term retrieval, and mental speededness that reflect more general characteristics of reasoning (Kovacs and Conway 2016; Schneider and McGrew 2018).

1. Organizing the Broad Intelligences

Although the CHC model produced a viable compromise between advocates for g and for multiple intelligences, there was a fly in the ointment. As the number of identified broad intelligences proliferated from 8 to 12 or more, some researchers asked whether there were “too many intelligences” (Austin and Saklofske 2005; Hedlund and Sternberg 2000). To address this issue, psychologists have suggested that there may be subsidiary groups among the broad abilities that can help organize them.
Perhaps the most well-known division among broad intelligences is that between fluid and crystallized intelligence (Cattell 1943; Cattell and Horn 1978; Horn and Cattell 1966, 1967). Fluid intelligence describes a general capacity to understand abstract relationships such as similarities and differences, apart from prior learning. Crystallized ability, by comparison, describes the depth, breadth, and understanding of acquired information about the world (Cattell 1961). These two factors are often found when analyzing broad intelligences, and are a precursor of the current CHC model (Carroll 1993). More recent proposals to organize the broad intelligences exist as well: Schneider and Newman (2015), for example, distinguished power intelligences including acquired knowledge and other domain-specific areas of reasoning from more speeded intelligences, reflecting the rate at which one finds an answer to a problem (Schneider and McGrew 2018; see also Figure 4 in Schneider and Newman 2015). A quite different model, developed by Mayer and colleagues, suggested the potential existence of a people-versus-thing continuum of broad intelligences—which will be our focus here (Mayer 2018; Mayer et al. 2016; Mayer and Skimmyhorn 2017).

The People versus Thing Continuum

Mayer and colleagues’ conception of the people-versus-thing continuum begins by noting that “many of the broad intelligences relate to specific subject or topic areas” such as quantitative, visuospatial, and verbal-comprehension areas (Mayer 2018, p. 272). There also exist content-free, process-based intelligences such as working memory and speededness, which lie outside the continuum but are important basic processing abilities (or “utility intelligences”) that people draw on in some capacity to solve many types of problems (Kovacs and Conway 2016). The people–thing continuum focuses on the content-based broad intelligences and separates out the people-centered problem-solving abilities—emotional, social, and personal intelligence—that individuals use to reason about themselves and others from the more thing-centered intelligences (Bryan and Mayer 2017; Mayer 2018; Mayer and Skimmyhorn 2017). The thing-centered group most centrally includes quantitative knowledge, which concerns understanding numbers, and visuospatial processing, which pertains to understanding visual patterns as well as the movement of objects in space. In between the people- and thing-centered intelligences are mixed intelligences such as verbal–comprehension and reading–writing, which concern both people and things. This people–thing continuum is represented visually in the context of the modified Cattell–Horn–Carroll (CHC) model depicted in Figure 1.

2. Are People- and Thing-Centered Intelligences Truly Distinct?

The controversy addressed in this paper concerns whether the proposed people-centered abilities such as social, emotional, and personal intelligences are distinct in any fashion from other broad intelligences or from g. For example, Lee J. Cronbach wrote in the 1960s of social intelligence that “enough attempts were made [to measure it] to indicate that this line of approach is fruitless” (Cronbach 1960, p. 319). Indeed, the inability of many early researchers to produce evidence that social intelligence was psychometrically distinct from “abstract”, i.e., general intelligence (see R. L. Thorndike and Stein 1937) served to tamp down research in the area for decades (Conzelmann et al. 2013; Walker and Foley 1973). Emotional intelligence, too, had its early critics (Davies et al. 1998; Ree and Carretta 2002; Schulte et al. 2004), although it is now widely accepted as a broad intelligence within the field of intelligence (MacCann et al. 2014). Personal intelligence appears promising as a further semi-independent people-centered intelligence. However, is such optimism about the distinctness of this group warranted?

2.1. An Understanding of People-Centered Intelligences Is Just Now Emerging

Whether a people–thing continuum exists is still unexplored, primarily because the class of people-centered intelligences has only recently become defined. An informal timeline of people-centered intelligences (which foregrounds brevity relative to subtlety) begins with the introduction of social intelligence in 1920 (E. L. Thorndike 1920), proceeds to its mid-20th-century demise as an area of interest (Cronbach 1960; Walker and Foley 1973), and picks up again with studies of nonverbal communication of emotion in the 1980s (Buck 1984). Our timeline marks the rise of emotional intelligence in 1990 (Mayer et al. 1990; Salovey and Mayer 1990), and the early controversy over whether it could be viably measured (Davies et al. 1998; Mayer et al. 1999; Zeidner et al. 2001)—a controversy finally settled in its favor. Eighteen years later, in the late 2000s, personal intelligence was introduced, with the first measure following in 2012 (Mayer 2008; Mayer et al. 2012). Concurrently, researchers increasingly regarded nonverbal emotion perception, now called emotion recognition ability (ERA), as an intelligence in itself, perhaps part of emotional intelligence (Schlegel et al. 2019b). The need for a new class encompassing these measures was more recent still, coalescing only in the past few years.
The identification of people-centered mental abilities may be ongoing: certain cognitive and social-cognitive tasks have not yet been conceptualized as intelligences but perhaps ought to be further explored (Haier 2017, pp. 124–25). These might include wisdom and even spiritual intelligence; that said, they are not yet ready for inclusion in this review because there are few or no ability-based measures of these skills that have been related to intelligence.
Given the relative recency of the study of people-centered intelligences, the correlations among them reported in the research literature were relatively sparse until recently. There are now, however, enough such reports to allow for a meta-analysis that might answer at least one fundamental and crucial question: do people-centered intelligences correlate among themselves more highly than with thing-centered intelligences, and, similarly, do thing-centered intelligences correlate among themselves more highly than with people-centered intelligences? Put another way, are these two classes of intelligences partially distinct from one another or, as was apparently the case for social intelligence decades ago, are they indistinguishable from any other broad intelligence?

2.2. Evidence for Incremental Validity Is Strong, but Also Incomplete and Indirect

It is worth noting before proceeding that one set of findings already supports the possible existence of a people–thing continuum among intelligences: a growing body of research indicates people-centered intelligences incrementally predict selected criteria over and above thing-centered abilities. For example, personal intelligence predicts such criteria as positive interpersonal relations, better performance in people-centered college courses—but not STEM courses—and other theoretically identified relations over and above verbal, quantitative, and visuospatial abilities (e.g., Bryan 2018; Mayer et al. 2018; Mayer and Skimmyhorn 2017). Similar incremental evidence can be found for emotional intelligence (Mayer et al. 2008). Yet, although these findings argue for the importance of people-centered reasoning, such incremental validity could emerge owing to artifacts such as additional reliable variance over the original measure (e.g., Hunsley and Meyer 2003; Sechrest 1963; Westfall and Yarkoni 2016). The current meta-analysis will provide more direct evidence for or against the class of people-centered intelligences.

3. Current Research

We set out to discover whether, within the pattern of positive correlations among broad mental abilities, there exist variations in correlational level supportive of the people–thing continuum. More specifically, we tested whether people-centered intelligences correlate more highly among themselves than with thing-centered intelligences, and whether their correlation with mixed intelligences lies between the two. We tested the reverse as well: that thing-centered intelligences correlate more highly among themselves than with mixed or with people-centered intelligences.
To accomplish this, we reviewed the literature reporting correlations between people-centered ability-based intelligence measures with other mental abilities. Our meta-analytic work draws on and extends previous research in the area by including social and personal intelligences along with the more studied emotional intelligence (Olderbak et al. 2018; Schlegel et al. 2019b). The inclusion of a wider scope of people-centered intelligences has the benefit of allowing us to model the variability in relations among different types of people-focused reasoning in addition to understanding their relations as a group with other mental abilities.

4. Hypotheses

With this in mind, we tested the following hypotheses:
Hypothesis 1 (H1).
People-centered intelligences will correlate most highly among themselves, next most highly with mixed intelligences, and least highly with thing-centered intelligences.
To test this hypothesis, we examined the differences between the average correlations for people-to-people, people-to-mixed, and people-to-thing mental abilities.
Hypothesis 2 (H2).
Thing-centered intelligences will correlate most highly among themselves, next most highly with mixed intelligences, and least highly with people-centered intelligences.
Hypothesis 2 is the complement of Hypothesis 1. We examined the differences between the average correlations for thing-to-thing, thing-to-mixed, and people-to-thing intelligences.
Hypothesis 3 (H3).
Personal and emotional intelligences will exhibit a greater difference (i.e., lower correlation) with thing-centered intelligences than social intelligence.
In the past, researchers have had particular difficulty distinguishing social intelligence from general intelligence (see the Introduction section). Consistent with those findings, we expected that social intelligence would correlate more highly with thing-centered intelligences than either emotional or personal intelligence. We tested this by comparing the average correlations of measures of emotional, personal, and social intelligences with thing-centered intelligences, expecting the social-to-thing correlation to be highest.

5. Additional Analyses

It was possible to construct a very limited correlation table among thing- and people-centered broad intelligences from our data; although the table’s utility is arguably limited by the fact that each correlation was drawn from different numbers and types of studies, we explored the possibility of factor analyzing the table to see if a people–thing factor emerged. Finally, we checked for publication bias among our sample of studies.

6. Method

Pre-Literature Search Index of People-Centered Assessments

Prior to beginning our literature search, we developed an index of ability-based assessments for emotional, social, and personal intelligences by noting, first, well-known tests in each area. For example, the Mayer–Salovey–Caruso Emotional Intelligence Test (MSCEIT) and Situational Test of Emotional Understanding and Emotional Management (STEU and STEM) are well-established ability-based assessments of emotional intelligence, the George Washington Social Intelligence Test is a known measure of social intelligence, and the Test of Personal Intelligence (TOPI) is an ability-based measure of personal intelligence (Allen et al. 2015; Mayer et al. 2003, 2019; Walker and Foley 1973).
Our review omits potential measures of personal intelligence such as person memory, scales of wisdom, spiritual intelligence, interpersonal judgment accuracy, and empathic accuracy (e.g., Ickes 2016; Letzring and Funder 2018). Either there existed no ability-based measures in the area, or, as far as we knew, no report existed that correlated such measures with another ability-based intelligence measure.
We next consulted relevant review articles, book chapters, and meta-analyses for additional assessments. At a most fundamental level, any scale included in our index needed to be plainly operationalized as an ability-based test, i.e., with correct and incorrect answers. This ruled out self-judgment measures of intelligences such as the Tett, Schutte, and Bar-On scales of emotional intelligence (Bar-On 1997; Schutte et al. 1998; Tett et al. 2005), as well as the Self-Estimated Personal Intelligence measure (Mayer et al. 2021). At least one test fell in a gray area: the Levels of Emotional Awareness Scale (Lane et al. 1990). An earlier review remarked it “does not fit easily into the self-report personality category, the ability category, or the self-reported ability category” (Ciarrochi et al. 2003, p. 1488). Rather it may be closer to a cognitive style or thematic/projective measure and was omitted as a consequence.
Finally, to be indexed, the scale had to possess a reasonable track record including one or more reports of reliabilities and correlations with other scales of intelligence; this tended to exclude both rarely used scales and new scales for which a research track record had not yet accumulated.
Using the above procedures, we identified additional ability-based assessments for emotional intelligence from Rivers et al. (2007), Schlegel et al. (2019b), and Olderbak et al. (2018). Further measures of social intelligence were gathered from the review by Walker and Foley (1973), and more recent assessments in Conzelmann et al. (2013). A full list of indexed measures can be found in Table 1.

7. Literature Search

For each ability-based assessment of emotional, social, and personal intelligence in Table 1, we conducted a keyword search in PsycINFO using the measure’s full name to identify relevant works that correlated the measures of people-centered intelligences with one or more mixed or thing-centered assessments. This yielded over 4000 potentially relevant works. We also included an additional 167 articles we identified from three earlier meta-analyses on related issues (Bryan and Mayer 2020; Olderbak et al. 2018; Schlegel et al. 2019b).

8. Inclusion Criteria

For each set of search results, the first author read through the titles and abstracts, and excluded irrelevant and/or duplicate articles that had emerged from previous searches. Each article was then screened further on the basis of a series of inclusion criteria (see the middle of Figure 2). For inclusion, the work had to (a) be a peer-reviewed journal article, (b) employ at least one ability-based assessment of people-centered ability (i.e., emotional, personal, or social), and (c) report at least one Pearson correlation across possible types of comparisons (i.e., people-to-people, people-to-mixed, or people-to-thing). Although some experts have suggested that beta coefficients can be used to impute correlations in meta-analysis (e.g., Peterson and Brown 2005), more recent evidence argues against their use (Roth et al. 2018), as well as against the use of partial correlations more generally (Aloe 2014). Therefore, our focus was on identifying only zero-order Pearson correlations.
Sixty-nine articles reporting 87 studies met these criteria. The list of included studies can be found in Table 2.

9. Coding of Articles

The first author coded all 87 studies, with the assistance of a trained undergraduate research assistant who coded approximately half of the studies. The studies coded by both the first author and the research assistant were cross-checked to ensure coding accuracy. Discrepancies were resolved through discussions between the author and research assistant or by consulting the respective studies. Each study was coded for (a) year of publication; (b) sample characteristics (including sample type, average age, and gender); (c) the specific measure(s) used for each type of intelligence; (d) the reliabilities of each measure, if provided; (e) the correlation reported between people, mixed, or thing-centered intelligences; and (f) the sample size associated with each correlation.
Additionally, while the people–thing continuum sets aside more foundational intelligences such as working memory and processing speed, we noted several assessments in our sample of studies that reported correlations with such abilities. In such instances, we recorded the measures used to assess both abilities as well as any relevant correlations with people-, mixed-, or thing-centered abilities. Any correlations with working memory and processing speed were kept separate from the people-to-thing analyses discussed here, although we include them in separate analyses presented later (see More Specific Comparisons in the Results). See also Bryan and Mayer (2021b) for the full open-source data set and Bryan and Mayer (2021c) for the full R script.
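The coding scheme described above (fields (a) through (f)) can be sketched as a simple record type. This is an illustrative reconstruction only; the field names are hypothetical and do not reflect the study's actual codebook (the full data set and R script are available in Bryan and Mayer 2021b, 2021c).

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CodedEffect:
    """One coded correlation from a study, following fields (a)-(f) in the
    text. Field names are illustrative, not the study's actual codebook."""
    year: int                  # (a) year of publication
    sample_type: str           # (b) sample characteristics
    mean_age: float
    pct_female: float
    measure_x: str             # (c) measures used for each intelligence
    measure_y: str
    rel_x: Optional[float]     # (d) reliabilities, if reported
    rel_y: Optional[float]
    r: float                   # (e) reported correlation
    n: int                     # (f) sample size for that correlation

# A hypothetical coded entry:
effect = CodedEffect(2015, "college", 20.1, 0.55,
                     "MSCEIT total", "Raven's Matrices",
                     0.90, 0.85, 0.25, 150)
```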

9.1. Designation of Assessments as People-Centered, Mixed, or Thing-Centered

People-centered assessments included any measures of social, emotional, or personal intelligence as earlier indexed in Table 1. We then drew on theoretical work by Mayer (2018) and the subgroups of intelligences noted by Bryan and Mayer (2020) to inform our designation of assessments as people-, mixed-, or thing-centered. A measure was designated as “mixed” if it assessed skills underlying either comprehension knowledge, reading and writing ability, or long-term retrieval, because these skills pertain to both people and things. Thing-centered assessments included visuospatial processing, quantitative knowledge, and measures of fluid intelligence, because those often involve deciphering abstract patterns (e.g., Raven’s Matrices; Raven 2009).
When the immediate classification of a scale was uncertain, the two authors discussed the instance, proceeding through such matters on a case-by-case basis. For example, the Reading the Mind in the Eyes scale, for which respondents examine a rectangular area around the eyes of faces and then describe what is conveyed by it, includes response alternatives involving emotion recognition (e.g., irritated), but also items pertaining to emotion-related traits (comforting) and motivational or behavioral-descriptive traits (e.g., playful). In this instance, we classified the test as a measure of emotional intelligence because a number of items pertain to emotion recognition and the scale is frequently used as an ERA; however, there was arguably justification to place it with personal intelligence. Each assessment and its respective designation as people, mixed, or thing can be found in the “Full List of Included Works” section of the Technical Supplement (Bryan and Mayer 2021a). See also the “Designation of Assessments” in the Technical Supplement for additional information regarding the designations of more ambiguous assessments.

9.2. Distinguishing between Broad and Specific Assessments of Abilities

Each study was reviewed further, and correlations were distinguished according to whether they were between individual tasks, or between broad assessments, or between an individual task and a broad assessment. For example, studies that employed the Mayer–Salovey–Caruso Emotional Intelligence Test (MSCEIT) could have used the total EI score (a broad-based score), one or more of the eight individual task scores (e.g., Blends; Mayer et al. 2003), or something in-between (i.e., branch scores). Akin to the broad and narrow abilities reflected in the Cattell–Horn–Carroll (CHC) model, broader assessments included measures that tapped into multiple areas of reasoning within a broad ability (e.g., MSCEIT total scores are calculated from participants’ performance on all tasks involved in all four areas of emotion reasoning) while more narrow assessments were those that tapped into a single area of problem solving (e.g., emotion recognition; see Appendix A for our designations of assessments).
Correlations for the broader measures of a mental ability were common across assessments of people-centered abilities (e.g., MEIS total, GWSIT total, GECo total, etc.), but less so for the measures pertaining to mixed and thing-centered areas of reasoning, which tended to focus on more narrow skills (e.g., Raven’s Matrices, Wordsumplus). We matched scale types for each article where possible, for example, pairing a MSCEIT branch score, if reported, with Raven’s Matrices, as opposed to employing the more heterogeneous overall MSCEIT score. The only instance in which we did not match scale types was when a study included only a correlation between mismatched measures (e.g., broad and narrow). Fortunately, for the studies to date, the process outlined above left no window for selecting between two correlations that equally met criteria but differed in magnitude. For that reason, this process was independent of any potential researcher bias.
To ensure this process did not impact our results, we ran a separate set of analyses, including any broad assessments not included due to our method of matching of scale types. The findings were not substantively different from those reported here. See the “Distinguishing Between Broad and Specific Assessments” section of the Technical Supplement (Bryan and Mayer 2021a).

9.3. Coding Intelligence Contrasts

Each correlation between a pair of intelligence scores represented one of six comparison types: a people-to-people correlation, people-to-mixed, people-to-thing, thing-to-thing, thing-to-mixed, or mixed-to-mixed correlation. We dummy coded those comparisons using six variables that could take on a 0 or 1 depending upon which of the comparisons was present.
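The dummy-coding step can be sketched as follows. This is a minimal illustration of the coding logic, not the study's actual code (the analyses themselves were carried out in R); the labels are ours.

```python
# Illustrative sketch: dummy-code each correlation's comparison type.
# The six labels mirror the six contrasts described in the text.
CONTRASTS = [
    "people-people", "people-mixed", "people-thing",
    "thing-thing", "thing-mixed", "mixed-mixed",
]

def dummy_code(contrast: str) -> dict:
    """Return six 0/1 indicator variables, exactly one of which is 1."""
    if contrast not in CONTRASTS:
        raise ValueError(f"unknown contrast: {contrast}")
    return {c: int(c == contrast) for c in CONTRASTS}

codes = dummy_code("people-thing")
```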
This process led to the identification of 1973 effect sizes reflecting correlations between people-to-people, people-to-mixed, and people-to-thing intelligences. An additional 349 correlations were found between mixed-to-mixed, mixed-to-thing, and thing-to-thing intelligences, for a total of 2322 effect sizes.

10. Statistical Analyses

All effect sizes and standard errors were corrected for attenuation due to unreliability (Schmidt and Hunter 2015; Wiernik and Dahlke 2020). Reliability estimates used to disattenuate effect sizes and standard errors were predominantly obtained from the studies in which a given effect size was reported. However, in instances where the reliability of a measure was not reported, a value was estimated by drawing on either the measure’s reliability as indicated in other studies included in our sample, or by consulting other sources (e.g., test manuals, journal articles). Effect sizes were then transformed to Fisher’s Z’s and entered into the meta-analytic software to carry out tests of the central hypotheses.
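The two corrections just described are standard: Spearman's disattenuation formula divides an observed correlation by the square root of the product of the two measures' reliabilities, and Fisher's Z is the inverse hyperbolic tangent of r. A minimal sketch (in Python for illustration; the study's own analyses used R):

```python
import math

def disattenuate(r: float, rel_x: float, rel_y: float) -> float:
    """Correct an observed correlation for unreliability in both measures:
    r_corrected = r / sqrt(rel_x * rel_y) (Spearman's classic formula)."""
    return r / math.sqrt(rel_x * rel_y)

def fisher_z(r: float) -> float:
    """Fisher's variance-stabilizing transform: z = atanh(r)."""
    return math.atanh(r)

def inv_fisher_z(z: float) -> float:
    """Back-transform a Fisher's Z to a Pearson r: r = tanh(z)."""
    return math.tanh(z)

# Hypothetical example using the average reliabilities reported in Table 3
# for emotional (0.74) and personal (0.87) intelligence scales:
r_corrected = disattenuate(0.30, 0.74, 0.87)
z = fisher_z(r_corrected)
```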
We conducted a three-level multilevel meta-analysis using the metafor package in R (Viechtbauer 2010), in which the Fisher’s Z-transformed, disattenuated effect sizes were nested within studies to account for any statistical dependence due to the nesting of multiple effect sizes within studies (Konstantopoulos 2011). Each analysis involved entering the relevant dummy-coded predictors reflecting different intelligence correlation contrasts (e.g., people-to-thing or thing-to-thing correlations) as moderators into the model (Enders 2013). Similarly, we created dummy codes for more specific comparisons (e.g., emotional intelligence-to-fluid intelligence). Following recommendations by Hall and Rosenthal (2018), all estimates of the average correlation among different intelligence contrasts were taken from the unweighted random effects model, which produces average effect estimates that are more generalizable across research methods. All reports of average estimated correlations between the different intelligence pairs have been transformed back to disattenuated Pearson rs in this article.

11. Results

11.1. Study Characteristics

The 87 studies included in our analyses spanned from studies published in the 1930s that examined social intelligence to studies published in 2020 examining emotional intelligence. A total of 50 of the 87 studies in our sample (57%) were published during or after 2010, reflecting the recent upsurge of research in people-centered intelligences. Across the studies, sample sizes ranged from as few as 24 participants to more than 4000, for an overall sample size of 24,627 (M = 283.07). Samples were predominantly composed of college students (58 studies), with some child/adolescent, community, and other samples. The overall average age of participants was 25.52 years. Of the 83 studies that reported information regarding gender, samples were on average 56% female (Nmale = 9773, Nfemale = 12,439). Scale reliabilities for social intelligence assessments averaged α = 0.63 (range = 0.10 to 0.98), for emotional intelligence, α = 0.74 (range = 0.42 to 0.95), and for personal intelligence, α = 0.87 (range = 0.71 to 0.94; see Table 3).

11.2. Types of Abilities Represented

The number of effect sizes reflecting each type of people-centered intelligence followed their order of introduction in the field of intelligence, with social intelligence first at 1108, closely followed by emotional intelligence at 1053, and most recently by personal intelligence at 14 effect sizes. The people-centered intelligences were compared 424 times with mixed ability measures such as WAIS Vocabulary and SAT Verbal, and 464 times with thing-centered abilities such as Raven’s Matrices, SAT-Math, and O*Net spatial.

12. Preliminary Analyses

12.1. Examination of Outliers

Prior to entering the effect sizes into our model, we examined the disattenuated-for-reliability (but not yet Fisher’s Z-transformed) correlations for the presence of outliers, defined as ±3 standard deviations from the mean. Four effect sizes, all from the same study, were flagged, but their removal did not substantively affect the pooled correlation estimates or the confidence intervals, and the data were therefore retained.
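This outlier rule can be sketched in a few lines. The sketch is illustrative (the data values below are invented), showing only the flagging criterion, not the study's actual screening code:

```python
import statistics

def flag_outliers(values, k=3.0):
    """Flag values lying more than k sample standard deviations
    from the mean (here, k = 3, matching the criterion in the text)."""
    m = statistics.mean(values)
    s = statistics.stdev(values)
    return [v for v in values if abs(v - m) > k * s]

# Invented example: twenty typical correlations plus one extreme value.
flagged = flag_outliers([0.3] * 20 + [5.0])
```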

12.2. Examination of between (Level 3) and Within-Study (Level 2) Heterogeneity

We drew on the intercept-only model to calculate heterogeneity estimates for the distribution of variance across the levels of our model. The majority of the heterogeneity was attributed to within-study (level 2; I2 = 65.05%) variance, followed by between-study (level 3; I2 = 30.81%) variance. Heterogeneity estimates were significant at both levels 2 and 3 (p’s < 0.001). The effect sizes varied sufficiently within and between studies as to imply the presence of moderators (Assink and Wibbelink 2016). For this reason, we proceeded to test our hypotheses.
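In a three-level model, the I2 shares are commonly computed by dividing each estimated variance component by the total of the two heterogeneity components plus the typical sampling variance (a common approach for multilevel meta-analysis; see, e.g., Assink and Wibbelink 2016). A sketch with invented variance components, not the study's actual estimates:

```python
def multilevel_i2(sigma2_within: float, sigma2_between: float,
                  typical_sampling_var: float) -> tuple:
    """Partition heterogeneity into within-study (level 2) and
    between-study (level 3) I^2 percentages. Inputs are the estimated
    variance components and the typical sampling variance."""
    total = sigma2_within + sigma2_between + typical_sampling_var
    return (100 * sigma2_within / total, 100 * sigma2_between / total)

# Invented components chosen for round numbers:
i2_within, i2_between = multilevel_i2(0.06, 0.03, 0.01)
```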

13. Test of Hypotheses

13.1. People-Centered Intelligences Will Correlate Most Highly among Themselves, Next-Most-Highly with Mixed Intelligences, and Least Highly with Thing-Centered Intelligences (Hypothesis 1)

To test whether people-to-people measures were more highly correlated than people-to-mixed and people-to-thing pairings, we entered the three dummy-coded comparison types as moderators into our model and found significant differences between the pairings (F(3, 2318) = 78.78, p < 0.001) (see Figure 3a). As predicted, people-centered intelligences correlated more highly among themselves, at r = 0.43, 95% CI [0.39, 0.48], than with mixed intelligences, r = 0.36, 95% CI [0.31, 0.40], or with thing-centered intelligences, r = 0.29, 95% CI [0.24, 0.34]. Recall that these correlations are corrected for attenuation due to unreliability; all differences among the comparisons were significant at the p < 0.001 level or beyond. Sample and effect size information for each of the people-centered intelligence contrasts can be found in Table 4.
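As the text notes, correlations were pooled on the Fisher's Z scale and back-transformed to r, as is standard in meta-analysis. A minimal sketch of the transformation and of a back-transformed confidence interval (function names are ours):

```python
import math

def fisher_z(r):
    """Fisher's Z transformation of a correlation coefficient."""
    return 0.5 * math.log((1 + r) / (1 - r))

def z_to_r(z):
    """Back-transform a Fisher's Z value to a correlation (tanh)."""
    return math.tanh(z)

def back_transformed_ci(z_mean, se, crit=1.96):
    """95% CI computed on the Z scale and returned on the r scale."""
    return z_to_r(z_mean - crit * se), z_to_r(z_mean + crit * se)
```

Pooling on the Z scale keeps the sampling distribution approximately normal with variance independent of the true correlation, which is why the CIs above are asymmetric around r once back-transformed.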

13.2. Thing-Centered Intelligences Will Correlate Most Highly among Themselves, Next-Most-Highly with Mixed Intelligences, and Least Highly with People-Centered Intelligences (Hypothesis 2)

Did Hypothesis 2 also hold? That is, did thing-centered intelligences demonstrate similarly high within-group relations relative to the other pairings? In a second analysis, we entered dummy-coded contrasts reflecting thing-to-thing, thing-to-mixed, and people-to-thing comparisons as moderators in our model. The three yielded significant between-group differences overall (F(3, 2318) = 93.55, p < 0.001). As predicted, thing-centered intelligences correlated most highly among themselves (r = 0.74, 95% CI [0.70, 0.78]), next with mixed intelligences (r = 0.43, 95% CI [0.37, 0.49]), and least with people-centered intelligences (r = 0.29, 95% CI [0.24, 0.34]; see Figure 3b). Pairwise comparisons revealed that the average correlation among thing-centered abilities was significantly higher than the average mixed-to-thing and people-to-thing correlations (all p’s < 0.001). Sample and effect size information for each of the thing-centered and mixed intelligence contrasts can be found in Table 4.

13.2.1. More Specific Comparisons

As noted in the earlier “Statistical Analysis” section, we further coded for more specific groups of intelligences. Table 5 shows the average correlations of social, emotional, and personal intelligences (top row) with these more specific groups. For example, mixed intelligences are divided into comprehension knowledge, long-term memory, and reading and writing types. Additional correlation estimates comparing general assessments of emotional intelligence with more specific emotion recognition ability (ERA) assessments can be found in the “Breakdown of Emotional Intelligence Measures” section of the Technical Supplement (Bryan and Mayer 2021a). That breakdown indicates that general emotional intelligence and ERAs exhibit similar patterns to one another and to other intelligences. Yet although measures of general emotional intelligence formed a relatively cohesive group, correlating with one another at r = 0.52, as did ERAs among themselves at r = 0.57, the two sets of measures were somewhat less related to one another (r = 0.33).

13.2.2. Average Correlations among People-Centered Intelligences

The intercorrelations among people-centered abilities varied (F(4, 2317) = 16.66, p < 0.001) (see top rows, Table 4): Emotional intelligence correlated most highly with personal intelligence (r = 0.70) and least with social intelligence (r = 0.23), a statistically significant difference (p < 0.010). The within-group average for emotional intelligence (r = 0.50) was also significantly higher than that for social intelligence assessments (r = 0.33; p < 0.001). No data were available regarding the correlation between personal and social intelligences.

13.2.3. Average Correlations among People-Centered and Mixed Intelligences

The average correlations of social, emotional, and personal intelligences with assessments of comprehension knowledge were also robust and comparable across the people-centered abilities (range = 0.35 to 0.41, all p’s n.s.). Some variability did exist in the correlations between each people-centered ability and reading and writing ability, most likely owing to the number of available effect sizes for each contrast (see middle rows, Table 5). Indeed, while the average estimated correlation between social intelligence and reading and writing ability (r = 0.78) appears much higher than that between emotional intelligence and reading and writing ability (r = 0.32; p = 0.04), the former estimate was based on only one identified effect size. Additional research correlating reading and writing ability with social reasoning is likely to modify that value.
These more specific comparisons laid the groundwork for testing Hypothesis 3.

13.3. Personal and Emotional Intelligences Exhibit a Greater Difference (i.e., Lower Correlation) with Thing-Centered Intelligences than Does Social Intelligence (Hypothesis 3)

Our third and final hypothesis was that emotional and personal intelligences would be relatively distinct from thing-centered intelligences compared with social intelligence, which we expected to relate most highly to the mixed- and thing-centered abilities. As shown toward the bottom of Table 5, this was not exactly the case. Looking at the thing-centered rows, where one might expect the greatest difference, the correlations for social intelligence ranged from r = 0.22 to 0.30; those for emotional intelligence, from r = 0.17 to 0.29; and for personal intelligence, from r = 0.18 to 0.26. The differences seemed fairly small and did not reach statistical significance, excepting a marginally significant difference in the estimated relations of social and emotional intelligences with visuospatial processing (r = 0.29 versus r = 0.17, respectively, p = 0.06). Of note, however, social intelligence was quite distinct from emotional intelligence, correlating at r = 0.23, no higher than its correlations with thing-centered intelligences. Measures of social intelligence also correlated with speed and short-term memory, whereas that relation was near-absent for personal and emotional intelligences.

14. Exploratory Factor Analyses

In further analyses, we constructed a matrix of the 11 intelligences identified in our work by drawing on the estimated correlations reported in Table 5 (the full matrix can be found in Supplemental Table 5, in the “Exploratory Factor Analysis” section of Bryan and Mayer (2021a)). Missing correlations for a given intelligence were imputed by taking the average of its correlations with the other intelligences in the table. Note that each correlation in the matrix stemmed from a different number of studies, with different Ns. We used both SPSS and R to conduct exploratory factor (and principal components) analyses. Perhaps unsurprisingly, our attempts to factor analyze the matrix sometimes identified the matrix as singular and frequently generated ultra-Heywood cases (factor loadings above 1.00), regardless of the software.
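The row-mean imputation described above can be sketched as follows. This is our minimal illustration, not the authors' actual procedure; in particular, averaging the two row means to keep the matrix symmetric is our assumption.

```python
def impute_missing(R):
    """R: square correlation matrix as a list of lists, with None marking
    missing off-diagonal cells. Each missing pair (i, j) is filled with
    the average of row i's and row j's observed off-diagonal correlations,
    keeping the matrix symmetric."""
    n = len(R)

    def row_mean(i):
        # Mean of row i's observed correlations, excluding the diagonal.
        vals = [R[i][j] for j in range(n) if j != i and R[i][j] is not None]
        return sum(vals) / len(vals)

    filled = [row[:] for row in R]
    for i in range(n):
        for j in range(i + 1, n):
            if R[i][j] is None:
                filled[i][j] = filled[j][i] = (row_mean(i) + row_mean(j)) / 2
    return filled
```

For example, the missing personal-to-social correlation noted earlier would be filled from each intelligence's observed correlations with the remaining measures.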
To ameliorate the issue, we focused on a smaller matrix of six intelligences that were especially relevant to our hypotheses: three people-centered intelligences and three mixed and thing-centered intelligences (see Table 5). We excluded fluid intelligence and comprehension knowledge because their breadth tends to promote Heywood cases in factor loadings (Bryan and Mayer 2020), and excluded processing speed and short-term memory because, as process-based or ‘utility’ intelligences, they were less relevant to the people–thing continuum (see the Introduction section).
Using the six-by-six matrix, we were able to obtain a solution that extracted three principal components in R, shown in Table 6 (left) (a solution in SPSS using a slightly modified matrix was nearly identical). We also report a Schmid–Leiman factor transformation in Table 6 (right). The Schmid–Leiman procedure allows for a hierarchical factor analysis that includes a first general factor within exploratory factor analysis (most hierarchical factor solutions today are instead conducted within confirmatory factor analysis). The root mean square of the residuals (RMSR) was 0.09 for the principal components solution and 0.05 for the Schmid–Leiman; the latter, however, generated an ultra-Heywood case that was then reduced by the software in the solution indicated.
Encouragingly, however, the principal components and Schmid–Leiman solutions share key features. Both begin with a g or g-like intelligence factor. Beyond that, the principal components analysis next extracted a clear bipolar people-versus-thing-centered dimension of broad intelligences, with visuospatial processing and quantitative knowledge loading negatively and emotional and personal intelligences positively. The third factor was defined by a social–verbal composite at one end (social intelligence and reading and writing ability) versus visuospatial processing at the other. The Schmid–Leiman solution similarly contrasted personal and emotional intelligences on their own second factor (after g), separating them from a third, visuospatial and quantitative knowledge factor. The first factor after g was the combination of reading and writing ability and social intelligence that appeared third in the principal components analysis. These results are consistent with our prediction of distinct subgroups of mental abilities, akin to a people–thing continuum. A viable alternative interpretation is possible, however: that emotional and personal intelligences, at least, might form a single broad intelligence. As more studies accumulate, the distinction (or lack thereof) ought to become clearer.

15. Publication Bias

Lastly, we tested for the presence of publication bias among our sample of studies. Because it was unclear what kind of bias might exist in this heterogeneous group of studies, we created four funnel plots, each plotting the Fisher’s Z-transformed, disattenuated correlations against their respective standard errors. The first funnel plot represented all the effect sizes, while the remaining three plotted effect sizes belonging to people-to-people, people-to-mixed, and people-to-thing pairs, given the importance of these to our central hypotheses (see Figure 4). The effect sizes clustered in the middle and toward the apex of the funnel in all four plots.
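The funnel-plot coordinates can be computed as in the sketch below: each correlation is Fisher's Z-transformed and paired with the standard error of Z, which for a simple correlation is 1/sqrt(n − 3). A minimal illustration with our own function names; the actual plots were built from the disattenuated correlations.

```python
import math

def fisher_z_se(n):
    """Standard error of a Fisher's Z-transformed correlation
    based on a sample of size n."""
    return 1.0 / math.sqrt(n - 3)

def funnel_coordinates(effects):
    """Given (r, n) pairs, return (z, se) points for a funnel plot:
    Fisher's Z on the x-axis, standard error on the (inverted) y-axis."""
    return [(math.atanh(r), fisher_z_se(n)) for r, n in effects]
```

Because the standard error shrinks as n grows, large-sample effect sizes sit near the apex of the plot, which is where asymmetry from large-study effects becomes visible.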
There was noticeable spread across the upper portions of each plot, suggesting that studies with larger samples reported greater variability in effect sizes, especially for people-to-people-centered abilities (see Figure 4b). Sample size was a significant, positive moderator of effect size for the people-to-people and people-to-mixed contrasts, although not for the people-to-thing contrasts, suggesting that larger correlations were reported by studies with larger samples (p’s < 0.01) (Hox and De Leeuw 2003). Effect sizes from studies with smaller samples were less common but showed some small spread across the lower portions of each plot. Funnel plot asymmetry depicting such large-study effects has been noted in the intelligence literature, although bias in the field generally trends in the opposite direction, indicating small-study effects (Nuijten et al. 2020).
To our knowledge, little research has focused on the factors that contribute to large-study effects in funnel plots. Possibilities include distortions stemming from corrected correlations for scales with low initial reliabilities, or from certain people-centered measures relative to others. More detailed examinations of the dispersion of effect sizes for specific intelligence contrasts were inconclusive (see “Publication Bias” in the Technical Supplement; Bryan and Mayer 2021a). Beyond that, the substantial clustering of effect sizes, particularly in the plots depicting all effect sizes and the people-to-people correlations, may be due to chance or may reflect additional moderators beyond the scope of our focus here (Sterne et al. 2005).

16. Discussion

A century of intelligence research stretches from Thorndike’s (1920) proposal of social intelligence to our present-day understanding of people-centered intelligences (Mayer 2018). Yet until recently, remarkably little direct evidence in those hundred years addressed whether people-centered intelligences are psychometrically distinct from other broad intelligences such as comprehension knowledge, visuospatial processing, or quantitative knowledge.
The present research reports a key, direct test of the distinction between people-centered and more traditional thing-focused mental abilities. Our aim was to establish an understanding of the average relation between different types of mental abilities, classified according to the problem-solving area of each—i.e., people-centered, thing-centered, or mixed. We proposed that people-centered abilities, which people draw upon to reason about themselves and others, relate more highly with one another than with mental abilities about things (e.g., numbers and spatial relations). Additionally, we sought to explore whether the relations formed a relatedness-gradient (i.e., through mixed intelligences) consistent with a people-versus-thing continuum.

17. Are People-Centered Intelligences Distinct from Other Abilities?

Across 87 studies including more than 2000 effect sizes, our findings provide evidence for clear correlational distinctions between people- and thing-centered intelligences. Specifically, people-centered intelligences were plainly more highly correlated among themselves than with thing-centered intelligences, at r = 0.43 compared to r = 0.29 (relations with mixed intelligences fell in between, at r = 0.36). This pattern was robust when examining each people-centered ability and its relations to different types of thing-centered and mixed abilities. In a parallel fashion, thing-centered intelligences correlated more highly with one another than with people-centered intelligences, at r = 0.74 versus 0.29. Together, these findings provide key evidence supporting the proposed distinction between classes of people- and thing-centered intelligences, with mixed abilities in between: a people–thing continuum that arguably also emerged in a provisional factor analysis of a composite matrix.

18. An Observation on the “Cohesiveness” of the Intelligence Groups

Alongside the evidence for the continuum above, there were differences in the magnitude of the average within-group correlations: people-to-people intelligences averaged r = 0.43, versus 0.74 for thing-to-thing and 0.62 for mixed-to-mixed intelligences. Most obviously, this appears due to the relatively low correlations between measures of social intelligence and measures of personal and emotional intelligence (e.g., r = 0.23 with emotional intelligence). Indeed, the exploratory factor analyses indicated that social intelligence was closer to Grw than to its neighboring people-centered measures. A further distinction was that measures of social intelligence correlated with speed and short-term memory, whereas that relation was near-absent for personal and emotional intelligences. It may be that, compared with assessments of social intelligence, the tests used here to assess personal and emotional intelligences tap more into domain-specific knowledge than into basic processing abilities. Additional research may further elucidate the relations between other people-centered abilities and lower-level processing capacities (Fiori et al. 2019).
Lee J. Cronbach pointed out 60 years ago that social intelligence was challenging to distinguish from general reasoning (Conzelmann et al. 2013; Cronbach 1960). Some experts have remarked that differences in social reasoning skills may be more heavily tied to an individual’s verbal and abstract reasoning skills than initially supposed (Kihlstrom and Cantor 2000; R. L. Thorndike and Stein 1937). Here, however, social intelligence appeared not especially highly related to any other intelligence (except, in one study, to reading and writing ability). These findings are plainly discrepant with Cronbach’s concern that social intelligence merged into general intelligence. If the results are taken at face value, social intelligence correlates with little else; perhaps, however, as more reliable and better-defined assessments of social intelligence are developed, they will prove more highly related to other people-centered intelligences (Conzelmann et al. 2013; Lee et al. 2000; Walker and Foley 1973).

19. Strengths and Limitations

The current meta-analysis complements previous research examining the relations among people-centered intelligences and more traditionally studied mental abilities such as fluid intelligence and comprehension knowledge (Olderbak et al. 2018; Schlegel et al. 2019b; Völker 2020). Indeed, our estimates for the correlations of emotional intelligence with fluid intelligence (r = 0.29) and with comprehension knowledge (r = 0.35) are well within the range of values found by Olderbak et al. (2018), who reported estimates for the branches of emotional intelligence with fluid intelligence ranging from 0.21 to 0.50 (total EI r = 0.33), and with comprehension knowledge ranging from 0.18 to 0.39 (total EI r = 0.26). Our findings also extend this body of literature by providing some of the first estimates of how other people-centered intelligences, such as social and personal intelligence, correlate with other mental abilities.
That said, our study exhibits some important limitations. Our literature search focused on identifying relevant works that correlated people-centered intelligences with thing-centered and mixed abilities. As such, the average thing-to-thing, thing-to-mixed, and mixed-to-mixed correlations were estimated only from those studies in our sample that provided such correlations rather than from the broader intelligence literature. Nonetheless, the average correlations among mixed and thing-centered abilities found here were comparable to estimates reported in both meta-analyses and large-scale psychometric studies in the field (e.g., Bryan and Mayer 2020; Phelps et al. 2005; Sanders et al. 2007). For example, Bryan and Mayer reported correlations among thing-centered abilities, including fluid intelligence, visuospatial processing, and quantitative knowledge, averaging r = 0.69 (range r = 0.58 to 0.81), approximating the average thing-to-thing estimate of r = 0.74 (range r = 0.68 to 0.77) found in this report. (Estimates taken from Bryan and Mayer were produced from factor modeling and are therefore considered corrected for unreliability, making them approximately comparable to the findings here.) Additionally, the estimates for correlations between thing-centered and mixed intelligences reported by Bryan and Mayer (2020) averaged r = 0.53 (range r = 0.42 to 0.73), whereas our values here overlapped, albeit somewhat lower, averaging r = 0.43 (range 0.34 to 0.62).
A further limitation of our findings includes possible alternative interpretations of what we regard as a people–thing continuum among select broad intelligences. Although our factor analyses support such a dimension, it does not rule out the possibility that two or three of the emotional, personal, and social intelligences may constitute a single “broader” people-centered reasoning capacity with subfacets of personal, emotional, and (possibly) social intelligences. Additionally, our middle, “mixed” category of intelligences might exhibit its “betweenness” because verbal ability is required to some degree to understand and answer both the people- and thing-centered measures.
Finally, the persistent criticisms of people-centered intelligences—and especially social intelligence—through the mid-to-late 20th century tamped down research in the area (e.g., Walker and Foley 1973), with notable exceptions (e.g., Guilford 1988). Researchers shied away from the construct for some decades thereafter, although interest was reignited by Gardner’s (1983) biopsychological conception of multiple intelligences. Such interest was further piqued by the advent of emotional intelligence in the early 1990s (Mayer et al. 1990; Salovey and Mayer 1990), and more recent conceptualizations of emotion recognition ability and social intelligence (Conzelmann et al. 2013; Kihlstrom and Cantor 2000; Lee et al. 2000; Schlegel et al. 2019b). Personal intelligence, in particular, was introduced so recently that its comparisons with the other scales are limited to date. Our empirical tests were more constrained in scope than we might have liked given these realities, but we are hopeful further research will remediate these issues moving forward.

20. Conclusions

There is converging evidence as to the importance of people-centered intelligences in real life. Such findings include that people-centered intelligences out-predict thing-centered intelligences for certain life outcomes, especially in work and school settings that require reasoning about people (Mayer et al. 2018; Mayer and Skimmyhorn 2017). In addition, interventions designed to improve interpersonal understanding positively affect behavior (Durlak et al. 2011; Taylor et al. 2017). When these findings are combined with the present analyses, they collectively argue for the existence of a partially distinct group of intelligences that concern reasoning about people.
Because we have found evidence in support of such distinctions, it is worth considering the new reality of enhanced research activity in regard to these broad intelligences. Continued research in the area is warranted, focusing on the relations among people- and thing-centered broad intelligences, as well as on what people-centered intelligences uniquely predict. Such work can consolidate our understanding of how these diverse abilities best fit among other, more traditionally studied intelligences, as well as their practical importance to individuals and society.

Author Contributions

Both authors developed the conceptualization of the work, based on a suggestion by J.D.M. Similarly, both authors developed the methodology involved for the literature review and meta-analysis. The work on the project was co-administered by both authors. V.M.B. took chief responsibility for the search and selection of articles for the meta-analysis in consultation with J.D.M. She supervised coding by a trained undergraduate assistant and coded articles for the analyses as well. All meta-analyses were carried out by V.M.B. in R with the exception of the factor analysis, which was carried out by J.D.M. in R and concurrently in SPSS. Both authors wrote, rewrote, and edited sections of this article, as well as materials in the Technical Supplement (i.e., Bryan and Mayer 2021a). All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors gratefully acknowledge the assistance of Jay Ivanof, who assisted the first author with the article coding process.

Conflicts of Interest

The first author (Bryan) declares no conflict of interest. The second author (Mayer) receives royalties for sales of the Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT), one of the people-centered tests included in the review.

Appendix A. Designations of Assessments as Broad or Narrow

Akin to the broad and narrow abilities reflected in the Cattell–Horn–Carroll model, broader assessments included total scores on instruments that tapped multiple areas of reasoning within a broad ability (e.g., MSCEIT total scores were calculated from participants’ scores on all four branches and eight tasks), whereas narrower assessments tapped a single area of problem solving (e.g., we coded both the MSCEIT Faces task and the Emotion Perception branch as a whole (the Faces and Pictures tasks combined) as narrow relative to the total scale). Broader assessments were coded as “0” and narrow assessments as “1”. A table depicting the designations of each assessment identified in the current work is presented below (see Table A1).
Table A1. Categorization of broad and specific assessments, organized by intelligence type.
Intelligence Type and MeasureCategorization
People-Centered Assessments
MSCEITWhen MSCEIT total scores were reported, we considered the assessment to be broad. When branch or task scores were reported, the individual tasks were considered specific assessments of emotional reasoning.
MEISWhen MEIS total scores were reported, we considered the assessment to be broad. When branch or task scores were reported, the individual tasks were considered specific assessments of emotional reasoning;
STEUNarrow; assesses a single area of emotion reasoning (Understanding).
STEMNarrow; assesses a single area of emotion reasoning (Management).
TIENarrow; assesses four areas of emotion reasoning
GECoNarrow; assesses four areas of emotion reasoning.
Chapin Social Insight Narrow; assesses social insight, a specific skill underlying social intelligence
GWSITBroad; includes a composite score of social intelligence. Subscales including Judgements in Social Situations, Recognition of Mental States, Observations, Memory for Names and Faces, and Sense of Humor were treated as specific assessments when scores were reported for each individually.
Four Factor TestsNarrow; all four major subscales were assessed individually and kept separate throughout analyses. Each of the four subscales also pertains to different areas of social reasoning (Cartoon Predictions and Missing Cartoons each pertain to social insight; Expressions Grouping pertains to social perception).
Magdeburg TestNarrow; all four subscales were assessed individually and kept separate throughout analyses. Each assesses different skills pertaining to social reasoning (i.e., social perception, social memory, social understanding/insight).
RMETNarrow; assesses ability to perceive emotions in the eyes.
TOPIBroad; assesses four areas of reasoning about personality.
Tacit Knowledge InventoryNarrow; all items assessing social etiquette/social knowledge.
IPT-15Narrow; assesses social perception drawing on 15 videos of social interactions.
SJT-EINarrow; scores are reported four three areas of emotional reasoning: facilitating emotions, perceiving emotions, and understanding emotions. The items that comprised each area were homogenous.
GERTNarrow; assesses emotion recognition/perception.
MERTNarrow; assesses emotion recognition/perception.
MEMANarrow; assesses emotion recognition/perception.
ERINarrow; assesses emotion recognition/perception.
DANVANarrow; assesses emotion recognition/perception.
JACBARTNarrow; assesses emotion recognition/perception.
Ekman-60Narrow; assesses emotion recognition/perception.
GEMOKNarrow; assesses emotion knowledge.
Vocal-INarrow; assesses emotion recognition/perception.
Nim-Stim FacesNarrow; assesses emotion recognition/perception.
SEI-TNarrow; assesses emotion recognition/perception.
PONS and MiniPONSNarrow; assesses emotion recognition/perception.
Mixed Assessments
WordsumplusNarrow; assesses vocabulary or lexical knowledge by having participants identify synonyms of words.
Modified VocabNarrow; assesses vocabulary or lexical knowledge by having participants pick out the meaning of a given word.
SAT VerbalBroad; diverse set of items related to verbal reasoning and reading comprehension.
Cattell–Horn Word ClassificationNarrow; task assessing verbal reasoning.
IST VerbalBroad; authors calculated scores based on performance on three distinct subtests (sentence completion, analogies, and similarities).
Phonetic Word Association TestNarrow; assesses verbal fluency.
Quickie Battery VocabNarrow; assesses vocabulary or lexical knowledge.
ACT ReadingBroad; diverse set of items related to reading comprehension. Similar to SAT Verbal.
ACT English Broad; diverse set of items related to verbal reasoning. Similar to SAT Verbal.
Thorndike Intelligence Examination VocabularyNarrow; assesses vocabulary or lexical knowledge.
Thorndike Intelligence Examination ComprehensionNarrow; assesses reading comprehension.
BIS Verbal Broad; scores calculated from multiple subtests assessing different types of verbal reasoning.
KBIT Verbal CompositeBroad; scores calculated based on performance on two subscales.
Quickie Battery AnalogiesNarrow; assesses verbal knowledge.
French Kit VocabNarrow; assesses vocabulary or lexical knowledge.
French Kit Word EndingsNarrow; assesses lexical speed/fluency.
French Kit Word BeginningsNarrow; assesses lexical speed/fluency.
French Kit OppositesNarrow; assesses lexical speed/fluency.
ETS AnalogiesNarrow; assesses verbal knowledge.
ETS Sentence CompletionNarrow; assesses verbal knowledge.
Henmon–Nelson VocabNarrow; assesses vocabulary or lexical knowledge.
WAIS VocabNarrow; assesses vocabulary or lexical knowledge.
ICAR Verbal ReasoningBroad; includes items related to vocabulary, logic, and general knowledge.
Co-operative Reading Comp TestNarrow; participants only given a subset of items from this assessment.
IST Knowledge Narrow; participants only given a subset of items from this assessment.
ACER VocabBroad; information ascertained about the assessment suggests it is similar to the SAT.
Mill Hill VocabNarrow; assesses vocabulary or lexical knowledge.
WJ Broad ReadingBroad; score was calculated based on performance on three subtests (calculation, math fluency, and applied problems).
Quickie Vocab/Analogies CompositeBroad; composite of performance on vocab and analogies subtests from Quickie Battery. See Farrelly and Austin (2007).
Thing-Centered Assessments
Backward digit span: Narrow; task assessing working memory capacity.
Gf Test: Broad; 50 diverse items, including numeric, verbal, and figural content, assessing fluid intelligence. See Libbrecht and Lievens (2012) for a description.
SAT Math: Broad; diverse items pertaining to math comprehension, mathematical reasoning, and mathematical knowledge.
O*Net Spatial Ability: Narrow; task assessing visualization/spatial reasoning.
Raven’s Matrices: Narrow; assesses abstract/inductive reasoning.
Culture Fair Test Scale 2: Broad; comprises scales including matrix reasoning, classifications, sequences, and geometric reasoning.
IST Numeric: Narrow; task related to number sequence completion.
IST Figural: Narrow; task involving matrix reasoning.
Quickie Battery Letter Series: Narrow; task assessing abstract reasoning. One study by Farrelly and Austin (2007) combined the Quickie letter series and matrices to form a composite fluid reasoning scale; in that instance, the combination was treated as a scale.
Quickie Battery Matrices: Narrow.
ACT Math: Broad; diverse items, similar to SAT Math.
Thorndike Intelligence Examination Arithmetical Reasoning: Narrow; measure assessing mathematical reasoning.
Embedded Figures Test: Narrow; task assessing field dependence/independence.
BIS Figural: Broad; scores calculated from multiple subtests assessing different types of figural reasoning.
BIS Numeric: Broad; scores calculated from multiple subtests assessing different types of numeric reasoning.
BIS Reasoning: Broad; includes verbal, numeric, and figural reasoning.
KBIT Performance: Narrow; single subtest (matrices) assessing abstract/inductive reasoning.
Swaps: Narrow; task assessing abstract reasoning.
French Kit Letter Series: Narrow; task assessing abstract reasoning.
French Kit Figure Classification: Narrow; task assessing abstract reasoning.
French Kit Calendar Test: Narrow; task assessing abstract reasoning.
French Kit Math Aptitude: Narrow; task assessing mathematical knowledge.
French Kit Math Operations: Narrow; task assessing mathematical knowledge.
French Kit Subtraction/Multiplication: Narrow; task assessing mathematical reasoning.
French Kit Cube Comparisons: Narrow; task assessing visual reasoning.
French Kit Hidden Patterns: Narrow; task assessing visual reasoning.
French Kit Surface Development: Narrow; task assessing visual reasoning.
DAT Abstract Reasoning: Narrow; single task assessing abstract/inductive reasoning.
ITED Quantitative Thinking Test: Narrow; single task assessing quantitative reasoning.
WAIS Picture Completion: Narrow; task assessing visual closure.
ICAR Letter and Number Series: Narrow; assesses abstract/logical reasoning.
WASI-II Matrix: Narrow; single task assessing abstract/inductive reasoning.
Spatial Analogies: Narrow; single task assessing visual processing.
WJ Math: Broad; scores calculated from performance on three subtests: calculation, math fluency, and applied problems.
WASI Performance: Broad; scores calculated from performance on block design and matrix reasoning tasks.
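Several of the broad scores above (e.g., WJ Math, BIS Figural, WASI Performance) were formed by combining multiple subtests into a single composite. As a minimal sketch of how such a broad composite can be computed, assuming equal weighting of z-scored subtests (the function names and raw scores below are hypothetical, not taken from any study in the meta-analysis):

```python
import statistics

def zscores(xs):
    """Standardize a list of raw scores to mean 0, SD 1."""
    mu = statistics.mean(xs)
    sd = statistics.stdev(xs)
    return [(x - mu) / sd for x in xs]

def broad_composite(*subtest_columns):
    """Average z-scored subtests into one broad composite per examinee."""
    standardized = [zscores(col) for col in subtest_columns]
    return [statistics.mean(vals) for vals in zip(*standardized)]

# Hypothetical raw scores for five examinees on three math subtests
calculation = [12, 15, 9, 20, 14]
math_fluency = [30, 28, 22, 35, 31]
applied_problems = [8, 11, 6, 13, 10]

composite = broad_composite(calculation, math_fluency, applied_problems)
```

Because each subtest is standardized before averaging, no single subtest dominates the composite by virtue of its raw-score scale; individual studies may instead have used published norms or unit weighting of raw scores.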

References

  1. Aloe, Ariel M. 2014. Inaccuracy of regression results in replacing bivariate correlations. Research Synthesis Methods 6: 21–27. [Google Scholar] [CrossRef]
  2. Allen, Veleka, Nasia Rahman, Alexander Weissman, Carolyn MacCann, Charles Lewis, and Richard D. Roberts. 2015. The situational test of emotional management—Brief (STEM-B): Development and validation using item response theory and latent class analysis. Personality and Individual Differences 81: 195–200. [Google Scholar] [CrossRef]
  3. Assink, Mark, and Carlijn J. M. Wibbelink. 2016. Fitting three level meta-analytic models in R: A step-by-step tutorial. The Quantitative Methods for Psychology 12: 154–74. [Google Scholar] [CrossRef] [Green Version]
  4. Austin, Elizabeth J. 2004. An investigation of the relationship between trait emotional intelligence and emotional task performance. Personality and Individual Differences 36: 1855–64. [Google Scholar] [CrossRef]
  5. Austin, Elizabeth J. 2005. Emotional intelligence and emotional information processing. Personality and Individual Differences 39: 403–14. [Google Scholar] [CrossRef]
  6. Austin, Elizabeth J. 2010. Measurement of ability emotional intelligence: Results for two new tests. British Journal of Psychology 101: 563–78. [Google Scholar] [CrossRef] [Green Version]
  7. Austin, Elizabeth J., and Donald H. Saklofske. 2005. Far too many intelligences? On the communalities and differences between social, practical, and emotional intelligences. In Emotional Intelligence: An International Handbook. Edited by Richard D. Roberts. Cambridge: Hogrefe & Huber Publishers, pp. 107–28. Available online: http://search.ebscohost.com/login.aspx?direct=true&db=psyh&AN=2005-06828-006&site=ehost-live (accessed on 9 September 2021).
  8. Bänziger, Tanja, Didier Grandjean, and Klaus Scherer. 2009. Emotion recognition from expressions in face, voice, and body: The Multimodal Emotion Recognition Test (MERT). Emotion 9: 691–704. [Google Scholar] [CrossRef] [Green Version]
  9. Bänziger, Tanja, Klaus Scherer, Judith A. Hall, and Robert Rosenthal. 2011. Introducing the MiniPONS: A short multichannel version of the Profile of Nonverbal Sensitivity (PONS). Journal of Nonverbal Behavior 35: 189–204. [Google Scholar] [CrossRef]
  10. Barchard, Kimberly A. 2003. Does emotional intelligence assist in the prediction of academic success? Educational and Psychological Measurement 63: 840–58. [Google Scholar] [CrossRef]
  11. Bar-On, Reuven. 1997. The Emotional Quotient Inventory (EQ-i): Technical Manual. Toronto: Multi-Health Systems. [Google Scholar]
  12. Baron-Cohen, Simon, Sally Wheelwright, Jacqueline Hill, Yogini Raste, and Ian Plumb. 2001. The “Reading the Mind in the Eyes” Test revised version: A study with normal adults, and adults with Asperger Syndrome and high-functioning Autism. Journal of Child Psychology and Psychiatry 42: 241–51. [Google Scholar] [CrossRef] [PubMed]
  13. Bastian, Veneta A., Nicholas R. Burns, and Ted Nettelbeck. 2005. Emotional intelligence predicts life skills, but not as well as personality and cognitive abilities. Personality and Individual Differences 39: 1135–45. [Google Scholar] [CrossRef]
  14. Blickle, Gerard, Tassilo Momm, Yongmei Liu, Alexander Witzki, and Ricarda Steinmayr. 2011. Construct validation of the Test of Emotional Intelligence (TEMINT): A two-study investigation. European Journal of Psychological Assessment 27: 282–89. [Google Scholar] [CrossRef]
  15. Brackett, Marc A., and John D. Mayer. 2003. Convergent, Discriminant, and Incremental Validity of Competing Measures of Emotional Intelligence. Personality and Social Psychology Bulletin 29: 1147–58. [Google Scholar] [CrossRef] [Green Version]
  16. Brackett, Marc A., Susan E. Rivers, Sara Shiffman, Nicole Lerner, and Peter Salovey. 2006. Relating emotional abilities to social functioning: A comparison of self-report and performance measures of emotional intelligence. Journal of Personality and Social Psychology 91: 780–95. [Google Scholar] [CrossRef] [Green Version]
  17. Broom, M. E. 1930. A further study of the validity of a test of social intelligence. The Journal of Educational Research 22: 403–5. [Google Scholar] [CrossRef]
  18. Bryan, Victoria M. 2018. Does Personal Intelligence Promote Constructive Conflict in Romantic Relationships? Master’s Thesis, University of New Hampshire, Durham, NH, USA, 2018. Available online: https://mypages.unh.edu/sites/default/files/jdmayer/files/ppq_final_2018-11-18.pdf (accessed on 26 September 2021).
  19. Bryan, Victoria M., and John D. Mayer. 2017. People versus Thing Intelligences? Poster Presented at the 14th Meeting of the Association for Research in Personality, Sacramento, CA, USA, June 8–10; Available online: https://mypages.unh.edu/sites/default/files/jdmayer/files/arp_posterfinal2017.pdf (accessed on 26 September 2021).
  20. Bryan, Victoria M., and John D. Mayer. 2020. A meta-analysis of the correlations among broad intelligences: Understanding their relations. Intelligence 81: 101469. [Google Scholar] [CrossRef]
  21. Bryan, Victoria M., and John D. Mayer. 2021a. Technical Supplement for People vs. Thing Intells 2021-09-03.docx. Available online: https://osf.io/rjsc9/ (accessed on 26 September 2021).
  22. Bryan, Victoria M., and John D. Mayer. 2021b. People w Thing List of Studies 2021-09-03.docx. Available online: https://osf.io/9zmrb/ (accessed on 26 September 2021).
  23. Bryan, Victoria M., and John D. Mayer. 2021c. Are-People-Centered-Intelligences-Distinct - R Script. Available online: https://osf.io/bpc52/ (accessed on 26 September 2021).
  24. Buck, Ross. 1984. The Communication of Emotion. New York: Guilford Press. [Google Scholar]
  25. Campbell, Johnathan M., and David M. McCord. 1996. The WAIS-R Comprehension and Picture Arrangement Subtests as measures of social intelligence: Testing traditional interpretations. Journal of Psychoeducational Assessment 14: 240–49. [Google Scholar] [CrossRef]
  26. Carroll, John B. 1993. Human Cognitive Abilities: A Survey of Factor-Analytic Studies. Cambridge: Cambridge University Press. [Google Scholar]
  27. Cattell, Raymond Bernard. 1943. The description of personality: Basic traits resolved into clusters. The Journal of Abnormal and Social Psychology 38: 476–506. [Google Scholar] [CrossRef]
  28. Cattell, Raymond Bernard. 1961. Fluid and Crystallized Intelligence. In Studies in Individual Differences: The Search for Intelligence. Edited by James J. Jenkins and Donald G. Paterson. New York: Appleton-Century-Crofts, pp. 738–46. [Google Scholar] [CrossRef]
  29. Cattell, Raymond Bernard, and John L. Horn. 1978. A check on the theory of fluid and crystallized intelligence with description of new subtest designs. Journal of Educational Measurement 15: 139–64. [Google Scholar] [CrossRef]
  30. Checa, Purificación, and Pablo Fernández-Berrocal. 2015. The role of intelligence quotient and emotional intelligence in cognitive control processes. Frontiers in Psychology 6: 1853. [Google Scholar] [CrossRef] [Green Version]
  31. Ciarrochi, Joseph, Peter Caputi, and John D. Mayer. 2003. The distinctiveness and utility of a measure of trait emotional awareness. Personality and Individual Differences 34: 1477–90. [Google Scholar] [CrossRef]
  32. Conzelmann, Kirstin, Susanne Weis, and Heinz-Martin Süß. 2013. New findings about social intelligence: Development and application of the Magdeburg Test of Social Intelligence (MTSI). Journal of Individual Differences 34: 119–37. [Google Scholar] [CrossRef]
  33. Cook, Charles M., and Deborah M. Saucier. 2010. Mental rotation, targeting ability and Baron-Cohen’s Empathizing–Systemizing Theory of Sex Differences. Personality and Individual Differences 49: 712–16. [Google Scholar] [CrossRef]
  34. Costanzo, Mark, and David Archer. 1989. Interpreting the expressive behavior of others: The Interpersonal Perception Task. Journal of Nonverbal Behavior 13: 225–45. [Google Scholar] [CrossRef]
  35. Côté, Stephane, and Christopher T. H. Miners. 2006. Emotional intelligence, cognitive intelligence, and job performance. Administrative Science Quarterly 51: 1–28. [Google Scholar] [CrossRef]
  36. Coyle, Thomas, Karrie E. Elpers, Miguel C. Gonzalez, Jacob Freeman, and Jacopo A. Baggio. 2018. General intelligence (g), ACT scores, and theory of mind: (ACT)g predicts limited variance among theory of mind tests. Intelligence 71: 85–91. [Google Scholar] [CrossRef]
  37. Cronbach, Lee J. 1960. Essentials of Psychological Testing, 2nd ed. New York: Harper, Available online: http://search.ebscohost.com/login.aspx?direct=true&db=psyh&AN=1962-01016-000&site=ehost-live (accessed on 9 September 2021).
  38. Curci, Antonietta, Tiziana Lanciano, Emanuela Soleti, Vanda Lucia Zammuner, and Peter Salovey. 2013. Construct validity of the Italian version of the Mayer–Salovey–Caruso Emotional Intelligence Test (MSCEIT) v2.0. Journal of Personality Assessment 95: 486–94. [Google Scholar] [CrossRef]
  39. Dacre Pool, Lorraine, and Pamela Qualter. 2012. Improving emotional intelligence and emotional self-efficacy through a teaching intervention for university students. Learning and Individual Differences 22: 306–12. [Google Scholar] [CrossRef]
  40. Davies, Michaela, Lazar Stankov, and Richard D. Roberts. 1998. Emotional intelligence: In search of an elusive construct. Journal of Personality and Social Psychology 75: 989–1015. [Google Scholar] [CrossRef]
  41. DePaulo, Bella, and Robert Rosenthal. 1979. Telling lies. Journal of Personality and Social Psychology 37: 1713–22. [Google Scholar] [CrossRef]
  42. Di Fabio, Annamaria, and Letizia Palazzeschi. 2009. Emotional intelligence, personality traits and career decision difficulties. International Journal for Educational and Vocational Guidance 9: 135–46. [Google Scholar] [CrossRef]
  43. Di Fabio, Annamaria, and Donald H. Saklofske. 2014. Comparing ability and self-report trait emotional intelligence, fluid intelligence, and personality traits in career decision. Personality and Individual Differences 64: 174–78. [Google Scholar] [CrossRef]
  44. Durlak, Joseph A., Roger P. Weissberg, Allison B. Dymnicki, Rebecca D. Taylor, and Kriston B. Schellinger. 2011. The impact of enhancing students’ social and emotional learning: A meta-analysis of school-based universal interventions. Child Development 82: 405–32. [Google Scholar] [CrossRef]
  45. Enders, Craig K. 2013. Centering Predictors and Contextual Effects. In The SAGE Handbook of Multilevel Modeling. Edited by Marc Scott, Jeffrey Simonoff and Brian Marx. Southend Oaks: SAGE Publications Ltd., pp. 89–108. [Google Scholar] [CrossRef]
  46. Evans, Thomas Rhys, David J. Hughes, and Gail Steptoe-Warren. 2020. A conceptual replication of emotional intelligence as a second-stratum factor of intelligence. Emotion 20: 507–12. [Google Scholar] [CrossRef] [Green Version]
  47. Farrelly, Daniel, and Elizabeth J. Austin. 2007. Ability EI as an intelligence? Associations of the MSCEIT with performance on emotion processing and social tasks and with cognitive ability. Cognition and Emotion 21: 1043–63. [Google Scholar] [CrossRef]
  48. Fiori, Marina, and John Antonakis. 2011. The ability model of emotional intelligence: Searching for valid measures. Personality and Individual Differences 50: 329–34. [Google Scholar] [CrossRef] [Green Version]
  49. Fiori, Marina, and John Antonakis. 2012. Selective attention to emotional stimuli: What IQ and openness do, and emotional intelligence does not. Intelligence 40: 245–54. [Google Scholar] [CrossRef] [Green Version]
  50. Fiori, Marina, Udayar Shagini, and Ashley Vesely-Maillefer. 2019. Introducing a New Component of Emotional Intelligence: Emotion Information Processing. Available online: https://journals.aom.org/doi/abs/10.5465/AMBPP.2019.17276abstract (accessed on 26 September 2021).
  51. Flanagan, Dawn P., and Shauna G. Dixon. 2014. The Cattell-Horn-Carroll Theory of Cognitive Abilities. In Encyclopedia of Special Education. Atlanta: American Cancer Society. [Google Scholar] [CrossRef]
  52. Gardner, Howard. 1983. Frames of Mind: The Theory of Multiple Intelligences. New York: Basic Books, Available online: http://search.ebscohost.com/login.aspx?direct=true&db=psyh&AN=2004-18831-000&site=ehost-live (accessed on 9 September 2021).
  53. Gottfredson, Linda S. 1997. Mainstream science on intelligence: An editorial with 52 signatories, history and bibliography. Intelligence 24: 13–23. [Google Scholar] [CrossRef]
  54. Gough, Harrison G. 1965. A validation study of the Chapin Social Insight Test. Psychological Reports 17: 355–68. [Google Scholar] [CrossRef]
  55. Guilford, Joy Paul. 1966. Intelligence: 1965 model. American Psychologist 21: 20–26. [Google Scholar] [CrossRef]
  56. Guilford, Joy Paul. 1988. Some changes in the structure-of-intellect model. Educational and Psychological Measurement 48: 1–4. [Google Scholar] [CrossRef]
  57. Habota, Tina, Skye N. McLennan, Jan Cameron, Chantal F. Ski, David R. Thompson, and Peter G. Rendell. 2015. An investigation of emotion recognition and theory of mind in people with chronic heart failure. PLoS ONE 10: e0141607. [Google Scholar] [CrossRef] [Green Version]
  58. Haier, Richard J. 2017. The Neuroscience of Intelligence. New York: Cambridge University Press. [Google Scholar]
  59. Hall, Judith A., and Robert Rosenthal. 2018. Choosing between random effects models in meta-analysis: Units of analysis and the generalizability of obtained results. Social and Personality Psychology Compass 12: e12414. [Google Scholar] [CrossRef]
  60. Hedlund, Jennifer, and Robert J. Sternberg. 2000. Too many intelligences? Integrating social, emotional, and practical intelligence. In The Handbook of Emotional Intelligence: Theory, Development, Assessment, and Application at Home, School, and in the Workplace. Edited by James D. A. Parker. San Francisco: Jossey-Bass, pp. 136–67. Available online: http://search.ebscohost.com/login.aspx?direct=true&db=psyh&AN=2001-00355-007&site=ehost-live (accessed on 9 September 2021).
  61. Holmes, Douglas S., Rita Politzer, Allen L. Kovacic, and Joseph H. Wexler. 1976. Some behavioral and test correlates of the Chapin Social Insight Test. Psychological Reports 39: 481–82. [Google Scholar] [CrossRef]
  62. Horn, John L., and Raymond Bernard Cattell. 1966. Refinement and test of the theory of fluid and crystallized general intelligences. Journal of Educational Psychology 57: 253–70. [Google Scholar] [CrossRef]
  63. Horn, John L., and Raymond Bernard Cattell. 1967. Age differences in fluid and crystallized intelligence. Acta Psychologica 26: 107–29. [Google Scholar] [CrossRef]
  64. Hox, Joop J., and Edith De Leeuw. 2003. Multilevel models for meta-analysis. In Multilevel Modeling: Methodological Advances, Issues, and Applications. Edited by Steven P. Reise and Naihua Duan. Mahwah: Lawrence Erlbaum Associates Publishers, pp. 90–111. [Google Scholar]
  65. Hu, Li-tze, and Peter M. Bentler. 1999. Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling 6: 1–55. [Google Scholar] [CrossRef]
  66. Hunsley, John, and Gregory Meyer. 2003. The Incremental Validity of Psychological Testing and Assessment: Conceptual, Methodological, and Statistical Issues. Psychological Assessment 15: 446–55. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  67. Hunt, Thelma. 1928. The measurement of social intelligence. Journal of Applied Psychology 12: 317–34. [Google Scholar] [CrossRef]
  68. Ickes, William. 2016. Empathic accuracy: Judging thoughts and feelings. In The Social Psychology of Perceiving Others Accurately. Edited by Judith A. Hall, Marianne Schmid Mast and Tessa V. West. Cambridge: Cambridge University Press, pp. 52–70. [Google Scholar] [CrossRef]
  69. Ivcevic, Zorana, Marc A. Brackett, and John D. Mayer. 2007. Emotional intelligence and emotional creativity. Journal of Personality 75: 199–235. [Google Scholar] [CrossRef]
  70. Jensen, Arthur R. 1998. The g Factor: The Science of Mental Ability. Westport: Praeger Publishers/Greenwood Publishing Group, Available online: http://search.ebscohost.com/login.aspx?direct=true&db=psyh&AN=1998-07257-000&site=ehost-live (accessed on 9 September 2021).
  71. Joreskog, Karl G. 1969. A general approach to confirmatory maximum likelihood factor analysis. Psychometrika 34: 183–202. [Google Scholar] [CrossRef]
  72. Karim, Jahanvash, and Robert Weisz. 2010. Cross-cultural research on the reliability and validity of the Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT). Cross-Cultural Research: The Journal of Comparative Social Science 44: 374–404. [Google Scholar] [CrossRef]
  73. Keating, Daniel P. 1978. A search for social intelligence. Journal of Educational Psychology 70: 218–23. [Google Scholar] [CrossRef]
  74. Kihlstrom, John F., and Nancy Cantor. 2000. Social intelligence. In Handbook of Intelligence. Edited by Robert J. Sternberg. Cambridge: Cambridge University Press, pp. 359–79. Available online: http://search.ebscohost.com/login.aspx?direct=true&db=psyh&AN=2000-07612-016&site=ehost-live (accessed on 9 September 2021).
  75. Kokkinakis, Athanasios V., Peter I. Cowling, Anders Drachen, and Alex R. Wade. 2017. Exploring the relationship between video game expertise and fluid intelligence. PLoS ONE 12: e0186621. [Google Scholar] [CrossRef] [Green Version]
  76. Konstantopoulos, Spyros. 2011. Fixed effects and variance components estimation in three-level meta-analysis. Research Synthesis Methods 2: 61–76. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  77. Kovacs, Kristof, and Andrew R. A. Conway. 2016. Process overlap theory: A unified account of the general factor of intelligence. Psychological Inquiry 27: 151–77. [Google Scholar] [CrossRef]
  78. Lane, Richard, Donald Quinlan, Gary Schwartz, Pamela Walker, and Sharon Zeitlin. 1990. The Levels of Emotional Awareness Scale: A cognitive-developmental measure of emotion. Journal of Personality Assessment 55: 124–34. [Google Scholar]
  79. Lanciano, Tiziana, and Antonietta Curci. 2014. Incremental validity of emotional intelligence ability in predicting academic achievement. The American Journal of Psychology 127: 447–61. [Google Scholar] [CrossRef]
  80. Lee, Jong-Eun, Chau-Ming T. Wong, Jeanne D. Day, Scott E. Maxwell, and Pamela Thorpe. 2000. Social and academic intelligences: A multi-trait–multimethod study of their crystallized and fluid characteristics. Personality and Individual Differences 29: 539–53. [Google Scholar] [CrossRef]
  81. Letzring, Tera D., and David C. Funder. 2018. Interpersonal accuracy in trait judgments. In The SAGE Handbook of Personality and Individual Differences: Applications of Personality and Individual Differences. Edited by Virgil Zeigler-Hill and Todd K. Shackelford. Los Angeles: Sage Reference, pp. 253–82. [Google Scholar] [CrossRef]
  82. Libbrecht, Nele, and Filip Lievens. 2012. Validity evidence for the situational judgment test paradigm in emotional intelligence measurement. International Journal of Psychology 47: 438–47. [Google Scholar] [CrossRef]
  83. Lopes, Paulo N., Daisy Grewal, Jessica Kadis, Michelle Gall, and Peter Salovey. 2006. Evidence that emotional intelligence is related to job performance and affect and attitudes at work. Psicothema 18: 132–38. [Google Scholar] [PubMed]
  84. Lopes, Paulo N., Peter Salovey, Stéphane Côté, and Michael Beers. 2005. Emotion Regulation Abilities and the Quality of Social Interaction. Emotion 5: 113–18. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  85. Lopes, Paulo N., Peter Salovey, and Rebecca Straus. 2003. Emotional intelligence, personality, and the perceived quality of social relationships. Personality and Individual Differences 35: 641–58. [Google Scholar] [CrossRef]
  86. Lumley, Mark A., Britta J. Gustavson, R. Ty Partridge, and Gisela Labouvie-Vief. 2005. Assessing alexithymia and related emotional ability constructs using multiple methods: Interrelationships among measures. Emotion 5: 329–42. [Google Scholar] [CrossRef] [PubMed]
  87. MacCann, Carolyn, Dana L. Joseph, Daniel A. Newman, and Richard D. Roberts. 2014. Emotional intelligence is a second-stratum factor of intelligence: Evidence from hierarchical and bifactor models. Emotion 14: 358–74. [Google Scholar] [CrossRef] [Green Version]
  88. MacCann, Carolyn, Filip Lievens, Nele Libbrecht, and Richard D. Roberts. 2016. Differences between multimedia and text-based assessments of emotion management: An exploration with the multimedia emotion management assessment (MEMA). Cognition and Emotion 30: 1317–31. [Google Scholar] [CrossRef] [Green Version]
  89. MacCann, Carolyn, Nicola Pearce, and Richard D. Roberts. 2011. Emotional intelligence as assessed by situational judgement and emotion recognition tests: Building the nomological net. Psychological Topics 20: 393–412. [Google Scholar]
  90. MacCann, Carolyn, and Richard D. Roberts. 2008. New paradigms for assessing emotional intelligence: Theory and data. Emotion 8: 540–51. [Google Scholar] [CrossRef] [PubMed]
  91. Martin, Scott L., and Justin Thomas. 2011. Emotional intelligence: Examining construct validity using the emotional stroop. International Journal of Business and Social Science 2: 209–15. [Google Scholar]
  92. Matsumoto, David, Jeff LeRoux, Carina Wilson-Cohn, Jake Raroque, Kristie Kooken, Paul Ekman, Nathan Yrizarry, Sherry Loewinger, Hideko Uchida, Albert Yee, and et al. 2000. A new test to measure emotion recognition ability: Matsumoto and Ekman’s Japanese and Caucasian Brief Affect Recognition Test (JACBART). Journal of Nonverbal Behavior 24: 179–209. [Google Scholar] [CrossRef]
  93. Mayer, John D. 2008. Personal intelligence. Imagination, Cognition and Personality 27: 209–32. [Google Scholar] [CrossRef]
  94. Mayer, John D. 2018. Intelligences about things and intelligences about people. In The Nature of Human Intelligence. Edited by Richard J. Sternberg. Cambridge: Cambridge University Press, pp. 270–86. [Google Scholar] [CrossRef] [Green Version]
  95. Mayer, John D., Peter Salovey, and David R. Caruso. 2008. Emotional intelligence: New ability or eclectic traits? American Psychologist 63: 503–17. [Google Scholar] [CrossRef] [PubMed]
  96. Mayer, John D., David R. Caruso, and Abigail T. Panter. 2019. Advancing the measurement of personal intelligence with the Test of Personal Intelligence, Version 5 (TOPI 5). Journal of Intelligence 7: 4. [Google Scholar] [CrossRef] [Green Version]
  97. Mayer, John D., David R. Caruso, and Abigail T. Panter. 2021. How do people think about understanding personality—And what do such thoughts reflect? Personality and Individual Differences 178: 110671. [Google Scholar] [CrossRef]
  98. Mayer, John D., Maria DiPaolo, and Peter Salovey. 1990. Perceiving affective content in ambiguous visual stimuli: A component of emotional intelligence. Journal of Personality Assessment 54: 772. [Google Scholar]
  99. Mayer, John D., David R. Caruso, and Peter Salovey. 1999. Emotional intelligence meets traditional standards for an intelligence. Intelligence 27: 267–98. [Google Scholar] [CrossRef]
  100. Mayer, John D., David R. Caruso, and Peter Salovey. 2016. The ability model of emotional intelligence: Principles and updates. Emotion Review 8: 290–300. [Google Scholar] [CrossRef]
  101. Mayer, John D., Peter Salovey, David R. Caruso, and Gill Sitarenios. 2003. Measuring emotional intelligence with the MSCEIT V2.0. Emotion 3: 97–105. [Google Scholar] [CrossRef] [Green Version]
  102. Mayer, John D., Abigail T. Panter, and David R. Caruso. 2012. Does personal intelligence exist? Evidence from a new ability-based measure. Journal of Personality Assessment 94: 124–40. [Google Scholar] [CrossRef]
  103. Mayer, John D., Brendan Lortie, Abigail T. Panter, and David R. Caruso. 2018. Employees high in personal intelligence differ from their colleagues in workplace perceptions and behavior. Journal of Personality Assessment 100: 539–50. [Google Scholar] [CrossRef] [PubMed]
  104. Mayer, John D., and William Skimmyhorn. 2017. Personality attributes that predict cadet performance at West Point. Journal of Research in Personality 66: 14–26. [Google Scholar] [CrossRef] [Green Version]
  105. McGrew, Kevin S. 2009. CHC theory and the human cognitive abilities project: Standing on the shoulders of the giants of psychometric intelligence research. Intelligence 37: 1–10. [Google Scholar] [CrossRef]
  106. McIntyre, Heather H. 2010. Gender differences in the nature and linkage of higher-order personality factors to trait and ability emotional intelligence. Personality and Individual Differences 48: 617–22. [Google Scholar] [CrossRef]
  107. Miller, Allison B., and Mark F. Lenzenweger. 2012. Schizotypy, social cognition, and interpersonal sensitivity. Personality Disorders: Theory, Research, and Treatment 3: 379–92. [Google Scholar] [CrossRef] [Green Version]
  108. Nowicki, Stephen, and Marshall P. Duke. 1994. Individual differences in the nonverbal communication of affect: The diagnostic analysis of nonverbal accuracy scale. Journal of Nonverbal Behavior 18: 9–35. [Google Scholar] [CrossRef]
  109. Nuijten, Michèle B., Marcel A. L. M. van Assen, Hilde E. M. Augusteijn, Elise A. V. Crompvoets, and Jelte M. Wicherts. 2020. Effect sizes, power, and bias in intelligence research: A meta-meta-analysis. Journal of Intelligence 8: 36. [Google Scholar] [CrossRef] [PubMed]
  110. O’Sullivan, Maureen, and Joy P. Guilford. 1975. Six factors of behavioral cognition: Understanding other people. Journal of Educational Measurement 12: 255–71. [Google Scholar] [CrossRef]
  111. Olderbak, Sally, Martin Semmler, and Phillip Doebler. 2018. Four-branch model of ability emotional intelligence with fluid and crystallized intelligence: A meta-analysis of relations. Emotion Review 11. [Google Scholar] [CrossRef]
  112. Olderbak, Sally, Oliver Wilhelm, Gabriel Olaru, Mattis Gieger, Meghan W. Brenneman, and Richard D. Roberts. 2015. A psychometric analysis of the reading the mind in the eyes test: Toward a brief form for research and applied settings. Frontiers in Psychology 6: 1503. [Google Scholar] [CrossRef] [Green Version]
  113. Peters, Christine, John G. Kranzler, and Eric Rossen. 2009. Validity of the Mayer-Salovey-Caruso Emotional Intelligence Test: Youth Version-Research Edition. Canadian Journal of School Psychology 24: 76–81. [Google Scholar] [CrossRef]
  114. Peterson, Robert A., and Steven P. Brown. 2005. On the use of beta coefficients in meta-analysis. Journal of Applied Psychology 90: 175–81. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  115. Peterson, Eric, and Stephanie F. Miller. 2012. The eyes test as a measure of individual differences: How much of the variance reflects verbal IQ? Frontiers in Psychology 3: 220. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  116. Phelps, Leadelle, Kevin S. McGrew, Susan N. Knopik, and Lauri Ford. 2005. The general (g), broad, and narrow CHC stratum characteristics of the WJ III and WISC-III tests: A confirmatory cross-battery investigation. School Psychology Quarterly 20: 66–88. [Google Scholar] [CrossRef]
  117. Pickett, Cynthia L., Wendi L. Gardner, and Megan Knowles. 2004. Getting a Cue: The Need to Belong and Enhanced Sensitivity to Social Cues. Personality and Social Psychology Bulletin 30: 1095–97. [Google Scholar] [CrossRef]
  118. Pitterman, Hallee, and Stephen Nowicki. 2004. A Test of the ability to identify emotion in human standing and sitting postures: The Diagnostic Analysis of Nonverbal Accuracy-2 Posture Test (DANVA2-POS). Genetic, Social, and General Psychology Monographs 130: 146–162. [Google Scholar] [CrossRef] [PubMed]
  119. Raven, John. 2009. The Raven Progressive Matrices and measuring aptitude constructs. The International Journal of Educational and Psychological Assessment 2: 2–38. [Google Scholar]
  120. Ree, Malcolm James, and Thomas R. Carretta. 2002. G2K. Human Performance 15: 3–24. [Google Scholar] [CrossRef]
  121. Riggio, Ronald E., Jack Messamer, and Barbara Throckmorton. 1991. Social and academic intelligence: Conceptually distinct but overlapping constructs. Personality and Individual Differences 12: 695–702. [Google Scholar] [CrossRef]
  122. Rivers, Susan, Marc A. Brackett, Peter Salovey, and John D. Mayer. 2007. Measuring emotional intelligence as a set of mental abilities. In The Science of Emotional Intelligence: Knowns and Unknowns. Edited by Gerald Matthews, Moshe Zeidner and Richard D. Roberts. New York: Oxford University Press, pp. 230–57. [Google Scholar]
  123. Roberts, Richard D., Ralf Schulze, Kristin O’Brien, Carolyn MacCann, John Reid, and Andy Maul. 2006. Exploring the validity of the Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT) with established emotions measures. Emotion 6: 663–69. [Google Scholar] [CrossRef] [Green Version]
  124. Roth, Philip L., Le Huy, In-Sue Oh, Chad H. Van Iddekinge, and Philip Bobko. 2018. Using beta coefficients to impute missing correlations in meta-analysis research: Reasons for caution. Journal of Applied Psychology 103: 644–58. [Google Scholar] [CrossRef]
  125. Rosete, David, and Joseph Ciarrochi. 2005. Emotional intelligence and its relationship to workplace performance outcomes of leadership effectiveness. Leadership & Organization Development Journal 26: 388–99. [Google Scholar] [CrossRef] [Green Version]
  126. Salovey, Peter, and John D. Mayer. 1990. Emotional intelligence. Imagination, Cognition and Personality 9: 185–211. [Google Scholar] [CrossRef]
  127. Sanders, Sarah, David E. McIntosh, Mardis Dunham, Barbara A. Rothlisberg, and Holmes Finch. 2007. Joint confirmatory factor analysis of the Differential Ability Scales and the Woodcock-Johnson Test of Cognitive Abilities-Third Edition. Psychology in the Schools 44: 119–38. [Google Scholar] [CrossRef]
  128. Schellenberg, Glenn E. 2011. Music lessons, emotional intelligence, and IQ. Music Perception 29: 185–94. [Google Scholar] [CrossRef]
  129. Schipolowski, Stefan, Oliver Wilhelm, and Ulrich Schroeders. 2014. On the nature of crystallized intelligence: The relationship between verbal ability and factual knowledge. Intelligence 46: 156–68. [Google Scholar] [CrossRef]
  130. Scherer, Klaus, Rainer Banse, and Harald G. Wallbott. 2001. Emotion inferences from vocal expression correlate across languages and cultures. Journal of Cross-Cultural Psychology 32: 76–92. [Google Scholar] [CrossRef] [Green Version]
  131. Scherer, Klaus, and Ursula Scherer. 2011. Assessing the ability to recognize facial and vocal expressions of emotion: Construction and validation of the Emotion Recognition Index. Journal of Nonverbal Behavior 35: 305–26. [Google Scholar] [CrossRef] [Green Version]
  132. Schlegel, Katja, Johnny Fontaine, and Klaus Scherer. 2019a. The nomological network of emotion recognition ability: Evidence from the Geneva Emotion Recognition Test. European Journal of Psychological Assessment 35: 352–63. [Google Scholar] [CrossRef]
  133. Schlegel, Katja, Tristan Palese, Mariann Schmid Mast, Thomas H. Rammsayer, Judith A. Hall, and Nora A. Murphy. 2019b. A meta-analysis of the relationship between emotion recognition ability and intelligence. Cognition and Emotion 34: 329–51. [Google Scholar] [CrossRef]
  134. Schlegel, Katja, and Marcello Mortillaro. 2019. The Geneva Emotional Competence Test (GECo): An ability measure of workplace emotional intelligence. Journal of Applied Psychology 104: 559–80. [Google Scholar] [CrossRef]
  135. Schlegel, Katja, and Klaus Scherer. 2016. Introducing a short version of the Geneva Emotion Recognition Test (GERT-S): Psychometric properties and construct validation. Behavior Research Methods 48: 1383–92. [Google Scholar] [CrossRef]
  136. Schlegel, Katja, and Klaus Scherer. 2018. The nomological network of emotion knowledge and emotion understanding in adults: Evidence from two new performance-based tests. Cognition and Emotion 32: 1514–30. [Google Scholar] [CrossRef] [PubMed]
  137. Schlegel, Katja, Didier Grandjean, and Klaus Scherer. 2014. Introducing the Geneva Emotion Recognition Test: An example of Rasch-based test development. Psychological Assessment 26: 666–72. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  138. Schlegel, Katja, Joëlle S. Witmer, and Thomas H. Rammsayer. 2017. Intelligence and sensory sensitivity as predictors of emotion recognition ability. Journal of Intelligence 5: 35. [Google Scholar] [CrossRef] [Green Version]
  139. Schmidt, Frank L., and John E. Hunter. 2015. Methods of Meta-Analysis: Correcting Error and Bias in Research Findings. Thousand Oaks: Sage Publications, Inc. [Google Scholar]
  140. Schneider, W. Joel, and Kevin S. McGrew. 2018. The Cattell-Horn-Carroll Theory of Cognitive Abilities. In Contemporary Intellectual Assessment: Theories, Tests and Issues. Edited by Dawn P. Flanagan and Erin M. McDonough. New York: Guilford Press, pp. 73–163. [Google Scholar]
  141. Schneider, W. Joel, and Daniel A. Newman. 2015. Intelligence is multidimensional: Theoretical review and implications of specific cognitive abilities. Human Resource Management Review 25: 12–27. [Google Scholar] [CrossRef]
  142. Schulte, Melanie J., Malcolm James Ree, and Thomas R. Carretta. 2004. Emotional intelligence: Not much more than G and personality. Personality and Individual Differences 37: 1059–68. [Google Scholar] [CrossRef]
  143. Schutte, Nicola S., John M. Malouff, Lena E. Hall, Donald J. Haggerty, Joan T. Cooper, Charles J. Golden, and Liane Dornheim. 1998. Development and validation of a measure of emotional intelligence. Personality and Individual Differences 25: 167–77. [Google Scholar] [CrossRef]
  144. Sechrest, Lee. 1963. Incremental validity: A recommendation. Educational and Psychological Measurement 23: 153–58. [Google Scholar] [CrossRef]
  145. Sharma, Sudeep, Mugdha Gangopadhyay, Elizabeth Austin, and Manas K. Mandal. 2013. Development and validation of a situational judgment test of emotional intelligence. International Journal of Selection and Assessment 21: 57–73. [Google Scholar] [CrossRef]
  146. Śmieja, Magdalena, Jaroslaw Orzechowski, and Maciej Stolarski. 2014. TIE: An ability test of emotional intelligence. PLoS ONE 9: e103484. [Google Scholar] [CrossRef]
  147. Spearman, Charles. 1904. “General intelligence”, objectively determined and measured. The American Journal of Psychology 15: 201–93. [Google Scholar] [CrossRef]
  148. Sternberg, Robert J., and Craig Smith. 1985. Social intelligence and decoding skills in nonverbal communication. Social Cognition 3: 168–92. [Google Scholar] [CrossRef]
  149. Sterne, Jonathan A., Betsy Jane Becker, and Matthias Egger. 2005. The funnel plot. In Publication Bias in Meta-Analysis. Edited by Hannah R. Rothstein, Alexander J. Sutton and Michael Borenstein. West Sussex: John Wiley & Sons, Ltd. [Google Scholar] [CrossRef]
  150. Taylor, Rebecca D., Eva Oberle, Joseph A. Durlak, and Roger P. Weissberg. 2017. Promoting positive youth development through school-based social and emotional learning interventions: A meta-analysis of follow-up effects. Child Development 88: 1156–71. [Google Scholar] [CrossRef]
  151. Tett, Robert P., Kevin E. Fox, and Alvin Wang. 2005. Development and validation of a self-report measure of emotional intelligence as a multidimensional trait domain. Personality and Social Psychology Bulletin 31: 859–88. [Google Scholar] [CrossRef]
  152. Thorndike, Edward L. 1920. Intelligence and its uses. Harper’s Magazine 140: 227–35. [Google Scholar]
  153. Thorndike, Robert L., and Saul Stein. 1937. An evaluation of the attempts to measure social intelligence. Psychological Bulletin 34: 275–85. [Google Scholar] [CrossRef]
  154. Thurstone, Louis Leon. 1938. Primary Mental Abilities. Chicago: University of Chicago Press, Available online: http://search.ebscohost.com/login.aspx?direct=true&db=psyh&AN=1938-15070-000&site=ehost-live (accessed on 9 September 2021).
  155. Viechtbauer, Wolfgang. 2010. Conducting meta-analyses in R with the metafor package. Journal of Statistical Software 36: 1–48. [Google Scholar] [CrossRef] [Green Version]
  156. Völker, Juliane. 2020. An examination of ability emotional intelligence and its relationships with fluid and crystallized abilities in a student sample. Journal of Intelligence 8: 18. [Google Scholar] [CrossRef] [PubMed]
  157. Wagner, Richard K., and Robert J. Sternberg. 1987. Tacit knowledge in managerial success. Journal of Business and Psychology 1: 301–12. [Google Scholar] [CrossRef]
  158. Walker, Ronald E., and Jeanne M. Foley. 1973. Social intelligence: Its history and measurement. Psychological Reports 33: 839–64. [Google Scholar] [CrossRef]
  159. Warwick, Janette, and Ted Nettelbeck. 2004. Emotional intelligence is…? Personality and Individual Differences 37: 1091–100. [Google Scholar] [CrossRef]
  160. Webb, Christian A., Sophie DelDonno, and William D.S. Killgore. 2014. The role of cognitive versus emotional intelligence in Iowa Gambling Task performance: What’s emotion got to do with it? Intelligence 44: 112–19. [Google Scholar] [CrossRef] [Green Version]
  161. Weis, Susanne, and Heinz-Martin Süß. 2007. Reviving the search for social intelligence—A multitrait-multimethod study of its structure and construct validity. Personality and Individual Differences 42: 3–14. [Google Scholar] [CrossRef]
  162. Westfall, Jacob, and Tal Yarkoni. 2016. Statistically controlling for confounding constructs is harder than you think. PLoS ONE 11: 1–22. [Google Scholar]
  163. Wickline, Virginia B., Stephen Nowicki, Annie M. Bollini, and Elaine F. Walker. 2012. Vocal and facial emotion decoding difficulties relating to social and thought problems: Highlighting schizotypal personality disorder. Journal of Nonverbal Behavior 36: 59–77. [Google Scholar] [CrossRef]
  164. Wiernik, Brenton M., and Jeffrey A. Dahlke. 2020. Obtaining unbiased results in meta-analysis: The importance of correcting for statistical artifacts. Advances in Methods and Practices in Psychological Science 3: 94–123. [Google Scholar] [CrossRef]
  165. Wong, Chau-Ming T., Jeanne D. Day, Scott E. Maxwell, and Naomi M. Meara. 1995. A multitrait-multimethod study of academic and social intelligence in college students. Journal of Educational Psychology 87: 117–33. [Google Scholar] [CrossRef]
  166. Zeidner, Moshe, Gerald Matthews, and Richard D. Roberts. 2001. Slow down, you move too fast: Emotional intelligence remains an “elusive” intelligence. Emotion 1: 265–75. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Modified CHC model organizing the broad intelligences across a people–thing continuum. In the model, a subset of broad intelligences are arranged along a continuum from topics of reasoning that primarily concern things to those that concern people.
Figure 2. Article identification and screening process.
Figure 3. A Comparison of the Distinctiveness and Relatedness of the Six Types of Broad Intelligences Examined Here. Estimates relative to people-centered intelligences are to the left (a); those for the remaining categories are to the right (b). The average correlation estimates across studies were taken from the random effects model. The box sizes of the forest plots reflect the relative number of effect sizes (k) associated with a given estimate. The specific numerical values of the effect sizes, sample sizes and point estimates for each contrast can be found in Table 4.
Figure 4. Funnel plots depicting the disattenuated, transformed effect sizes (a) for all studies in our sample and those (b) people-to-people, (c) people-to-mixed, and (d) people-to-thing comparisons. The white, light gray, and dark gray shading on each funnel plot represent the 90, 95, and 99% pseudo confidence intervals. The small, dotted vertical line represents the unweighted, average effect sizes. a Includes mixed-to-mixed, mixed-to-thing, and thing-to-thing correlations.
Table 1. Index of key assessments for people-centered abilities.
Emotional Intelligence
Broad Scales a
Omnibus measures of multiple areas of emotional intelligence.
Mayer–Salovey–Caruso Emotional Intelligence Scale (MSCEIT; Mayer et al. 2003).
Multifactor Emotional Intelligence Scale (MEIS; Mayer et al. 1999).
Test of Emotional Intelligence (TEMINT; Blickle et al. 2011).
Geneva Emotional Competence Test (GECo; Schlegel and Mortillaro 2019).
Situational Judgement Test of Emotional Intelligence (SJT-EI; Sharma et al. 2013).
Specific Scales
Emotion Recognition Ability
Measures of specific ability to accurately identify emotions in oneself and others. Includes perceiving emotions across expression modalities, including faces, voices, and the body.
Geneva Emotion Recognition Test (GERT and GERT-S; Schlegel et al. 2014; Schlegel and Scherer 2016).
Multimodal Emotion Recognition Test (MERT; Bänziger et al. 2009).
Emotion Recognition Index (ERI; Scherer and Scherer 2011).
Japanese and Caucasian Brief Affect Recognition Test (JACBART; Matsumoto et al. 2000).
Profile of Nonverbal Sensitivity (PONS and MiniPONS; Bänziger et al. 2011; DePaulo and Rosenthal 1979).
Diagnostic Analysis of Nonverbal Accuracy (DANVA; Pitterman and Nowicki 2004).
Reading the Mind in the Eyes Test (RMET; Baron-Cohen et al. 2001).
Index of Vocal Emotion Recognition (Vocal-I; Scherer et al. 2001).
Also measured by relevant subscales of the MSCEIT.
Emotion Understanding and Management
Assessments of understanding how situations or events are linked to emotional experiences and, for management, of effective regulation of emotions in the self and others. Involves strategies aimed at maintaining or enhancing positive emotional experiences and reducing/regulating negative ones.
Geneva Emotional Knowledge Test (GEMOK; Schlegel and Scherer 2018).
Multimedia Emotion Management Assessment (MEMA; MacCann et al. 2016).
Situational Test of Emotional Understanding (MacCann and Roberts 2008).
Situational Test of Emotional Management (MacCann and Roberts 2008).
Also measured by relevant subscales of the MSCEIT, MEIS, and GECo.
Social Intelligence
Broad Scales b
Omnibus measures of multiple areas of social intelligence.
George Washington Social Intelligence Test (GWSIT; Hunt 1928).
Magdeburg Test of Social Intelligence (MTSI; Conzelmann et al. 2013).
Four Factor Test (O’Sullivan and Guilford 1975).
Social Perception
Measures of the capacity to understand behavioral expressions that convey people’s attitudes, or underlying intentions, and feelings. Modalities include facial expression, hand gestures, posture, and vocalizations.
Interpersonal Perception Task (IPT; Costanzo and Archer 1989).
Also measured by relevant subscales of the GWSIT, Four Factor Test, and MTSI.
Social Knowledge
Tests for knowledge of social etiquette and rules. Largely tied to environmental or cultural factors.
Tacit Knowledge Inventory (Wagner and Sternberg 1987).
Social Insight, Memory and Understanding
Assessments of the capacity to reason about behavioral sequences, including the antecedents of behavior and the resulting consequences of one’s behavioral choices. Involves understanding social cues and choosing behaviors that lead to desired social outcomes.
Chapin Social Insight Test (Gough 1965).
Also measured by relevant subscales of the GWSIT and Four Factor Test.
Personal Intelligence
Broad Scales
Measures of the capacity to understand personality in oneself and others.
Test of Personal Intelligence (TOPI and TOPI-MINI; Mayer et al. 2012, 2019).
a Many of the broad scales of emotional intelligence also provide subscale scores for measuring specific areas of emotional intelligence and sometimes appeared in the research corpus studied in this meta-analysis. b As above, many of the broad scales of social intelligence also provide subscale scores for measuring specific areas of social intelligence and sometimes appeared in the research corpus studied in this meta-analysis.
Table 2. List of included studies.
ArticleNMental Ability Represented and Assessment(s)
Person-CenteredMixedThing-Centered
Mental AbilityAssessmentsMental AbilityAssessmentsMental AbilityAssessments
Austin (2004)92GeiEkman-60GrwNational Adult Reading Test
Austin (2005)95GeiEkman-60 GfRaven’s Matrices
Austin (2010)135GeiMSCEIT; STEU; STEMGcQuickie Battery VocabularyGfQuickie Battery Letter Series
Barchard (2003)150GeiMSCEITGcFrench Kit
GsiFour Factor Test
Bastian et al. (2005)246GeiMSCEITGlrPWATGfRaven’s Matrices
Brackett and Mayer (2003)207GeiMSCEITGc/GrwVerbal SAT
Brackett et al. (2006)316GeiMSCEITGc/GrwVerbal SAT
Broom (1930)646GsiGWSITGrw Thorndike Reading Comprehension
Campbell and McCord (1996)50GsiChapin Social Insight TestGcWAIS—R ComprehensionGvWAIS—R Pic. Arrangement
Checa and Fernández-Berrocal (2015)92GeiMSCEITGcKBIT VocabularyGfKBIT Matrices
Conzelmann et al. (2013)
Study 1127GsiMagdeburg TestGcBIS Verbal GvBIS Figural
GsmBIS Memory
GsBIS Speed
Study 2190GsiMagdeburg TestGcBIS VerbalGvBIS Figural
GsmBIS Memory
GsBIS Speed
Cook and Saucier (2010)88GeiEyes Test GvMental Rotation Test
Côté and Miners (2006)175GeiMSCEIT GfCulture Fair Test
Coyle et al. (2018)249GeiEyes TestGcACT English; ReadingGqACT Math
Curci et al. (2013)183GeiMSCEITGcWAIS VocabularyGfRaven’s Matrices
Dacre Pool and Qualter (2012)1086GeiMSCEIT GfRaven’s Matrices
Di Fabio and Palazzeschi (2009)124GeiMSCEIT GfRaven’s Matrices
Di Fabio and Saklofske (2014)194GeiMSCEIT GfRaven’s Matrices
Evans et al. (2020)830GeiSTEU; STEM; Eyes TestGcICAR Verbal GfICAR Letter and Number Series
Farrelly and Austin (2007)
Study 199GeiMSCEITGcQuickie Battery Vocab/AnalogiesGfQuickie Battery Letter Series/ Matrices
Study 2199GeiMSCEITGcQuickie Battery Vocab/AnalogiesGfRaven’s Matrices
Fiori and Antonakis (2011)149Gei MSCEIT GfCulture Fair Test
Fiori and Antonakis (2012)85GeiMSCEIT GfCulture Fair Test
Habota et al. (2015)69GeiEkman-60; Eyes TestGlrRey Auditory Verbal Learning
Holmes et al. (1976)45GsiChapin Social Insight Test Gf Shipley Abstract Reasoning
Ivcevic et al. (2007)
Study 1107GeiMSCEITGlrRemote Associates Test
Study 2113GeiMSCEITGc/GrwSAT Verbal GqSAT Math
GlrRemote Associates Test
Karim and Weisz (2010)192GeiMSCEIT GfRaven’s Matrices
Keating (1978)117GsiChapin Social Insight TestGc Gf Raven’s Matrices
Kokkinakis et al. (2017)56GeiEyes Test GfWASI-II Matrix
Lanciano and Curci (2014)89Gei MSCEIT GfRaven’s Matrices
Lee et al. (2000)169GsiGWSIT; Four Factor TestGcWAIS-R Vocabulary; Verbal AnalogiesGfSpatial Analogies; WAIS-R Pic. Completion
Libbrecht and Lievens (2012)764GeiSTEU; STEM GfFlemish Gf test
Lopes et al. (2006)44GeiMSCEITGcMill Hill Vocabulary
Lopes et al. (2003)103GeiMSCEITGcWAIS-III Vocabulary
Lopes et al. (2005)76GeiMSCEITGcMill Hill VocabularyGfCulture Fair Test
Gc/GrwSAT VerbalGqSAT Math
Lumley et al. (2005) 140GeiMSCEITGrwWide Range Achievement Test
MacCann et al. (2014)688GeiMSCEITGcFrench Kit Vocab; ETS Analogies and Sentence Completion GfFrench Kit Letter Sets, Figure Class. and Calendar
GlrFrench Kit Word Endings, Word Beginnings, and Opposites. GvFrench Kit Cube Comp., Hidden Patterns, Surface Development
GqFrench Kit Math Aptitude, Necessary Math., Subtraction and Multiplication.
MacCann et al. (2016)394GeiMSCEITGcFrench Kit Vocab, Analogies, SentencesGfFrench Kit Letters, Figures, Calendar
MacCann et al. (2011)118GeiSTEU; STEMGcIST Knowledge GfRaven’s Matrices
Grw ACER—Reading Comprehension
MacCann and Roberts (2008)200GeiSTEU; STEM; MEIS StoriesGcGf/Gc Quickie Battery Vocabulary
Martin and Thomas (2011)87GeiMSCEIT GfRaven’s Matrices
Mayer et al. (1999)500GeiMEISGcArmy Alpha Vocabulary
Mayer et al. (2018)
Study 1394GpiTOPI MINIGcWordsumplus; Modified VocabularyGfBackwards digit span
Study 2492GpiTOPI 1.4GcWordsumplus
Mayer et al. (2012)
Study 1241GpiTOPI 1.0GcModified Vocabulary
Study 2308GpiTOPI 1.1GcModified Vocabulary
Study 3385GpiTOPI 1.2GcModified Vocabulary
GeiMSCEIT; Eyes Test
Mayer and Skimmyhorn (2017)
Study 1932GpiTOPIGc/GrwSAT VerbalGqSAT Math
GvO*Net Spatial Ability
Study 2893GpiTOPIGc/GrwSAT VerbalGqSAT Math
GvO*Net Spatial Ability
McIntyre (2010)420GeiMSCEITGcFrench Kit Vocabulary
Miller and Lenzenweger (2012)93GeiPONS GvDigit Symbol Coding
Nowicki and Duke (1994)1144GeiDANVAGcCTBS—Vocab; Word RecognitionGqCTBS—Math Concepts; Comprehension; Counting
GrwCTBS—Reading Comp.; Spelling
O’Sullivan and Guilford (1975)240GsiFour Factor TestGcHenmon-Nelson Vocab; Verbal Analogies, Classification, ComprehensionGfDAT Abstract Reasoning; Figure Matrix
GqITED Quantitative Thinking
Olderbak et al. (2015)
Study 1484GeiDANVA; Eyes TestGcETS Vocabulary
Study 2210GeiDANVA; Eyes TestGcETS Vocabulary
Peters et al. (2009)50GeiMSCEIT-YVGrwWJ-III Reading; SAT ReadingGqWJ-III Math; SAT Math
Peterson and Miller (2012)45GeiEyes TestGcWASI VocabularyGfWASI Matrix Reasoning
Pickett et al. (2004)46GeiDANVA GqETS-Quantitative
Riggio et al. (1991)171GsiFour Factor TestGcWAIS-R Vocabulary; Shipley VocabularyGfShipley Abstract Reasoning
Roberts et al. (2006)138GeiMSCEITGc Quickie Battery Vocabulary, Esoteric Analogies GfMatrices, Swaps
Rosete and Ciarrochi (2005)41GeiMSCEITGcWASI VerbalGfWASI Performance
Schellenberg (2011)106GeiMSCEITGcKBIT VerbalGfKBIT Performance
Schlegel and Mortillaro (2019)
Study 1149GeiERI; STEU; STEM; GECo GfNV5-R Inductive Reasoning
Study 2187GeiMSCEIT; STEU; STEM; GECo GfCulture Fair Test
Study 4206GeiGECoGcIST VerbalGvIST Figural
GqIST Numeric
Schlegel and Scherer (2016)128GeiGERT; STEU; STEM GfCulture Fair Test
Schlegel and Scherer (2018)
Study 1159GeiGERT; DANVA; ERI; GEMOKGcShipley Vocabulary
Study 4103GeiGERT; DANVA; ERI; GEMOK GfCulture Fair Test
Schlegel et al. (2019a)131GeiGERT; MERT; MiniPONS; JACBART; MSCEITGcNV5-R VocabularyGfNV5-R Reasoning
Schlegel et al. (2017)214GeiGERT GfCulture Fair Test
Sharma et al. (2013)147GeiSJT-EIGcMill Hill VocabularyGf Raven’s Matrices
Śmieja et al. (2014)4624GeiTIEGcCattell-Horn Word ClassificationGfRaven’s Matrices
Sternberg and Smith (1985)101GsiChapin Social Insight Test; GWSIT GfCulture Fair Test
GvEmbedded Figure Test
Thorndike and Stein (1937)500GsiGWSITGcThorndike VocabularyGqThorndike Arithmetic Reasoning
GrwThorndike Comprehension
Völker (2020)188GeiGECoGcINSBAT General Knowledge; Verbal Fluency; Word MeaningGfINSBAT Inductive; Verbal Deductive
GvINSBAT Figural
Warwick and Nettelbeck (2004)84GeiMSCEIT GfDAT Abstract Reasoning
Webb et al. (2014)65GeiMSCEITGc WASI VerbalGfWASI Performance
Weis and Süß (2007)101GsiMagdeburg Test GfBIS Reasoning
GsmBIS Memory
GsBIS Speed
Wickline et al. (2012)42GeiDANVA GvWISC-III Picture Arrangement
Wong et al. (1995)
Study 1143Gsi GWSIT; Four Factor TestGcWAIS-R VocabularyGvWAIS-R Pic. Completion
Study 2240GsiGWSIT; Four Factor TestGcVerbal AnalogiesGvSpatial Analogies
Note. Gc = comprehension knowledge; Gei = emotional intelligence; Gf = fluid intelligence; Glr = long-term retrieval; Gpi = personal intelligence; Grw = reading and writing ability; Gs = processing speed; Gsi = social intelligence; Gsm = short-term memory; Gv = visuospatial processing; Gq = quantitative knowledge.
Table 3. Characteristics of included studies.
Sample Type a
University: 58 studies
Community: 13 studies
Online: 7 studies
Child/adolescent: 7 studies
Clinical: 4 studies
Other: 4 studies
Sample size: mean = 283.20; total = 24,638; range = 41 to 4642
Gender: 56% female; males = 9773; females = 12,439
Age of participants: mean = 25.52; range = 13.3 to 69.8
Publication year: mean = 2007; median = 2010; range = 1930 to 2020
Reliability b
Social intelligence: mean = 0.63; range = 0.10 to 0.98
Emotional intelligence: mean = 0.74; range = 0.42 to 0.99
Emotion recognition ability: mean = 0.73; range = 0.43 to 0.95
Personal intelligence: mean = 0.87; range = 0.71 to 0.94
a Some studies recruited participants of more than one type and so the total exceeds 87 (e.g., participants were recruited from the community and university). b Average reliability for social, emotional, and personal intelligences included instances where the reliability was estimated from other sources (see Estimating Reliabilities, above).
Table 4. Associated statistics for the estimated average correlation among intelligence comparison types.
Contrast | k | N | Avg. Reliability | r est. | 95% CI
People–with–People | 1085 | 15,893 | 0.68 | 0.43 | [0.39, 0.48]
People–with–Mixed | 424 | 16,953 | 0.72 | 0.36 | [0.31, 0.40]
People–with–Thing | 464 | 13,751 | 0.73 | 0.29 | [0.24, 0.34]
Thing–with–Mixed | 117 | 6630 | 0.78 | 0.43 | [0.37, 0.49]
Mixed–with–Mixed | 66 | 3329 | 0.72 | 0.62 | [0.57, 0.67]
Thing–with–Thing | 58 | 3463 | 0.76 | 0.74 | [0.70, 0.78]
Table 5. Average estimated correlations a among people-centered, mixed, and thing-centered intelligences organized by type of people-centered ability.
Social IntelligenceEmotional Intelligence b,cPersonal Intelligence
Class and Subclass of IntelligencekNr95% CIkNr95% CIkNr95% CI
People-centered intelligences
Social intelligence (Gsi)62118940.33[0.28, 0.38]214680.23[0.07, 0.37]--------
Emotional intelligence (Gei) b,c214680.23[0.07, 0.37]440136930.50[0.45, 0.54]33520.70[0.40, 0.87]
Personal intelligence (Gpi)--------33520.70[0.40, 0.87]--------
Mixed intelligences
Comprehension knowledge (Gc)16922090.38[0.32, 0.44]17390150.35[0.29, 0.41]632180.41[0.14, 0.62]
Long-term retrieval (Glr)82250.10[−0.13, 0.32]3213070.14[0.02, 0.25]--------
Reading and writing ability (Grw)16460.78[0.35, 0.94]4224530.32[0.22, 0.42]218250.35[−0.06, 0.66]
Thing-centered intelligences
Fluid intelligence (Gf)9813140.30[0.23, 0.38]16891790.29[0.22, 0.35]--------
Visuospatial processing (Gv)739800.29[0.21, 0.37]3113450.17[0.05, 0.28]220990.26[−0.15, 0.60]
Quantitative knowledge (Gq)358480.22[0.11, 0.33]6328370.24[0.14, 0.32]218250.18[−0.24, 0.54]
Other mental abilities d
Processing speed (Gs)413910.29[0.18, 0.39]22010.09[−0.37, 0.51]--------
Short-term memory (Gsm)413910.38[0.28, 0.47]4164−0.03[−0.37, 0.32]1394−0.02[−0.56, 0.53]
a All average correlation estimates are taken from the unweighted random effects models. Values are presented as Pearson r’s corrected for disattenuation due to reliability b The estimated correlations for emotional with social intelligence and emotional with personal intelligence have been duplicated in other columns. c Includes both measures labeled as emotional intelligence and emotion recognition ability. d The “other” abilities were regarded as process-based or “utility” intelligences and, although included here, were otherwise excluded from the people-to-thing intelligence analyses.
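The corrections described in footnote a follow two standard psychometric steps: each observed correlation is divided by the square root of the product of the two measures' reliabilities (Spearman's disattenuation formula), and correlations are averaged on Fisher's z scale before back-transforming. A minimal sketch of that arithmetic in Python, with hypothetical numbers chosen only for illustration:

```python
import math

def disattenuate(r, rxx, ryy):
    """Correct an observed correlation for measurement error in both
    tests: r_true = r / sqrt(rxx * ryy) (Spearman's formula)."""
    return r / math.sqrt(rxx * ryy)

def fisher_z(r):
    """Variance-stabilizing transform applied before averaging correlations."""
    return 0.5 * math.log((1 + r) / (1 - r))

def average_r(rs):
    """Unweighted average of correlations on the z scale, back-transformed."""
    z_bar = sum(fisher_z(r) for r in rs) / len(rs)
    return math.tanh(z_bar)  # tanh is the inverse of Fisher's z

# Hypothetical example: observed r = .25 between an emotional-intelligence
# test (reliability .74) and a vocabulary test (reliability .85).
r_corrected = disattenuate(0.25, 0.74, 0.85)  # ≈ 0.315
```

The published analyses were run with the metafor package in R (Viechtbauer 2010); this sketch only illustrates the corrections behind the tabled estimates, not the authors' full random-effects model.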
Table 6. Principal component loadings for the three-component unrotated solution testing the people–thing continuum.
Broad IntelligencePrincipal Components Solution aSchmid–Leiman Analysis b
IIIIIIgIIIIII
Thing-centered intelligences
Visuospatial processing0.60−0.460.510.33 0.57
Quantitative knowledge0.72−0.520.300.56 0.84
Mixed intelligences
Reading and writing0.83−0.13−0.480.890.47
People-centered intelligences
Emotional intelligence0.620.630.260.34 0.61
Personal intelligence0.660.610.210.41 0.92
Social intelligence0.71−0.06−0.600.670.37
Note. Factor loadings above 0.30 are bolded. a The principal components solution converged without warnings or issues. b The Schmid–Leiman solution was adjusted because it contained an ultra-Heywood case, so the estimated weights may be somewhat imprecise.
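The unrotated loadings in Table 6 can be illustrated with a small worked example. Unrotated principal-component loadings are the eigenvectors of the correlation matrix scaled by the square roots of their eigenvalues; with a hypothetical four-variable matrix (two "people" measures, two "thing" measures, higher correlations within type), the first component loads positively on every variable (a g-like component) and the second is bipolar, giving opposite signs to the two types, mirroring the pattern Table 6 reports. This is a sketch of the technique only, not the authors' analysis, and the matrix below is invented:

```python
import numpy as np

# Hypothetical correlation matrix: variables 0-1 are "people" measures,
# variables 2-3 are "thing" measures; within-type r = .6, across-type r = .3.
R = np.array([
    [1.0, 0.6, 0.3, 0.3],
    [0.6, 1.0, 0.3, 0.3],
    [0.3, 0.3, 1.0, 0.6],
    [0.3, 0.3, 0.6, 1.0],
])

vals, vecs = np.linalg.eigh(R)   # eigh returns eigenvalues in ascending order
order = np.argsort(vals)[::-1]   # reorder so component I comes first
vals, vecs = vals[order], vecs[:, order]

# Unrotated PC loadings: each eigenvector column scaled by sqrt(eigenvalue).
loadings = vecs * np.sqrt(vals)

# Component I loads equally on all four variables (a general component);
# component II separates people from thing variables with opposite signs.
```

With more variables and real data the picture is noisier, but the logic is the same: a strong first component reflecting g, then a bipolar component ordering abilities along the people–thing continuum.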
Bryan, V.M.; Mayer, J.D. Are People-Centered Intelligences Psychometrically Distinct from Thing-Centered Intelligences? A Meta-Analysis. J. Intell. 2021, 9, 48. https://doi.org/10.3390/jintelligence9040048
