Article

Development and Validation of Perception of Wisdom Exploratory Rating Scale: An Instrument to Examine Teachers’ Perceptions of Wisdom

1 Department of Counseling, Higher Education Leadership, Educational Psychology, and Foundations, Mississippi State University, Mississippi State, MS 39762, USA
2 Department of Education Reform, College of Education and Health, University of Arkansas, Fayetteville, AR 72701, USA
3 College of Education, Purdue University, West Lafayette, IN 47907, USA
* Author to whom correspondence should be addressed.
Deceased author.
Educ. Sci. 2024, 14(5), 542; https://doi.org/10.3390/educsci14050542
Submission received: 15 March 2024 / Revised: 3 May 2024 / Accepted: 13 May 2024 / Published: 17 May 2024

Abstract

The purpose of this study was to develop and validate the Perception of Wisdom Exploratory Rating Scale (POWER Scale) based on the Polyhedron Model of Wisdom (PMW). A total of 585 responses were collected from in-service and preservice teachers. In the exploratory factor analysis (EFA), the items fit a seven-factor structure, producing the following subscales: knowledge management, self-regulation, moral maturity, openness, tolerance, sound judgment, and creative thinking. Confirmatory factor analysis (CFA) was performed to test the construct validity of the scale. The model produced a good fit to the data (χ2/df = 1.67, CFI = 0.92, TLI = 0.91, RMSEA = 0.049, and SRMR = 0.06). With continued testing and revision, this instrument could be useful for cross-cultural comparison of perceptions of wisdom and identification of barriers to promoting wisdom instruction.

1. Wisdom

Throughout human history, people from different philosophical traditions, cultures, and religions have considered wisdom a supreme and valuable concept [1]. Thinking wisely plays a role in any situation that is social in nature [2]. Because humans are social beings, social considerations and interactions are common and often unavoidable in most everyday tasks [2]. Some social situations, like the COVID-19 crisis, become complex very quickly when diverse interests arise. Furthermore, decisions made by individuals are likely to yield consequences that affect people outside that interaction [2]. Wisdom’s role in balancing diverse interests, immediate and/or lasting consequences, and environmental responses is vital to positive, constructive decision making [3].
Although empirical studies of wisdom in psychology have been conducted only relatively recently, wisdom research has gained popularity during the last three decades. However, a generally agreed upon definition of wisdom does not exist, and there is significant variation among the definitions and models of wisdom [4,5]. Most researchers refer to wisdom as an aggregate of other components, as in the Balance Theory of Wisdom [3], the Berlin Wisdom Paradigm [6,7], and the Three-Dimensional Model of Wisdom [4,8,9]. Attempts have been made to identify points of consensus on the definition of wisdom [10,11,12]; however, all such attempts have been conducted in the field of psychology [5,13,14]. Nevertheless, wisdom is an interdisciplinary and complex concept that goes far beyond psychology [15]. Since its reappearance in the scientific literature during the past century, wisdom has been adopted by different scientific communities such as psychology, education, business, neurology, and computer/information science. Therefore, we broadened these efforts and systematically reviewed articles in psychology, management and leadership, and education to investigate points of consensus. Based on the review, we offer the Polyhedron Model of Wisdom (PMW) (see Figure 1). We suggest components that characterize wisdom, including knowledge, reflectivity and self-regulation, pro-social behaviors and moral maturity, openness and tolerance, critical thinking, intelligence, creativity, and dynamic balance and synthesis [16]. We discussed and explained all of these components using COVID-19 as a context in our previous work [17].

Wisdom Can Be Fostered

All of the articles in our systematic review described acquiring wisdom as a developmental process; indeed, wisdom is more a process than a product [16]. Among the articles we reviewed, 82% of the authors claimed that wisdom could be taught and fostered, and the others made no such claim [16]. Bruya and Ardelt (2018) reviewed pedagogies that aimed to promote wisdom in the classroom and concluded that wisdom can be taught and fostered in formal education [18]. However, the existing literature on theories of wisdom pedagogy is very limited [19], and many questions remain unanswered regarding fostering and cultivating wise thinking [2]. Researchers have investigated lay beliefs about wisdom, and lay theories have demonstrated some variability in how wisdom is defined across age groups, professions, cultures, and situations. However, we did not find any study that investigated teachers’ beliefs regarding wisdom [16]. Because the possibility of developing wisdom in the classroom exists, it is important to understand the factors that influence teachers’ commitment to developing their students’ wisdom.
The beliefs, attitudes, and perceptions that teachers hold about a construct affect educational practices and outcomes [20,21]. How teachers feel or think about wisdom and its components may influence the classroom instructional strategies that support wisdom development among learners. For example, a teacher holding misconceptions about wisdom and its importance may overlook opportunities to support student development through the inclusion of wise thinking. Thus, teachers’ perceptions are integral to the efficacy of any learning program [22]. Teachers bring their different beliefs to the classroom, and these beliefs “serve as [an] epistemological base, or a theoretical underpinning, orchestrating cognitive, affective, and behavioral decisions that manifest in the classroom” ([23], p. 106). To this end, it is important to investigate teachers’ beliefs regarding wisdom. Understanding teachers’ beliefs and their development helps clarify disagreements between teachers’ implicit theories of wisdom and explicit theories in the field, and it provides opportunities to improve teacher preparation and in-service development [24]. Hence, precise measurement of teachers’ beliefs is a prerequisite to helping teachers [23], researchers, policymakers, and teacher-preparation programs foster wisdom. Additionally, research on teacher belief systems supports teacher education and professional development by providing foundational, research-based knowledge for aligning educators’ personal beliefs, attitudes, and perceptions with best practice in the field.

2. Purpose of the Study

The purpose of this study was to develop and validate the Perception of Wisdom Exploratory Rating (POWER) Scale based on the Polyhedron Model of Wisdom. The specific research questions were:
  • To what extent does the POWER Scale demonstrate evidence of content validity?
  • To what extent does the POWER Scale demonstrate evidence of construct validity?
  • What evidence of internal-consistency reliability exists in the data used to develop the POWER Scale?

3. POWER Scale Development

The goal of this study was to develop an instrument to capture teachers’ perceptions of wisdom. We followed the steps of affective instrument design suggested by McCoach et al. [25]. The first five steps involve specifying the purpose of the instrument, confirming that no existing instrument serves the same purpose, describing the construct and its dimensions, and developing final conceptual definitions for each dimension through an extensive literature review. These first five steps were addressed through our systematic review [16]. In this study, we addressed steps 6 to 14 as follows:
6. Develop operational definitions;
7. Select a scaling technique;
8. Match items back to the dimensions, ensuring adequate content representation on each dimension;
9. Conduct a judgmental review of items;
10. Develop directions for responding; create the final pilot version of the instrument;
11. Pre-pilot the instrument with a small number of respondents from the target group and make necessary revisions based on their feedback;
12. Gather pilot data from a sample that is as representative as possible of the target population;
13. Analyze the pilot data (including factor analysis, item analysis, and reliability estimation);
14. Revise the instrument based on the initial pilot data analysis and re-administer if needed.

4. Operational Definitions

According to the Polyhedron Model of Wisdom, the components of wisdom are knowledge management, self-regulation, altruism and moral maturity, openness and tolerance, sound judgment, creative thinking, and dynamic balance and synthesis translated into action. The last component differs from the others: it determines how each component varies depending on the context, situation, and circumstances, which is why we did not include it in this study. In fact, dynamic balance and synthesis translated into action needs to be investigated through in-depth interviews. Moreover, while we grouped openness and tolerance in the PMW because they are closely related [26,27,28], they are distinct concepts [26,27]. Therefore, we defined them separately and treated them as two different components in this study.

4.1. Knowledge Management

Knowledge management involves applying appropriate knowledge (factual, procedural, conceptual, and meta-knowledge) in a given situation. It also involves adding value to, improving, and advancing the frontiers of knowledge.

4.2. Self-Regulation

Self-regulation refers to the ability to be self-aware and contemplative about the sort of person one is and is becoming, and the kind of personal character that is emerging through one’s actions. Self-regulation is the ability to intentionally plan, monitor, revise, and adapt one’s behavior, attention, emotions, and cognitive strategies in an attempt to attain personally relevant goals.

4.3. Moral Maturity

Moral maturity includes prosocial behaviors and the realization of one’s own interests and potential while simultaneously considering the well-being of other people and society, mediated by virtue and morality.

4.4. Tolerance of Uncertainty

Tolerance of uncertainty and ambiguity acknowledges that the validity of information available to humans is essentially limited, and that individuals only have access to select parts of reality in which the present and future cannot be fully known in advance. An understanding of such limitations leads to tolerance for unexpected events and the vagueness of situations.

4.5. Openness

Openness involves receptivity to, and appreciation of, values and socio-cultural phenomena that differ from one’s own scheme of values and beliefs.

4.6. Sound Judgment

Sound judgment involves purposeful judgment that results in interpretation, analysis, evaluation, and inference, as well as an explanation of the evidential, conceptual, methodological, criteriological, or contextual considerations upon which that judgment is based. It involves thinking through problematic situations to decide what to believe or how to act, which facilitates the decision-making process.

4.7. Creative Thinking

Creative thinking is the cognitive/affective interaction in which the generation or recognition of ideas, alternatives, or possibilities enhances problem solving and communication with others, or otherwise improves a situation. It comprises the capacity to detect gaps; produce novel and useful ideas (fluency, originality); produce alternative ideational categories (flexibility); and introduce details to ideas (elaboration), all while recombining and adapting ideas and sensing novel relationships among them.

5. Scale Development Process and Result

The POWER Scale comprises seven subscales: knowledge management, self-regulation, altruism and moral maturity, openness, tolerance, sound judgment and decision making, and creative thinking. We constructed an initial pool of 78 items to reflect the seven components of wisdom.

5.1. Establishing Content Validity

To establish content validity, five eminent experts in the field of wisdom, identified based on their theories and peer-reviewed publications on wisdom, evaluated the preliminary scale. Qualitative and quantitative feedback were collected simultaneously. Experts were asked to provide qualitative feedback, such as suggestions regarding the definitions of the dimensions, wording, additional items that could enhance the representativeness of the item pool, and items that should be eliminated from the pool [25]. We also asked the experts to complete a content-validity form rating each item’s relevance to its subscale, with 1 representing “not relevant” and 3 representing “very relevant” [25]. Table 1 shows an example of the expert content validation form.
After collecting the responses, the items that were not rated 2 or 3 by at least three experts were eliminated from the item pool. Then, we made decisions regarding retaining, eliminating, and rewording items based on the theoretical framework and the experts’ qualitative and quantitative feedback. Table 2 shows the list of original items and modifications.
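As a minimal illustration of this retention rule, the following R sketch flags items that at least three of the five experts rated as relevant (2) or very relevant (3). The rating matrix here is hypothetical; the actual expert ratings were processed as described above.

```r
# Hypothetical expert-rating matrix: one row per candidate item, one column per
# expert, with relevance ratings on the 1-3 scale from the content-validity form.
ratings <- matrix(
  c(3, 3, 2, 3, 2,
    1, 2, 1, 2, 1,
    3, 2, 3, 3, 3),
  nrow = 3, byrow = TRUE,
  dimnames = list(c("item_01", "item_02", "item_03"), paste0("expert_", 1:5))
)

# For each item, count how many experts rated it 2 or 3.
n_relevant <- rowSums(ratings >= 2)

# Retain items endorsed by at least three of the five experts.
retained <- names(n_relevant)[n_relevant >= 3]
retained  # "item_01" "item_03"; item_02 would be dropped from the pool
```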

5.2. Pilot Study

After revising the questions, the first author created the Qualtrics questionnaire with the remaining 40 items. Items from the different content categories were initially randomized to reduce bias associated with survey item categories. Six senior graduate students with K–12 teaching experience took the survey to ensure that the instructions and language in the scale were clear and appropriate, without obvious errors or omissions [25]. We also assessed the approximate duration of the survey, which took about 20 min on average to complete. In a follow-up cognitive interview, we discussed the clarity of the directions and the appropriateness of the response scales, and we asked the participants to identify any confusing or unclear items [25]. Items were revised based on the participants’ feedback. One of the most frequent suggestions was not to intersperse the items: because some items within particular categories were related and even similar, intermixing them caused confusion or impeded comprehension. Hence, all items related to each category were blocked together, with the tolerance and openness items placed in one block. To reduce fatigue and response bias toward particular categories, the blocks were presented to different participants in randomized orders. The final instrument consisted of 40 items at this point.

5.3. Participants

A total of 583 responses were collected (Table 3). In-service teachers were recruited through gifted education organization listservs and through email communication with local school districts. A total of 365 in-service teachers completed all the survey questions. By gender, 84% of these teachers self-identified as female. The racial-ethnic diversity of the sample resembled U.S. public-school teacher demographics, with 89% of participants identifying as White. Additionally, 24% of the in-service teachers had a Bachelor’s degree, 70% a Master’s degree, and 3% a doctorate. The preservice teacher sample consisted of 218 teacher education undergraduates from a Midwestern university. The majority of preservice participants were female (86%), and, as expected, 68% were younger than 21. Like the in-service teachers, 86% of preservice participants were White; other demographics included Asian (4%) and Black (2%). Participants who completed the survey were entered into a drawing for one of twenty USD 40 Amazon gift cards. Preservice teachers were compensated with extra-credit points allocated by head professors in participating courses.

5.4. Procedure

Participants were asked to complete an online survey including the 40 items. Respondents were asked to help us understand how they perceived wisdom and what characteristics they thought were necessary to consider someone wise. The survey did not ask whether participants considered themselves wise; rather, based on their personal understanding of wisdom, we asked them to rate how important each item was as a characteristic of wisdom. We used a 6-point scale with the following response options: 1 (Unimportant), 2 (Not very Important), 3 (Moderately Important), 4 (Important), 5 (Very Important), and 6 (Essential). Six scale points can usually be treated as continuous indicators and provide the maximum number of points that respondents can differentiate while covering the entire measurement continuum [25]. The 6-point importance scale was used consistently throughout the survey to keep responding simple and clear for respondents, who had to respond to all items.

5.5. Sample Size and Data Screening

In-service and preservice samples were randomly split into two halves for EFA (n = 290) and CFA (n = 295). After splitting the data, we examined the accuracy of data entry, missing values, outliers (using Mahalanobis distance and generalized Cook’s distance (gCD)), multicollinearity (using variance inflation factors, tolerance values, and squared multiple correlations), and singularity within both halves. Normality was reviewed with four multivariate normality checks: the chi-squared plot, Mardia’s tests of skewness and kurtosis, the Doornik–Hansen omnibus test, and the Henze–Zirkler test [29], which indicated that the data were non-normal (see Table 4). Because SPSS does not provide these tests, we used Stata 16 to conduct them.
After removing the outliers, the samples included 280 observations for EFA and 284 for CFA. We checked the Kaiser–Meyer–Olkin (KMO) measure of sampling adequacy, which was greater than 0.90 and therefore considered adequate [30] (see Table 5).
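The screening steps above can be approximated in R as in the following sketch. The data frame efa_half and the 0.999 chi-square cutoff are placeholders (the authors report conducting these checks in SPSS and Stata 16), and the simulated responses exist only so the code runs end to end.

```r
# install.packages("psych")  # uncomment if the package is not installed
library(psych)

set.seed(2024)
# efa_half stands in for the EFA half of the sample: 290 respondents x 40 items
# on the 6-point importance scale (simulated here purely as a placeholder).
efa_half <- as.data.frame(matrix(sample(1:6, 290 * 40, replace = TRUE), ncol = 40))

# Multivariate outlier screen: Mahalanobis distance against a chi-square cutoff.
md     <- mahalanobis(efa_half, colMeans(efa_half), cov(efa_half))
cutoff <- qchisq(0.999, df = ncol(efa_half))
efa_clean <- efa_half[md < cutoff, ]

# Kaiser-Meyer-Olkin measure of sampling adequacy; values above 0.90 are adequate.
KMO(efa_clean)$MSA
```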

5.6. Exploratory Factor Analysis

To assess the construct validity and the initial factor structure of the POWER Scale, we conducted EFA using SPSS. In this process, we evaluated the number of factors to extract using three methods: the eigenvalues-greater-than-one rule (EV > 1) [31], the minimum average partial (MAP) correlation test [32], and parallel analysis [33,34]. Based on the suggested number of factors and the quality of our data, we then conducted factor extraction and factor rotation to adjust the initial solution.

5.6.1. Number of Factors

Principal-axis factoring eigenvalues suggested a seven-factor model. Although popular, the eigenvalues-greater-than-one rule can overestimate or underestimate the correct number of factors to retain [35]. We therefore conducted the MAP test to confirm the suggested number of factors. The MAP technique has performed well in determining the number of factors to retain in multiple simulation studies [36,37]. It examines a series of partial correlation matrices, and components are maintained as long as the variance in the correlation matrix represents systematic variance, as opposed to residual or error variance [32]. As more components are partialed out, the average squared partial correlation decreases. The smallest MAP value was 0.0142, which suggests a six- or seven-factor model. According to the revised MAP test, the smallest average fourth-power partial correlation was 0.0008, which suggested a seven-factor model. Finally, we performed parallel analysis in SPSS [38]. To decide on the number of factors, we compared the raw-data eigenvalues with eigenvalues generated from a random dataset with the same number of cases and variables, using the 95th-percentile generated eigenvalue column [34,39,40]. The parallel analysis indicated that a factor should be retained only if its eigenvalue exceeded 1.02, the smallest generated eigenvalue greater than 1. According to the original principal-axis factoring solution, seven factors had eigenvalues greater than this value. Table 6 shows the extraction strategies.
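The authors ran these checks with O’Connor’s SPSS syntax [38]; an analogous sketch in R with the psych package, reusing the efa_clean data frame from the screening sketch above, might look like this (psych’s default minres extraction stands in for the principal-axis choice reported in the text).

```r
library(psych)
# efa_clean: cleaned item-response data frame from the screening sketch above.

# Parallel analysis: compare raw-data eigenvalues against the 95th percentile of
# eigenvalues from random data with the same number of cases and variables.
pa <- fa.parallel(efa_clean, fa = "fa", quant = 0.95)
pa$nfact  # suggested number of factors

# Velicer's minimum average partial (MAP) test across candidate solutions.
vss_out <- VSS(efa_clean, n = 10)
which.min(vss_out$map)  # factors at the smallest average squared partial correlation
```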

5.6.2. Determining the Extraction Technique

Although it is advisable to compare the outcomes of different extraction techniques [41], the EFA results obtained with different extraction methods are often remarkably similar [42]. Considering the sample size and non-normality of our data, we determined that unweighted least squares (ULS) was the most appropriate extraction method [43]. ULS estimation makes no assumptions about the distributions of the observed variables and is adequate for many variables and small sample sizes [41].

5.6.3. Determining Extraction Rotation Method

There is no definitive answer as to which rotation criterion is “best”; however, certain rotation criteria work better for certain phases of instrument validation [44]. For example, rotation criteria that attempt to reduce cross-loading magnitudes, such as Geomin or Quartimax, should result in solutions more comparable to CFA; such rotations are preferable for well-developed measures in which researchers expect fewer and smaller cross-loadings [44]. Because the POWER Scale is a new measure, we followed Schmitt and Sass’s suggestion to consider rotations better suited to complex data structures, such as Equamax and Facparsim, which are preferred when the quality of the items could be questionable due to limited prior structural validity and reliability evidence. Because this is a new instrument, some items may measure multiple factors; therefore, we sought to remove items with larger cross-loadings to reduce the interfactor correlations. This simplifies the variable and factor pattern matrix loadings and spreads variance more equally across the factors, providing a cleaner solution. Therefore, we used the Equamax rotation method, which is more appropriate for use in instrument development [44].
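A sketch of the extraction and rotation choices in R (the authors used SPSS): psych’s fa() with fm = "uls" and the equamax criterion, which requires the GPArotation package, applied to the efa_clean data frame assumed above.

```r
library(psych)
library(GPArotation)  # supplies the equamax rotation used by fa()
# efa_clean: cleaned item-response data frame from the screening sketch above.

# Unweighted least squares extraction with an orthogonal equamax rotation,
# retaining the seven factors suggested by the extraction criteria.
efa_fit <- fa(efa_clean, nfactors = 7, fm = "uls", rotate = "equamax")

print(efa_fit$loadings, cutoff = 0.32, sort = TRUE)  # suppress small loadings
efa_fit$Vaccounted                                   # variance explained per factor
```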

5.6.4. Determining the Item Retention

Items with loadings below 0.40, items with cross-loadings ≥ 0.32 on at least two factors, and items loading on two factors with an absolute difference ≥ 0.30 were deleted (see Appendix A). Twelve items were deleted through the EFA. The final EFA model explained 65.10% of the variance in the data (Table 7). Appendix B shows the detailed item scores and distributions.
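The first two deletion rules can be expressed directly in code. The sketch below operates on the pattern matrix from the fa() sketch above and drops items whose largest absolute loading falls below 0.40 or that load at least 0.32 on two or more factors; the difference-based criterion described in the text can be screened from the same loading matrix.

```r
# Pattern matrix from the EFA sketch above: rows = items, columns = factors.
L <- unclass(efa_fit$loadings)

keep <- apply(L, 1, function(x) {
  primary <- max(abs(x))           # item's largest absolute loading
  n_cross <- sum(abs(x) >= 0.32)   # number of factors it loads on at >= 0.32
  primary >= 0.40 && n_cross < 2   # retain only items loading cleanly on one factor
})

retained_items <- rownames(L)[keep]
dropped_items  <- rownames(L)[!keep]
```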

5.7. Reliability

We evaluated the internal consistency of each subscale using McDonald’s omega (ω), which provides a less biased estimate of internal consistency than coefficient alpha and can be reported with confidence intervals [45]. The ω estimates ranged from 0.74 to 0.88, exceeding the minimum recommended reliability estimate of 0.70 [25].
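As an illustration, omega for one subscale can be computed from a single-factor solution, as sketched below; the column names are placeholders standing in for a subscale’s retained items, and the bootstrapped confidence intervals reported in Table 7 can be obtained with packages such as MBESS.

```r
library(psych)
# efa_clean: cleaned item-response data frame from the screening sketch above.

# Placeholder columns standing in for one subscale's retained items
# (e.g., the four knowledge-management items).
subscale_items <- c("V1", "V2", "V3", "V4")

# Fit a single factor to the subscale and compute McDonald's omega total:
# omega = (sum of loadings)^2 / ((sum of loadings)^2 + sum of uniquenesses).
f1     <- fa(efa_clean[, subscale_items], nfactors = 1, fm = "minres")
lambda <- as.numeric(f1$loadings)
omega_total <- sum(lambda)^2 / (sum(lambda)^2 + sum(f1$uniquenesses))
omega_total
```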

5.8. Confirmatory Factor Analysis

After establishing preliminary evidence of the factor structure through the EFA, CFA was used to test the construct validity of the POWER Scale. Given the non-normality of our data, the modest sample size, and the ordinal nature of the variables, we performed the CFA in the R package lavaan [46] using the diagonally weighted least squares (DWLS) estimator with robust standard errors [47,48,49]. To assess model quality, we followed well-established fit indices: (a) the χ2 statistic (χ2/df), with values below 3 representing a good model [47]; (b) the comparative fit index (CFI) and Tucker–Lewis index (TLI), with values greater than 0.90 indicating acceptable fit; (c) the root mean square error of approximation (RMSEA), which should be less than 0.05; and (d) the standardized root mean square residual (SRMR), which should be less than 0.08 [50,51]. The CFA model fit was adequate (Table 8).
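A hedged sketch of the CFA specification in lavaan: estimator = "WLSMV" requests DWLS estimation with robust corrections, and the items are declared ordered so lavaan works from polychoric correlations. The data and model below are placeholders (only two of the seven subscales are shown, and cfa_half is simulated so the sketch runs), not the authors’ exact syntax.

```r
library(lavaan)

set.seed(2024)
n <- 284
# Simulated stand-in for the hold-out (CFA) half of the sample: two latent traits
# drive 6-point ordinal items, mimicking the structure of the real data.
knowledge_lat <- rnorm(n)
tolerance_lat <- rnorm(n)
make_item <- function(latent) as.integer(cut(0.8 * latent + rnorm(n), breaks = 6))
cfa_half <- data.frame(
  Know1 = make_item(knowledge_lat), Know2 = make_item(knowledge_lat),
  Know3 = make_item(knowledge_lat), Know5 = make_item(knowledge_lat),
  Tolera1 = make_item(tolerance_lat), Tolera2 = make_item(tolerance_lat),
  Tolera3 = make_item(tolerance_lat)
)

# Measurement model: each latent factor is defined by its retained items.
power_model <- '
  knowledge =~ Know1 + Know2 + Know3 + Know5
  tolerance =~ Tolera1 + Tolera2 + Tolera3
'

fit <- cfa(power_model,
           data      = cfa_half,
           ordered   = names(cfa_half),  # treat the 6-point items as ordinal
           estimator = "WLSMV")          # DWLS with robust corrections

fitMeasures(fit, c("chisq.scaled", "df", "cfi.scaled", "tli.scaled",
                   "rmsea.scaled", "srmr"))
standardizedSolution(fit)  # standardized loadings, as reported in Table 9
```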
Table 9 shows the CFA solution with standardized coefficients.
All correlations among the subscales were 0.70 or lower (see Table 10). The final scale’s items fit a seven-factor structure, producing seven subscales with items loading on the intended factor and only on the intended factor, indicating the unidimensionality of each subscale. Each subscale showed good reliability and inter-item correlations without being too highly correlated with the others. The graphical model of the POWER Scale is presented in Appendix C.

6. Discussion

Models of wisdom need to evolve and be tested against empirical evidence [13,19]. This study provided important empirical evidence for PMW. Building scales advances theory development and contributes to understanding the concepts, constructs, and the relationships among them [52]. Additionally, since wisdom can be developed in the classroom, validating the POWER Scale adds evidence suggesting that the PMW model can be used to measure and understand the perceptions of wisdom of preservice and in-service teachers. This is a powerful step to prepare and enable teachers to integrate wisdom in their classrooms. As a result, the teachers’ existing knowledge, beliefs, and attitudes should be further studied. In fact, many reform efforts in the past have been ineffective because they failed to take the teachers’ existing knowledge, beliefs, and attitudes into consideration [53]. The POWER Scale can help provide insights into the prerequisites for professional development programs that can support how teachers foster wisdom in the classroom.
Through the EFA, we found evidence to support the internal structure of the POWER Scale. This result supports the seven distinct latent factors that were addressed by the POWER Scale items and proposed in the PMW [54]. The seven-factor model was then further evaluated using CFA, which supported the seven-factor model with 34 items. The findings of this study challenged the PMW and led us to rethink and illustrate some theoretical components [16,54], as well as to refine operational definitions and consider future modifications to the PMW and its applications. Some overlap existed in the operational definitions. For example, during the EFA we identified nuanced similarities between knowledge management and sound judgment: knowledge item 4, ‘Synthesizing knowledge from opposing points of view,’ loaded on sound judgment. We then reasoned that considering contrary points of view is related to sound judgment. Hence, we refined the definitions of each component of the PMW to reduce overlap and confusion.

Limitations and Future Directions

Developing an instrument is an ongoing process [54]. This is the first step in the development and validation of the POWER Scale, and the scale will be revised and tested based on the results of this study. The main limitation of this study was splitting the dataset into two randomly selected subsamples. Despite being a common practice in validation studies across different fields, this approach is not without problems. Because of time limits, we collected all data at the same time and therefore did not have a chance to modify items before conducting the CFA. For example, tolerance and openness were the most problematic components: two items from each factor loaded on both factors. There are two possible explanations for these cross-loadings. Participants might have been confused by the similarities in the format and content of the items; for example, the tolerance-for-ambiguity item ‘Having tolerance for unexpected events’ was similar to the openness item ‘Having tolerance for beliefs and actions that are unfamiliar.’ Self-regulation was another subscale that would have benefited from modifications before the CFA. The item ‘Focusing their attention on what is most important at the time’ referred to setting goals but was not clear enough. Adding items could have benefited the scale by providing more nuance to the goal-setting aspect of self-regulation.
We had a restriction of range in the responses and the data were negatively skewed. This means that teachers did not use the full 6-point scale. There are two possible explanations for the skewness and kurtosis of our data. First, one of the limitations of this study was using convenience sampling. The teachers who decided to donate their time to this research project were a self-selecting group who truly valued wisdom; teachers who were uninterested or who did not value wisdom may have not volunteered to complete the survey. An additional possibility is that the teachers responded in a socially expected manner. In other words, teachers may have given socially desirable responses instead of choosing responses that were reflective of their true beliefs. Either of these conditions would result in negatively skewed responses and require further investigation.
Although participants in the cognitive interviews found intermixed items extremely confusing and distracting, it is possible that blocking the items influenced the responses [55]. Randomizing items from different categories may help reduce possible response biases [55]. However, Sparfeldt et al. (2006) found little or no effect of item blocking on factorial structure, psychometric properties, or scale means [56]. Moreover, item blocking has been found to improve respondents’ attention and motivation [57,58], which was important given the large number of items on the instrument. We conducted cognitive interviews only with in-service teachers; thus, we could not assess whether the two groups in the sample understood the items differently. Personal interpretation could explain the large variation in some of the constructs. Whereas the in-service teacher sample was geographically diverse, the undergraduates came primarily from a single institution. Similarly, although the sample resembled national teacher demographics, it relied primarily on responses from female and White participants. Future research should consider the intentional inclusion of diverse participants. Moreover, studies could evaluate whether the POWER Scale yields invariant results across preservice and in-service teachers and across demographic characteristics (ethnicity, gender, age). Although research suggests that understanding the perceptions of preservice and in-service teachers is pivotal to understanding teacher instruction [21,22], it is possible that preservice and in-service teachers’ beliefs regarding wisdom differ due to the number of years and quality of teaching experience. This nuanced analysis could shed light on differential support via training for teacher-preparation programs and professional development for in-service teachers. However, following prior studies on the lack of association between wisdom and age [7,14], we did not consider age differences to be a deterrent to collecting data from preservice and in-service teachers.
The POWER Scale aims to explore teachers’ implicit beliefs about wisdom that affect their ability to teach wisdom in their classrooms. Therefore, it is necessary to conduct mixed-methods studies incorporating the POWER Scale as well as in-depth interviews to investigate teachers’ implicit beliefs about wisdom in different cultures and contexts. The results will enable cross-cultural comparisons of wisdom and the identification of barriers to promoting wisdom instruction. Teaching and cultivating wisdom in educational settings can be accomplished, but teachers’ attitudes toward such endeavors are critical, and teachers’ perceptions are one of the determining factors affecting the efficacy of a learning program. Future studies of the POWER Scale could highlight areas in which to support preservice and in-service teachers through professional development. A need exists for empirically grounded interventions that aim to promote wisdom-related processes in schools, work settings, and daily life [59]. Based on the PMW, researchers can design and validate interventions and curricula that promote wisdom in the classroom. There is potential to create online and face-to-face workshops for preservice and in-service teachers to address misconceptions about wisdom and methods to promote it in their classrooms. Finally, the POWER Scale needs to be tested in other professional populations in various fields and contexts. Multidisciplinary applications of the PMW may reduce misconceptions about wisdom in society at large. By identifying the importance of wisdom in relation to fields such as education, management and leadership, and STEM, wisdom research may have a greater impact on the pursuit of the common good.

Author Contributions

This project was conceptualized by S.K.; however, all authors (S.K., A.P.-M., M.G. (Mehdi Ghahremani), and M.G. (Marcia Gentry)) contributed to conceptualization, data collection, data analysis, and writing a draft of the paper. All authors have read and agreed to the published version of the manuscript. Dr. Marcia Gentry passed away on 31 August 2022.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board of Purdue University (1901021625).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data are contained within the article.

Acknowledgments

We acknowledge Donald Ambrose and Robert Sternberg for their support and guidance throughout this study. Additionally, we appreciate the assistance of Alissa Cress as well as Arvid Chowkase during the data collection process.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Changes during Exploratory Factor Analysis

Item | Reason for Deletion
Knowledge Management
Acquiring broad knowledge of the world.
Acquiring specialized forms of knowledge about the challenge at hand.
Acquiring experience-based knowledge in the face of a challenging situation
Synthesizing knowledge from opposing points of view. | C
Knowing how to apply appropriate knowledge in a given situation.
Knowing when to apply appropriate knowledge in a given situation. | C
Self-Regulation
Knowing oneself
Reflecting on the sort of person they are becoming
Reflecting on what happens around them
Willing to admit one’s mistakes
Correcting one’s mistakes
Adapting behavior appropriate to the specific situation
Focusing their attention on what’s most important at the time | C
Identifying subtle emotions within oneself | C
Expressing emotions without losing control (e.g., showing anger without losing control)
Moral Maturity
Treating another person, the way they would like to be treated
Behaving in a manner that also benefits other people rather than just themself
Considering the well-being of other people and society
Understanding moral principles
Thinking ethically
Tolerance for Uncertainty
Considering that the validity of information available to humans could be limited
Understanding that all people have limitations in how much they know
Considering that the future cannot be fully known in advance
Being comfortable with unknown situations | C
Having tolerance for unexpected events | C
Openness
Having tolerance for beliefs and actions that are unfamiliar | C
Having tolerance for beliefs and actions that are different from their own
Being curious about other religious and/or philosophical belief systems | C
Willing to explore ideas with those who have different perspectives and beliefs
Willing to work with people from different backgrounds
Willing to be around people whose views are strongly different from their own
Sound Judgment
Incorporating reasonable criteria for judgment
Evaluating the credibility of an information source
Evaluating the relevance of an information source | C
Recognizing differences among opinion, reasoned judgment, and fact | C
Evaluating whether their assumptions are justifiable
Thinking about different probabilities to improve decision making
Recognizing and considering the need to seek contradictory evidence | C
Perceiving possible compromises between opposing positions
Considering the context in which they are making a judgment
Evaluating the consistency and relevance of the conclusion
Creativity
Generating unique and novel ideas
Elaborating on ideas by adding details
Seeing relationships among ideas
Synthesizing and recombining ideas to improve the solution
Having an ability to sense when problems are about to arise | C
C: Item deleted because of high crossloadings

Appendix B. Descriptive Statistics of the Items Retained in the Scale after EFA

Item | Mean | Mean | Standard Deviation | Standard Deviation | Response Percentage (1, 2, 3, 4, 5, 6)
Know14.874.930.910.900.00.45.430.035.428.9
Know24.720.910.00.76.834.336.421.8
Know34.880.910.40.45.026.840.027.5
Know55.270.800.00.40.717.933.247.9
Creat14.324.701.020.910.05.413.237.931.412.1
Creat24.460.960.03.212.132.540.012.1
Creat35.030.830.03.212.132.540.012.1
Creat44.990.840.00.43.220.744.331.4
Self15.085.170.860.870.00.73.221.845.428.9
Self25.150.890.00.05.716.142.535.7
Self35.160.830.00.05.715.736.442.1
Self45.430.780.00.03.915.441.139.6
Self55.260.820.00.02.510.428.658.6
Self65.040.810.00.42.920.046.130.7
Self95.041.030.70.78.215.434.640.4
Prosoc15.005.251.150.931.13.26.416.829.343.2
Prosoc25.130.980.01.16.815.032.544.6
Prosoc35.320.890.01.12.115.726.155.0
Prosoc45.360.820.00.43.210.032.553.9
Prosoc55.420.820.00.43.29.328.658.6
Tolera14.464.531.111.151.82.911.134.630.718.9
Tolera24.551.212.13.611.128.629.325.4
Tolera34.581.131.12.912.927.132.123.9
Openn25.115.130.890.920.00.44.318.936.440.0
Openn45.120.960.00.76.816.132.943.6
Openn55.290.900.00.44.613.927.953.2
Openn65.020.940.01.15.021.835.436.8
Judg14.935.030.910.870.01.15.021.835.436.8
Judg25.260.820.00.76.122.541.129.6
Judg54.990.870.02.915.035.446.831.1
Judg64.910.890.00.05.727.137.130.0
Judg84.950.880.01.15.418.946.827.9
Judg95.100.840.00.72.519.341.436.1
Judg105.050.880.00.73.621.438.635.7
n = 280

Appendix C. Graphical Model of the POWER Scale after CFA


References

  1. Brienza, J.P.; Kung, F.Y.; Santos, H.C.; Bobocel, D.R.; Grossmann, I. Wisdom, bias, and balance: Toward a process-sensitive measurement of wisdom-related cognition. J. Personal. Soc. Psychol. 2016, 115, 1093. [Google Scholar] [CrossRef]
  2. Santos, H.C.; Huynh, A.C.; Grossmann, I. Wisdom in a complex world: A situated account of wise reasoning and its development. Soc. Personal. Psychol. Compass 2017, 11, e12341. [Google Scholar] [CrossRef]
  3. Sternberg, R.J. Why schools should teach for wisdom: The balance theory of wisdom in educational settings. Educ. Psychol. 2001, 36, 227–245. [Google Scholar] [CrossRef]
  4. Ardelt, M. Intellectual versus wisdom-related knowledge: The case for a different kind of learning in the later years of life. Educ. Gerontol. 2000, 26, 771–789. [Google Scholar] [CrossRef]
  5. Webster, J.D. Measuring the character strength of wisdom. Int. J. Aging Hum. Dev. 2007, 65, 163–183. [Google Scholar] [CrossRef]
  6. Baltes, P.B.; Smith, J. Toward a psychology of wisdom and its ontogenesis. In Wisdom; Cambridge University Press: Cambridge, UK, 1990; pp. 87–120. [Google Scholar] [CrossRef]
  7. Baltes, P.B.; Staudinger, U.M. Wisdom: A metaheuristic (pragmatic) to orchestrate mind and virtue toward excellence. Am. Psychol. 2000, 55, 122–136. [Google Scholar] [CrossRef]
  8. Ardelt, M. Empirical assessment of a three-dimensional wisdom scale. Res. Aging 2003, 25, 275–324. [Google Scholar] [CrossRef]
  9. Ardelt, M. The measurement of wisdom: A commentary on Taylor, Bates, and Webster’s comparison of the SAWS and 3D-WS. Exp. Aging Res. 2011, 37, 241–255. [Google Scholar] [CrossRef]
  10. Aldwin, C.M. Gender and wisdom: A brief overview. Res. Hum. Dev. 2009, 6, 1–8. [Google Scholar] [CrossRef]
  11. Jeste, D.V.; Ardelt, M.; Blazer, D.; Kraemer, H.C.; Vaillant, G.; Meeks, T.W. Expert consensus on characteristics of wisdom: A delphi method study. Gerontol. 2010, 50, 668–680. [Google Scholar] [CrossRef] [PubMed]
  12. Strauss, C.; Taylor, B.L.; Gu, J.; Kuyken, W.; Baer, R.; Jones, F.; Cavanagh, K. What is compassion and how can we measure it? A review of definitions and measures. Clin. Psychol. Rev. 2016, 47, 15–27. [Google Scholar] [CrossRef]
  13. Ardelt, M. Wisdom as expert knowledge system: A critical review of a contemporary operationalization of an ancient concept. Hum. Dev. 2004, 47, 257–285. [Google Scholar] [CrossRef]
  14. Jeste, D.V.; Oswald, A.J. Individual and societal wisdom: Explaining the paradox of human aging and high well-being. Psychiatry Interpers. Biol. Process. 2014, 77, 317–330. [Google Scholar] [CrossRef]
  15. Ambrose, D. Expanding Visions of Creative Intelligence: An Interdisciplinary Exploration; Hampton Press: New York, NY, USA, 2009. [Google Scholar]
  16. Karami, S.; Ghahremani, M.; Parra-Martinez, F.A.; Gentry, M. A polyhedron model of wisdom: A systematic review of the wisdom studies in psychology, management and leadership, and education. Roeper Rev. 2020, 42, 241–257. [Google Scholar]
  17. Karami, S.; Parra-Martinez, F.A. Foolishness of COVID-19: Applying the polyhedron model of wisdom to understand behaviors in a time of crisis. Roeper Rev. 2021, 43, 42–52. [Google Scholar] [CrossRef]
  18. Bruya, B.; Ardelt, M. Wisdom can be taught: A proof-of-concept study for fostering wisdom in the classroom. Learn. Instr. 2018, 58, 106–114. [Google Scholar] [CrossRef]
  19. Ardelt, M. Can wisdom and psychological growth be learned in university courses? J. Moral Educ. 2020, 49, 30–45. [Google Scholar] [CrossRef]
  20. Garner, J.K.; Kaplan, A. A complex dynamic systems perspective on teacher learning and identity formation: An instrumental case. Teach. Teach. 2018, 25, 7–33. [Google Scholar] [CrossRef]
  21. Pajares, M.F. Teachers’ beliefs and educational research: Cleaning up a messy construct. Rev. Educ. Res. 1992, 62, 307–332. [Google Scholar] [CrossRef]
  22. Stronge, J.H.; Ward, T.J.; Tucker, P.D.; Hindman, J.L. What is the relationship between teacher quality and student achievement? An exploratory study. J. Pers. Eval. Educ. 2007, 20, 165–184. [Google Scholar] [CrossRef]
  23. Hoffman, B.H.; Seidel, K. Measuring teachers’ beliefs: For what purpose? In International Handbook of Research on Teachers’ Beliefs; Routledge: London, UK, 2014; pp. 118–139. [Google Scholar]
  24. Schraw, G.; Olafson, L. Assessing teachers’ beliefs. In International Handbook of Research on Teachers’ Beliefs; Routledge: London, UK, 2015; pp. 87–105. [Google Scholar]
  25. McCoach, D.B.; Gable, R.K.; Madura, J.P. Instrument Development in the Affective Domain; Springer: New York, NY, USA, 2013. [Google Scholar] [CrossRef]
  26. Bardi, A.; Guerra, V.M.; Ramdeny, G.S.D. Openness and ambiguity intolerance: Their differential relations to well-being in the context of an academic life transition. Personal. Individ. Differ. 2009, 47, 219–223. [Google Scholar] [CrossRef]
  27. Jach, H.K.; Smillie, L.D. To fear or fly to the unknown: Tolerance for ambiguity and Big Five personality traits. J. Res. Personal. 2019, 79, 67–78. [Google Scholar] [CrossRef]
  28. McCrae, R.R. Social consequences of experiential openness. Psychol. Bull. 1996, 120, 323–337. [Google Scholar] [CrossRef] [PubMed]
  29. Mecklin, C.J.; Mundfrom, D.J. A Monte Carlo comparison of the Type I and Type II error rates of tests of multivariate normality. J. Stat. Comput. Simul. 2005, 75, 93–107. [Google Scholar] [CrossRef]
  30. Kaiser, H.F. An index of factorial simplicity. Psychometrika 1974, 39, 31–36. [Google Scholar] [CrossRef]
  31. Kaiser, H.F. The application of electronic computers to factor analysis. Educ. Psychol. Meas. 1960, 20, 141–151. [Google Scholar] [CrossRef]
  32. Velicer, W.F. Determining the number of components from the matrix of partial correlations. Psychometrika 1976, 41, 321–327. [Google Scholar] [CrossRef]
  33. Horn, J.L. A rationale and test for the number of factors in factor analysis. Psychometrika 1965, 30, 179–185. [Google Scholar] [CrossRef]
  34. Turner, N.E. The effect of common variance and structure pattern on random data eigenvalues: Implications for the accuracy of parallel analysis. Educ. Psychol. Meas. 1998, 58, 541–568. [Google Scholar] [CrossRef]
  35. Zwick, W.R.; Velicer, W.F. Comparison of five rules for determining the number of components to retain. Psychol. Bull. 1986, 99, 432–442. [Google Scholar] [CrossRef]
  36. Garrido, L.E.; Abad, F.J.; Ponsoda, V. Performance of Velicer’s minimum average partial factor retention method with categorical variables. Educ. Psychol. Meas. 2011, 71, 551–570. [Google Scholar] [CrossRef]
  37. Ruscio, J.; Roche, B. Determining the number of factors to retain in an exploratory factor analysis using comparison data of known factorial structure. Psychol. Assess. 2012, 24, 282–292. [Google Scholar] [CrossRef] [PubMed]
  38. O’Connor, B.P. SPSS and SAS programs for determining the number of components using parallel analysis and Velicer’s MAP test. Behav. Res. Methods Instrum. Comput. 2000, 32, 396–402. [Google Scholar] [CrossRef] [PubMed]
  39. Cota, A.A.; Longman, R.S.; Holden, R.R.; Fekken, G.C. Comparing different methods for implementing parallel analysis: A practical index of accuracy. Educ. Psychol. Meas. 1993, 53, 865–876. [Google Scholar] [CrossRef]
  40. Glorfield, L.W. An improvement on Horn’s parallel analysis methodology for selecting the correct number of factors to retain. Educ. Psychol. Meas. 1995, 55, 377–393. [Google Scholar] [CrossRef]
  41. Zygmont, C.; Smith, M.R. Robust factor analysis in the presence of normality violations, missing data, and outliers: Empirical questions and possible solutions. Quant. Methods Psychol. 2014, 10, 40–55. [Google Scholar] [CrossRef]
  42. Nunnally, J. Psychometric Theory; Sage Publications: Thousand Oaks, CA, USA, 1978. [Google Scholar]
  43. MacCallum, R.C. Factor analysis. In The Sage Handbook of Quantitative Methods in Psychology; Sage Publications Ltd.: Thousand Oaks, CA, USA, 2009; pp. 123–147. [Google Scholar] [CrossRef]
  44. Schmitt, T.A.; Sass, D.A. Rotation criteria and hypothesis testing for exploratory factor analysis: Implications for factor pattern loadings and interfactor correlations. Educ. Psychol. Meas. 2011, 71, 95–113. [Google Scholar] [CrossRef]
  45. Dunn, T.J.; Baguley, T.; Brunsden, V. From Alpha to Omega: A Practical Solution to the Pervasive Problem of Internal Consistency Estimation. Br. J. Psychol. 2014, 105, 399–412. [Google Scholar] [CrossRef] [PubMed]
  46. Rosseel, Y. Lavaan: An R package for structural equation modeling. J. Stat. Softw. 2012, 48, 1–36. [Google Scholar] [CrossRef]
  47. Wheaton, B.; Muthen, B.; Alwin, D.F.; Summers, G.F. Assessing reliability and stability in panel models. Sociol. Methodol. 1977, 8, 84–136. [Google Scholar] [CrossRef]
  48. Li, C.-H. The Performance of ML, DWLS, and ULS Estimation with Robust Corrections in Structural Equation Models with Ordinal Variables. Psych. Meth. 2016, 21, 369–387. [Google Scholar] [CrossRef]
  49. Forero, C.G.; Maydeu-Olivares, A.; Gallardo-Pujol, D. Factor analysis with ordinal indicators: A Monte Carlo study comparing DWLS and ULS estimation. Struct. Equ. Model. 2009, 16, 625–641. [Google Scholar] [CrossRef]
  50. Brown, T.A. Confirmatory Factor Analysis for Applied Research; Guilford Publications: New York, NY, USA, 2015. [Google Scholar]
  51. Hu, L.; Bentler, P.M. Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Struct. Equ. Model. 1999, 6, 1–55. [Google Scholar] [CrossRef]
  52. Shoemaker, P.; Tankard, J.; Lasorsa, D. How to Build Social Science Theories; Sage Publications: New York, NY, USA, 2004. [Google Scholar] [CrossRef]
  53. van Driel, J.H.; Beijaard, D.; Verloop, N. Professional development and reform in science education: The role of teachers’ practical knowledge. J. Res. Sci. Teach. 2001, 38, 137–158. [Google Scholar] [CrossRef]
  54. Alvesson, M.; Kärreman, D. Qualitative Research and Theory Development: Mystery as Method; SAGE Publications Ltd.: New York, NY, USA, 2011. [Google Scholar] [CrossRef]
  55. Tourangeau, R.; Rasinski, K.A. Cognitive processes underlying context effects in attitude measurement. Psychol. Bull. 1988, 103, 299–314. [Google Scholar] [CrossRef]
  56. Sparfeldt, J.R.; Schilling, S.R.; Rost, D.H.; Thiel, A. Blocked versus randomized format of questionnaires. Educ. Psychol. Meas. 2006, 66, 961–974. [Google Scholar] [CrossRef]
  57. Schriesheim, C.A.; Kopelman, R.E.; Solomon, E. The effect of grouped versus randomized questionnaire format on scale reliability and validity: A three-study investigation. Educ. Psychol. Meas. 1989, 49, 487–508. [Google Scholar] [CrossRef]
  58. Solomon, E.; Kopelman, R.E. Questionnaire format and scale reliability: An examination of three modes of item presentation. Psychol. Rep. 1984, 54, 447–452. [Google Scholar] [CrossRef]
  59. Grossmann, I. Wisdom in context. Perspect. Psychol. Sci. 2017, 12, 233–257. [Google Scholar] [CrossRef]
Figure 1. Polyhedron Model of Wisdom.
Table 1. Sample form for the expert content validation.
Item | Subscale | Relevance
Acquiring broad knowledge of the world | | 1 2 3
Adapting behavior when the situation changes | | 1 2 3
Considering the well-being of other people and society | | 1 2 3
Willing to explore ideas with those who have different perspectives and beliefs | | 1 2 3
Recognizing and considering the need to seek contradictory evidence | | 1 2 3
Table 2. POWER Scale content validity based on expert feedback.
Number | Item | Change Reason
Knowledge Management
1Acquiring broad knowledge of the world.
2Acquiring specialized forms of knowledge about the challenge at hand.
3Acquiring experience-based knowledge in the face of a challenging situation
4Synthesizing knowledge from opposing points of view.
5Transferring knowledge into different contextsNR
6Making intentional effort to advance knowledgeNR
7Knowing how to apply appropriate knowledge in a given situation.
8Knowing when to apply appropriate knowledge in a given situation.
Self-Regulation
1Knowing oneself
2Reflecting on the sort of person they are becoming
3Reflecting on what happens around them
4Adjusting cognitive strategiesNR
5Being aware of the limits of their knowledgeO
6Frequently thinking about connections between their past and presentNR
7Willing to admit one’s mistakes
8Correcting one’s mistakes
9Considering the possibility that their beliefs or behaviors may be wrongNR
10Delaying gratificationNR
11Adapting behavior when the situation changes appropriate to the specific situation
12Focusing their attention on what’s most important at the time
13Monitoring their attentionO
14Adjusting their attention when the situation changesNR
15Considering the possibility that their beliefs or behaviors may be wrongO
16Adjusting their emotions to the situation at handO
17Identifying subtle emotions within oneself
18Expressing emotions without losing control (e.g., showing anger without losing control)
Moral Maturity
1Taking on situations where they know their help will be neededNR
2Treating another person, the way they would like to be treated
3Behaving in a manner that also benefits other people rather than just themself
4Considering the well-being of other people and society
5Understanding moral principles
6Considering what is good for humanity in their decisionsO
7Thinking ethically
8Understanding ethical rulesO
9Considering virtue as central to their decisionsO
Tolerance for Uncertainty
1Considering that the validity of information available to humans could be limited
2Understanding that all people have limitations in how much they know
3Considering that the future cannot be fully known in advance
4Being comfortable with unknown situations
5Having tolerance for unexpected events
Openness
1Respect for Having tolerance for beliefs and actions that are unfamiliar
2Respect for Having tolerance for beliefs and actions that may be different from their own
3Being curious about other religious and/or philosophical belief systems
4Willing to explore ideas with those who have different perspectives and beliefs
5Reading works that challenge the reader to think differently about issuesO
6Considering differences in points of viewNR
7Considering contrary positionsNR
8Willing to work with people from different backgrounds
9Being open to new experience such as food and musicO
10Willing to be around people whose views are strongly different from their own
Sound Judgment
1Incorporating reasonable criteria for judgment
2JudgingEvaluating the credibility of an information source
3JudgingEvaluating the relevance of an information source
4Recognizing differences among opinion, reasoned judgment, and fact
5DeterminingEvaluating whether their assumptions are justifiable
6Thinking about different probabilities to improve decision making
7Recognizing and considering the need to seek contradictory evidence
8Perceiving possible compromises between opposing positionsA
9Considering the context in which they are making a judgment
10Making risk-benefit ratio assessmentsO
11Raising vital questions and problems clearly and preciselyNR
12Generating a reasoned method for selecting between several possible courses of actionO
13Presenting a coherent and persuasive argument on a controversial topicNR
14Identifying their assumptions clearlyO
15DeterminingEvaluating the consistency and relevance of the conclusion
Creativity
1Generating unique and novel ideas
2Elaborating on ideas by adding details
3Seeing relationships among ideas
4Synthesizing and recombining ideas to improve the solution
5Having an ability to sense when problems are about to arise
6Having a problem-sensitivity attitudeNR
7Generating useful ideasO
8Generating many ideasNR
9Making new connections among ideasO
10Generating different categories of ideasNR
11Having a risk-taking attitudeNR
12Using analogies to make the unfamiliar knownNR
13Defining a problem in multiple ways and from different viewpointsNR
Note. O: Item eliminated because it overlapped with other items. NR: Item eliminated because the item was not relevant to the component. A: Item was added based on the experts’ suggestions. Strikethrough was used for items or words eliminated and italics for words or items added.
Table 3. Participant demographics.
Variable | In-Service Teachers: Frequency | Preservice Teachers: Frequency
Gender
Female305 (84%) 187 (86%)
Male59 (16%) 31 (14%)
Agender1 (>1%) 0 (0%)
Ethnicity
White315 (86%) 188 (86%)
Black11 (3%) 5 (2%)
White, Other12 (3%) 7 (3%)
Asian7 (2%) 8 (4%)
Latino4 (1%) 0 (0%)
Native Hawaiian or Pacific Islander2 (>1%) 1 (>1%)
Preferred not to answer11 (3%) 2 (>1%)
Other3 (>1%) 7 (3%)
Age Group
Younger than 210 (0%) 149 (68%)
21–2410 (3%) 67 (31%)
25–3488 (24%) 1 (>1%)
35–4487 (24%) 1 (>1%)
45–54102 (28%)
54 or older74 (20%)
Prefer not to answer4 (1%)
Education
Bachelor’s degree86 (24%)Freshman24 (11%)
Master’s degree257 (70%)Junior71 (33%)
Doctoral degree11 (3%)Senior44 (20%)
Professional degree11 (3%)Sophomore79 (36%)
Note: In-service teachers n = 365; preservice teachers n = 218.
Table 4. Test of normality and skewness.
Test | EFA Sample (n = 280) | CFA Sample (n = 284)
Mardia skewness | 28,376.919 * | 21,469.257 *
Mardia kurtosis | 51.94259 * | 2113.769 *
Doornik–Hansen | 996.496 * (df = 92) | 710.787 * (df = 84)
* p value < 0.001.
Table 5. KMO and Bartlett’s Test.
Kaiser–Meyer–Olkin Measure of Sampling Adequacy: 0.902
Bartlett’s Test of Sphericity: Approx. Chi-Square = 8429.395, df = 1035, Sig. = 0.001
Table 6. Factor extraction strategies.
Method 1 (eigenvalues) | Method 2 (MAP) | Method 3 (parallel analysis)
Number of Factors, Total, % Variance, Cumulative % | Number of Factors, MAP Squared Correlation, Power 4 | Root, Means, 95th Percentile
114.46931.45331.45300.10630.194011.051.16
23.4507.50038.95310.02570.003220.951.02 3
32.0294.41243.36520.01930.002030.880.93
41.8353.98947.35430.01800.001540.820.87
51.4963.25350.60740.01730.001250.760.80
61.3883.01853.62550.01580.0010
7 11.2132.63756.26260.01490.0009
80.8541.85658.11870.0142 20.0008
90.7581.64859.76680.01420.007
100.6401.39061.156
1 Eigenvalues from raw data applying >1 rule of thumb, 7 factors. 2 Minimum average partial correlation MAP, 7 factors. 3 Parallel analysis, Eigenvalues generated from the simulated data, 7 factors.
Table 7. Final model from the ULS Equamax rotated factor matrix.
Item | 1 | 2 | 3 | 4 | 5 | 6 | 7 | Omega ω (SE) | [95% CI]
Know10.46 0.74 (0.02)[0.68, 0.79]
Know20.73
Know30.74
Know50.43
Creat1 0.68 0.79 (0.02)[0.72, 0.83]
Creat2 0.78
Creat3 0.55
Creat4 0.56
Self1 0.62 0.85 (0.01)[0.71, 0.88]
Self2 0.67
Self3 0.58
Self4 0.65
Self5 0.64
Self6 0.46
Self9 0.44
Prosoc1 0.70 0.87 (0.01)[0.83, 0.90]
Prosoc2 0.81
Prosoc3 0.78
Prosoc4 0.65
Prosoc5 0.71
Toler1 0.75 0.81 (0.02)[0.74, 0.85]
Toler2 0.72
Toler3 0.72
Openn2 0.52 0.83 (0.02)[0.78, 0.86]
Openn4 0.60
Openn5 0.78
Openn6 0.70
Judg1 0.590.88 (0.01)[0.85, 0.90]
Judg2 0.61
Judg5 0.66
Judg6 0.56
Judg8 0.57
Judg9 0.70
Judg10 0.70
Table 8. CFA Model Fit Indices for the 7-factor solution model as specified by the EFA.
Model Description | χ2 | df | χ2/df | CFI | TLI | RMSEA (95% CI) | SRMR
Improved Seven-Factor Model Using DWLS (Robust) | 973.192 * | 506 | 1.92 | 0.99 | 0.99 | 0.057 (0.052, 0.063) | 0.062
* p value < 0.001.
Table 9. CFA solution.
Item | Standardized DWLS Coefficient | Std. Err. | 95% CI
Know10.700.020.660.74
Know20.770.020.740.81
Know30.750.020.720.79
Know50.780.020.740.82
Creat10.730.020.700.77
Creat20.780.020.740.82
Creat30.760.020.720.79
Creat40.900.020.860.93
Self10.710.020.680.75
Self20.770.020.740.80
Self30.800.020.760.83
Self40.770.020.740.81
Self50.760.020.730.80
Self60.750.020.710.78
Self90.720.020.690.76
Prosoc10.780.020.750.81
Prosoc20.930.010.900.95
Prosoc30.940.010.910.97
Prosoc40.920.010.890.95
Prosoc50.920.010.900.95
Tolera10.770.020.720.81
Tolera20.770.020.730.81
Tolera30.900.020.850.94
Openn60.830.020.800.86
Openn20.850.020.820.88
Openn40.810.020.780.84
Openn50.840.020.810.87
Judg10.740.020.710.78
Judg20.780.020.750.81
Judg50.780.010.750.81
Judg60.840.010.810.87
Judg80.810.010.780.84
Judg90.840.010.810.87
Judg100.880.010.850.91
Note. All estimates were significant at p < 0.001.
Table 10. Correlations among subscales for the Confirmatory Factor Analysis.
Subscale | 1 | 2 | 3 | 4 | 5 | 6 | 7
1. Knowledge | 1.00
2. Creativity | 0.64 | 1.00
3. Self-Regulation | 0.56 | 0.54 | 1.00
4. Moral Maturity | 0.36 | 0.51 | 0.58 | 1.00
5. Tolerance | 0.35 | 0.34 | 0.39 | 0.43 | 1.00
6. Openness | 0.63 | 0.64 | 0.61 | 0.55 | 0.50 | 1.00
7. Judgment | 0.65 | 0.54 | 0.55 | 0.53 | 0.52 | 0.70 | 1.00
Note. All estimates were significant at p < 0.001.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
