Article

Exploring the Moderating Role of Readers’ Perspective in Evaluations of Online Consumer Reviews

School of Computing and Information Systems, The University of Melbourne, Melbourne 3010, Australia
*
Author to whom correspondence should be addressed.
J. Theor. Appl. Electron. Commer. Res. 2021, 16(7), 3406-3424; https://doi.org/10.3390/jtaer16070184
Submission received: 27 September 2021 / Revised: 4 December 2021 / Accepted: 7 December 2021 / Published: 13 December 2021
(This article belongs to the Section Digital Marketing and the Connected Consumer)

Abstract
Drawing upon the heuristic–systematic model (HSM) and taking the reader’s perspective into account, this study predicts that readers’ involvement, and the homophily between the reader and the review author (source), moderate the relationships between the perceived credibility of online reviews and its antecedent factors. To test our hypotheses, we conducted a user study on the Amazon Mechanical Turk platform. The results show that readers’ involvement moderates the effects of source credibility, internal consistency, review objectivity, and review sidedness on review credibility. In addition, homophily between the reader and the source moderates the relationship between source credibility and review credibility. Our study contributes to the information processing literature, especially in the context of online reviews, and suggests a better classification of the attributes of online reviews within the HSM. Moreover, it can help e-commerce platforms customize online reviews for each reader to satisfy their information needs and support better purchasing decisions.

1. Introduction

Web 2.0 has provided tremendous opportunities for users to share their opinions and purchasing experiences in the form of online reviews. It also enables both consumers and businesses to take advantage of mass collaboration and open-source technology (i.e., Wikinomics) to make better decisions in their daily lives [1,2]. Consumers can now access this information almost anywhere and at any time on the internet, including e-commerce platforms, blogs, online stores, and shopping forums [3,4,5]. Prior studies [6,7,8] indicate that prospective consumers use online reviews to assess products, services, and target stores. For instance, Filieri, McLeay [6] suggest that online review information has a positive effect on consumers’ purchase intention. In the same vein, Chong, Khong [9] find that consumers are willing to adopt information from online reviews and that this information has a significant impact on their planning and decisions. Zhang, Ye [10] state that online reviews have a greater impact on consumers’ attitudes than other types of information sources, such as recommendations from professional editors.
Many researchers have investigated the importance of online reviews for consumers, providers, and e-commerce platforms, e.g., [3,6,11,12,13]. While online reviews are important, businesses and some consumers may write fake reviews; hence, understanding the credibility of online reviews (COR) is important to prevent consumers from being misled by non-credible reviews. As a result, the antecedent factors that affect the COR have become an important research topic in the information systems, computer science, and business domains. Prior research explores this issue and identifies antecedent factors to predict the COR [4,8]. Although these scholars make substantial contributions to understanding the different effects of online reviews, to the best of our knowledge, little research has considered the effect of the reader’s perspective on perceived credibility.
Dual-process theories [14,15] and the informational influence literature suggest that attributes related to the (i) message (textual content), (ii) source (writer), and (iii) receiver (reader) change a message’s effect. However, most prior research has focused on only two of these aspects, the message and the source, ignoring the reader’s perspective. We argue that only by examining all three aspects can their effect on the COR be studied. Beyond that, we argue that readers of online reviews draw on multiple attributes, to different degrees, when assessing the COR. Accordingly, a single online review may be perceived differently by different readers depending on their attitudes, such as their involvement with the product or service and the similarity they perceive between themselves and reviewers, called ‘homophily’ [16]. Because prior research has not systematically studied these issues, little is known about how readers’ involvement and homophily change COR assessment. In this study, we aim to fill this gap by investigating how readers’ involvement moderates the relationships between the COR and its antecedent factors. In addition, we explore the moderating role of homophily in the relationship between source credibility and the COR. Thus, this research aims to answer the following questions:
  • How does the readers’ involvement moderate the effect of antecedent factors on COR?
  • Will similarity evaluation between the source and the reader (homophily) moderate the source credibility effect on COR, and if yes, to what extent?
This study makes significant theoretical and practical contributions. From a theoretical perspective, our study contributes to the information processing literature and theories, especially in the online review context, and suggests a better classification of the attributes of online reviews using the HSM. As prior studies, e.g., [3,7,11,17,18,19,20], have already proposed and incorporated moderators and antecedent factors in the online review context, this study extends that scope by incorporating readers’ involvement and homophily into the extant COR theoretical model. From a practical perspective, this study will help online stores and e-commerce platforms design better approaches to customizing online reviews and information for different readers, and will help consumers satisfy their information needs.
The rest of the paper is structured as follows. In Section 2, we explain the theoretical background of this study and propose our research hypotheses. We then describe the methodology of our study in Section 3. In Section 4, we present the results of our study. In Section 5, we discuss our findings and present the theoretical contributions and practical implications. Finally, in Section 6, we conclude the paper with directions for future research.

2. Background and Hypotheses Development

In this section, we first review studies on review credibility. We then explain how readers’ involvement moderates the relationships between review credibility and its antecedent factors, using the HSM as our theoretical lens, and propose our hypotheses accordingly. Finally, we discuss the interaction effect of homophily on the credibility evaluation of online reviews.

2.1. Studies on Review Credibility

Assessing the COR is a specific application of the general issue of deception detection [21,22], where scholars use a variety of clues related to the text or its source to evaluate the credibility of information.
Prior studies mainly used three different approaches to tackle this problem: (i) review-centric approach, (ii) reviewer-centric approach, and (iii) combination of both review and reviewer-centric approaches. In the first approach (i.e., review-centric), researchers mainly focused on the textual (linguistic) characteristics of a review to assess its credibility [4,12,23,24]. For instance, they found that it is possible to distinguish credible reviews from fake ones using subtle linguistic characteristics in the text of a review, including term frequency, review sidedness and sentiment.
In the second approach (i.e., reviewer-centric), scholars mostly studied the characteristics of the reviewers and attempted to distinguish behaviors of spammers and bots from genuine reviewers, using different clues, including the reviewer posting rates, the total number of reviews written by a reviewer, the rating behavior of a reviewer and the number of videos, pictures or links uploaded by a reviewer [4,11,25,26].
In the third approach, researchers, e.g., [3,11,27,28], used a combination of textual clues and reviewers’ characteristics and showed that incorporating both yields better results in COR evaluation. In general, the literature shows that evaluating the COR can be more challenging than other deception detection problems on the internet [28]. This may be primarily because online reviews concern the experiences or opinions of consumers toward a product or service, and there is no authority to validate these experiences [4].

2.2. Heuristic–Systematic Model and the Moderating Role of Reader’s Involvement

We adopt the heuristic–systematic model (HSM) to investigate the determinants of the COR under different involvement conditions [15]. The HSM postulates that individuals may engage in systematic and/or heuristic processing when evaluating information. Individuals engage in systematic processing when they are highly motivated or capable; they expend high cognitive effort to elaborate the information and spend more time carefully assessing each piece of information before making a decision [13,15,29]. Conversely, individuals adopt heuristic processing when they are less motivated or capable; in this case, they often rely on informational shortcuts, such as simple decision rules, to evaluate information and make decisions [15,30].
According to the HSM, involvement in evaluating information can be treated as a moderating attribute [15,31]. Based on this theory, when individuals read an online review, they begin to evaluate its information. Depending on the reader’s level of involvement, systematic and/or heuristic processing can be adopted independently or simultaneously, and the two modes can affect each other in complex ways [31]. The HSM has therefore been used to investigate how consumers adopt and evaluate information in e-commerce research [11,30,32]. Based on a comprehensive review of prior related studies [3,4,11,17,18,19,29,30,33], we consider argument quality, source credibility, review objectivity, internal consistency, review sidedness, external consistency, and review fluency as the antecedent factors predicting the COR. Prior studies, e.g., [11,19], have treated argument quality as the only systematic processing attribute in the context of online reviews; moreover, message quality has been consistently used as the main criterion in the communication and persuasion literature [34,35]. Likewise, we consider the argument quality of a review as the only systematic processing attribute in this study, and source credibility, review objectivity, internal consistency, review sidedness, external consistency, and review fluency as the heuristic processing factors.
Figure 1 presents our research model. We describe each of these attributes in further detail and discuss the possible effect of the reader’s involvement on them.

2.2.1. Argument Quality

Argument quality refers to the persuasive strength of information or the plausibility of the argumentation; in other words, it is the extent to which the reader finds the review’s argument convincing [4,20]. Prior research has empirically confirmed the significant positive impact of argument quality on the COR [3,11,17,19,36]. When a consumer review does not contain a convincing argument, the reader will treat it as a fake review. However, when a consumer review sufficiently explains its argument, an individual tends to consider it credible [3].
In this study, we predict that the effect of argument quality on the COR is stronger for readers with a high level of involvement than for readers with a low level of involvement. The HSM assumes that when individuals are capable or highly involved, they are more likely to evaluate a consumer review holistically, based on the quality of its argument rather than on shortcuts. In contrast, individuals with a low level of involvement are less willing to scrutinize the content and tend to evaluate information based on simple cues. Thus, we hypothesize that:
Hypothesis 1 (H1).
The effect of argument quality on the COR will be stronger if consumers have a high level of involvement, compared to consumers with a low level of involvement.

2.2.2. Review Objectivity

Review objectivity refers to the extent to which a review contains logical and fact-based information around the experience of a consumer with a service or product [29]. Subjective reviews are usually colored by the source’s opinion and, consequently, do not present factual information. In contrast, objective reviews are not affected by the sources’ opinions, as they provide information on specific events or facts related to a service or product. Prior research also has shown the positive effect of review objectivity on the COR [3,11].
In this study, we examine the moderating effect of the reader’s involvement on the relationship between review objectivity and the COR. We predict that readers with a low level of involvement will be more affected by review objectivity, because consumers with a low level of involvement tend to use less cognitive effort to process information, relying on simple cues such as the objectivity or subjectivity of information to evaluate the COR. On the other hand, highly involved readers use review objectivity along with other important attributes to assess the COR; thus, they will be less affected by the objectivity of a review when judging the COR. Thus:
Hypothesis 2 (H2).
The effect of objective reviews (compared with subjective reviews) on the COR will be stronger if consumers have a low level of involvement, compared to consumers with a high level of involvement.

2.2.3. Internal Consistency

Internal consistency refers to the consistency among different elements within a particular review including the consistency between the valence (e.g., stars rating) and the content of a review [20]. For instance, as stated by Abedin, Mendoza [4] “Considering the valence and the content of a review come from two different sources, the review valence (stars rating) and the content might not be aligned with each other.”
We expect that the reader’s involvement moderates the effect of internal consistency on the COR, for reasons similar to the moderating effect of the reader’s involvement on review objectivity. Readers with a low level of involvement are inclined to use heuristics and simple cues to assess the COR, as doing so requires less time and cognitive effort; thus, they will be more affected by internal consistency. On the other hand, highly involved consumers will weigh internal consistency together with the systematic factor to evaluate the COR. Thus:
Hypothesis 3 (H3).
The effect of internal consistency on the COR will be stronger if consumers have a low level of involvement, compared to consumers with a high level of involvement.

2.2.4. Review Fluency

Review fluency refers to the quality that makes a review readable and easy to comprehend. Readers may evaluate the fluency of an online review using criteria such as the length of words and sentences, the quality of grammar and spelling, the text presentation style, and the understandability of the text [7,37,38]. Previous studies indicate that consumers consider easy-to-read material more familiar [7,39] and, to some extent, find it easier to trust reviews that look more familiar to them. Thus, easy-to-read reviews may be judged as more credible [7,40,41].
In this study, we examine the moderating role of the reader’s involvement in the relationship between review fluency and the COR. We expect that review fluency is among the heuristic attributes that do not require high cognitive effort or time; as such, consumers with a low level of involvement use this attribute as a hint to evaluate the COR. Accordingly, these consumers will be more affected by review fluency when assessing review credibility than highly involved consumers, because consumers with a high level of involvement often analyze different aspects of reviews simultaneously. Thus, even if a review contains poor grammar and/or spelling, they may be less suspicious of that information because they believe the reviewer is human and human error is inevitable. Thus:
Hypothesis 4 (H4).
The effect of review fluency on the COR will be stronger if consumers have a low level of involvement, compared to consumers with a high level of involvement.

2.2.5. External Consistency

External consistency refers to “the extent to which information in a review is consistent with information in other reviews” [19]. Prior research has indicated that external consistency positively affects the COR [3,11,17,18]. This is because individuals will generally accept a review that is consistent across most reviews [42]. In contrast, consumers will be more skeptical toward a review which is in contrast with the majority of reviews [19].
In this study, we expect that consumers’ involvement moderates the effect of external consistency on the COR. When making a purchase decision, a consumer with a low level of involvement often reads a few reviews to find convergence among the information and determine whether a particular review is similar to other reviews of the same target; thus, we expect these consumers to be more affected by external consistency. However, consumers with a high level of involvement may be less inclined to judge the credibility of a review based on peripheral cues like other consumers’ opinions; instead, they tend to use more systematic factors to adopt information and assess its credibility. Thus:
Hypothesis 5 (H5).
The effect of external consistency on the COR will be stronger if consumers have a low level of involvement, compared to consumers with a high level of involvement.

2.2.6. Review Sidedness

Review sidedness means “whether a review is one-sided or two-sided. A one-sided review contains either positive or negative product comments, whereas a two-sided review contains both positive and negative comments on a product” [19]. Prior research has indicated that online reviews accompanied by two-sided information are perceived as more credible than one-sided reviews [17,43].
We expect that readers’ involvement will moderate the influence of review sidedness on the COR. Readers with a high level of involvement are inclined to make their decisions through extensive cognitive processing, carefully weighing all pieces of information; as such, they depend less on a simple cue like review sidedness. However, readers with a low level of involvement depend more on heuristic attributes such as information sidedness to evaluate an online review [17]. Thus, we conjecture that consumers with a low level of involvement will be more influenced by review sidedness than those with a high level of involvement. Thus, we postulate that:
Hypothesis 6 (H6).
The effect of review sidedness on the COR will be stronger if consumers have a low level of involvement, compared to consumers with a high level of involvement.

2.2.7. Perceived Source Credibility

In this study, another important heuristic attribute is a characteristic of the information source, namely its credibility. Source credibility refers to “the extent to which an information source (reviewer) is perceived to be believable, trustworthy and competent by a reader” [4]. Prior research has demonstrated that readers consider source credibility an important signal of review credibility [3,11,19,43]. The positive effect of source credibility has also been shown in the previous literature [7,17,44].
We predict that the influence of a source’s credibility on the COR will be higher for the readers with a low level of involvement compared to highly involved consumers. Similar to other heuristic attributes, this is because online review readers with a lower level of involvement use the reviewer’s credibility as a shortcut or peripheral cue to assess the COR and make their decisions; accordingly, source credibility has a stronger impact on these consumers. Thus, we postulate that:
Hypothesis 7 (H7).
The effect of source credibility on the COR will be stronger if consumers have a low level of involvement, compared to consumers with a high level of involvement.

2.3. The Moderating Role of Source Homophily

Homophily refers to the extent to which “pairs of individuals who interact are similar with respect to certain attributes such as beliefs, values, education, social status, etc.” [16]. According to the theory of social comparison [45], individuals tend to compare their capabilities and attitudes to others, and if they realize that there is a similarity between another person and themselves, they will implicitly presume that they also have similar preferences and requirements [6].
In an online environment, although users do not interact face to face, they can feel a connection through similarity with a source (reviewer) by reading their online reviews and analyzing their profiles, for example, their age, gender, profile picture, country, and whether they are a novice or top reviewer. Consequently, users can discover more about the preferences, values, and experiences of a source. Thus, in this research, we study perceived similarity with an online source, that is, homophily, which concerns similarities among consumers in terms of their values, personalities, experiences, likes, and dislikes [6,16,40]. As source homophily focuses on the relationship between the source (reviewer) and the reader (consumer), we expect that homophily only moderates the effect of source credibility. That is, a consumer with a high level of homophily tends to be more influenced by the writer of a review than a consumer with a low level of homophily, because they perceive the source as more similar to themselves and thus more relevant. Thus:
Hypothesis 8 (H8).
The effect of source credibility on the COR will be stronger if consumers have a high level of homophily perception with the source, compared to consumers with a low level of homophily.

3. Methodology

We designed a study to gather data and test our research hypotheses. Data collection was administered through Amazon Mechanical Turk (AMT), and regression modeling was performed to analyze the data and answer the research questions. In the following sections, we explain our research methodology.

3.1. Measures and Questionnaire Design

All the items for each construct used in the survey were measured through a Likert scale (from strongly disagree to strongly agree). We also embedded some control questions into the survey to check the validity of the responses and data collected.
Table 1 presents the constructs and their corresponding items used in this study. As shown in Table 1, all the items (except those for internal consistency) were adapted from the existing literature, with minor modifications to fit the context of our study. The response items for “internal consistency” are new because we could not find a reliable scale for this attribute in the existing literature. Appendix A explains the scale development process for this attribute.
The survey comprises four sections. In the first section, we provide participants with an introduction, including a brief description of the project and its aims. In the second section, we present all the constructs, items, and survey questions. In the third section, we ask each participant demographic questions. Lastly, in the fourth section, each participant receives a unique code and is informed of the process to receive their payment.

3.2. Field Data

Before the main administration of the survey, we conducted pilot tests to ensure that there were no issues with the survey’s components. To do so, we gathered comments from all the authors of this paper and 44 online users. The main data collection was carried out through AMT, which allows us to recruit qualified respondents who are members of online communities and users of e-commerce platforms. Collecting our sample from actual online users strengthens the validity of our research.
During the data collection process, respondents were notified that they would receive ~1.5 AUD for their participation. If they agreed, we asked them to read one consumer review, and then respond to the survey’s questions.
We collected 471 responses; of these, 46 respondents were excluded from our pool because they did not pass the control questions, leaving 425 valid subjects. Table 2 presents the demographic information of our sample.

4. Results

In this section, we first discuss the measurement model analyses, followed by tests for common method bias. Finally, we present the structural model analyses.

4.1. Measurement Model Analyses

In this research, consistent with prior studies e.g., [6,17,47], we performed confirmatory factor analysis (CFA) to examine the measurement model. The data shows a good model fit: χ2/df = 1.708, CFI = 0.979, PClose = 0.997, RMSEA = 0.041, and SRMR = 0.031 [50].
We used the average variance extracted (AVE), composite reliability (CR), Cronbach’s alpha (α), and item reliability for each construct to assess convergent validity, as recommended by Fornell and Larcker [51]. As displayed in Table 3, for each construct, both CR and α are higher than 0.8 and the AVE is greater than 0.5, all well above the suggested thresholds [51]. In addition, all the factor loadings (FL) are higher than 0.7, which confirms the reliability of all the items used in this study.
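As a minimal illustration of how these convergent-validity statistics are derived (using hypothetical loadings, not the values reported in Table 3), the AVE and CR of a construct can be computed from its standardized factor loadings:

```python
def ave(loadings):
    """Average variance extracted: the mean of the squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

def composite_reliability(loadings):
    """Composite reliability (Fornell-Larcker): the squared sum of loadings
    divided by the squared sum of loadings plus the summed error variances."""
    s = sum(loadings)
    errors = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + errors)

# Hypothetical standardized loadings for a four-item construct
loadings = [0.82, 0.85, 0.79, 0.88]
print(ave(loadings) > 0.5)                    # exceeds the 0.5 threshold
print(composite_reliability(loadings) > 0.8)  # exceeds the 0.8 threshold
```

Both statistics rise as items load more strongly on their construct, which is why loadings above 0.7 generally imply an AVE above 0.5.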
In terms of discriminant validity, we calculated the square root of the AVE and conducted a heterotrait–monotrait (HTMT) ratio of correlations analysis, which is the most conservative test of discriminant validity [52,53]. As shown in Table 4, the square root of the AVE for each construct is greater than the cross-correlations. In addition, as presented in Table A1 in Appendix B, all the values in the HTMT analysis are below the threshold of 0.85 introduced by Henseler, Ringle [53], which confirms the discriminant validity of the constructs in the research model [50,51].
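The HTMT ratio can likewise be sketched directly from item scores. The following toy computation (on synthetic data, not our survey items) divides the mean between-construct item correlation by the geometric mean of the average within-construct item correlations:

```python
import numpy as np

def htmt(X, Y):
    """Heterotrait-monotrait ratio for two constructs, given item-score
    matrices X (n x p) and Y (n x q): the mean between-construct item
    correlation over the geometric mean of the average within-construct
    item correlations."""
    p, q = X.shape[1], Y.shape[1]
    R = np.corrcoef(np.hstack([X, Y]), rowvar=False)
    hetero = R[:p, p:].mean()                             # between-construct
    within_x = R[:p, :p][np.triu_indices(p, k=1)].mean()  # within X
    within_y = R[p:, p:][np.triu_indices(q, k=1)].mean()  # within Y
    return hetero / np.sqrt(within_x * within_y)

# Synthetic item scores: two constructs driven by distinct latent factors
rng = np.random.default_rng(0)
f1, f2 = rng.normal(size=(500, 1)), rng.normal(size=(500, 1))
X = f1 + 0.5 * rng.normal(size=(500, 3))
Y = f2 + 0.5 * rng.normal(size=(500, 3))
ratio = htmt(X, Y)
print(ratio < 0.85)   # distinct constructs stay below the 0.85 threshold
```

Had the two item sets been driven by the same latent factor, the ratio would approach 1, signaling a discriminant-validity problem.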

4.2. Common Method Bias

We performed the following statistical analyses, recommended by Podsakoff, MacKenzie [54], to check for common method bias and multicollinearity in our study: Harman’s single-factor test, the marker variable test, and the variance inflation factor (VIF). For example, as shown in Table 5, VIFs range from 1.349 to 2.749, all below the threshold of 5 [55,56].
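For illustration, a VIF can be obtained by regressing each predictor on the remaining predictors and computing 1/(1 − R²); the data below are synthetic, not our survey measures:

```python
import numpy as np

def vif(X):
    """Variance inflation factors: regress each column on the others
    and return 1 / (1 - R^2) for each."""
    n, k = X.shape
    out = []
    for j in range(k):
        y = X[:, j]
        A = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        r2 = 1 - (y - A @ coef).var() / y.var()
        out.append(1 / (1 - r2))
    return out

# Synthetic predictors: the third is nearly a copy of the first
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 3))
X[:, 2] = X[:, 0] + 0.3 * rng.normal(size=400)
print(max(vif(X)) > 5)   # strong collinearity inflates the VIF above 5
```

Independent predictors yield VIFs near 1, which is why the observed range of 1.349 to 2.749 indicates no serious multicollinearity.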

4.3. Structural Equation Modeling Analysis

After standardizing all the data (Z-scores), we performed structural equation modeling (SEM) to test the main effects of the seven independent variables on information credibility and to (re)confirm or reject the findings of prior studies. As shown in Table 6, six of the independent variables (review objectivity, source credibility, review sidedness, argument quality, review fluency, and internal consistency) significantly affect the COR at p < 0.005, while external consistency positively affects the COR only at p < 0.1.
Next, we tested the moderating effects of reader’s involvement (RI) on the causal relationships between information credibility and its influencing variables (H1–H7). We also tested the moderating effect of homophily on the relationship between source credibility and information credibility (H8). To do so, we built moderated multiple regression models [3,57]. We created seven product terms by multiplying RI with each of the seven independent variables, and an eighth product term by multiplying homophily (SH) with source credibility (SC). We then added these eight product terms, the moderator variables RI and SH, and the seven independent variables to the model. A significant product term indicates a moderating effect of RI or SH on the corresponding factor in our proposed model.
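The product-term procedure can be sketched as follows, using simulated data and hypothetical variable names (our actual model includes all seven predictors and both moderators):

```python
import numpy as np

def zscore(v):
    return (v - v.mean()) / v.std()

def moderated_regression(y, x, m):
    """OLS of standardized y on standardized x, m, and their product term;
    returns (intercept, b_x, b_m, b_interaction)."""
    xz, mz = zscore(x), zscore(m)
    A = np.column_stack([np.ones(len(y)), xz, mz, xz * mz])
    coef, *_ = np.linalg.lstsq(A, zscore(y), rcond=None)
    return coef

# Simulated data in which the effect of source credibility (sc) on
# review credibility (rc) weakens as reader involvement (ri) rises
rng = np.random.default_rng(42)
sc, ri = rng.normal(size=1000), rng.normal(size=1000)
rc = 0.5 * sc + 0.2 * ri - 0.3 * sc * ri + rng.normal(scale=0.5, size=1000)
_, b_sc, b_ri, b_int = moderated_regression(rc, sc, ri)
print(b_int < 0)   # a significant negative product term signals moderation
```

Standardizing before forming the product term keeps the main-effect coefficients interpretable at the mean of the moderator.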
As shown in Table 6, model 1 tests the main effects of the independent variables on information credibility, model 2 examines the moderating effect of RI separately, and model 3 tests the moderating effects of RI and SH together. The results indicate that RI moderates four of the seven relationships between information credibility and its influencing factors. Moreover, SH moderates the relationship between SC and review credibility (RC) with a significant negative effect, the direction opposite to that predicted by Hypothesis 8.
In addition, to investigate the internal mechanism of these moderating effects, we performed simple slopes tests [3,57,58]. This test is useful for understanding the interaction effects of two continuous variables [3,57]. Following the procedure in [3,57], we calculated and graphed the regression lines, and examined the significance of the causal relationships between the dependent and independent variables under low and high levels of our moderators. Figure 2, Figure 3, Figure 4, Figure 5, Figure 6, Figure 7, Figure 8 and Figure 9 illustrate the results of this analysis.
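A simple-slopes test of this kind can be sketched as follows: fit the interaction model, then evaluate the slope of the predictor at one standard deviation below and above the moderator’s mean (simulated data, not our survey results):

```python
import numpy as np

# Simulated data: the effect of a predictor (e.g., review sidedness)
# weakens as the moderator (e.g., reader involvement) rises.
rng = np.random.default_rng(7)
n = 1000
x = rng.normal(size=n)                 # predictor
m = rng.normal(size=n)                 # moderator (standard normal, so SD ~ 1)
y = 0.4 * x - 0.25 * x * m + rng.normal(scale=0.5, size=n)

# Fit y ~ x + m + x*m
A = np.column_stack([np.ones(n), x, m, x * m])
b, *_ = np.linalg.lstsq(A, y, rcond=None)

slope_low = b[1] + b[3] * (-1.0)   # simple slope at m = -1 SD
slope_high = b[1] + b[3] * (+1.0)  # simple slope at m = +1 SD
print(slope_low > slope_high)      # steeper slope under low involvement
```

Graphing the two fitted lines over the range of the predictor reproduces the kind of plot shown in Figures 2 to 9.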
Figure 2 illustrates that the slope is steeper when a reader’s involvement is high than when it is low. This shows that source credibility enhances the COR for readers under both high and low involvement, but its impact is stronger for highly involved readers. As shown in Figure 3, readers with a low level of involvement are more inclined to trust two-sided reviews than one-sided ones, whereas highly involved readers treated the two kinds of reviews almost the same.
Figure 4 illustrates that readers with a low level of involvement evaluate objective reviews as more credible than subjective ones, whereas readers with a high level of involvement are not willing to consider this aspect in order to assess the COR.
As shown in Figure 5, readers with a low level of involvement consider reviews with high internal consistency as more credible than reviews with a lack of internal consistency. However, readers with a high level of involvement are not willing to use this information attribute in order to assess the COR.
Figure 6 demonstrates that when homophily between the reader and the source of a review is low, the slope is steeper than when homophily is high. This means that, to evaluate the COR, source credibility has a higher impact on readers with a low level of homophily than readers with a high level of homophily.
Figure 7, Figure 8 and Figure 9 illustrate that, regardless of whether readers have a low or high level of involvement, the effects of argument quality, external consistency and review fluency on their evaluation of the COR were almost the same, which shows that readers’ involvement did not have significant moderating effects on these three attributes.

5. Discussion

In this study, we systematically investigated the moderating effect of reader’s involvement (RI) on the relationships between the COR and its antecedent factors. In addition, we explored the moderating role of homophily in the causal effect of source credibility on the COR. First, we tested the main effects in our proposed research model. The results indicate that most of the antecedent factors, namely argument quality, review sidedness, review objectivity, internal consistency, review fluency, and source credibility, significantly impact the COR. In contrast, external consistency was not significant in this model, showing that not all independent attributes affect the reader of a review. Next, we tested hypotheses H1–H7, which predict that RI moderates the effects of the antecedent factors on the COR. The statistical results validate the importance of exploring readers’ perspectives in the processing of online review information.
We found that the effect of argument quality does not vary significantly across readers (consumers) with different levels of involvement, so H1 was not supported. We attribute this to the fact that consumers visit e-commerce platforms, such as online stores, deliberately to find relevant information and make purchase decisions. They are therefore motivated and involved enough to read the arguments of a review and assess its quality. In other words, consumers reading online reviews consider the reviews' arguments when making judgments no matter how involved they are, so the effect of argument quality remains constant across consumers with different levels of involvement. For review objectivity, review sidedness, and internal consistency, the results show that RI moderates their effects on review credibility. As we predicted, the positive effects of review objectivity, internal consistency, and review sidedness on the COR were attenuated as the involvement level increased, confirming H2, H3, and H6, respectively.
The results indicate that external consistency does not have a strong impact on the COR. This is consistent with the finding of Thomas et al. [59], but contradicts that of Luo et al. [3], who report that consistency among reviews significantly affects the assessment of the COR. We believe our study could help explain these inconsistent results. On many e-commerce platforms there is a vast number of online reviews for each product or service, whereas some local platforms have only a few; thus, the effect of external consistency may vary with the platform and the number of reviews. For instance, in a local online store where there is only a handful of reviews for a product, it is relatively easy for a customer to read all the reviews and judge the consistency among them, so external consistency can play a key role in assessing the COR. However, when there is a large number of reviews for a product, customers might not be able to read all of them and perceive the convergence among different viewpoints, so external consistency might not be a strong attribute for judging the COR [2]. The results also indicate that RI has no moderating effect on review fluency or external consistency. This means that review fluency and external consistency act as constant attributes in shaping the COR, regardless of consumers' involvement levels, which does not support H4 and H5.
The findings suggest that, on e-commerce sites, consumers consider the credibility of a source to be an important attribute when evaluating the COR. In addition, we found that the positive effect of source credibility on the COR is strengthened as consumers' involvement level increases. This finding is the reverse of hypothesis H7, which predicted that the effect of source credibility on the COR would be stronger for consumers with a lower level of involvement. One explanation is that online reviews mostly convey consumers' experiences and opinions about a product or service rather than knowledge or facts. In addition, as suggested by Shan [61], the persuasiveness of an online review has often been attributed to its source credibility. Moreover, in the context of online reviews, it is challenging to understand reviewers' motives, and doing so requires considerable cognitive effort. Thus, readers with a low level of involvement are less motivated to scrutinize a reviewer's profile to assess the COR, whereas readers with a higher level of involvement are more inclined to carefully evaluate the reviewer's profile before adopting the information and making their purchase decision. This is another interesting finding of this study, and it is consistent with prior research suggesting that an information cue may act as a heuristic (peripheral) factor in some circumstances but as a systematic factor in others [19,60].
Our findings show that homophily negatively moderates the causal relationship between source credibility and the COR, the reverse of H8's prediction, indicating that the strong effect of source credibility on the COR is attenuated when the reader feels high homophily with the writer. To further clarify this effect, we conducted an extensive literature review. Perhaps the closest study to this finding is Shan [61], who conducted two experiments to investigate the effect of system-generated and self-generated cues on source credibility judgments. According to that research, perceived similarity between the writer and the reader (homophily) negatively influences perceived source expertise. In other words, although source credibility has a strong positive impact on the COR, this relationship is stronger when the similarity between reader and writer is low, and a higher level of similarity weakens it. This is an interesting finding because it shows that being similar to a consumer could discount a reviewer's perceived expertise, and accordingly reduce their credibility, because consumers tend to judge such reviewers as being in close proximity to themselves.
Finally, the findings illustrate that, in the context of online reviews, the impacts of review attributes are highly complex. For instance, it is not advisable to simply classify argument quality as the only systematic processing factor and the other attributes, including source credibility and external consistency, as heuristic ones, as suggested by prior research, e.g., [19]. According to our findings, the credibility of a source plays a key role in assessing the COR; moreover, it has a greater impact on highly involved readers than on readers with a low level of involvement, and can therefore be considered a systematic attribute rather than a heuristic one. In addition, as supported by the HSM [15,31], an attribute such as source credibility may serve as a heuristic processing factor in some situations but as a systematic processing factor in others. Thus, we recommend that future research focus on classifying the attributes used in this study under different conditions.

Theoretical Contributions and Practical Implications

This research makes important theoretical contributions to the online reviews area. We extend the scope of research on the credibility of online reviews by incorporating the moderating role of the reader's attributes (reader's involvement and homophily) into the HSM. Our research is consistent with prior related studies [3,17,19], which suggest that readers' attributes, e.g., level of expertise and sense of membership, serve as moderators, rather than independent antecedents, in the credibility evaluation of online reviews.
In fact, the findings show that readers' involvement has different moderating effects on the antecedent factors of the COR, indicating that different online readers use different attributes to assess the COR; accordingly, the influences of the various review attributes can be differentiated. Specifically, we found that the reader's involvement strengthens the direct effect of source credibility on the COR, whereas it attenuates the effects of internal consistency, review sidedness, and review objectivity. It does not significantly moderate the effects of argument quality, review fluency, or external consistency on the COR.
This research also provides several practical implications for online stores and other e-commerce platforms. Potential consumers have different characteristics; as such, e-commerce platforms should customize online reviews for each consumer in order to attract and motivate them to purchase. One practical way to do this is to capture consumers' log-in information or create a profile for each consumer. The platform can then classify consumers into categories based on their profiles and recommend customized review information to each category. By doing so, the platform will not only help consumers make better purchase decisions, but also help businesses achieve their goals and improve their performance.
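As a purely illustrative sketch of such customization (every field name, weight, and threshold below is hypothetical, not from the study), a platform might derive an involvement proxy from a consumer's profile and then surface the review attributes that our moderation results associate with that segment:

```python
# Hypothetical sketch: segment logged-in consumers by a toy involvement
# score, then prioritize review attributes per the study's findings.
# The profile fields, weights, and the threshold of 5 are invented.

def involvement_segment(profile):
    """Return 'high' or 'low' based on an invented involvement score."""
    score = 0.5 * profile["reviews_read_per_session"] + profile["avg_dwell_minutes"]
    return "high" if score >= 5 else "low"

def prioritized_attributes(segment):
    # High-involvement readers weighted source credibility most heavily;
    # low-involvement readers also leaned on objectivity, sidedness,
    # and internal consistency (per the moderation results).
    if segment == "high":
        return ["source_credibility", "argument_quality"]
    return ["review_objectivity", "review_sidedness", "internal_consistency"]

profile = {"reviews_read_per_session": 8, "avg_dwell_minutes": 3}
seg = involvement_segment(profile)
print(seg, prioritized_attributes(seg))
```

In practice the segmentation would be learned from behavioral data rather than hand-set, but the re-ranking step would follow the same pattern.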

6. Conclusions and Future Research

Using the HSM as the theoretical lens, this paper explores how readers' attributes (i.e., involvement and homophily) moderate the relationships between the credibility of online consumer reviews and its antecedent factors. Our findings show that a single online review may be evaluated differently by different readers, depending on their attitudes. This study provides a better classification of the attributes related to the credibility of online reviews using the HSM and helps e-commerce platforms customize online reviews for different consumers to satisfy their information needs.
We suggest that future research compare different types of products and/or services (e.g., search, experience, and credence categories), because we believe consumers judge online reviews for each category differently. That is, during the assessment of online reviews, consumers use different attributes to different degrees, depending on the product or service type, to make their decisions. Finally, the influence of demographic variables on the credibility evaluation of online reviews could be explored in further studies, considering that digital platforms increasingly hold more information about their users.

Author Contributions

Conceptualization, E.A., A.M. and S.K.; methodology, E.A., A.M. and S.K.; software, E.A., A.M. and S.K.; validation, E.A., A.M. and S.K.; formal analysis, E.A., A.M. and S.K.; writing—original draft preparation, E.A., A.M. and S.K.; writing—review and editing, E.A., A.M. and S.K.; visualization, E.A., A.M. and S.K.; supervision, A.M. and S.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This study was reviewed and approved by the ethics committee of The University of Melbourne (Ethics ID: 1953689, May 2020).

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Scale Development Process

The new items for the "internal consistency" construct in the survey were pilot tested in three rounds: the first with three scholars with expertise in this area, the second with seven PhD students in the information systems domain, and the third with 92 AMT users. These pilot rounds enabled us to refine and improve the response items for the "internal consistency" construct used in this research. Next, the validity and reliability of these new items were tested through exploratory factor analysis (EFA), together with Cronbach's alpha and composite reliability. We conducted the EFA by principal axis factoring with Promax rotation. All the items had factor loadings above 0.6 and no communalities below 0.33. In addition, the values of Cronbach's alpha and composite reliability were above the 0.60 threshold proposed by Bagozzi and Yi [62].
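The two reliability indices used above follow standard formulas: Cronbach's alpha from the item variances, and composite reliability from the standardized loadings. A minimal sketch (our illustration; the study used dedicated statistical software):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def composite_reliability(loadings):
    """Composite reliability from standardized factor loadings."""
    lam = np.asarray(loadings, dtype=float)
    num = lam.sum() ** 2
    return num / (num + (1.0 - lam ** 2).sum())

# The internal-consistency loadings reported in Table 3 (0.780, 0.909,
# 0.811) yield a composite reliability close to the 0.876 reported there
# (small differences arise from rounding of the published loadings).
print(round(composite_reliability([0.780, 0.909, 0.811]), 3))
```

Both indices range over [0, 1], and values above the 0.60 threshold of Bagozzi and Yi [62] indicate acceptable reliability.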

Appendix B

Table A1. Heterotrait–Monotrait Ratio.

     SH     RC     SC     RS     AQ     RF     RO     RI     IC     EC
SH
RC   0.530
SC   0.431  0.743
RS   0.232  0.260  0.141
AQ   0.458  0.672  0.659  0.127
RF   0.261  0.582  0.504  0.027  0.395
RO   0.418  0.603  0.493  0.237  0.554  0.363
RI   0.169  0.258  0.333  0.028  0.235  0.382  0.142
IC   0.263  0.447  0.341  0.065  0.399  0.514  0.310  0.365
EC   0.390  0.415  0.296  0.326  0.403  0.184  0.487  0.067  0.188

Note: All values are below the conservative 0.85 threshold [53], supporting discriminant validity.
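The HTMT values in Table A1 can be reproduced from the item correlation matrix: the mean correlation between the items of two constructs, divided by the geometric mean of the mean within-construct item correlations (Henseler et al. [53]). A minimal sketch on a toy correlation matrix (not the study's data):

```python
import numpy as np

def htmt(R, idx_a, idx_b):
    """Heterotrait-monotrait ratio from an item correlation matrix R.

    idx_a, idx_b: item indices belonging to constructs A and B.
    """
    R = np.asarray(R)
    # Mean correlation between items of different constructs.
    hetero = R[np.ix_(idx_a, idx_b)].mean()
    # Mean off-diagonal correlation among items of one construct.
    mono = lambda idx: R[np.ix_(idx, idx)][np.triu_indices(len(idx), k=1)].mean()
    return hetero / np.sqrt(mono(idx_a) * mono(idx_b))

# Toy 2-construct example: items 0-2 load on A, items 3-5 on B.
R = np.array([
    [1.0, 0.7, 0.7, 0.3, 0.3, 0.3],
    [0.7, 1.0, 0.7, 0.3, 0.3, 0.3],
    [0.7, 0.7, 1.0, 0.3, 0.3, 0.3],
    [0.3, 0.3, 0.3, 1.0, 0.7, 0.7],
    [0.3, 0.3, 0.3, 0.7, 1.0, 0.7],
    [0.3, 0.3, 0.3, 0.7, 0.7, 1.0],
])
print(round(htmt(R, [0, 1, 2], [3, 4, 5]), 3))  # 0.3 / 0.7 = 0.429
```

Ratios well below 0.85 (or the more liberal 0.90) indicate that the two constructs are empirically distinct.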

References

  1. Williams, A.D.; Tapscott, D. Wikinomics; Atlantic Books Ltd.: London, UK, 2011. [Google Scholar]
  2. Van Dijck, J.; Nieborg, D. Wikinomics and its discontents: A critical analysis of Web 2.0 business manifestos. New Media Soc. 2009, 11, 855–874. [Google Scholar] [CrossRef] [Green Version]
  3. Luo, C.; Luo, X.; Xu, Y.; Warkentin, M.; Sia, C.L. Examining the moderating role of sense of membership in online review evaluations. Inf. Manag. 2015, 52, 305–316. [Google Scholar] [CrossRef]
  4. Abedin, E.; Mendoza, A.; Karunasekera, S. Credible vs Fake: A Literature Review on Differentiating Online Reviews based on Credibility. In Proceedings of the International Conference on Information Systems (ICIS 2020), Hyderabad, India, 13–16 December 2020. [Google Scholar]
  5. Baumeister, R.F.; Bratslavsky, E.; Finkenauer, C.; Vohs, K.D. Bad is stronger than good. Rev. Gen. Psychol. 2001, 5, 323–370. [Google Scholar] [CrossRef]
  6. Filieri, R.; McLeay, F.; Tsui, B.; Lin, Z. Consumer perceptions of information helpfulness and determinants of purchase intention in online consumer reviews of services. Inf. Manag. 2018, 55, 956–970. [Google Scholar] [CrossRef]
  7. Huang, Y.; Li, C.; Wu, J.; Lin, Z. Online customer reviews and consumer evaluation: The role of review font. Inf. Manag. 2018, 55, 430–440. [Google Scholar] [CrossRef]
  8. Siddiqui, M.; Siddiqui, U.; Khan, M.; Alkandi, I.; Saxena, A.; Siddiqui, J. Creating Electronic Word of Mouth Credibility through Social Networking Sites and Determining Its Impact on Brand Image and Online Purchase Intentions in India. J. Theor. Appl. Electron. Commer. Res. 2021, 16, 1008–1024. [Google Scholar] [CrossRef]
  9. Chong, A.Y.L.; Khong, K.W.; Ma, T.; McCabe, S.; Wang, Y. Analyzing key influences of tourists’ acceptance of online reviews in travel decisions. Internet Res. 2018, 28, 564–586. [Google Scholar] [CrossRef]
  10. Zhang, Z.; Ye, Q.; Law, C.H.R.; Li, Y. The impact of e-word-of-mouth on the online popularity of restaurants: A comparison of consumer reviews and editor reviews. Int. J. Hosp. Manag. 2010, 29, 694–700. [Google Scholar] [CrossRef]
  11. Abedin, E.; Mendoza, A.; Karunasekera, S. What Makes a Review Credible? Heuristic and Systematic Factors for the Credibility of Online Reviews. In Proceedings of the Australasian Conference on Information Systems (ACIS 2019), Perth, Australia, 2019. [Google Scholar]
  12. Ansari, S.; Gupta, S. Customer perception of the deceptiveness of online product reviews: A speech act theory perspective. Int. J. Inf. Manag. 2021, 57, 102286. [Google Scholar] [CrossRef]
  13. Lee, J.; Hong, I. The Influence of Situational Constraints on Consumers’ Evaluation and Use of Online Reviews: A Heuristic-Systematic Model Perspective. J. Theor. Appl. Electron. Commer. Res. 2021, 16, 1517–1536. [Google Scholar] [CrossRef]
  14. Petty, R.E.; Cacioppo, J.T. The Elaboration Likelihood Model of Persuasion. In Communication and Persuasion; Springer: Berlin/Heidelberg, Germany, 1986; pp. 1–24. [Google Scholar]
  15. Chaiken, S. Heuristic versus systematic information processing and the use of source versus message cues in persuasion. J. Personal. Soc. Psychol. 1980, 39, 752. [Google Scholar] [CrossRef]
  16. Rogers, E.M.; Bhowmik, D.K. Homophily-Heterophily: Relational Concepts for Communication Research. Public Opin. Q. 1970, 34, 523–538. [Google Scholar] [CrossRef]
  17. Luo, C.; Wu, J.; Shi, Y.; Xu, Y. The effects of individualism–collectivism cultural orientation on eWOM information. Int. J. Inf. Manag. 2014, 34, 446–456. [Google Scholar] [CrossRef]
  18. Munzel, A. Assisting consumers in detecting fake reviews: The role of identity information disclosure and consensus. J. Retail. Consum. Serv. 2016, 32, 96–108. [Google Scholar] [CrossRef]
  19. Cheung, C.M.-Y.; Sia, C.-L.; Kuan, K.K. Is this review believable? A study of factors affecting the credibility of online consumer reviews from an ELM perspective. J. Assoc. Inf. Syst. 2012, 13, 618. [Google Scholar] [CrossRef] [Green Version]
  20. Abedin, E.; Mendoza, A.; Karunasekera, S. Towards a Credibility Analysis Model for Online Reviews; PACIS: Xi’an, China, 2019. [Google Scholar]
  21. Barbado, R.; Araque, O.; Iglesias, C.A. A framework for fake review detection in online consumer electronics retailers. Inf. Process. Manag. 2019, 56, 1234–1244. [Google Scholar] [CrossRef] [Green Version]
  22. Fitzpatrick, E.; Bachenko, J.; Fornaciari, T. Automatic Detection of Verbal Deception. Synth. Lect. Hum. Lang. Technol. 2015, 8, 1–119. [Google Scholar] [CrossRef]
  23. Hu, N.; Bose, I.; Koh, N.S.; Liu, L. Manipulation of online reviews: An analysis of ratings, readability, and sentiments. Decis. Support Syst. 2012, 52, 674–684. [Google Scholar] [CrossRef]
  24. Plotkina, D.; Munzel, A.; Pallud, J. Illusions of truth—Experimental insights into human and algorithmic detections of fake online reviews. J. Bus. Res. 2020, 109, 511–523. [Google Scholar] [CrossRef]
  25. Banerjee, S.; Bhattacharyya, S.; Bose, I. Whose online reviews to trust? Understanding reviewer trustworthiness and its impact on business. Decis. Support Syst. 2017, 96, 17–26. [Google Scholar] [CrossRef]
  26. Kudugunta, S.; Ferrara, E. Deep neural networks for bot detection. Inf. Sci. 2018, 467, 312–322. [Google Scholar] [CrossRef] [Green Version]
  27. Zhang, D.; Zhou, L.; Kehoe, J.L.; Kilic, I.Y. What Online Reviewer Behaviors Really Matter? Effects of Verbal and Nonverbal Behaviors on Detection of Fake Online Reviews. J. Manag. Inf. Syst. 2016, 33, 456–481. [Google Scholar] [CrossRef]
  28. Jindal, N.; Liu, B. Analyzing and Detecting Review Spam. In Proceedings of the Seventh IEEE International Conference on Data Mining (ICDM 2007), Omaha, NE, USA, 28–31 October 2007; pp. 547–552. [Google Scholar]
  29. Filieri, R.; Hofacker, C.; Alguezaui, S. What makes information in online consumer reviews diagnostic over time? The role of review relevancy, factuality, currency, source credibility and ranking score. Comput. Hum. Behav. 2018, 80, 122–131. [Google Scholar] [CrossRef] [Green Version]
  30. Ruiz-Mafe, C.; Chatzipanagiotou, K.; Curras-Perez, R. The role of emotions and conflicting online reviews on consumers’ purchase intentions. J. Bus. Res. 2018, 89, 336–344. [Google Scholar] [CrossRef] [Green Version]
  31. Eagly, A.H.; Chaiken, S. The Psychology of Attitudes; Harcourt Brace Jovanovich College Publishers: San Diego, CA, USA, 1993. [Google Scholar]
  32. Zhang, K.Z.; Zhao, S.J.; Cheung, C.; Lee, M.K.O. Examining the influence of online reviews on consumers’ decision-making: A heuristic–systematic model. Decis. Support Syst. 2014, 67, 78–89. [Google Scholar] [CrossRef]
  33. Watts, S.A.; Zhang, W. Capitalizing on content: Information adoption in two online communities. J. Assoc. Inf. Syst. 2008, 9, 3. [Google Scholar] [CrossRef]
  34. Stacks, D.W.; Salwen, M.B.; Eichhorn, K.C. An Integrated Approach to Communication Theory and Research; Routledge: Abingdon, UK, 2019. [Google Scholar]
  35. Slater, M.D.; Rouner, D. How Message Evaluation and Source Attributes May Influence Credibility Assessment and Belief Change. Journal. Mass Commun. Q. 1996, 73, 974–991. [Google Scholar] [CrossRef]
  36. Kim, S.J.; Maslowska, E.; Malthouse, E.C. Understanding the effects of different review features on purchase probability. Int. J. Advert. 2017, 37, 29–53. [Google Scholar] [CrossRef]
  37. Ketron, S. Investigating the effect of quality of grammar and mechanics (QGAM) in online reviews: The mediating role of reviewer credibility. J. Bus. Res. 2017, 81, 51–59. [Google Scholar] [CrossRef]
  38. Cox, D.; Cox, J.G.; Cox, A.D. To Err is human? How typographical and orthographical errors affect perceptions of online reviewers. Comput. Hum. Behav. 2017, 75, 245–253. [Google Scholar] [CrossRef] [Green Version]
  39. Schwarz, N. Chapter 14: Feelings-as-Information Theory. In Handbook of Theories of Social Psychology; SAGE Publications Ltd.: Thousand Oaks, CA, USA, 2011; Volume 1, pp. 289–308. [Google Scholar]
  40. Brown, J.J.; Reingen, P.H. Social Ties and Word-of-Mouth Referral Behavior. J. Consum. Res. 1987, 14, 350–362. [Google Scholar] [CrossRef]
  41. Lawler, E.J.; Yoon, J. Commitment in Exchange Relations: Test of a Theory of Relational Cohesion. Am. Sociol. Rev. 1996, 61, 89. [Google Scholar] [CrossRef]
  42. Aghakhani, N.; Oh, O.; Gregg, D.G.; Karimi, J. Online Review Consistency Matters: An Elaboration Likelihood Model Perspective. Inf. Syst. Front. 2020, 23, 1287–1301. [Google Scholar] [CrossRef]
  43. Cheung, M.Y.; Luo, C.; Sia, C.L.; Chen, H. Credibility of Electronic Word-of-Mouth: Informational and Normative Determinants of On-line Consumer Recommendations. Int. J. Electron. Commer. 2009, 13, 9–38. [Google Scholar] [CrossRef]
  44. Xu, Q. Should I trust him? The effects of reviewer profile characteristics on eWOM credibility. Comput. Hum. Behav. 2014, 33, 136–144. [Google Scholar] [CrossRef]
  45. Festinger, L. A Theory of Social Comparison Processes. Hum. Relat. 1954, 7, 117–140. [Google Scholar] [CrossRef]
  46. Zhang, Y. Responses to Humorous Advertising: The Moderating Effect of Need for Cognition. J. Advert. 1996, 25, 15–32. [Google Scholar] [CrossRef]
  47. Zhao, K.; Stylianou, A.C.; Zheng, Y. Sources and impacts of social influence from online anonymous user reviews. Inf. Manag. 2018, 55, 16–30. [Google Scholar] [CrossRef]
  48. Park, D.-H.; Lee, J. eWOM overload and its effect on consumer behavioral intention depending on consumer involvement. Electron. Commer. Res. Appl. 2008, 7, 386–398. [Google Scholar] [CrossRef]
  49. Ohanian, R. Construction and Validation of a Scale to Measure Celebrity Endorsers’ Perceived Expertise, Trustworthiness, and Attractiveness. J. Advert. 1990, 19, 39–52. [Google Scholar] [CrossRef]
  50. Hair, J.F.; Black, W.C.; Babin, B.J.; Anderson, R.E.; Tatham, R.L. Multivariate Data Analysis: A Global Perspective; Pearson: Upper Saddle River, NJ, USA, 2010; Volume 7. [Google Scholar]
  51. Fornell, C.; Larcker, D.F. Evaluating structural equation models with unobservable variables and measurement error. J. Mark. Res. 1981, 18, 39–50. [Google Scholar] [CrossRef]
  52. Voorhees, C.M.; Brady, M.K.; Calantone, R.J.; Ramirez, E. Discriminant validity testing in marketing: An analysis, causes for concern, and proposed remedies. J. Acad. Mark. Sci. 2016, 44, 119–134. [Google Scholar] [CrossRef]
  53. Henseler, J.; Ringle, C.M.; Sarstedt, M. A new criterion for assessing discriminant validity in variance-based structural equation modeling. J. Acad. Mark. Sci. 2015, 43, 115–135. [Google Scholar] [CrossRef] [Green Version]
  54. Podsakoff, P.M.; MacKenzie, S.B.; Lee, J.Y.; Podsakoff, N.P. Common method biases in behavioral research: A critical review of the literature and recommended remedies. J. Appl. Psychol. 2003, 88, 879. [Google Scholar] [CrossRef] [PubMed]
  55. O’Brien, R.M. A Caution Regarding Rules of Thumb for Variance Inflation Factors. Qual. Quant. 2007, 41, 673–690. [Google Scholar] [CrossRef]
  56. Hair, J.F.; Sarstedt, M.; Ringle, C.M.; Mena, J.A. An assessment of the use of partial least squares structural equation modeling in marketing research. J. Acad. Mark. Sci. 2012, 40, 414–433. [Google Scholar] [CrossRef]
  57. Cohen, J.; Cohen, P.; West, S.G.; Aiken, L.S. Applied Multiple Regression/Correlation Analysis for The Behavioral Sciences; Routledge: Abingdon, UK, 2013. [Google Scholar]
  58. Aiken, L.S.; West, S.G.; Reno, R.R. Multiple Regression: Testing and Interpreting Interactions; Sage: Thousand Oaks, CA, USA, 1991. [Google Scholar]
  59. Thomas, M.-J.; Wirtz, B.W.; Weyerer, J.C. Determinants of online review credibility and its impact on consumers’ purchase intention. J. Electron. Commer. Res. 2019, 20, 1–20. [Google Scholar]
  60. Chaiken, S.; Maheswaran, D. Heuristic processing can bias systematic processing: Effects of source credibility, argument ambiguity, and task importance on attitude judgment. J. Personal. Soc. Psychol. 1994, 66, 460. [Google Scholar] [CrossRef]
  61. Shan, Y. How credible are online product reviews? The effects of self-generated and system-generated cues on source credibility evaluation. Comput. Hum. Behav. 2016, 55, 633–641. [Google Scholar] [CrossRef]
  62. Bagozzi, R.P.; Yi, Y. On the evaluation of structural equation models. J. Acad. Mark. Sci. 1988, 16, 74–94. [Google Scholar] [CrossRef]
Figure 1. Research Model.
Figure 2. The Moderating Effect of RI on SC.
Figure 3. The Moderating Effect of RI on RS.
Figure 4. The Moderating Effect of RI on RO.
Figure 5. The Moderating Effect of RI on IC.
Figure 6. The Moderating Effect of SH on SC.
Figure 7. The Moderating Effect of RI on AQ.
Figure 8. The Moderating Effect of RI on RF.
Figure 9. The Moderating Effect of RI on EC.
Table 1. Items Used in the Study. Supporting references are shown in brackets after each construct.

Argument Quality [3,46]
  1. The review arguments are convincing
  2. The review arguments are persuasive
  3. The review arguments are reasonable
Internal Consistency (this study)
  1. In this review, the comment and star rating match each other
  2. In this review, the arguments are consistent with each other
  3. In this review, there is no conflict within its parts
Review Fluency [9,47]
  1. This review is easy to read
  2. This review is understandable
  3. This review is easy to comprehend
External Consistency [19]
  1. The comments made in this review are consistent with other reviews
  2. The comments made in this review are similar to other reviews
Review Objectivity [29,48]
  1. The argument of this review is unemotional
  2. This review is objective
  3. This review is based on facts
Review Sidedness [19,43]
  1. This review includes both pros and cons on the discussed product/service
  2. This review includes only one-sided comments (positive or negative)
  3. This review includes both positive and negative comments
Perceived Source Credibility [19,49]
  1. The writer (reviewer) of this review is credible
  2. The writer (reviewer) of this review is reliable
  3. The writer (reviewer) of this review is trustworthy
Reader’s Involvement [29]
  1. How much effort did you put into evaluating the given information?
  2. Did you think deeply about the information contained in online reviews?
  3. How informed are you on the subject matter of this review?
Source Homophily [6]
  1. The reviewer has the same opinions as I do
  2. The reviewer has the same viewpoints as I do
  3. The reviewer has the same preferences as I do
Perceived Review Credibility [3,19]
  1. This review is believable
  2. This review is trustworthy
  3. This review is credible
  4. This review is accurate
Table 2. Sample Demographics.

                          Frequency   Percent
Gender
  Male                        213       50.1
  Female                      212       49.9
Age range
  <30                         131       30.8
  30–40                       147       34.6
  40+                         147       34.6
Education
  Less than high school         2        0.5
  High school graduate         84       19.8
  College                     104       24.5
  Bachelor’s degree           178       41.9
  Master’s degree              53       12.5
  Doctorate                     4        0.9
Table 3. Cronbach’s α, CR, AVE and factor loadings.

Attribute              Abbr.  Item  α      CR     AVE    Factor Loading
External Consistency   EC     EC1   0.959  0.959  0.921  0.975
                              EC2                        0.934
Argument Quality       AQ     AQ1   0.933  0.934  0.825  0.934
                              AQ2                        0.925
                              AQ3                        0.803
Source Credibility     SC     SC1   0.948  0.948  0.859  0.880
                              SC2                        0.896
                              SC3                        0.930
Review Credibility     RC     RC1   0.961  0.961  0.861  0.925
                              RC2                        0.855
                              RC3                        0.949
                              RC4                        0.836
Review Sidedness       RS     RS1   0.924  0.926  0.806  0.966
                              RS2                        0.839
                              RS3                        0.886
Review Objectivity     RO     RO1   0.899  0.905  0.761  0.794
                              RO2                        0.940
                              RO3                        0.855
Review Fluency         RF     RF1   0.906  0.907  0.765  0.806
                              RF2                        0.864
                              RF3                        0.932
Internal Consistency   IC     IC1   0.872  0.876  0.702  0.780
                              IC2                        0.909
                              IC3                        0.811
Readers’ Involvement   RI     RI1   0.889  0.894  0.740  0.894
                              RI2                        0.902
                              RI3                        0.778
Source Homophily       SH     SH1   0.930  0.931  0.818  0.858
                              SH2                        0.923
                              SH3                        0.913
Table 4. Correlations and Average Variance Extracted (AVE).

     SH         RC         SC         RS         AQ         RF         RO         RI         IC         EC
SH   0.905
RC   0.530 ***  0.928
SC   0.440 ***  0.744 ***  0.927
RS   0.239 ***  0.251 ***  0.141 **   0.898
AQ   0.450 ***  0.670 ***  0.655 ***  0.130 *    0.909
RF   0.255 ***  0.570 ***  0.491 ***  0.028      0.384 ***  0.875
RO   0.427 ***  0.602 ***  0.496 ***  0.221 ***  0.556 ***  0.364 ***  0.872
RI   0.153 **   0.257 ***  0.334 ***  −0.045     0.238 ***  0.403 ***  0.136 **   0.860
IC   0.259 ***  0.454 ***  0.341 ***  −0.047     0.402 ***  0.515 ***  0.317 ***  0.356 ***  0.838
EC   0.386 ***  0.411 ***  0.296 ***  0.326 ***  0.399 ***  0.178 ***  0.474 ***  0.048      0.185 ***  0.960

Note: * p < 0.050, ** p < 0.010, *** p < 0.001. Diagonal values are the square roots of the AVE.
Table 5. Collinearity Statistics.

Attribute                   Tolerance   VIF
External Consistency (EC)     0.600    1.667
Review Objectivity (RO)       0.478    2.090
Source Credibility (SC)       0.364    2.749
Review Sidedness (RS)         0.741    1.349
Argument Quality (AQ)         0.334    2.998
Review Fluency (RF)           0.478    2.090
Internal Consistency (IC)     0.551    1.815
Reader’s Involvement (RI)     0.566    1.766
Source Homophily (SH)         0.611    1.637
Table 6. Structural Equation Modeling Results.

                           Model 1                  Model 2                  Model 3
                           Coef.     t      Sig.   Coef.     t      Sig.   Coef.     t      Sig.
Internal Consistency (IC)  0.096    3.054  0.002   0.082    2.732  0.007   0.097    3.302  0.001
Review Objectivity (RO)    0.155    4.582  0.000   0.128    3.952  0.000   0.123    3.887  0.000
Review Fluency (RF)        0.194    5.891  0.000   0.189    5.812  0.000   0.193    6.107  0.000
Argument Quality (AQ)      0.169    4.424  0.000   0.153    3.976  0.000   0.123    3.255  0.001
Review Sidedness (RS)      0.122    4.572  0.000   0.122    4.713  0.000   0.099    3.884  0.000
Source Credibility (SC)    0.403   10.880  0.000   0.462   12.601  0.000   0.430   11.842  0.000
External Consistency (EC)  0.056    1.875  0.061   0.016    0.555  0.579   0.001    0.034  0.973
Reader’s Involvement (RI)                         −0.037   −1.235  0.218  −0.017   −0.582  0.561
RI × IC                                           −0.118   −3.509  0.000  −0.081   −2.441  0.015
RI × RO                                           −0.101   −3.268  0.001  −0.096   −3.196  0.002
RI × RF                                           −0.064   −1.620  0.106  −0.041   −1.072  0.285
RI × AQ                                            0.056    1.779  0.076   0.028    0.901  0.368
RI × RS                                           −0.099   −3.567  0.000  −0.073   −2.646  0.008
RI × SC                                            0.273    7.271  0.000   0.253    6.906  0.000
RI × EC                                            0.056    1.658  0.098   0.059    1.804  0.072
Source Homophily (SH)                                                      0.081    2.875  0.004
SH × SC                                                                   −0.105   −4.337  0.000
R2                         0.747                   0.792                   0.805
Adjusted R2                0.743                   0.784                   0.797
F                        175.994                 103.704                  98.710

Note: Coef. denotes standardized coefficients.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
