Empowering Consumer Decision-Making: Decoding Incentive vs. Organic Reviews for Smarter Choices Through Advanced Textual Analysis
Abstract
1. Introduction
2. Related Work
2.1. Online Review
2.2. Incentive vs. Organic
2.3. Incentive and Decision-Making in Purchases
3. Hypothesis Development
3.1. Review Credibility and Consistency
3.1.1. Review Credibility
3.1.2. Review Consistency
3.2. Impact on Customer Decision-Making
4. Research Methodology
4.1. Data Collection
4.2. Data Preprocessing
- 29,597 incentive reviews and 14,658 organic reviews from Capterra.
- 861 incentive reviews and 1,397 organic reviews from GetApp.
- 2,280 incentive reviews and 1,205 organic reviews from Software Advice.
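The per-platform split above can be tallied in a few lines of pandas. The sketch below is illustrative only: the CSV file names and the `is_incentivized` column are hypothetical placeholders, since the paper does not publish its data-loading code.

```python
# Minimal tally of the incentive/organic split per platform.
# File names and the "is_incentivized" column are hypothetical placeholders.
import pandas as pd

platforms = {
    "Capterra": "capterra_reviews.csv",
    "GetApp": "getapp_reviews.csv",
    "Software Advice": "software_advice_reviews.csv",
}

for name, path in platforms.items():
    df = pd.read_csv(path)
    n_incentive = int(df["is_incentivized"].sum())  # True = incentivized review
    n_organic = len(df) - n_incentive
    print(f"{name}: {n_incentive} incentive, {n_organic} organic")
```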
4.3. Data Analysis
4.3.1. Exploratory Data Analysis (EDA)
4.3.2. Sentiment Analysis
4.3.3. Semantic Link Analysis
4.3.4. Spectral Clustering in Topic Modeling
4.3.5. t-Distributed Stochastic Neighbor Embedding (t-SNE)
4.4. Statistical Testing and Validation
4.4.1. A/B Testing
4.4.2. Hypothesis Testing and Bootstrap Distribution
4.5. Recommendation
- Script-wise match ratio: the share of the model's top-ranked reviews that also appear in the ground-truth data, used to evaluate the relevance of the model's predictions.
- Script-wise MRR: the mean reciprocal rank of the results for a query set, used to evaluate ranking performance when the order of results matters (a sketch of both metrics follows this list).
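The paper does not give formulas for these two metrics, so the sketch below implements their conventional definitions; the function and variable names are illustrative, not taken from the study's scripts.

```python
# Conventional definitions of the two script-wise metrics described above.

def match_ratio(recommended, ground_truth):
    """Fraction of recommended review IDs that appear in the ground truth."""
    if not recommended:
        return 0.0
    return sum(1 for r in recommended if r in ground_truth) / len(recommended)

def mean_reciprocal_rank(ranked_results, relevant_sets):
    """Average of 1/rank of the first relevant hit per query (0 if no hit)."""
    reciprocal_ranks = []
    for results, relevant in zip(ranked_results, relevant_sets):
        rank = next((i + 1 for i, r in enumerate(results) if r in relevant), None)
        reciprocal_ranks.append(1.0 / rank if rank else 0.0)
    return sum(reciprocal_ranks) / len(reciprocal_ranks)

# Toy example: three recommendations for one query, then two ranked queries.
print(match_ratio([101, 202, 303], {101, 303, 404}))                   # 2/3
print(mean_reciprocal_rank([[101, 202], [404, 505]], [{202}, {505}]))  # 0.5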
5. Results and Analysis
5.1. EDA Results
5.2. Sentiment Analysis Results
5.3. Semantic Link Results
5.3.1. Semantic Link Results Using TF-IDF
5.3.2. Semantic Link Results Using Sentence-BERT
5.4. Spectral Clustering in Topic Modeling Results
5.5. t-SNE Results
5.6. Statistical Testing and Validation Results
5.7. Recommendation Results
6. Discussion
6.1. Incentive vs. Organic
6.2. Incentives, Customer Behavior, and Review Quality
6.3. Incentive Review and Decision-Making in Purchases
6.4. Recommendations and Decision-Making in Purchases
6.5. Comparison of the Proposed Approach with the State-of-the-Art
6.6. Comparable Analysis with Existing Studies
7. Conclusions and Future Work
7.1. Implications of the Study
7.2. Strengths and Limitations
7.3. Future Work
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Kargozari, K.; Ding, J.; Chen, H. Evaluating the Impact of Incentive/Non-incentive Reviews on Customer Decision-making. In Proceedings of the 2023 IEEE International Conference on Artificial Intelligence Testing (AITest), Athens, Greece, 17–20 July 2023; pp. 160–168. [Google Scholar]
- Zhu, L.; Li, H.; Wang, F.; He, W.; Tian, Z. How online reviews affect purchase intention: A new model based on the stimulus-organism-response (S-O-R) framework. Aslib J. Inf. Manag. 2020, 72, 463–488. [Google Scholar] [CrossRef]
- Yu, Y.; Yang, Y.; Huang, J.; Tan, Y. Unifying Algorithmic and Theoretical Perspectives: Emotions in Online Reviews and Sales. MIS Q. 2023, 47, 127–160. [Google Scholar] [CrossRef]
- Alqaryouti, O.; Siyam, N.; Abdel Monem, A.; Shaalan, K. Aspect-based sentiment analysis using smart government review data. Appl. Comput. Inform. 2024, 20, 142–161. [Google Scholar] [CrossRef]
- Alamoudi, E.S.; Alghamdi, N.S. Sentiment classification and aspect-based sentiment analysis on Yelp reviews using deep learning and word embeddings. J. Decis. Syst. 2021, 30, 259–281. [Google Scholar] [CrossRef]
- Jain, D.K.; Boyapati, P.; Venkatesh, J.; Prakash, M. An intelligent cognitive-inspired computing with big data analytics framework for sentiment analysis and classification. Inf. Process. Manag. 2022, 59, 102758. [Google Scholar] [CrossRef]
- Qiao, D.; Rui, H. Text performance on the vine stage? The effect of incentive on product review text quality. Inf. Syst. Res. 2023, 34, 676–697. [Google Scholar] [CrossRef]
- Petrescu, M.; O’Leary, K.; Goldring, D.; Ben Mrad, S. Incentivized reviews: Promising the moon for a few stars. J. Retail. Consum. Serv. 2018, 41, 288–295. [Google Scholar] [CrossRef]
- Ai, J.; Gursoy, D.; Liu, Y.; Lv, X. Effects of offering incentives for reviews on trust: Role of review quality and incentive source. Int. J. Hosp. Manag. 2022, 100, 103101. [Google Scholar] [CrossRef]
- Burtch, G.; Hong, Y.; Bapna, R.; Griskevicius, V. Stimulating Online Reviews by Combining Financial Incentives and Social Norms. Manag. Sci. 2018, 64, 2065–2082. [Google Scholar] [CrossRef]
- Costa, A.; Guerreiro, J.; Moro, S.; Henriques, R. Unfolding the characteristics of incentivized online reviews. J. Retail. Consum. Serv. 2019, 47, 272–281. [Google Scholar] [CrossRef]
- Woolley, K.; Sharif, M. Incentives Increase Relative Positivity of Review Content and Enjoyment of Review Writing. J. Mark. Res. 2021, 58, 539–558. [Google Scholar] [CrossRef]
- Imtiaz, M.N.; Ahmed, M.T.; Paul, A. Incentivized Comment Detection with Sentiment Analysis on Online Hotel Reviews. Authorea 2020. [Google Scholar] [CrossRef]
- Zhang, M.; Wei, X.; Zeng, D. A matter of reevaluation: Incentivizing users to contribute reviews in online platforms. Decis. Support Syst. 2020, 128, 113158. [Google Scholar] [CrossRef] [PubMed]
- Garnefeld, I.; Helm, S.; Grötschel, A.K. May we buy your love? psychological effects of incentives on writing likelihood and valence of online product reviews. Electron. Mark. 2020, 30, 805–820. [Google Scholar] [CrossRef]
- Cui, G.; Chung, Y.; Peng, L.; Zheng, W. The importance of being earnest: Mandatory vs. voluntary disclosure of incentives for online product reviews. J. Bus. Res. 2022, 141, 633–645. [Google Scholar] [CrossRef]
- Luca, M.; Zervas, G. Fake it till you make it: Reputation, competition, and Yelp review fraud. Manag. Sci. 2016, 62, 3412–3427. [Google Scholar] [CrossRef]
- Li, H.; Bruce, X.B.; Li, G.; Gao, H. Restaurant survival prediction using customer-generated content: An aspect-based sentiment analysis of online reviews. Tour. Manag. 2023, 96, 104707. [Google Scholar] [CrossRef]
- Alhumoud, S.O.; Al Wazrah, A.A. Arabic sentiment analysis using recurrent neural networks: A review. Artif. Intell. Rev. 2022, 55, 707–748. [Google Scholar] [CrossRef]
- Samah, K.A.F.A.; Jailani, N.S.; Hamzah, R.; Aminuddin, R.; Abidin, N.A.Z.; Riza, L.S. Aspect-Based Classification and Visualization of Twitter Sentiment Analysis Towards Online Food Delivery Services in Malaysia. J. Adv. Res. Appl. Sci. Eng. Tech. 2024, 37, 139–150. [Google Scholar]
- Martin-Fuentes, E.; Fernandez, C.; Mateu, C.; Marine-Roig, E. Modelling a grading scheme for peer-to-peer accommodation: Stars for Airbnb. Int. J. Hosp. Manag. 2018, 69, 75–83. [Google Scholar] [CrossRef]
- Singh, H.P.; Alhamad, I.A. Deciphering key factors impacting online hotel ratings through the lens of two-factor theory: A case of hotels in the makkah city of Saudi Arabia. Int. Trans. J. Eng. Manag. Appl. Sci. Technol. 2021, 12, 1–12. [Google Scholar]
- Singh, H.P.; Alhamad, I.A. A Novel Categorization of Key Predictive Factors Impacting Hotels’ Online Ratings: A Case of Makkah. Sustainability 2022, 14, 16588. [Google Scholar] [CrossRef]
- Liu, Z.; Lei, S.H.; Guo, Y.L.; Zhou, Z.A. The interaction effect of online review language style and product type on consumers’ purchase intentions. Palgrave Commun. 2020, 6, 1–8. [Google Scholar] [CrossRef]
- Chakraborty, U.; Bhat, S. Credibility of online reviews and its impact on brand image. Manag. Res. Rev. 2018, 41, 148–164. [Google Scholar] [CrossRef]
- Mackiewicz, J.; Yeats, D.; Thornton, T. The Impact of Review Environment on Review Credibility. IEEE Trans. Prof. Commun. 2016, 59, 71–88. [Google Scholar] [CrossRef]
- Aghakhani, N.; Oh, O.; Gregg, D.; Jain, H. How Review Quality and Source Credibility Interacts to Affect Review Usefulness: An Expansion of the Elaboration Likelihood Model. Inf. Syst. Front. 2022, 25, 1513–1531. [Google Scholar] [CrossRef]
- Filieri, R.; Hofacker, C.F.; Alguezaui, S. What makes information in online consumer reviews diagnostic over time? The role of review relevancy, factuality, currency, source credibility and ranking score. Comput. Hum. Behav. 2018, 80, 122–131. [Google Scholar] [CrossRef]
- Qiu, K.; Zhang, L. How online reviews affect purchase intention: A meta-analysis across contextual and cultural factors. Data Inf. Manag. 2023, 8, 100058. [Google Scholar] [CrossRef]
- Zhao, K.; Stylianou, A.C.; Zheng, Y. Sources and impacts of social influence from online anonymous user reviews. Inf. Manag. 2018, 55, 16–30. [Google Scholar] [CrossRef]
- Tran, V.D.; Nguyen, M.D.; Lương, L.A. The effects of online credible review on brand trust dimensions and willingness to buy: Evidence from Vietnam consumers. Cogent Bus. Manag. 2022, 9, 2038840. [Google Scholar] [CrossRef]
- Hung, S.W.; Chang, C.W.; Chen, S.Y. Beyond a bunch of reviews: The quality and quantity of electronic word-of-mouth. Inf. Manag. 2023, 60, 103777. [Google Scholar] [CrossRef]
- Aghakhani, N.; Oh, O.; Gregg, D. Beyond the Review Sentiment: The Effect of Review Accuracy and Review Consistency on Review Usefulness. In Proceedings of the International Conference on Information Systems (ICIS), Seoul, Republic of Korea, 10–13 December 2017. [Google Scholar]
- Xie, K.L.; Chen, C.; Wu, S. Online Consumer Review Factors Affecting Offline Hotel Popularity: Evidence from Tripadvisor. J. Travel Tour. Mark. 2016, 33, 211–223. [Google Scholar] [CrossRef]
- Aghakhani, N.; Oh, O.; Gregg, D.G.; Karimi, J. Online Review Consistency Matters: An Elaboration Likelihood Model Perspective. Inf. Syst. Front. 2021, 23, 1287–1301. [Google Scholar] [CrossRef]
- Wu, H.H.; Tipgomut, P.; Chung, H.F.; Chu, W.K. The mechanism of positive emotions linking consumer review consistency to brand attitudes: A moderated mediation analysis. Asia Pacific J. Mark. Logist. 2020, 32, 575–588. [Google Scholar] [CrossRef]
- Gutt, D.; Neumann, J.; Zimmermann, S.; Kundisch, D.; Chen, J. Design of review systems—A strategic instrument to shape online reviewing behavior and economic outcomes. J. Strateg. Inf. Syst. 2019, 28, 104–117. [Google Scholar] [CrossRef]
- Kamble, V.; Shah, N.; Marn, D.; Parekh, A.; Ramchandran, K. The Square-Root Agreement Rule for Incentivizing Objective Feedback in Online Platforms. Manag. Sci. 2023, 69, 377–403. [Google Scholar] [CrossRef]
- Le, L.T.; Ly, P.T.M.; Nguyen, N.T.; Tran, L.T.T. Online reviews as a pacifying decision-making assistant. J. Retail. Consum. Serv. 2022, 64, 102805. [Google Scholar] [CrossRef]
- Zhang, H.; Yang, A.; Peng, A.; Pieptea, L.F.; Yang, J.; Ding, J. A Quantitative Study of Software Reviews Using Content Analysis Methods. IEEE Access 2022, 10, 124663–124672. [Google Scholar] [CrossRef]
- Kusumasondjaja, S.; Shanka, T.; Marchegiani, C. Credibility of online reviews and initial trust: The roles of reviewer’s identity and review valence. J. Vacat. Mark. 2012, 18, 185–195. [Google Scholar] [CrossRef]
- Jamshidi, S.; Rejaie, R.; Li, J. Characterizing the dynamics and evolution of incentivized online reviews on Amazon. Soc. Netw. Anal. Min. 2019, 9, 22. [Google Scholar] [CrossRef]
- Gneezy, U.; Meier, S.; Rey-Biel, P. When and why incentives (don’t) work to modify behavior. J. Econ. Perspect. 2011, 25, 191–210. [Google Scholar] [CrossRef]
- Chen, T.; Samaranayake, P.; Cen, X.; Qi, M.; Lan, Y.C. The Impact of Online Reviews on Consumers’ Purchasing Decisions: Evidence from an Eye-Tracking Study. Front. Psychol. 2022, 13, 2723. [Google Scholar] [CrossRef] [PubMed]
- Noh, Y.G.; Jeon, J.; Hong, J.H. Understanding of Customer Decision-Making Behaviors Depending on Online Reviews. Appl. Sci. 2023, 13, 3949. [Google Scholar] [CrossRef]
- Truong Du Chau, X.; Toan Nguyen, T.; Khiem Tran, V.; Quach, S.; Thaichon, P.; Jo, J.; Vo, B.; Dieu Tran, Q.; Viet Hung Nguyen, Q. Towards a review-analytics-as-a-service (RAaaS) framework for SMEs: A case study on review fraud detection and understanding. Australas. Mark. J. 2024, 32, 76–90. [Google Scholar] [CrossRef]
- Park, S.; Shin, W.; Xie, J. Disclosure in Incentivized Reviews: Does It Protect Consumers? Manag. Sci. 2023, 69, 7009–7021. [Google Scholar] [CrossRef]
- Bigne, E.; Chatzipanagiotou, K.; Ruiz, C. Pictorial content, sequence of conflicting online reviews and consumer decision-making: The stimulus-organism-response model revisited. J. Bus. Res. 2020, 115, 403–416. [Google Scholar] [CrossRef]
- Zhang, Z.; Zhang, Z.; Liu, S.; Zhang, Z. Are high-status reviewers more likely to seek anonymity? Evidence from an online review platform. J. Retail. Consum. Serv. 2024, 78, 103792. [Google Scholar] [CrossRef]
- Zhong, M.; Yang, H.; Zhong, K.; Qu, X.; Li, Z. The Impact of Online Reviews Manipulation on Consumer Purchase Decision Based on The Perspective of Consumers’ Perception. J. Internet Technol. 2023, 24, 1469–1476. [Google Scholar] [CrossRef]
- Lu, B.; Ma, B.; Cheng, D.; Yang, J. An investigation on impact of online review keywords on consumers’ product consideration of clothing. J. Theor. Appl. Electron. Commer. Res. 2023, 18, 187–205. [Google Scholar] [CrossRef]
- Li, K.; Chen, Y.; Zhang, L. Exploring the influence of online reviews and motivating factors on sales: A meta-analytic study and the moderating role of product category. J. Retail. Consum. Serv. 2020, 55, 102107. [Google Scholar] [CrossRef]
- He, S.; Hollenbeck, B.; Overgoor, G.; Proserpio, D.; Tosyali, A. Detecting fake-review buyers using network structure: Direct evidence from Amazon. Proc. Natl. Acad. Sci. USA 2022, 119, e2211932119. [Google Scholar] [CrossRef] [PubMed]
- Gerrath, M.H.; Usrey, B. The impact of influencer motives and commonness perceptions on follower reactions toward incentivized reviews. Int. J. Res. Mark. 2021, 38, 531–548. [Google Scholar] [CrossRef]
- Beck, B.B.; Wuyts, S.; Jap, S. Guardians of Trust: How Review Platforms Can Fight Fakery and Build Consumer Trust. J. Mark. Res. 2023, 61, 00222437231195576. [Google Scholar] [CrossRef]
- Du Plessis, C.; Stephen, A.T.; Bart, Y.; Goncalves, D. When in Doubt, Elaborate? How Elaboration on Uncertainty Influences the Persuasiveness of Consumer-Generated Product Reviews When Reviewers Are Incentivized. SSRN Electron. J. 2016, 59, 2821641. [Google Scholar] [CrossRef]
- Yin, H.; Zheng, S.; Yeoh, W.; Ren, J. How online review richness impacts sales: An attribute substitution perspective. J. Assoc. Inf. Sci. Technol. 2021, 72, 901–917. [Google Scholar] [CrossRef]
- Jamshidi, S.; Rejaie, R.; Li, J. Trojan horses in Amazon’s castle: Understanding the incentivized online reviews. In Proceedings of the 10th IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM 2018), Barcelona, Spain, 28–31 August 2018; pp. 335–342. [Google Scholar]
- Jia, Y.; Liu, I.L. Do consumers always follow “useful” reviews? The interaction effect of review valence and review usefulness on consumers’ purchase decisions. J. Assoc. Inf. Sci. Technol. 2018, 69, 1304–1317. [Google Scholar] [CrossRef]
- Siering, M.; Muntermann, J.; Rajagopalan, B. Explaining and predicting online review helpfulness: The role of content and reviewer-related signals. Decis. Support Syst. 2018, 108, 1–12. [Google Scholar] [CrossRef]
- Tang, M.; Xu, Z.; Qin, Y.; Su, C.; Zhu, Y.; Tao, F.; Ding, J. A Quantitative Study of Impact of Incentive to Quality of Software Reviews. In Proceedings of the 9th International Conference on Dependable Systems and Their Applications (DSA 2022), Wulumuqi, China, 4–5 August 2022; pp. 54–63. [Google Scholar]
- Li, X.; Wu, C.; Mai, F. The effect of online reviews on product sales: A joint sentiment-topic analysis. Inf. Manag. 2019, 56, 172–184. [Google Scholar] [CrossRef]
- Danilchenko, K.; Segal, M.; Vilenchik, D. Opinion Spam Detection: A New Approach Using Machine Learning and Network-Based Algorithms. In Proceedings of the Sixteenth International AAAI Conference on Web and Social Media (ICWSM 2022), Atlanta, GA, USA, 6–9 June 2022; Volume 11, pp. 125–134. [Google Scholar]
- Liu, Z.; Liao, H.; Li, M.; Yang, Q.; Meng, F. A deep learning-based sentiment analysis approach for online product ranking with probabilistic linguistic term sets. IEEE Trans. Eng. Manag. 2023. [Google Scholar] [CrossRef]
- Ali, H.; Hashmi, E.; Yayilgan Yildirim, S.; Shaikh, S. Analyzing Amazon Products Sentiment: A Comparative Study of Machine and Deep Learning, and Transformer-Based Techniques. Electronics 2024, 13, 1305. [Google Scholar] [CrossRef]
- Victor, V.; James, N.; Dominic, E. Incentivised dishonesty: Moral frameworks underlying fake online reviews. Int. J. Consum. Stud. 2024, 48, e13037. [Google Scholar] [CrossRef]
- Husain, A.; Alsharo, M.; Jaradat, M.I.R. Content-rating consistency of online product review and its impact on helpfulness: A fine-grained level sentiment analysis. Interdiscip. J. Inf. Knowl. Manag. 2023, 18, 645–666. [Google Scholar] [CrossRef] [PubMed]
- Liao, J.; Chen, J.; Jin, F. Social free sampling: Engaging consumer through product trial reports. Inf. Technol. People. 2023, 36, 1626–1644. [Google Scholar] [CrossRef]
- Joseph, E.; Munasinghe, T.; Tubbs, H.; Bishnoi, B.; Anyamba, A. Scraping Unstructured Data to Explore the Relationship between Rainfall Anomalies and Vector-Borne Disease Outbreaks. In Proceedings of the 2021 IEEE International Conference on Big Data (Big Data), Orlando, FL, USA, 15–18 December 2021; pp. 4156–4164. [Google Scholar]
- Dogra, K.S.; Nirwan, N.; Chauhan, R. Unlocking the Market Insight Potential of Data Extraction Using Python-Based Web Scraping on Flipkart. In Proceedings of the 2023 International Conference on Sustainable Emerging Innovations in Engineering and Technology (ICSEIET), Ghaziabad, India, 14–15 September 2023; pp. 453–457. [Google Scholar]
- Naseem, U.; Razzak, I.; Eklund, P.W. A survey of pre-processing techniques to improve short-text quality: A case study on hate speech detection on Twitter. Multimed. Tools Appl. 2021, 80, 35239–35266. [Google Scholar] [CrossRef]
- Gupta, H.; Patel, M. Method of text summarization using LSA and sentence-based topic modeling with BERT. In Proceedings of the 2021 International Conference on Artificial Intelligence and Smart Systems (ICAIS), Coimbatore, India, 25–27 March 2021; pp. 511–517. [Google Scholar]
- Özçift, A.; Akarsu, K.; Yumuk, F.; Söylemez, C. Advancing natural language processing (NLP) applications of morphologically rich languages with bidirectional encoder representations from transformers (BERT): An empirical case study for Turkish. Automatika 2021, 62, 226–238. [Google Scholar] [CrossRef]
- Yuan, L.; Zhao, H.; Wang, Z. Research on News Text Clustering for International Chinese Education. In Proceedings of the 2023 International Conference on Asian Language Processing (IALP), Singapore, 18–20 November 2023; pp. 377–382. [Google Scholar]
- Bawa, S.S. Implementing Text Analytics with Enterprise Resource Planning. Int. J. Simul. Syst. Sci. Technol. 2023, 24. [Google Scholar] [CrossRef]
- Jebb, A.T.; Parrigon, S.; Woo, S.E. Exploratory data analysis as a foundation of inductive research. Hum. Resour. Manag. Rev. 2017, 27, 265–276. [Google Scholar] [CrossRef]
- Basiri, M.E.; Ghasem-Aghaee, N.; Naghsh-Nilchi, A.R. Exploiting reviewers’ comment histories for sentiment analysis. J. Inf. Sci. 2014, 40, 313–328. [Google Scholar] [CrossRef]
- Catelli, R.; Pelosi, S.; Esposito, M. Lexicon-based vs. BERT-based sentiment analysis: A comparative study in Italian. Electronics 2022, 11, 374. [Google Scholar] [CrossRef]
- Arroni, S.; Galán, Y.; Guzmán Guzmán, X.M.; Núñez Valdéz, E.R.; Gómez Gómez, A. Sentiment analysis and classification of hotel opinions in twitter with the transformer architecture. Int. J. Interact. Multimed. Artif. Intell. 2023, 8, 53. [Google Scholar] [CrossRef]
- Schober, P.; Boer, C.; Schwarte, L.A. Correlation coefficients: Appropriate use and interpretation. Anesth Analg. 2018, 126, 1763–1768. [Google Scholar] [CrossRef] [PubMed]
- Gomaa, W.H.; Fahmy, A.A. A survey of text similarity approaches. Int. J. Comput. Appl. 2013, 68, 13–18. [Google Scholar]
- Qaiser, S.; Ali, R. Text mining: Use of TF-IDF to examine the relevance of words to documents. Int. J. Comput. Appl. 2018, 181, 25–29. [Google Scholar] [CrossRef]
- Reimers, N.; Gurevych, I. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP 2019), Hong Kong, China, 3–7 November 2019. [Google Scholar]
- Huang, D.; Wang, C.D.; Wu, J.S.; Lai, J.H.; Kwoh, C.K. Ultra-scalable spectral clustering and ensemble clustering. IEEE Trans. Knowl. Data Eng. 2019, 32, 1212–1226. [Google Scholar] [CrossRef]
- Hansen, P.C. The truncated SVD as a method for regularization. BIT Numer. Math. 1987, 27, 534–553. [Google Scholar] [CrossRef]
- Rousseeuw, P.J. Silhouettes: A graphical aid to the interpretation and validation of cluster analysis. J. Comput. Appl. Math. 1987, 20, 53–65. [Google Scholar] [CrossRef]
- Van der Maaten, L.; Hinton, G. Visualizing data using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605. [Google Scholar]
- Kumar, B.; Badiger, V.S.; Jacintha, A.D. Sentiment Analysis for Products Review based on NLP using Lexicon-Based Approach and Roberta. In Proceedings of the 2024 International Conference on Intelligent and Innovative Technologies in Computing, Electrical and Electronics (IITCEE), Bangalore, India, 24–25 January 2024; pp. 1–6. [Google Scholar]
- Alatrash, R.; Priyadarshini, R. Fine-grained sentiment-enhanced collaborative filtering-based hybrid recommender system. J. Web Eng. 2024, 22, 983–1035. [Google Scholar] [CrossRef]
- Sharma, D.; Hamed, E.N.; Akhtar, N.; Vignesh, G.; Thomas, S.A.; Sekhar, M. Next-Generation NLP Techniques: Boosting Machine Understanding in Conversational AI Technologies. J. Comput. Anal. Appl. 2024, 33, 100–109. [Google Scholar]
- Verma, D.; Dewani, P.P.; Behl, A.; Pereira, V.; Dwivedi, Y.; Del Giudice, M. A meta-analysis of antecedents and consequences of eWOM credibility: Investigation of moderating role of culture and platform type. J. Bus. Res. 2023, 154, 113292. [Google Scholar] [CrossRef]
- Li, X.; Ma, B.; Chu, H. The impact of online reviews on product returns. Asia Pac. J. Mark. Logist. 2018, 33, 1814–1828. [Google Scholar] [CrossRef]
- Sun, B.; Kang, M.; Zhao, S. How online reviews with different influencing factors affect the diffusion of new products. Int. J. Consum. Stud. 2023, 47, 1377–1396. [Google Scholar] [CrossRef]
- Hair, M.; Ozcan, T. How reviewers’ use of profanity affects perceived usefulness of online reviews. Mark. Lett. 2018, 29, 151–163. [Google Scholar] [CrossRef]
- Luo, C.; Luo, X.R.; Xu, Y.; Warkentin, M.; Sia, C.L. Examining the moderating role of sense of membership in online review evaluations. Inf. Manag. 2015, 52, 305–316. [Google Scholar] [CrossRef]
- Bi, S.; Liu, Z.; Usman, K. The influence of online information on investing decisions of reward-based crowdfunding. J. Bus. Res. 2017, 71, 10–18. [Google Scholar] [CrossRef]
- Janze, C.; Siering, M. ‘Status Effect’ in User-Generated Content: Evidence from Online Service Reviews. In Proceedings of the 2015 International Conference on Information Systems: Exploring the Information Frontier (ICIS 2015), Fort Worth, TX, USA, 13–16 December 2015; pp. 1–15. [Google Scholar]
- Chatterjee, S.; Chaudhuri, R.; Kumar, A.; Wang, C.L.; Gupta, S. Impacts of consumer cognitive process to ascertain online fake review: A cognitive dissonance theory approach. J. Bus. Res. 2023, 154, 113370. [Google Scholar] [CrossRef]
- Campagna, C.L.; Donthu, N.; Yoo, B. Brand authenticity: Literature review, comprehensive definition, and an amalgamated scale. J. Mark. Theory Pract. 2023, 31, 129–145. [Google Scholar] [CrossRef]
- Xu, C.; Zheng, X.; Yang, F. Examining the effects of negative emotions on review helpfulness: The moderating role of product price. Comput. Hum. Behav. 2023, 139, 107501. [Google Scholar] [CrossRef]
- Luo, L.; Liu, J.; Shen, H.; Lai, Y. Vote or not? How language mimicry affect peer recognition in an online social Q&A community. Neurocomputing 2023, 530, 139–149. [Google Scholar]
Study | Year | Gap(s) | Goal(s) | Method(s)
---|---|---|---|---
[2] | 2022 | Lack of focus on info quality; difficulty measuring fragmented info | Study impact of online reviews; explore mediating/moderating roles | Smart PLS analysis; web-based experiment and survey
[7] | 2023 | Lack of focus on text quality; limited research on coherence | Study effect of incentives on text quality; explore coherence and aspect richness | Two-way fixed-effect model; randomized MTurk experiment
[8] | 2018 | Limited studies on influencer marketing; few works on incentivized reviews | Study effect of incentivized reviews; analyze reviewer motivations | Qualitative and quantitative analyses; content analysis and surveys
[9] | 2022 | Lack of focus on eWOM trust; limited exploration of norms conflict | Study impact of incentives on trust; explore mediating role of norms conflict | Three experiments; bootstrap analysis
[10] | 2018 | Limited studies on incentives vs. norms; lack of combined-strategy research | Study effect of incentives and norms; examine their joint impact on reviews | Two randomized experiments; econometric analysis
[11] | 2019 | Lack of study on identifying incentivized reviews from text | Predict incentivized reviews; explore text features and sentiment | Decision trees (C5.0, C&RT); random forest; sentiment analysis
[12] | 2021 | Limited focus on content positivity; lack of review-writing enjoyment data | Study incentives’ impact on review positivity; examine enjoyment of review writing | Seven controlled experiments; NLP and human judgment analysis
[13] | 2020 | Lack of studies on incentivized reviews in the hotel sector | Detect incentivized reviews; perform sentiment analysis | Random forest, KNN, SVM; sentiment analysis (VADER)
[14] | 2020 | Lack of research on reevaluation mechanisms in incentives | Study how reevaluation-based incentives affect reviewer behavior | Propensity score matching (PSM); difference-in-differences (DID)
[15] | 2020 | Lack of studies on incentive effects on review valence | Investigate psychological effects of incentives on review valence | Pilot study and two experiments; content analysis
[16] | 2022 | Limited research on mandatory vs. voluntary disclosure effects | Compare mandatory and voluntary disclosures on review bias | Propensity score matching; sentiment analysis
[38] | 2023 | Lack of effective reward mechanisms for objective feedback | Propose SRA to incentivize objective, truthful evaluations | Square-root agreement rule (SRA); numerical experiments
[42] | 2019 | Lack of quantitative study on incentivized reviews’ prevalence | Detect and characterize incentivized reviews on Amazon | Machine learning classification; regular expression patterns
[46] | 2024 | Limited frameworks for SMEs on fraudulent review detection | Develop RAaaS framework for SMEs to detect fake reviews | Cloud-based framework; NLP; sentiment analysis; unsupervised learning
[47] | 2023 | Lack of empirical study on disclosure effectiveness | Investigate whether incentivized review disclosures protect consumers | Difference-in-differences (DID); regression analysis
[50] | 2023 | Lack of studies on deceptive reviews’ impact on purchase decisions | Study how deceptive reviews affect consumer purchase decisions | Questionnaire survey; empirical analysis using SPSS
[54] | 2021 | Lack of focus on influencer motives for accepting incentives | Examine how acceptance motives affect follower reactions | Survey study and experiments; field study with blog data
[57] | 2021 | Lack of focus on review richness impacts | Investigate the impact of review richness on sales | Regression models; online experiments
[61] | 2022 | Lack of clarity on the impact of incentivized reviews | Investigate the impact of incentives on review quality | Sentiment analysis; A/B testing; similarity analysis
[62] | 2019 | Limited study on joint sentiment-topic models | Investigate how numerical and textual reviews affect sales | Joint sentiment-topic model; mediation analysis
[63] | 2022 | Insufficient labeled data for opinion spam detection | Develop new opinion spam detection using few-shot learning | Machine learning; network algorithms; belief propagation
[64] | 2023 | Limited accuracy of PLTS in sentiment analysis | Develop a deep learning approach for PLTS generation | Deep learning; sentiment analysis; PLTS
[65] | 2024 | Lack of comparative study on sentiment analysis methods | Compare ML, DL, and Transformer-based sentiment models | NLP; BERT; CNN; Bi-LSTM; random forest; TF-IDF
[66] | 2024 | Limited empirical study on moral frameworks in fake reviews | Investigate how incentives affect dishonest reviews; identify moral heuristics involved | Survey with hypothetical scenarios; philosophical moral-framework measure
Study | Finding(s) | Contribution(s) | Limitation(s)
---|---|---|---
[2] | Info quality improves trust; social presence improves trust; positive reviews drive intention | Insights on trust and intention; extends S-O-R to online reviews | Sample mostly Chinese students; no time dimension considered
[7] | Incentives improve text coherence; aspect richness increases with incentives | Insights into text quality improvements; encourages detail-rich reviews | Limited to the Amazon Vine program; data until August 2015 only
[8] | Incentivized reviews boost review numbers; positive reviews increase purchase potential | Applies exchange theory to reviews; insights on influencer marketing effects | Limited generalizability across platforms; focused on one product category
[9] | Incentives lower trust in eWOM; high-quality reviews boost trust | Insights on trust restoration; concrete strategies for eWOM management | Focused only on monetary incentives; only positive reviews analyzed
[10] | Incentives drive review volumes; norms lengthen reviews | Insights on incentives and norms; combines social and financial incentives | Limited to specific retail contexts; limited generalizability across platforms
[11] | Incentivized reviews are longer; positive sentiment is higher | Text-mining model for detection; practical rules to spot bias | Limited to two product categories; assumed disclaimers may miss bias
[12] | Incentives increase review positivity; incentives boost review-writing enjoyment | Highlights the role of enjoyment in review writing; extends literature on incentives and reviews | Limited to short-term incentives; only online reviews considered
[13] | Random forest reaches 94.4% accuracy; VADER performs well for polarity | Provides a methodology for detecting incentivized hotel reviews | Limited to hotel reviews; small sample size
[14] | Reviewers increase review frequency and quality in the short term | Shows the long-term impact of reevaluation on content quality | Focused on Yelp Elite Squad only; limited geographic scope
[15] | Incentives increase review numbers; psychological costs reduce review valence | Explores reciprocity and resistance; highlights unintended effects of incentives | Limited to monetary incentives; potential bias in the participant sample
[16] | Mandatory disclosure reduces bias; voluntary disclosure increases ratings | Highlights the importance of mandatory disclosure for consumer trust | Focused only on the Amazon platform; limited generalizability
[38] | SRA incentivizes truthful behavior; effective in homogeneous settings | Proposes SRA as a new reward mechanism for online platforms | Limited to objective feedback; assumes homogeneous responses
[42] | EIRs show different patterns; EIRs affect non-EIR submissions | Quantitative analysis of EIRs; temporal analysis of EIRs | Limited to two product categories; focused on Amazon only
[46] | Fake reviews affect rankings; fake reviews are shorter; emotional bias in fake reviews | Cost-effective review analytics for SMEs; insights into the characteristics and patterns of fake reviews | Limited to English reviews; focused on two datasets
[47] | Disclosure does not remove inflation; sales increase despite disclosure | Highlights limitations of disclosure; proposes an alternative (platform-initiated IR) | Limited to the Amazon platform; time constraints on post-policy data
[50] | Perceived deception lowers trust; fake reviews affect purchase decisions | Insights into the impact of fake reviews on behavior | Small sample size; focused only on Taobao
[54] | Intrinsic motives mitigate negative effects on credibility | Shows the importance of motives in incentivized review acceptance | Limited to review and lifestyle influencers
[57] | Richer reviews boost sales; more impact on utilitarian products | Introduces review richness as a key factor in sales | Limited to the JD.com platform; focused on specific product categories
[61] | Incentives do not strongly impact overall review quality | Proposes evaluating multiple review dimensions for quality | Focused on software reviews; limited to G2 platform data
[62] | Textual reviews complement numerical ratings | Proposes a new model linking reviews to sales | Limited to tablet products; short time frame
[63] | CRSDnet outperforms other spam detection algorithms | Introduces CRSDnet, a novel spam detection method | Limited to Yelp datasets; not tested on other platforms
[64] | High prediction accuracy with the PLTS method | Introduces deep learning for PLTS generation | Limited to product reviews; focused on specific datasets
[65] | BERT achieved the highest sentiment analysis accuracy | Insight into the comparative performance of sentiment models | Limited to Amazon reviews; tested on limited product categories
[66] | Incentives increase fake reviews; utilitarian and egoism frameworks dominate | Shows the link between incentives and moral frameworks in reviews | Limited to food delivery platforms; focused on a single Indian city
Attribute | Incentivized | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
---|---|---|---|---|---|---|---|---|---|---|---|---
Overall rating | NoIncentive | - | 554 | 335 | 711 | 4037 | 11,623 | - | - | - | - | -
 | Incentive | - | 129 | 346 | 2162 | 11,328 | 18,773 | - | - | - | - | -
Value for money | NoIncentive | 2999 | 668 | 324 | 979 | 2930 | 9360 | - | - | - | - | -
 | Incentive | 7367 | 817 | 748 | 3283 | 7411 | 13,612 | - | - | - | - | -
Ease of use | NoIncentive | 909 | 505 | 413 | 1383 | 4395 | 9655 | - | - | - | - | -
 | Incentive | 43 | 320 | 985 | 4362 | 10,446 | 16,582 | - | - | - | - | -
Features | NoIncentive | 909 | 486 | 342 | 1442 | 4968 | 9413 | - | - | - | - | -
 | Incentive | 43 | 176 | 672 | 8821 | 11,723 | 16,303 | - | - | - | - | -
Customer support | NoIncentive | 3026 | 842 | 275 | 704 | 2276 | 10,047 | - | - | - | - | -
 | Incentive | 8667 | 471 | 823 | 3142 | 6540 | 13,095 | - | - | - | - | -
Likelihood to recommend | NoIncentive | 2372 | 124 | 120 | 118 | 86 | 803 | 331 | 995 | 2242 | 2857 | 7714
 | Incentive | 2184 | 119 | 225 | 309 | 367 | 1192 | 1495 | 3605 | 6288 | 6217 | 10,737
Number of Reviews by Year for Review Description
Incentivized | Sentiment | 2017 | 2018 | 2019 | 2020 | 2021
---|---|---|---|---|---|---
NoIncentive | Positive | 1207 | 3777 | 3712 | 2553 | 1878
 | Negative | 351 | 1013 | 1125 | 931 | 755
Incentive | Positive | 1027 | 8702 | 7500 | 3359 | 2251
 | Negative | 943 | 3301 | 2817 | 1732 | 1276

Number of Reviews by Product Usage Duration for Review Description
Incentivized | Sentiment | Free Trial | <6 months | 6–12 months | 1–2 years | 2+ years
---|---|---|---|---|---|---
NoIncentive | Positive | 611 | 4172 | 2543 | 2058 | 2751
 | Negative | 288 | 1386 | 918 | 700 | 1002
Incentive | Positive | 895 | 4553 | 5065 | 4207 | 5870
 | Negative | 492 | 1679 | 1747 | 1882 | 3056

Number of Reviews by Company Size (Number of Employees) for Review Description
Incentivized | Sentiment | Myself only | 1–10 | 11–50 | 51–200 | 201–500 | 501–1000 | 1001–5000 | 5001–10,000 | 10,001+
---|---|---|---|---|---|---|---|---|---|---
NoIncentive | Positive | 1452 | 3789 | 2843 | 1469 | 604 | 371 | 484 | 163 | 445
 | Negative | 474 | 1317 | 988 | 481 | 191 | 124 | 118 | 32 | 93
Incentive | Positive | 1782 | 5124 | 5270 | 3572 | 1652 | 1158 | 1812 | 367 | 805
 | Negative | 727 | 2065 | 2009 | 1382 | 577 | 369 | 444 | 142 | 265
Top 20 Words from Review Descriptions
Positive NoIncentive | Positive Incentive | Negative NoIncentive | Negative Incentive
---|---|---|---
great | use | use | use |
use | great | software | need |
good | good | need | CRM |
work | work | CRM | work |
business | need | work | software |
software | business | good | great |
help | team | time | tool |
team | well | business | good |
need | software | product | |
well | help | great | easy |
easy | make | help | make |
love | easy use | company | well |
CRM | client | one | company |
make | company | support | one |
easy use | tool | make | business |
tool | CRM | client | sale |
client | sale | easy | project |
support | project | help | |
company | easy | feature | time |
project | customer | system | client |
Top 20 Trigrams from Combined Strings
Trigrams in NoIncentive | Frequency | Trigrams in Incentive | Frequency
---|---|---|---
software easy use | 18.02 | project management tool | 14.34 |
would like see | 15.13 | software easy use | 13.07 |
sensitive content hidden | 13.39 | easy use easy | 11.95 |
easy use great | 11.58 | would like see | 11.14 |
easy use easy | 11.39 | project management software | 9.41 |
great customer service | 10.84 | easy use great | 9.29 |
project management tool | 9.62 | help keep track | 9.01 |
help keep track | 8.50 | use free version | 7.90 |
user-friendly easy | 8.44 | steep learning curve | 7.84 |
great customer support | 7.61 | user-friendly easy | 7.67 |
really easy use | 7.57 | super easy use | 7.08 |
save lot time | 7.39 | simple easy use | 7.06 |
everything one place | 7.34 | really easy use | 6.74 |
project management software | 7.08 | easy keep track | 6.70 |
would like able | 6.90 | bit learning curve | 6.58 |
software user-friendly | 6.85 | take time learn | 6.35 |
would highly recommend | 6.51 | would highly recommend | 6.28 |
customer service great | 6.48 | great project management | 6.15 |
customer service team | 6.47 | save lot time | 6.14 |
product easy use | 6.35 | customer relationship management | 5.95 |
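The trigram rankings above can be reproduced in spirit with scikit-learn's CountVectorizer. The sketch below shows one conventional approach; the sample reviews are invented, and the decimal values in the table suggest a normalization step (e.g., frequency per 1,000 reviews) that the sketch omits, since the paper does not specify it.

```python
# One conventional way to obtain top trigrams with scikit-learn.
# The sample reviews are invented; the paper's preprocessing may differ.
from sklearn.feature_extraction.text import CountVectorizer

reviews = [
    "software easy use would like see more reports",
    "project management tool help keep track easy use",
]

vec = CountVectorizer(ngram_range=(3, 3))            # trigrams only
counts = vec.fit_transform(reviews).sum(axis=0).A1   # total count per trigram
top20 = sorted(zip(vec.get_feature_names_out(), counts), key=lambda p: -p[1])[:20]
for trigram, freq in top20:
    print(f"{trigram}: {freq}")
```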
Attribute | Incentivized | Mean | Std Dev | Std Error | 5% Threshold | Observed Difference | Empirical p | Observed t-Value
---|---|---|---|---|---|---|---|---
Overall Rating | NoIncentive | 4.497 | 0.913 | 0.007 | 4.483–4.511 | 0.023 | 0.000 | −2.848
 | Incentive | 4.474 | 0.702 | 0.004 | 4.467–4.482 | | |
Value for Money | NoIncentive | 3.637 | 1.916 | 0.015 | 3.609–3.665 | 0.296 | 0.000 | −16.294
 | Incentive | 3.341 | 1.965 | 0.011 | 3.319–3.362 | | |
Ease of Use | NoIncentive | 4.133 | 1.350 | 0.010 | 4.113–4.153 | −0.146 | 1.000 | 12.774
 | Incentive | 4.279 | 0.890 | 0.005 | 4.269–4.288 | | |
Features | NoIncentive | 4.144 | 1.329 | 0.010 | 4.124–4.164 | −0.174 | 1.000 | 15.749
 | Incentive | 4.319 | 0.815 | 0.005 | 4.310–4.328 | | |
Customer Support | NoIncentive | 3.657 | 1.954 | 0.015 | 3.628–3.686 | 0.505 | 0.000 | −26.961
 | Incentive | 3.152 | 2.060 | 0.011 | 3.129–3.173 | | |
Likelihood to Recommend | NoIncentive | 7.666 | 3.431 | 0.026 | 7.615–7.717 | −0.177 | 1.000 | 5.880
 | Incentive | 7.843 | 2.685 | 0.015 | 7.813–7.872 | | |
Length | NoIncentive | 110.143 | 132.151 | 1.006 | 108.194–112.113 | 0.304 | 1.000 | −0.260
 | Incentive | 109.839 | 107.383 | 0.593 | 108.694–111.023 | | |
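The "Observed Difference" and "Empirical p" columns above come from resampling-based testing (Section 4.4.2). The sketch below shows a permutation-style empirical test for a difference in means under the null; the synthetic arrays only mimic the Overall Rating row, and the paper's exact bootstrap procedure may differ.

```python
# Permutation-style empirical test for a difference in means.
# The arrays are synthetic stand-ins; the paper's bootstrap scheme may differ.
import numpy as np

rng = np.random.default_rng(0)
organic = rng.normal(4.50, 0.91, 15_000)    # stand-in for NoIncentive overall ratings
incentive = rng.normal(4.47, 0.70, 32_000)  # stand-in for Incentive overall ratings

observed = organic.mean() - incentive.mean()
pooled = np.concatenate([organic, incentive])

n_iter = 10_000
null_diffs = np.empty(n_iter)
for i in range(n_iter):
    shuffled = rng.permutation(pooled)       # reassign group labels under H0
    null_diffs[i] = shuffled[:organic.size].mean() - shuffled[organic.size:].mean()

# One-sided empirical p: how often the null difference reaches the observed one.
p_value = np.mean(null_diffs >= observed)
print(f"observed difference = {observed:.3f}, empirical p = {p_value:.4f}")
```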
Query | Nature of Query | Query Text
---|---|---
Query 1 (Q 1) | Complex customer preferences | For my work I need the software to facilitate my work and give me the will to recommend that to others as I am frustrated with other software I have used. I need the software to work well, no matter if it is complex or not as I like challenges, with good CRM, and good customer support, has enough features and I can work with that on my phone. The price is not that important.
Query 2 (Q 2) | Moderate customer preferences | I need the product with good features, which has a low price, I can learn how to work with that fast and easily
Query 3 (Q 3) | Simple customer preferences | I need Good CRM
Query 4 (Q 4) | One NoIncentive Review | Surprised Franklin Covey would even advertise think the program would good could get work customer support beyond horrible there no pro point possibly layout great but would not know since can not get work tired sync w ical with no success when you call to support you route voice mailit take least hour someone calls you back in sale hour later not in my office in front computer etc work out issue |
Query 5 (Q 5) | Part of NoIncentive Review | Would not know since can not get workI tired sync w ical with no success when you call support you route voice mailit take least hour someone call you back in sale hour later not in my office in front computer etc work out issue |
Query 6 (Q 6) | Synonyms Replacement in Review | Astonished would even publicize think program would decent could get work customer provision yonder awful there no pro opinion perhaps design countless but would not know since can not get workI exhausted synchronize w l with no achievement when you call support you way voice mailit take smallest hour someone call you back in transaction hour later not in my office in forward-facing computer etc. work out problem |
Query | Model | Listing ID 1 | Similarity Score 1 | Listing ID 2 | Similarity Score 2 | Listing ID 3 | Similarity Score 3 | Listing ID 4 | Similarity Score 4 | Listing ID 5 | Similarity Score 5
---|---|---|---|---|---|---|---|---|---|---|---
Q 1 | TF-IDF | 113213 | 0.042 | 109395 | 0.029 | 10317 | 0.015 | 101405 | 0.015 | 119723 | 0.013
 | SBERT | 91179 | 0.862 | 9448 | 0.856 | 20406 | 0.852 | 10317 | 0.850 | 102533 | 0.848
Q 2 | TF-IDF | 90941 | 0.027 | 9908 | 0.005 | 102517 | 0.003 | 106331 | 0.002 | 10317 | 0.000
 | SBERT | 106331 | 0.844 | 102445 | 0.828 | 90844 | 0.826 | 9531 | 0.825 | 91196 | 0.824
Q 3 | TF-IDF | 102517 | 0.008 | 10317 | 0.000 | 90859 | 0.000 | 104247 | 0.000 | 106331 | 0.000
 | SBERT | 2046686 | 0.724 | 106331 | 0.702 | 2035403 | 0.695 | 9401 | 0.694 | 106331 | 0.693
Q 4 | TF-IDF | 90602 | 0.011 | 90859 | 0.007 | 9908 | 0.005 | 90507 | 0.004 | 10317 | 0.002
 | SBERT | 91203 | 0.920 | 113901 | 0.919 | 10317 | 0.914 | 91203 | 0.914 | 90602 | 0.913
Q 5 | TF-IDF | 10317 | 0.000 | 90859 | 0.000 | 104247 | 0.000 | 106331 | 0.000 | 90844 | 0.000
 | SBERT | 91203 | 0.916 | 2348 | 0.905 | 142099 | 0.892 | 104265 | 0.891 | 109561 | 0.891
Q 6 | TF-IDF | 91734 | 0.004 | 10317 | 0.000 | 90859 | 0.000 | 104247 | 0.000 | 106331 | 0.000
 | SBERT | 90602 | 0.913 | 90507 | 0.913 | 113901 | 0.911 | 2348 | 0.910 | 91203 | 0.907
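Both retrieval models in the table score query-to-review similarity with cosine similarity over different representations: sparse lexical TF-IDF vectors versus dense Sentence-BERT embeddings. The sketch below contrasts the two; the corpus is invented, and the SBERT checkpoint "all-MiniLM-L6-v2" is an assumption, as the paper does not name the model it used.

```python
# Query-vs-review similarity under the two representations compared above.
# The corpus is invented; the SBERT checkpoint is an assumed placeholder.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sentence_transformers import SentenceTransformer

reviews = [
    "Good CRM with responsive customer support and a mobile app.",
    "Steep learning curve but powerful project management features.",
]
query = "I need Good CRM"

# TF-IDF: sparse lexical vectors, ranked by cosine similarity.
tfidf = TfidfVectorizer()
doc_vecs = tfidf.fit_transform(reviews)
tfidf_scores = cosine_similarity(tfidf.transform([query]), doc_vecs)[0]

# Sentence-BERT: dense semantic embeddings, ranked the same way.
sbert = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = sbert.encode(reviews + [query])
sbert_scores = cosine_similarity([embeddings[-1]], embeddings[:-1])[0]

for review, t, s in zip(reviews, tfidf_scores, sbert_scores):
    print(f"TF-IDF={t:.3f}  SBERT={s:.3f}  {review}")
```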
Query | Model | Precision | Recall | F1-Score | Accuracy | Match Ratio | Mean Reciprocal Rank
---|---|---|---|---|---|---|---
Q 1 | TF-IDF | 1.000 | 0.020 | 0.038 | 0.995 | 1.000 | 1.000
 | SBERT | 1.000 | 0.020 | 0.038 | 0.995 | 1.000 | 1.000
Q 2 | TF-IDF | 1.000 | 0.020 | 0.038 | 0.995 | 1.000 | 1.000
 | SBERT | 1.000 | 0.020 | 0.038 | 0.995 | 1.000 | 1.000
Q 3 | TF-IDF | 1.000 | 0.020 | 0.038 | 0.995 | 1.000 | 1.000
 | SBERT | 1.000 | 0.019 | 0.038 | 0.995 | 1.000 | 1.000
Q 4 | TF-IDF | 1.000 | 0.020 | 0.038 | 0.995 | 1.000 | 1.000
 | SBERT | 1.000 | 0.019 | 0.038 | 0.995 | 1.000 | 1.000
Q 5 | TF-IDF | 1.000 | 0.020 | 0.038 | 0.995 | 1.000 | 1.000
 | SBERT | 1.000 | 0.020 | 0.038 | 0.995 | 1.000 | 1.000
Q 6 | TF-IDF | 1.000 | 0.020 | 0.038 | 0.995 | 1.000 | 1.000
 | SBERT | 1.000 | 0.020 | 0.038 | 0.995 | 1.000 | 1.000