Article

Significant Labels in Sentiment Analysis of Online Customer Reviews of Airlines

by Ayat Zaki Ahmed and Manuel Rodríguez-Díaz *

Department of Economics and Business, Faculty of Economy, Business and Tourism, University of Las Palmas de Gran Canaria, 35001 Las Palmas, Spain

* Author to whom correspondence should be addressed.
Sustainability 2020, 12(20), 8683; https://doi.org/10.3390/su12208683
Submission received: 30 August 2020 / Revised: 17 October 2020 / Accepted: 18 October 2020 / Published: 20 October 2020
(This article belongs to the Special Issue Online Reputation and Sustainability)

Abstract

Sentiment analysis is becoming an essential tool for analyzing the contents of online customer reviews. This analysis involves identifying the labels needed to determine whether a comment is positive, negative, or neutral, and the intensity with which the customer’s sentiment is expressed. Based on this information, service companies such as airlines can design and implement a communication strategy to improve their customers’ image of the company and the service received. This study proposes a methodology to identify the significant labels that represent customers’ sentiments, based on a quantitative variable, the overall rating. The key labels were identified in the titles of the comments, which usually contain the words that best define the customer experience. This label database was then applied to the full online customer reviews in order to validate that the identified tags are meaningful for assessing the sentiments expressed in them. The results show that the labels derived from the titles are valid for analyzing the sentiments in the comments, thus simplifying the set of labels to be taken into account when carrying out a sentiment analysis of customers’ online reviews.

1. Introduction

Communication between companies and clients increasingly takes place through user-generated content (UGC) on social media and specialized websites [1]. The online opinions expressed by customers on TripAdvisor, Expedia, Facebook, Instagram, or Twitter influence the reputation and brand image of service companies. Customers share with other users the experiences related to the service they have received. In this context, analyzing the online content shared by customers is essential in order to implement an effective communication strategy. Sentiment analysis comprises different methodologies for evaluating the meaning of online comments [2,3,4], so that steps can be taken to increase customer loyalty. It involves designing automated learning models that make it possible to assess whether the sentiments communicated by clients are positive, negative, or neutral, and their degree of intensity [5,6,7,8]. The aim is to create and implement machine learning and artificial intelligence methodologies that help to manage the large amount of data generated on the Internet in exchanges between clients and service companies [9]. As service companies, airlines are exposed to a constant flow of information transmitted by their customers, which directly influences potential customers. Therefore, they need methods that speed up effective online communication with their customers [10].
Online reputation is the basis for the different research lines being developed to improve knowledge and provide useful tools for airlines to better understand their clientele’s preferences and the competitiveness of their service offerings [11,12]. The objective is for the image transmitted by companies on the Internet through social media and specialized websites to correspond to the service perceived by customers [13]. The online reputation evaluated by customers on quantitative scales or in written comments about their experiences must correspond with what customers actually perceive while receiving the service. Hence, service companies, such as hotels and airlines, attach great importance to this spontaneous communication on the Internet, either to counteract a negative opinion or to thank customers and encourage them to contribute their assessments [14,15,16,17,18]. From this perspective, online customer ratings are an agile and up-to-date method for measuring service companies’ performance, since they provide specific quantitative data on service quality attributes or express sentiments and emotions that facilitate the evaluation of the degree of customer satisfaction [19,20,21,22,23,24,25,26].
Customers can evaluate the service received from an airline and the satisfaction achieved on a specialized website: TripAdvisor, which is currently the most important specialized website collecting quantitative and qualitative information. Quantitative information includes the rating, which is a general evaluation of the service quality perceived by customers [26,27]. Service quality is a construct that focuses on internal and external processes that require a specific quantitative assessment of the attributes involved [11,28]. In contrast, satisfaction is a concept that is closely linked to the emotions and sentiments experienced by the client while receiving the contracted service. Therefore, the degree of satisfaction is a more qualitative construct that must be extracted and interpreted from clients’ comments [29]. The content analysis of online comments written by customers is designed to assess their sentiments in order to allow service companies to implement effective and personalized communication [3].
This study aims to develop an effective intelligent model, built from unstructured data, that measures the overall rating on the basis of passengers’ feelings and transforms it into managerial insights. Our study proposes a methodology to detect the essential labels that determine customers’ sentiments in online comments about an airline. The tags are obtained from the titles of the comments available on TripAdvisor, which are short phrases that summarize the customers’ assessment. These tags are applied to the comments to define the customers’ sentiments in terms of their direction (positive, negative, or neutral) and intensity. The purpose of this study is to provide valuable insights into the various attributes that impact passenger satisfaction, based on online overall ratings from TripAdvisor. From a managerial point of view, the published ratings of a service or product’s quality have high economic significance, with an important strategic and long-term financial impact [30,31,32,33,34,35,36,37,38]. Additionally, we show how airlines can use these unstructured data to obtain a better perception of their market positioning. The objective of this study is to determine the key labels related to the overall rating evaluations clients make in their online reviews. To achieve this objective, a literature review is carried out first. The methodology is then presented, describing the different stages of developing the machine learning approach. The results, based on a multiple regression analysis, are presented in the following section, and the paper closes with the main conclusions and limitations of the study.

2. Literature Review

Establishing the sense of the feelings expressed in online comments is a more complicated task than defining the attributes customers relate to in their ratings [39]. It is relatively easy to determine whether customers are commenting on check-in or the airline’s on-board personnel. However, it is much more subjective and complex to establish whether the rating is positive or negative and to what degree [40,41,42,43]. Sentiment analysis applied to airlines is a tool to evaluate customers’ written assessments of the service received in order to assign them to a particular sentiment class [5,44,45,46,47,48]. The methodologies for classifying feelings differ depending on the classes to which they are allocated [49]. Some choose a more straightforward classification that only distinguishes between positive and negative. Other methods introduce neutral feelings, insofar as these do not express an emotion marked by an excellent service rating or a negative rating. Finally, some methodologies extend the classification to as many as five classes, making it much more challenging to evaluate the clients’ subjective assessments.
It is much easier to determine whether a label expresses a positive or negative feeling than to establish how positive or negative the feeling is. From this perspective, sentiment analysis also integrates the degree of intensity of the clients’ assessments. Two approaches have been applied in sentiment analysis [50]. One approach focuses on identifying the labels that represent feelings, referred to as the bag-of-words model. In this case, a vector of variables is created for each of the texts to be analyzed, and these vectors are fed into machine learning using various classification models, such as Naïve Bayes [51], maximum entropy [52], K-nearest neighbor [53], decision tree [54], and support vector machine [55]. The other method is the word embedding technique, which uses neural networks to learn representations based on the similarity of the words used in the clients’ comments [56,57].
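As an illustration of the bag-of-words approach just described, the short sketch below turns a few invented review snippets into word-count vectors and trains a Naïve Bayes classifier; the example texts, class names, and the use of scikit-learn are our own assumptions, not part of the study.

```python
# Bag-of-words sentiment classification sketch (illustrative data only).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

reviews = [
    "excellent crew and comfortable seats",
    "terrible delay and rude staff",
    "flight on time, nothing remarkable",
]
sentiments = ["positive", "negative", "neutral"]

# CountVectorizer builds the word-count vectors; MultinomialNB classifies them.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(reviews, sentiments)

print(model.predict(["rude crew and long delay"]))  # likely ['negative']
```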
In the airline industry, content analysis is used to determine customers’ satisfaction through their online comments [58,59,60]. Online customer ratings influence potential customers’ purchasing decisions through social media [61,62,63,64,65]. A line of research in airlines has focused on perceived service quality evaluated through quantitative variables, analyzing its impact on customer satisfaction and brand image [1,66,67,68,69,70,71]. Another aspect of online reputation is developed through written customer feedback. This information is used to assess the level of customer satisfaction, which is a concept related to a customer’s experiences and emotions with an airline [72]. Therefore, companies have to make decisions about communication with their customers based on quantitative and qualitative information [73]. Table 1 presents tourism studies that use the overall rating to conduct empirical research. The companies linked to tourism are accommodations, restaurants, travel agencies, tour operators, and airlines. These service companies have various specialized websites where customers can share their ratings and opinions, such as TripAdvisor, Booking, or Facebook. This work uses quantitative information, such as the general rating, to identify and evaluate customer feelings.

3. Research Methodology

Sentiment analysis is usually performed through machine learning. Here, a machine learning methodology based on multiple regression analysis is proposed that uses quantitative information offered by TripAdvisor, such as the rating, to measure the relationship with the identified labels. The proposed methodology therefore combines a quantitative variable, the general rating, with multiple qualitative variables, which are the labels customers use to communicate their sentiments. To carry out this analysis, the qualitative label variables are converted into dichotomous variables (0, 1) so that each comment becomes a vector. With these data, a multiple regression analysis is carried out where the dependent variable is the general rating and the independent variables are the identified labels, as formalized below.
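In model form, the regression described above can be summarized as follows, where y_i is the overall rating of review i and x_ij indicates whether label j appears in that review (the notation is ours, introduced only to restate the text):

```latex
y_i = \beta_0 + \sum_{j=1}^{p} \beta_j x_{ij} + \varepsilon_i,
\qquad x_{ij} \in \{0, 1\}, \quad y_i \in \{1, \dots, 5\}
```

A label with a positive coefficient pushes the predicted rating above the constant, and a label with a negative coefficient pulls it below.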
To achieve this goal, it is necessary to follow a sequence of steps (see Figure 1). The first step is to create an initial database of labels with all the words found in the titles of the customers’ online comments. In the second step, the tags are cleaned by eliminating those that do not offer direct information about the feelings expressed, such as the articles “the” or “a” or commonly used verbs such as “to be” or “to have”. In the third step, the possibility of simplifying this database by reducing the number of labels that share the same root through lemmatization is evaluated. Thus, plurals are eliminated, as are the verb tenses of regular verbs. A sketch of these first steps is shown below.
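A minimal sketch of these first two steps is given below, assuming a small hand-picked Spanish stop-word list; the authors used purpose-built software, so the names and details here are illustrative only.

```python
# Extract candidate labels from review titles and drop uninformative words.
import re
from collections import Counter

STOPWORDS = {"el", "la", "los", "las", "un", "una", "de", "y", "en",
             "que", "con", "por", "para", "ser", "estar", "haber"}

def title_labels(titles):
    """Count candidate labels across all review titles."""
    counts = Counter()
    for title in titles:
        words = re.findall(r"[a-záéíóúüñ]+", title.lower())
        counts.update(w for w in words if w not in STOPWORDS)
    return counts

print(title_labels(["Un vuelo perfecto", "Retraso y personal maleducado"]))
# Counter({'vuelo': 1, 'perfecto': 1, 'retraso': 1, 'personal': 1, 'maleducado': 1})
```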
The next step is to create a numerical database with the rating variable and the dichotomous variables (0, 1) of all the defined labels in comments or titles of online customer reviews. This transformation leads to the next step, which is the regression analysis. Then, the model’s robustness is evaluated, and the essential labels are extracted depending on whether they have a significant relationship with the rating. Finally, a database of the significant labels is generated, where the sign and intensity of their statistical relationship with the general rating variable are determined. Thus, a specific lexicon of the airline’s customers is generated with the tags that they usually use and that predict a positive or negative evaluation of their sentiments about the service received. This proposed process can be updated continuously as the airline receives a relevant number of new comments.
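The numerical database described above can be sketched as follows, with one row per review, the overall rating, and one 0/1 column per label; the pandas representation and column names are our illustration (the authors’ software produced an Excel file).

```python
# Build the rating + dichotomous-label matrix used for the regression.
import pandas as pd

def build_design_matrix(comments, ratings, labels):
    """One row per review: the overall rating plus a 0/1 dummy per label."""
    rows = []
    for text, rating in zip(comments, ratings):
        text = text.lower()
        row = {"rating": rating}
        row.update({label: int(label in text) for label in labels})
        rows.append(row)
    return pd.DataFrame(rows)

labels = ["perfecto", "retraso", "maleducado"]
comments = ["Vuelo perfecto, sin retraso", "Retraso enorme y trato maleducado"]
data = build_design_matrix(comments, [5, 1], labels)
print(data)
#    rating  perfecto  retraso  maleducado
# 0       5         1        1           0
# 1       1         0        1           1
```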
In this research, these steps were followed, starting with obtaining 5278 online opinions about the airline Iberia published on TripAdvisor, all of which were available on the web in Spanish. The information collected included the overall rating, a variable with five alternatives ranging from 1 for poor service to 5 for excellent service. Although the online comments analyzed are in Spanish, the proposed data processing methodology can easily be applied to other languages such as English or French. Another piece of data obtained was the title of the comment, where the customers express their feelings in a short sentence. Finally, the comment itself, where customers relate their experiences and emotions about the airline’s service in greater detail, was also entered into the database.
The next step was to build a database of all the words used in all the online comment titles. For this purpose, a program was developed that created a database with each of the words used in the titles. The titles were used to obtain the labels because they are short and customers have to express their sentiments about the service received from the airline briefly. If the labels were created from the comments instead, their number would increase considerably, making the statistical analyses more difficult. In this context, if this study demonstrates that title tags can be used to perform sentiment analysis of the comments, it will represent a step forward in research in this field by significantly simplifying the number of tags to be evaluated. The words in the titles were refined to eliminate terms that do not influence the customers’ sentiment, such as articles, certain verbs, or pronouns. This produced the basic tags used in the research. Once the initial database was cleaned up, 2567 labels were obtained. To reduce this number, the labels were lemmatized. Several programs perform this function in English, but because all the texts are in Spanish, we decided to develop specific software for it. A minimum root length of six letters was specified so that labels could be detected by their roots without confusing very short strings that share letters but have different meanings. Thus, if a tag had fewer than six letters, the complete tag was searched for, whereas if it had six or more letters, its root was searched for, as sketched below. This process reduced the pool to 1523 labels that were later used in the regression analysis.
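Our reading of this six-letter rule is sketched below; the exact matching logic of the authors’ software is not published, so this should be treated as an assumption.

```python
# Root-based label matching: tags shorter than six letters must match whole;
# tags with six or more letters are matched by their six-letter root.
def label_matches(label, word, min_root=6):
    if len(label) < min_root:
        return word == label                    # short tag: exact match only
    return word.startswith(label[:min_root])    # long tag: shared root

print(label_matches("perfecto", "perfectos"))  # True  (shared root "perfec")
print(label_matches("malo", "maleta"))         # False (short tag, no exact match)
```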
The next step was to build the numerical database to be used for the multiple regression analysis. The first variable was the overall rating, which, as indicated above, is quantitative and takes values from 1 to 5, depending on the degree of customer satisfaction with the service received. The remaining variables are the 1523 dichotomous labels, where 0 means that the label does not appear in the customer’s comment and 1 means that it is used by the customer to express the rating. To prepare this database, it was necessary to develop software whose output was in Excel format. These data were processed with the statistical program SPSS in order to carry out the multiple regression. The dependent variable is the general rating, whereas the independent variables are the defined labels. The outputs of this program are the adjusted R-squared, which measures the robustness of the model, and the coefficients with their significance levels. With these outputs, the labels that are significantly related to the general rating are determined and can be considered key labels for measuring customers’ feelings. Likewise, the sign and intensity of the sentiments are established by the coefficients, which can be positive or negative, with a value that determines the degree of relationship with the general rating. If the model obtains a high adjusted R-squared, it will demonstrate that the labels extracted from the titles of the comments can be used to establish the sentiments reflected in the online comments.
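The regression step can also be reproduced outside SPSS; the sketch below uses statsmodels (our substitution) to obtain the same kind of output: the adjusted R-squared, the coefficients, and the p-values used to keep the significant labels.

```python
# Fit rating ~ labels by OLS and keep the labels significant at the chosen level.
import statsmodels.api as sm

def significant_labels(data, alpha=0.10):
    y = data["rating"]
    X = sm.add_constant(data.drop(columns="rating"))
    model = sm.OLS(y, X).fit()
    print("Adjusted R-squared:", model.rsquared_adj)
    significant = model.pvalues[model.pvalues < alpha].index
    return model.params[significant]  # coefficients: sign and intensity

# With the full data (5278 reviews x 1523 label dummies) this would return the
# key labels and their coefficients; the toy `data` above is far too small
# for a meaningful fit.
# print(significant_labels(data))
```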

4. Analysis of Results

4.1. Regression Analysis

Quantitative and qualitative data from the online customer feedback, organized into vectors, were entered into a multiple regression analysis. The dependent variable is the overall rating, whereas the independent variables are the defined labels. Table 2 compares the results obtained in the regression with all the tags and in the regression with the tags’ roots, and shows how the number of labels is reduced to reach the significant labels for measuring sentiment based on the overall rating. When the regression model was estimated with all the labels after cleaning, the adjusted R-squared was 0.614, which is high for this type of study. When the labels were lemmatized, the adjusted R-squared fell to 0.579, which is still high. Therefore, to simplify the study of the key labels, performing the regression with the tags’ roots was justified. The final result of this regression was 295 significant labels that best define the customers’ sentiments because they were significantly related to the overall rating at the 5% and 10% levels.
The multiple regression analysis results are presented in Table 3, where only the labels significant at the 5% and 10% levels are displayed. Two hundred ninety-five tags exhibit a significant relationship with the overall rating, allowing us to assess customers’ sentiments about the quality of service received from the airline. The sign of the coefficient determines whether the relationship is direct or inverse, and the value of the coefficient indicates the intensity with which each label is related to the rating. The model constant, which reaches a significant value (p < 0.05) of 3.679, is particularly noteworthy. This means that the coefficients of the labels found in a comment are added to or subtracted from this constant to predict the overall rating.
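As a worked illustration of how the constant and the coefficients combine (the coefficients are values reported in this section; the example comments are hypothetical):

```python
# Predicted rating = constant + sum of coefficients of the labels present.
CONSTANT = 3.679
COEFFICIENTS = {"unbeatable": 0.990, "rude": -0.966, "uncomfortable": -0.314}

def predict_rating(labels_in_comment):
    return CONSTANT + sum(COEFFICIENTS.get(l, 0.0) for l in labels_in_comment)

print(predict_rating(["unbeatable"]))             # 3.679 + 0.990 = 4.669
print(predict_rating(["rude", "uncomfortable"]))  # 3.679 - 0.966 - 0.314 = 2.399
```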
Among the labels with a positive coefficient, coordination stands out, with a coefficient of 2.245 (p < 0.05), as well as clause (1.824), unbeatable (0.990), perfection (0.633), or affordable (0.832), to give some examples. It should be taken into account that the number of words used in the comments is much higher than in the titles, so some labels that express neutral or negative sentiments can appear with positive coefficients. This is because they appear very rarely in the comments, and other tags in those comments carry a different meaning. For example, the doubt label appears with a positive coefficient of 1.489 (p < 0.05). By itself, it expresses a negative sentiment, but if it is embedded in the expression “do not doubt it”, for example, its meaning changes to a positive sentiment on the part of the client who has had this experience.
In contrast, some labels show negative sentiments, including roadkill with a coefficient of −2.124, indignation (−1.513), tablets (−1.058), irresponsible (−1.038), rude (−0.966), or minuscule (−0.914). There is also a label related to an attribute of the service, as in the case of check-in, which obtains a coefficient of −0.810, showing that it is an aspect of the airline that customers value negatively. Some tags express positive sentiments but appear with negative coefficients due to the context of the sentence in which they are found. Examples are the labels exclusive (−0.861) and sensitivity (−0.965), which must be accompanied by negative words that change their meaning. A figurative label that expresses a feeling is sardines (−0.758), which is usually used when passengers feel overcrowded. Other terms used to communicate negative feelings are swindle (−0.746), badly (−0.346), uncomfortable (−0.314), bad (−0.305), scarce (−0.281), or disappointment (−0.263). When these adjectives appear in a comment, they indicate a negative sign in the feelings communicated by the clients.
Labels with coefficients close to zero can be assumed to describe neutral sentiments. In other words, when such a label appears, its value will hardly modify the regression constant. From this perspective, it is plausible to say that these labels manifest a neutral sentiment. This is the case of the entertainment label, which has a coefficient of 0.086. Other similar cases are the labels without (−0.077), normal (−0.085), passenger (−0.086), or hours (−0.096). Table 3 shows that other labels with coefficients below 0.2 can be found, and these would also be considered neutral. However, the positive or negative sign marks a trend in customer sentiment.

4.2. Discussion of Results

The present study has validated the use of the labels extracted from the titles of comments to determine customers’ sentiments in their full comments. The titles are short phrases that synthesize the customer’s experience with the airline, whereas the comments are longer texts where mixed feelings can be collected. Therefore, a comment may contain positive impressions next to words that show negative sensations. The problem is to determine how these sentiments with different signs influence the final evaluation of the overall rating. In this context, the rating is a quantitative variable with five alternatives that synthesizes the experience of the passengers of an airline or service company. Hence, this dimension is of great importance in measuring the sign and intensity of the labels customers use to express their feelings.
Not only is it necessary to determine whether the sentiments are positive, negative, or neutral, but also to assess their intensity. Generally, studies on sentiment analysis use generically developed lexicons to evaluate the sign, and sometimes the intensity, of clients’ emotions and experiences through open structured models [75,77,79]. However, language is a living reality that can vary according to geographical areas, time, and cultural backgrounds. English-speaking customers tend to post higher ratings than non-English-speaking customers [95]. Moreover, the terms used to evaluate specific services may become more specific over time, creating a flexible, adaptable, and specific lexicon for each service or company. This research is carried out from this perspective, in order to propose a methodology with which each airline can develop, assess, and test the key labels for knowing its customers’ sentiments. At present, this is an essential tool in companies’ communication strategy because the majority of communication is carried out through the Internet and spontaneously through social media.
The methodology proposed to develop machine learning involves obtaining the information and creating the label databases. This study shows that the labels can be simplified by considering only their roots because the adjusted R-squared, although somewhat lower than that obtained with all the labels, remains significantly high. Moreover, the loss is minimal considering that about one thousand tags are removed from the statistical analyses. The study also demonstrates that the labels obtained from the titles are valid for determining the relationship between the contents of the comments and the general rating. This is especially useful because the number of tags obtained from the comments would be much higher, making the regression analysis more complicated. Therefore, this study reveals that the labels can be simplified to establish the customers’ sentiments in their online ratings, with 295 key labels identified as having a significant relationship with the overall rating.
It is more complicated to read the coefficients: labels with a positive sign and a high coefficient show positive sentiments, whereas labels with a negative sign and a high coefficient identify negative sentiments. However, labels that obtain coefficients close to zero, either positive or negative, can indicate a neutral feeling. In this context, the regression constant obtained a value of 3.679, which indicates that ratings around this value report a neutral assessment by customers. Given that the average value of TripAdvisor’s scale is three, and that the regression constant is more than 20% higher, greater values mean that customers assess the airline’s service positively. Furthermore, the airline should consider any rating below 4 to be a non-positive rating. Therefore, any label that has obtained a coefficient close to 0 signifies that the rating of a customer using that label is close to the constant, which is a neutral value.
The results obtained show that multiple regression analysis is valid for developing machine learning for customer sentiment analysis. It has an advantage over models based on neural networks, which only report a percentage of success in the prediction and do not provide information about the key labels to follow to detect customers’ sentiments, or the level of intensity with which these sentiments are expressed. In this context, tags that reflect the same feeling, whether positive or negative, may vary in intensity because a word is not the same as its synonym. Regression analysis against quantitative feedback, such as the overall rating, facilitates this task and helps to decipher the more emotional communication an airline has with customers through written texts. Therefore, one of the fundamental contributions of this study is that it demonstrates the need for a quantitative reference in order to identify a lexicon of labels with their corresponding sentiments and develop effective communication with airline customers. From this perspective, general rating predictions can be made based on the vocabulary customers use in their comments. This is a strategic aspect in developing and applying artificial intelligence systems to communications between airlines, or other service companies, and their customers.

5. Conclusions

The main conclusion of this study is that regression analysis based on the overall rating can be used as the basis for machine learning of sentiment analysis of online customer reviews. A quantitative variable that serves as a reference to measure the sentiments of customers transmitted through written comments helps to determine their sign and intensity. Moreover, the results show that the labels extracted from the titles are valid for evaluating the feelings collected in the comments.
One of the main problems when assessing feelings is that there are no dynamic elements to guide the valuation of those feelings. In many cases, pre-developed lexicons are used to determine a positive or negative sign, or even an intensity level. However, all of this has been done on a general basis, without focusing on evaluating the services offered by a company such as an airline. Moreover, companies need to detect the keywords used by their clients to evaluate their services because company–customer communication is increasingly carried out through the Internet, either on social media or by e-mail. Likewise, companies that want to go further and have immediate feedback while the client is receiving the service need to know the type of vocabulary their clients use to express their sentiments.
Along these lines, this study has validated the process of simplifying the number of labels to be used in sentiment analyses, showing that the roots of the labels are useful. With the multiple regression analysis, the labels significantly related to the general rating are determined, and their coefficients display the sign of the relationship and its intensity, according to the value obtained. This is a customer-centered method for developing the lexicon of customer sentiments that includes their sense and intensity based on the dynamics of customers’ online dialogues. This study makes an exciting contribution to current and future research. It is a proposal for each company to draw up its customer communication codes using the tags automatically extracted from online dialogues or comments.
In the context of the airline industry, managers can use this dynamic method to identify the labels that matter for achieving maximum customer satisfaction over time and for positioning their service offerings. Our findings are in line with the literature on the important role of Big Data in providing airlines with a sustainable competitive advantage [12,96,97]. The practical implications of this study have strategic relevance for airlines. First, the words used are related to the rating given by customers, so a statistical analysis of the relationships can be carried out. Second, it is essential for companies to determine which key labels best define their service, whether in a positive or negative sense. This study shows that this is a useful and practical method for applying a continuous learning procedure through technological means. Finally, airlines need to understand customers’ qualitative assessments in more depth, going beyond the mere classification of customer feelings as positive or negative. It is also necessary to know the intensity of these feelings at the moment the clients’ comments are received, in order to be able to give them an effective answer that fosters their loyalty.
This study has some limitations that should be investigated in the future. The first is that the number of times the labels appear is not evaluated. This is an essential factor in determining whether the results are significant or not. A label used in only a few comments may not coincide with its actual meaning because it might be biased in those comments by other words in its context. Another aspect of future evaluations would be to analyze tags that appear in the same comment or sentence. This is another dimension of the content analysis because tags that are integrated into the same comment can reinforce or neutralize each other’s meaning. This line of development would lead to validating the analysis of label structures according to their relationship with the general rating.
Future studies should also be carried out to evaluate the degree of accuracy in the predictions made with the results of the regressions and compare them with other models already used in sentiment analysis, such as in airports [98], hotels [99], and in different online services [100]. Another aspect to take into account in future research is to introduce hypotheses to validate the methodology and important aspects such as the determination of the key labels and their capacity to predict the general rating. In this context, this research can be used as a theoretical support for further progress in this field. It would also be interesting to find out whether the significant labels are verified in other competing airlines in order to establish whether the lexicon is similar in the customers of different airlines. In this regard, the sentiments expressed in terms of sense and intensity may differ between companies that offer a high level of quality to their customers and those that do not.
In conclusion, the results obtained confirm that multiple regression analysis is adequate for evaluating clients’ sentiments, that the availability of a quantitative reference variable is essential for evaluating the sign and intensity of those sentiments, and, finally, that the number of labels used to evaluate the clients’ sentiments can be simplified based on their roots and their levels of significance with regard to the quantitative reference variable. The study also provides a method for developing, assessing, and updating a lexicon of labels that represent customers’ sentiments towards a service offered, such as that of an airline.

Author Contributions

The authors’ contributions to the current paper were as follows: conceptualization, A.Z.A. and M.R.-D.; methodology, A.Z.A.; software, M.R.-D.; investigation and validation, A.Z.A. and M.R.-D.; data curation, A.Z.A.; writing—original draft preparation, A.Z.A. and M.R.-D.; writing—review and editing, A.Z.A. and M.R.-D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Araque, O.; Corcuera-Platas, I.; Sánchez-Rada, F.; Iglesias, C.A. Enhancing deep learning sentiment analysis with ensemble techniques in social applications. Expert Syst. Appl. 2017, 77, 236–246. [Google Scholar] [CrossRef]
  2. Liu, B. Sentiment Analysis: Mining Opinions, Sentiments, and Emotions; Cambridge University Press: Cambridge, UK, 2015. [Google Scholar]
  3. Salminen, J.; Yoganathan, C.; Corporan, J.; Jansen, B.J.; Jung, S.G. Machine learning approach to auto-tagging online content for content marketing efficiency: A comparative analysis between methods and content type. J. Bus. Res. 2017, 101, 203–217. [Google Scholar] [CrossRef]
  4. Balducci, B.; Marinova, D. Unstructured data in marketing. J. Acad. Mark. Sci. 2018, 46, 557–590. [Google Scholar] [CrossRef]
  5. Pang, B.; Lee, L.; Vaithyanathan, S. Thumbs up? Sentiment classification using machine learning techniques. In Proceedings of the ACL-02 Conference on Empirical Methods in Natural Language Processing—Volume 10, Association for Computational Linguistics, Stroudsburg, PA, USA, 10 July 2002; pp. 79–86. [Google Scholar]
  6. Melville, P.; Gryc, W.; Lawrence, R.D. Sentiment analysis of blogs by combining lexical knowledge with text classification. In Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, New York, NY, USA, 28 June–1 July 2009; pp. 1275–1284. [Google Scholar] [CrossRef] [Green Version]
  7. Wang, S.; Manning, C.D. Baselines and bigrams: Simple, good sentiment and topic classification. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Short Papers—Volume 2, Association for Computational Linguistics, Jeju Island, Korea, 8–14 July 2012; pp. 90–94. [Google Scholar]
  8. Sharma, A.; Park, S.; Nicolau, J.L. Testing loss aversion and diminishing sensitivity in review sentiment. Tour. Manag. 2020, 77, 104020. [Google Scholar] [CrossRef]
  9. Kiritchenko, S.; Zhu, X.; Mohammad, S.M. Sentiment analysis of short informal texts. J. Artif. Intell. Res. 2014, 50, 723–762. [Google Scholar] [CrossRef]
  10. Lucini, F.R.; Tonetto, L.M.; Fogliatto, F.S.; Anzanello, M.J. Text mining approach to explore dimensions of airline customer satisfaction using customer reviews. J. Air Transp. Manag. 2020, 83, 101760. [Google Scholar] [CrossRef]
  11. Rodríguez-Díaz, M.; Espino-Rodríguez, T.F. A methodology for a comparative analysis of the lodging tourism destinations based on online customer review. J. Destin. Mark. Manag. 2018, 8, 147–160. [Google Scholar] [CrossRef]
  12. Zaki Ahmed, A.; Rodríguez-Díaz, M. Analyzing the Online Reputation and Positioning of Airlines. Sustainability 2020, 12, 1184. [Google Scholar] [CrossRef] [Green Version]
  13. Rodriguez-Díaz, M.; Rodríguez-Voltes, C.I.; Rodríguez-Voltes, A.C. Gap analysis of the online reputation. Sustainability 2018, 10, 1603. [Google Scholar] [CrossRef] [Green Version]
  14. Horster, E.; Gottschalk, C. Computer-assisted webnography: A new approach to online reputation management in tourism. J. Vacat. Mark. 2012, 18, 229–238. [Google Scholar] [CrossRef]
  15. Yacouel, N.; Fleischer, A. The role of cybermediaries in reputation building and price premiums in the online hotel market. J. Travel Res. 2012, 51, 219–226. [Google Scholar] [CrossRef]
  16. Li, H.; Ye, Q.; Law, R. Determinants of customer satisfaction in the hotel industry: An application of online review analysis. Asia Pac. J. Tour. Res. 2013, 18, 784–802. [Google Scholar] [CrossRef]
  17. Gössling, S.; Hall, C.M.; Anderson, A.C. The Manager’s Dilemma: A Conceptualization of Online Review Manipulation Strategies. Curr. Issues Tour. 2018, 21, 484–503. Available online: http://www.tandfonline.com/doi/full/10.1080/13683500.2015.1127337 (accessed on 15 September 2018). [CrossRef]
  18. Rodríguez-Díaz, M.; Espino-Rodríguez, T.F. Determining the reliability and validity of online reputation databases for lodging: Booking.com, TripAdvisor, and HolidayCheck. J. Vacat. Mark. 2018, 24, 261–274. [Google Scholar]
  19. Chun, R. Corporate reputation: Meaning and measurement. Int. J. Manag. Rev. 2005, 7, 91–109. [Google Scholar] [CrossRef]
  20. Vermeulen, I.E.; Seegers, D. Tried and tested: The impact of online hotel reviews on consumer consideration. Tour. Manag. 2009, 30, 123–127. [Google Scholar] [CrossRef]
  21. Ye, Q.; Law, R.; Gu, B.; Chen, W. The influence of user-generated content on traveller behaviour: An empirical investigation on the effects of e-word-of-mouth to hotel online bookings. Comput. Hum. Behav. 2011, 27, 634–639. [Google Scholar] [CrossRef]
  22. Hernández Estárico, E.; Fuentes Medina, M.; Morini Marrero, S. Una aproximación a la reputación en línea de los establecimientos hoteleros españoles. Pap. Tur. 2012, 52, 63–88. [Google Scholar]
  23. Varini, K.; Sirsi, P. Social Media and Revenue Management: Where Should the Two Meet? Available online: https://www.researchgate.net/publication/264928889_Social_media_and_revenue_management_Where_should_the_two_meet (accessed on 20 August 2012).
  24. Kim, W.G.; Lim, H.; Brymer, R.A. The effectiveness of managing social media on hotel performance. Int. J. Hosp. Manag. 2015, 44, 165–171. [Google Scholar] [CrossRef]
  25. Lee, S.H.; Ro, H. The impact of online reviews on attitude changes: The differential effects of review attributes and consumer knowledge. Int. J. Hosp. Manag. 2016, 56, 1–9. [Google Scholar] [CrossRef]
  26. Rodríguez Díaz, M.; Espino Rodríguez, T.F.; Rodríguez Díaz, R. A model of market positioning base on value creation and service quality in the lodging industry: An empirical application of online customer reviews. Tour. Econ. 2015, 21, 1273–1294. [Google Scholar] [CrossRef]
  27. Ye, Q.; Li, H.; Wang, Z.; Law, R. The influence of hotel price on perceived service quality and value in e-tourism: An empirical investigation based on online traveller reviews. J. Hosp. Tour. Res. 2014, 38, 23–39. [Google Scholar] [CrossRef]
  28. Torres, E.N. Deconstructing service quality and customer satisfaction: Challenges and directions for future research. J. Hosp. Mark. Manag. 2014, 23, 652–677. [Google Scholar] [CrossRef]
  29. Oliver, R.L. Satisfaction: A Behavioural Perspective on the Consumer; McGraw-Hill: New York, NY, USA, 1997. [Google Scholar]
  30. Tellis, G.J.; Johnson, J. The value of quality. Mark. Sci. 2007, 26, 758–773. [Google Scholar] [CrossRef]
  31. Fornell, C.; Mithas, S.; Morgeson, F.V., III; Krishnan, M.S. Customer satisfaction and stock prices: High returns, low risk. J. Mark. 2006, 70, 3–14. [Google Scholar] [CrossRef] [Green Version]
  32. Jacobson, R.; Mizik, N. Assessing the Value-Relevance of Customer Satisfaction. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=990783 (accessed on 21 March 2009).
  33. Fornell, C.; Mithas, S.; Morgeson, F.V., III. Commentary—The economic and statistical significance of stock returns on customer satisfaction. Mark. Sci. 2009, 28, 820–825. [Google Scholar] [CrossRef] [Green Version]
  34. Chen, Y.; Ganesan, S.; Liu, Y. Does a firm’s product-recall strategy affect its financial value? An examination of strategic alternatives during product-harm crises. J. Mark. 2009, 73, 214–226. [Google Scholar] [CrossRef]
  35. Luo, X.; Bhattacharya, C.B. Corporate social responsibility, customer satisfaction, and market value. J. Mark. 2006, 70, 1–18. [Google Scholar] [CrossRef]
  36. Luo, X.; Homburg, C. Satisfaction, complaint, and the stock value gap. J. Mark. 2008, 72, 29–43. [Google Scholar] [CrossRef]
  37. O’Sullivan, D.; Hutchinson, M.C.; O’Connell, V. Empirical evidence of the stock market’s (mis)pricing of customer satisfaction. Int. J. Res. Mark. 2009, 26, 154–161. [Google Scholar] [CrossRef]
  38. Ittner, C.; Larcker, D.; Taylor, D. Commentary—The stock market’s pricing of customer satisfaction. Mark. Sci. 2009, 28, 826–835. [Google Scholar] [CrossRef]
  39. Sharma, A.; Dey, S. A comparative study of feature selection and machine learning techniques for sentiment analysis. In Proceedings of the 2012 ACM Research in Applied Computation Symposium, San Antonio, TX, USA, 23–26 October 2012. [Google Scholar]
  40. Honeycutt, C.; Herrings, S.C. Beyond microblogging: Conversation and collaboration via Twitter. In Proceedings of the 42nd Hawaii International Conference on System Sciences, Big Island, HI, USA, 5–8 January 2009; pp. 1–10. [Google Scholar] [CrossRef]
  41. Boyd, D.M.; Ellison, N.B. Social network sites: Definition, history, and scholarship. J. Comput. Mediat. Commun. 2007, 13, 210–230. [Google Scholar] [CrossRef] [Green Version]
  42. Chunga, A.; Andreeva, P.; Benyoucef, M.; Duane, A.; O’Reilly, P. Managing an organisation’s social media presence: An empirical stages of growth model. Int. J. Inf. Manag. 2017, 37, 1405–1417. [Google Scholar] [CrossRef]
  43. Saura, J.R.; Palos-Sánchez, P.R.; Ríos Martín, M.A. Attitudes to environmental factors in the tourism sector expressed in online comments: An exploratory study. Int. J. Environ. Res. Public Health 2018, 15, 553. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  44. Das, S.; Chen, M. Yahoo! for Amazon: Extracting market sentiment from stock message boards. Manag. Sci. 2007, 35, 43. [Google Scholar]
  45. Tong, R.M. An operational system for detecting and tracking opinions in on-line discussion. In Proceedings of the SIGIR Workshop on Operational Text Classification, New Orleans, LA, USA, 1 September 2001; Volume 1. [Google Scholar]
  46. Turney, P.D. Thumbs up or thumbs down? Semantic orientation applied to unsupervised classification of reviews. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, Association for Computational Linguistics, Philadelphia, PA, USA, 7–12 July 2002; pp. 417–424. [Google Scholar]
  47. Fiorini, P.M.; Lipsky, L.R. Search marketing traffic and performance models. Comput. Stand. Interfaces 2012, 34, 517–526. [Google Scholar] [CrossRef]
  48. Liu, Y.; Bi, J.W.; Fan, Z.P. A method for ranking products through online reviews based on sentiment classification and interval-valued intuitionistic fuzzy TOPSIS. Int. J. Inf. Technol. Decis. Mak. 2017, 16, 1497–1522. [Google Scholar] [CrossRef]
  49. Debes, V.; Sandeep, K.; Vinnett, G. Predicting information diffusion probabilities in social networks: A Bayesian networks based approach. J. Knowl.-Based Syst. 2017, 133, 66–76. [Google Scholar] [CrossRef]
  50. Insúa Yánez, A. Sistema Deep Learning Para el Análisis de Sentimientos en Opiniones de Productos Para la Ordenación de Resultados de un Buscador Semántico. Diploma Thesis, Universidad de La Coruña, La Coruña, Spain, 2019. [Google Scholar]
  51. McCallum, A.; Nigam, K.A. Comparison of event models for Naive Bayes text classification. In AAAI/ICML-98 Workshop on Learning for Text Categorization; AAAI Press: Menlo Park, CA, USA, 1998; pp. 41–48. [Google Scholar]
  52. Berger, A.L.; Pietra, V.J.D.; Pietra, S.A.D. A Maximum entropy approach to natural language processing. Comput. Linguist. 1996, 22, 39–71. [Google Scholar]
  53. Yang, Y.; Lin, X. A re-examination of text categorization methods. In Proceedings of the SIGIR99: 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Berkeley, CA, USA, 15–19 August 1999; pp. 42–49. [Google Scholar]
  54. Quinlan, J.R. C4.5: Programs for Machine Learning; Morgan Kaufmann Publishers: Burlington, MA, USA, 1993. [Google Scholar]
  55. Joachims, T. Text categorization with support vector machines: Learning with many relevant features. In Proceedings of the 10th European Conference on Machine Learning (ECML-98), Chemnitz, Germany, 21–23 April 1998; pp. 137–142. [Google Scholar]
  56. Levy, O.; Goldberg, Y. Dependency-based word embeddings. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, Baltimore, MD, USA, 23–25 June 2014; pp. 302–308. [Google Scholar]
  57. Kusner, M.J.; Sun, Y.; Kolkin, N.I.; Weinberger, K.Q. From word embeddings to document distances. In Proceedings of the 32nd International Conference on Machine Learning, Lille, France, 7–9 July 2015. [Google Scholar]
  58. González-Rodríguez, M.R.; Martínez-Torres, R.; Toral, S. Post-visit and pre-visit tourist destination image through eWOM sentiment analysis and perceived helpfulness. Int. J. Contemp. Hosp. Manag. 2016, 28, 2609–2627. [Google Scholar] [CrossRef]
  59. Kwok, L.; Xie, K.; Tori, R. Thematic framework of online review research. A systematic analysis of contemporary literature on seven major hospitality and tourism journals. Int. J. Contemp. Hosp. Manag. 2017, 29. [Google Scholar] [CrossRef]
  60. Nieto-García, M.; Muñoz-Gallego, P.A.; González-Benito, Ó. Tourists’ willingness to pay for an accommodation: The effect of eWOM and internal reference price. Int. J. Hosp. Manag. 2017, 62, 67–77. [Google Scholar] [CrossRef] [Green Version]
  61. Patricia, M.W.; Broniarczyk, S.M. Integrating Multiple Opinions: The Role of Aspiration Level on Consumer Response to Critic Consensus. J. Consum. Res. 1998, 25, 38–51. [Google Scholar]
  62. Rogers, E.M. Diffusion of Innovations, 5th ed.; Free Press: New York, NY, USA, 2003. [Google Scholar]
  63. Senecal, S.; Nantel, J. The influence of online product recommendations on consumers’ online choices. J. Retail. 2004, 80, 159–169. [Google Scholar] [CrossRef]
  64. Hennig-Thurau, T.; Gwinner, K.P.; Walsh, G.; Gremler, D.D. Electronic word-of-mouth via consumer-opinion platforms: What motivates consumers to articulate themselves on the internet? J. Interact. Mark. 2004, 18, 38–52. [Google Scholar] [CrossRef]
  65. Bickart, B.; Schindler, R.M. Internet forums as influential sources of consumer information. J. Interact. Mark. 2001, 15, 31–40. [Google Scholar] [CrossRef]
  66. Hussain, R.; Al Nasser, A.; Hussain, Y.K. Service quality and customer satisfaction of a UAE-based airline: An empirical investigation. J. Air Transp. Manag. 2015, 42, 167–175. [Google Scholar] [CrossRef]
  67. Tahanisaz, S. Evaluation of passenger satisfaction with service quality: A consecutive method applied to the airline industry. J. Air Transp. Manag. 2020, 83, 101764. [Google Scholar] [CrossRef]
  68. Sezgen, E.; Mason, K.J.; Mayer, R. Voice of airline passenger: A text mining approach to understand customer satisfaction. J. Air Transp. Manag. 2019, 77, 65–74. [Google Scholar] [CrossRef]
  69. Bellizzi, M.G.; Eboli, L.; Forciniti, C.; Mazzulla, G. Passengers’ Expectations on Airlines’ Services: Design of a Stated Preference Survey and Preliminary Outcomes. Sustainability 2020, 12, 4707. [Google Scholar] [CrossRef]
  70. Farooq, M.S.; Salam, M.; Fayolle, A.; Jaafar, N.; Ayupp, K. Impact of service quality on customer satisfaction in Malaysia airlines: A PLS-SEM approach. J. Air Transp. Manag. 2018, 67, 169–180. [Google Scholar] [CrossRef]
  71. Park, S.; Lee, J.S.; Nicolau, J.L. Understanding the dynamics of the quality of airline service attributes: Satisfiers and dissatisfiers. Tour. Manag. 2020, 81, 104163. [Google Scholar] [CrossRef] [PubMed]
  72. Lee, T.Y.; Bradlow, E.T. Automated marketing research using online customer reviews. J. Mark. Res. 2011, 48, 881–894. [Google Scholar] [CrossRef]
  73. Anderson, M.; Magruder, J. Learning from the crowd: Regression discontinuity estimates of the effects of an online review database. Econ. J. 2012, 122, 957–989. [Google Scholar] [CrossRef]
  74. Tsai, C.F.; Chen, K.; Hu, Y.H.; Chen, W.K. Improving text summarization of online hotel reviews with review helpfulness and sentiment. Tour. Manag. 2020, 80, 104122. [Google Scholar] [CrossRef]
  75. Song, C.; Guo, J.; Zhuang, J. Analyzing passengers’ emotions following flight delays-a 2011–2019 case study on SKYTRAX comments. J. Air Transp. Manag. 2020, 89, 101903. [Google Scholar] [CrossRef]
  76. Korfiatis, N.; Stamolampros, P.; Kourouthanassis, P.; Sagiadinos, V. Measuring service quality from unstructured data: A topic modeling application on airline passengers’ online reviews. Expert Syst. Appl. 2019, 116, 472–486. [Google Scholar] [CrossRef] [Green Version]
  77. Zhao, Y.; Xu, X.; Wang, M. Predicting overall customer satisfaction: Big data evidence from hotel online textual reviews. Int. J. Hosp. Manag. 2019, 76, 111–121. [Google Scholar] [CrossRef]
  78. Lee, K.; Yu, C. Assessment of airport service quality: A complementary approach to measure perceived service quality based on Google reviews. J. Air Transp. Manag. 2018, 71, 28–44. [Google Scholar] [CrossRef]
  79. Xiang, Z.; Du, Q.; Ma, Y.; Fan, W. A comparative analysis of major online review platforms: Implications for social media analytics in hospitality and tourism. Tour. Manag. 2017, 58, 51–65. [Google Scholar] [CrossRef]
  80. Fang, B.; Ye, Q.; Kucukusta, D.; Law, R. Analysis of the perceived value of online tourism reviews: Influence of readability and reviewer characteristics. Tour. Manag. 2016, 52, 498–506. [Google Scholar] [CrossRef]
  81. Amblee, N. The impact of eWOM density on sales of travel insurance. Ann. Tour. Res. 2016, 56, 137–140. [Google Scholar]
  82. Park, S.; Nicolau, J.L. Asymmetric effects of online consumer reviews. Ann. Tour. Res. 2015, 50, 67–83. [Google Scholar] [CrossRef] [Green Version]
  83. Zhu, F.; Zhang, X. Impact of online consumer reviews on sales: The moderating role of product and consumer characteristics. J. Mark. 2010, 74, 133–148. [Google Scholar] [CrossRef]
  84. Mudambi, S.M.; Schuff, D. What makes a helpful review? A study of customer reviews on Amazon.com. MIS Q. 2010, 34, 185–200. [Google Scholar] [CrossRef] [Green Version]
  85. Li, X.; Hitt, L.M. Self-selection and information role of online product reviews. Inf. Syst. Res. 2008, 19, 456–474. [Google Scholar] [CrossRef] [Green Version]
  86. Duan, W.; Gu, B.; Whinston, A.B. Do online reviews matter?—An empirical investigation of panel data. Decis. Support Syst. 2008, 45, 1007–1016. [Google Scholar] [CrossRef]
  87. Gu, B.; Park, J.; Konana, P. Research note—the impact of external word-of-mouth sources on retailer sales of high-involvement products. Inf. Syst. Res. 2012, 23, 182–196. [Google Scholar] [CrossRef]
  88. Moe, W.W.; Trusov, M. The value of social dynamics in online product ratings forums. J. Mark. Res. 2011, 48, 444–456. [Google Scholar] [CrossRef]
  89. Dellarocas, C.; Gao, G.; Narayan, R. Are consumers more likely to contribute online reviews for hit or niche products? J. Manag. Inf. Syst. 2010, 27, 127–158. [Google Scholar] [CrossRef]
  90. Chintagunta, P.K.; Gopinath, S.; Venkataraman, S. The effects of online user reviews on movie box office performance: Accounting for sequential rollout and aggregation across local markets. Mark. Sci. 2010, 29, 944–957. [Google Scholar] [CrossRef]
  91. Ahmad, I.S.; Bakar, A.A.; Yaakub, M.R. Movie Revenue Prediction Based on Purchase Intention Mining Using YouTube Trailer Reviews. Inf. Process. Manag. 2020, 57, 102278. [Google Scholar] [CrossRef]
  92. Forman, C.; Ghose, A.; Wiesenfeld, B. Examining the relationship between reviews and sales: The role of reviewer identity disclosure in electronic markets. Inf. Syst. Res. 2008, 19, 291–313. [Google Scholar] [CrossRef]
  93. Godes, D.; Mayzlin, D. Using online conversations to study word-of-mouth communication. Mark. Sci. 2004, 23, 545–560. [Google Scholar] [CrossRef] [Green Version]
  94. Chen, P.Y.; Wu, S.Y.; Yoon, J. The impact of online recommendations and consumer feedback on sales. In Proceedings of the ICIS 2004 Proceedings, Washington, DC, USA, December 2004; p. 58. [Google Scholar]
  95. Schuckert, M.; Liu, X.; Law, R. A segmentation of online reviews by language groups: How English and non-English speakers rate hotels differently. Int. J. Hosp. Manag. 2015, 48, 143–149. [Google Scholar] [CrossRef]
  96. Lambrecht, A.; Tucker, C.E. Can Big Data Protect a Firm from Competition? Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2705530 (accessed on 22 December 2015).
  97. Siering, M.; Deokar, A.V.; Janze, C. Disentangling consumer recommendations: Explaining and predicting airline recommendations based on online reviews. Decis. Support Syst. 2018, 107, 52–63. [Google Scholar] [CrossRef]
  98. Kuo, M.S.; Liang, G.S. Combining VIKOR with GRA techniques to evaluate service quality of airports under fuzzy environment. Expert Syst. Appl. 2011, 38, 1304–1312. [Google Scholar] [CrossRef]
  99. Phillips, P.; Barnes, S.; Zigan, K.; Schegg, R. Understanding the impact of online reviews on hotel performance: An empirical analysis. J. Travel Res. 2017, 56, 235–249. [Google Scholar] [CrossRef] [Green Version]
  100. Ray, A.; Bala, P.K.; Jain, R. Utilizing emotion scores for improving classifier performance for predicting customer’s intended ratings from social media posts. Benchmarking Int. J. 2020. [Google Scholar] [CrossRef]
Figure 1. Machine learning methodology of sentiment analysis based on online ratings measured with quantitative variables.
Table 1. Related research on overall ratings from online reviews in tourism.
Table 1. Related research on overall ratings from online reviews in tourism.
Park et al. (2020) [71]. Variables: number of user reviews; average user ratings. Research context: TripAdvisor; 20 US airlines with 157,035 reviews and overall ratings. Key findings: the quality of specific service attributes, such as cleanliness, food and beverages, and in-flight entertainment, affects variations in positive ratings as satisfiers; other airline service attributes, such as customer service and check-in and boarding, influence deviations in negative ratings as dissatisfiers.

Tsai et al. (2020) [74]. Variables: online hotel reviews; overall ratings. Research context: TripAdvisor; 1009 US hotels with 23,430 reviews. Key findings: a novel approach is proposed to generate high-quality summaries of online hotel reviews; both review helpfulness and hotel features are considered before review summarization; online hotel reviews were collected for an experimental evaluation.

Sharma et al. (2020) [8]. Variables: number of user sentiment reviews and the overall rating of the specific flight. Research context: TripAdvisor; 20 US airlines with 157,036 reviews. Key findings: prospect theory explains the relationship between ratings and review sentiment; loss aversion and diminishing sensitivity are confirmed; negative deviations in ratings have a stronger impact on review sentiment than positive deviations; variations in ratings closer to (farther from) the reference point result in higher (lower) marginal impacts on sentiment.

Song et al. (2020) [75]. Variables: online sentiment reviews and the ratings of airlines. Research context: SKYTRAX; 24,165 online reviews. Key findings: text mining is used to automatically extract the information in text comments; sentiment analysis based on a sentiment dictionary classifies user reviews; co-occurrence analysis identifies passengers' concerns about different aspects of service in the aviation industry.

Korfiatis et al. (2019) [76]. Variables: airline passenger reviews and the overall rating of the total experience. Research context: TripAdvisor; 557,208 online reviews. Key findings: online reviews provide quality features extracted from the review text; structural topic modeling couples the review text with numerical ratings; an experimental application to airline passengers' reviews is demonstrated.

Zhao et al. (2019) [77]. Variables: online textual reviews; overall customer ratings. Research context: TripAdvisor; 127,629 reviews. Key findings: customer satisfaction is predicted from the linguistic characteristics of reviews; review diversity and polarity have a positive effect on customer ratings; review subjectivity, readability, and length have a negative effect; customer review involvement has a positive effect on ratings.

Lee and Yu (2018) [78]. Variables: user-generated online content; airport service quality (ASQ) ratings; Google star ratings. Research context: Google Maps reviews; 42,137 reviews. Key findings: sentiment scores computed from Google reviews are good predictors of ASQ ratings; the 25 topics extracted from Google reviews correspond well with the service attributes used in the ASQ survey; the method can measure the service quality of many airports, including those that have never participated in any survey.

Xiang et al. (2017) [79]. Variables: online reviews; average hotel ratings. Research context: TripAdvisor; Expedia; Yelp. Key findings: there are discrepancies in the representation of hotel products across these platforms; information quality, measured by linguistic and semantic features, sentiment, rating, and usefulness, varies considerably.

Fang et al. (2016) [80]. Variables: user online reviews, text readability, and historical rating distribution. Research context: online attraction reviews from TripAdvisor; two-level empirical analysis; Tobit regression model. Key findings: both text readability and reviewer characteristics affect the perceived value of reviews.

Amblee (2015) [81]. Variables: density of negative reviews. Research context: SquareMouth.com; pooled ordinary least squares (OLS) regression; over 21,000 reviews of travel insurance. Key findings: when the density of negative reviews is high, sales are lower, and vice versa.

Park and Nicolau (2015) [82]. Variables: online reviews (star ratings) on usefulness and enjoyment. Research context: Yelp.com; restaurant reviews from New York and London; 35 restaurants in London with 2500 reviews and 10 in New York with 2590 reviews. Key findings: the valence of online reviews has a U-shaped effect on usefulness and enjoyment; negative reviews are rated as more useful than positive reviews; positive reviews are associated with higher enjoyment than negative reviews.

Zhu and Zhang (2010) [83]. Variables: coefficient of variation in ratings; total number of reviews posted. Research context: Gamespot.com; VideoGames.com; psychological choice model. Key findings: online reviews are more influential for less popular games and for games whose players have greater Internet experience.

Mudambi and Schuff (2010) [84]. Variables: star rating of the reviewer; total number of votes on each review's helpfulness; word count of the review. Research context: Amazon.com; 6 products with 1587 reviews. Key findings: review depth is correlated with helpfulness, but review extremity is less helpful for experience goods.

Li and Hitt (2008) [85]. Variables: average rating of all reviews. Research context: Amazon.com; 2651 books with 136,802 reviews. Key findings: word of mouth (WOM) is not an unbiased indicator of quality and affects sales.

Duan et al. (2008) [86]. Variables: number of user postings; user review ratings. Research context: movie box office, Yahoo! Movies; 71 movies with 95,867 total user posts. Key findings: the rating of online user reviews has no significant impact on movies' box-office revenues; box-office sales are significantly influenced by the volume of online postings, suggesting the importance of the awareness effect.

Li et al. (2013) [16]. Variables: text mining and content analysis. Research context: TripAdvisor; 774 star-rated hotels with 42,668 online traveler reviews. Key findings: transportation convenience, food and beverage management, convenience to tourist destinations, and value for money are identified as factors that customers booking both luxury and budget hotels consider excellent, and their actual performance is very satisfactory to them.

Gu et al. (2012) [87]. Variables: user-generated content (UGC); product ratings; product reviews. Research context: Amazon, DPReview, and Epinions; logistic regression; 148 digital cameras with 31,522 reviews. Key findings: online WOM on external review websites is a more significant indicator of sales for high-involvement products.

Moe and Trusov (2011) [88]. Variables: average of all ratings. Research context: bath, fragrance, and beauty products; 500 products with 3801 ratings. Key findings: online WOM affects sales and is subject to social dynamics, in that ratings affect future rating behavior.

Dellarocas et al. (2010) [89]. Variables: number of movies; UGC; number of user ratings. Research context: film data from Yahoo! Movies; the 2002 data set contains 104 movies with 63,889 reviews, and the 2007–2008 data set contains 143 movies with 95,443 reviews. Key findings: products that are less available and less successful in the market are less likely to receive online reviews.

Chintagunta et al. (2010) [90]. Variables: valence, volume, and precision of online reviews. Research context: film data from Yahoo! Movies; logistic regression with a generalized method of moments procedure; 148 movies in 253 markets with a total of 70,273 reviews. Key findings: online user reviews are correlated with film box-office performance.

Ahmad et al. (2020) [91]. Variables: average of user reviews; average of user ratings; sentiment analysis. Research context: YouTube trailer reviews. Key findings: people's movie purchase intention can be extracted from YouTube trailer reviews; purchase intention is positively correlated with box-office revenue and can improve the accuracy of box-office revenue prediction; multiple linear regression performed better than support vector machines, neural networks, and random forests in movie revenue prediction.

Forman et al. (2008) [92]. Variables: star ratings; average reviews per book. Research context: Amazon.com; 786 books with 175,714 reviews. Key findings: reviewer identity may be used as a measurable proxy for both future sales and future geographic sales.

Godes and Mayzlin (2004) [93]. Variables: average number of posts and ratings. Research context: TV shows from USENET newsgroups; multiple regression; 169 groups and 2398 posts evaluated. Key findings: the dispersion of online conversations can be used to explain TV show ratings.

Chen et al. (2004) [94]. Variables: average number of reviews, recommendations, and sales rank. Research context: book data from Amazon.com; multiple regression; 610 observations with 58,566 total reviews. Key findings: consumer ratings are not found to be related to sales, but recommendations are highly significant.
Table 2. Regression analysis and number of tags.
Model | Adjusted R Squared | Number of Tags
All labels | 0.614 | 2567
Root of labels | 0.579 | 1523 (295)
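Table 2 compares a regression that uses every label found in the titles with a more compact model that first collapses each label to its root. A minimal sketch of that comparison is given below, assuming Python with statsmodels and NLTK; the function name, the stemmer choice, and the input format are illustrative assumptions rather than the authors' actual code.

```python
# Sketch of the Table 2 comparison: adjusted R-squared and tag count for the
# "all labels" model versus the "root of labels" (stemmed) model.
import numpy as np
import statsmodels.api as sm
from sklearn.feature_extraction.text import CountVectorizer
from nltk.stem import SnowballStemmer

stemmer = SnowballStemmer("english")  # assumption: an English stemmer stands in for the root reduction

def label_model(titles, ratings, stem=False):
    """Fit rating ~ binary labels and return (adjusted R-squared, number of tags)."""
    if stem:
        titles = [" ".join(stemmer.stem(word) for word in t.split()) for t in titles]
    X = CountVectorizer(binary=True).fit_transform(titles).toarray()
    ols = sm.OLS(np.asarray(ratings, dtype=float), sm.add_constant(X)).fit()
    return ols.rsquared_adj, X.shape[1]

# Usage with a real data set of titles and overall ratings (two lists of equal length):
# print("All labels:", label_model(titles, ratings, stem=False))
# print("Root of labels:", label_model(titles, ratings, stem=True))
```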
Table 3. Regression analysis with rating as the dependent variable (adjusted R squared = 0.579).
Variables | B | t | Sig. | Variables | B | t | Sig.
(constant) | 3.679 | 89.443 | 0.000 *** | call | −0.218 | −2.527 | 0.012 *
coordination | 2.245 | 2.033 | 0.042 * | result | −0.219 | −2.489 | 0.013 *
clause | 1.824 | 1.851 | 0.064 | media | −0.221 | −2.467 | 0.014 *
doubt | 1.489 | 2.749 | 0.006 ** | please | −0.235 | −1.803 | 0.072
complex | 1.233 | 1.877 | 0.061 | nor | −0.235 | −2.189 | 0.029 *
dedicated | 1.227 | 1.713 | 0.087 | after | −0.240 | −1.821 | 0.069
honey | 1.189 | 1.614 | 0.107 | following | −0.242 | −3.411 | 0.001 **
signpost | 1.164 | 1.828 | 0.068 | cost | −0.243 | −1.892 | 0.059
restrict | 1.158 | 2.346 | 0.019 * | vacations | −0.243 | −1.927 | 0.054
battle | 1.078 | 1.601 | 0.109 | nobody | −0.244 | −2.365 | 0.018 *
beautiful | 1.067 | 2.859 | 0.004 ** | not | −0.244 | −7.964 | 0.000 ***
notorious | 1.031 | 1.880 | 0.060 | port | −0.250 | −2.123 | 0.034 *
supplement | 0.998 | 3.251 | 0.001 ** | basic | −0.258 | −1.608 | 0.108
unbeatable | 0.990 | 4.365 | 0.000 *** | disappointment | −0.263 | −1.611 | 0.107
useless | 0.967 | 2.322 | 0.020 * | possible | −0.268 | −2.429 | 0.015 *
candy | 0.896 | 1.635 | 0.102 | unfortunate | −0.274 | −2.408 | 0.016 *
achievement | 0.887 | 1.914 | 0.056 | scarce | −0.281 | −2.486 | 0.013 *
trend | 0.856 | 2.024 | 0.043 * | tired | −0.283 | −1.649 | 0.099
located | 0.855 | 1.643 | 0.100 | know | −0.289 | −1.992 | 0.046 *
distribution | 0.851 | 2.166 | 0.030 * | various | −0.299 | −2.136 | 0.033 *
examine | 0.840 | 3.109 | 0.002 ** | missing | −0.304 | −3.681 | 0.000 ***
affordable | 0.832 | 1.722 | 0.085 | bad | −0.305 | −4.344 | 0.000 ***
instructions | 0.807 | 1.685 | 0.092 | international | −0.307 | −1.897 | 0.058
mess | 0.786 | 1.606 | 0.108 | number | −0.308 | −2.391 | 0.017 *
renewal | 0.786 | 1.751 | 0.080 | forget | −0.312 | −1.739 | 0.082
presence | 0.768 | 1.813 | 0.070 | uncomfortable | −0.314 | −5.073 | 0.000 ***
canned | 0.731 | 1.643 | 0.100 | total | −0.316 | −2.262 | 0.024 *
charter | 0.698 | 1.742 | 0.082 | money | −0.317 | −1.665 | 0.096
perfection | 0.633 | 1.676 | 0.094 | recognizes | −0.322 | −1.891 | 0.059
spotlight | 0.614 | 2.385 | 0.017 * | evil | −0.332 | −1.674 | 0.094
setback | 0.604 | 2.865 | 0.004 ** | bus | −0.342 | −1.679 | 0.093
amazing | 0.593 | 4.152 | 0.000 *** | mister | −0.343 | −2.735 | 0.006 **
cabotage | 0.563 | 1.758 | 0.079 | badly | −0.346 | −1.785 | 0.074
rich | 0.558 | 1.747 | 0.081 | consume | −0.348 | −1.980 | 0.048 *
reach | 0.555 | 2.351 | 0.019 * | still | −0.348 | −2.429 | 0.015 *
wrong | 0.554 | 2.070 | 0.038 * | thrown | −0.351 | −1.741 | 0.082
directions | 0.537 | 1.779 | 0.075 | few | −0.351 | −2.353 | 0.019 *
strategy | 0.529 | 1.609 | 0.108 | old | −0.367 | −2.965 | 0.003 **
commander | 0.524 | 3.848 | 0.000 *** | cancel | −0.368 | −2.723 | 0.006 **
great | 0.512 | 3.878 | 0.000 *** | never | −0.370 | −2.448 | 0.014 *
remarkable | 0.506 | 1.909 | 0.056 | telephone | −0.378 | −3.183 | 0.001 **
regional | 0.503 | 2.206 | 0.027 * | user | −0.379 | −1.959 | 0.050
evident | 0.501 | 1.792 | 0.073 | minors | −0.386 | −2.634 | 0.008 **
climate | 0.500 | 1.683 | 0.092 | poor | −0.387 | −2.155 | 0.031 *
air | 0.488 | 2.364 | 0.018 * | above | −0.398 | −3.101 | 0.002 **
awesome | 0.475 | 2.736 | 0.006 ** | impossible | −0.400 | −3.304 | 0.001 **
wonder | 0.453 | 2.982 | 0.003 ** | leave | −0.402 | −1.695 | 0.090
delight | 0.445 | 1.905 | 0.057 | clothing | −0.402 | −2.065 | 0.039 *
electronic | 0.442 | 2.074 | 0.038 * | horror | −0.407 | −3.067 | 0.002 **
exact | 0.440 | 2.681 | 0.007 ** | narrow | −0.407 | −2.583 | 0.010 *
excellent | 0.431 | 10.137 | 0.000 *** | separate | −0.407 | −1.770 | 0.077
exquisite | 0.431 | 2.377 | 0.017 * | window | −0.421 | −2.867 | 0.004 **
like | 0.411 | 3.341 | 0.001 ** | hands | −0.421 | −1.884 | 0.060
note | 0.408 | 3.476 | 0.001 ** | accept | −0.430 | −1.970 | 0.049 *
cancellations | 0.406 | 1.769 | 0.077 | close | −0.442 | −2.759 | 0.006 **
additional | 0.402 | 2.826 | 0.005 ** | worse | −0.443 | −6.033 | 0.000 ***
subsidiary | 0.400 | 1.603 | 0.109 | hotel | −0.447 | −3.719 | 0.000 ***
satisfied | 0.399 | 2.005 | 0.045 * | go away | −0.451 | −3.271 | 0.001 **
acceptable | 0.391 | 1.688 | 0.092 | authenticates | −0.457 | −2.043 | 0.041 *
assistance | 0.381 | 1.761 | 0.078 | intercontinental | −0.466 | −2.225 | 0.026 *
house | 0.379 | 3.316 | 0.001 ** | landing | −0.468 | −1.919 | 0.055
husband | 0.377 | 1.860 | 0.063 | wrongly | −0.469 | −7.071 | 0.000 ***
chair | 0.372 | 1.707 | 0.088 | routes | −0.469 | −1.872 | 0.061
pleasure | 0.371 | 2.458 | 0.014 * | error | −0.472 | −2.701 | 0.007 **
perfect | 0.367 | 5.265 | 0.000 *** | characterizes | −0.473 | −1.763 | 0.078
agile | 0.360 | 1.800 | 0.072 | disaster | −0.476 | −4.272 | 0.000 ***
according to | 0.355 | 1.880 | 0.060 | dirty | −0.477 | −2.623 | 0.009 **
charm | 0.354 | 2.619 | 0.009 ** | heat | −0.481 | −2.380 | 0.017 *
meet | 0.341 | 1.968 | 0.049 * | bridge | −0.491 | −2.114 | 0.035 *
friends | 0.325 | 1.860 | 0.063 | exhausting | −0.513 | −1.967 | 0.049 *
preferably | 0.322 | 1.999 | 0.046 * | claims | −0.514 | −5.843 | 0.000 ***
organized | 0.317 | 1.683 | 0.092 | zero | −0.519 | −2.158 | 0.031 *
put | 0.289 | 2.279 | 0.023 * | chaotic | −0.519 | −2.126 | 0.034 *
heavy | 0.288 | 2.041 | 0.041 * | dreadful | −0.529 | −1.781 | 0.075
channels | 0.284 | 2.513 | 0.012 * | multiple | −0.536 | −2.316 | 0.021 *
deserves | 0.281 | 1.851 | 0.064 | misplaced | −0.547 | −2.629 | 0.009 **
corridor | 0.256 | 2.128 | 0.033 * | awful | −0.556 | −1.966 | 0.049 *
difference | 0.254 | 2.890 | 0.004 ** | deficient | −0.562 | −3.209 | 0.001 **
incidence | 0.251 | 2.364 | 0.018 * | learn | −0.574 | −1.959 | 0.050
inconvenient | 0.238 | 2.576 | 0.010 * | cleaning | −0.575 | −2.231 | 0.026 *
could | 0.235 | 2.070 | 0.038 * | patience | −0.589 | −1.662 | 0.097
free | 0.235 | 2.068 | 0.039 * | grow | −0.600 | −1.920 | 0.055
world | 0.231 | 2.349 | 0.019 * | priority | −0.616 | −1.913 | 0.056
problem | 0.230 | 4.715 | 0.000 *** | snack | −0.621 | −2.887 | 0.004 **
spouse | 0.226 | 1.621 | 0.105 | pessimistic | −0.623 | −7.081 | 0.000 ***
fast | 0.226 | 3.171 | 0.002 ** | nightmare | −0.631 | −1.893 | 0.058
truth | 0.222 | 3.452 | 0.001 ** | neglect | −0.636 | −2.062 | 0.039 *
impeccable | 0.217 | 1.735 | 0.083 | depart | −0.643 | −1.732 | 0.083
punctual | 0.215 | 6.690 | 0.000 *** | left | −0.647 | −2.608 | 0.009 **
comfortable | 0.211 | 5.684 | 0.000 *** | unpresentable | −0.671 | −2.005 | 0.045 *
planned | 0.210 | 2.574 | 0.010 * | rush | −0.674 | −2.582 | 0.010 *
cheap | 0.209 | 1.865 | 0.062 | deplorable | −0.710 | −1.980 | 0.048 *
food | 0.208 | 1.704 | 0.088 | refund | −0.719 | −3.028 | 0.002 **
luck | 0.204 | 2.093 | 0.036 * | excuses | −0.721 | −2.491 | 0.013 *
internal | 0.200 | 1.931 | 0.054 | glass | −0.731 | −4.772 | 0.000 ***
breakfast | 0.200 | 2.243 | 0.025 * | places | −0.740 | −4.237 | 0.000 ***
offer | 0.187 | 1.885 | 0.059 | swindle | −0.746 | −3.748 | 0.000 ***
recommend | 0.187 | 2.503 | 0.012 * | lies | −0.751 | −3.024 | 0.003 **
went | 0.183 | 1.772 | 0.076 | abuse | −0.756 | −1.863 | 0.063
find | 0.169 | 1.693 | 0.090 | sardines | −0.758 | −1.910 | 0.056
comfort | 0.167 | 1.918 | 0.055 | deteriorated | −0.786 | −1.876 | 0.061
quiet | 0.161 | 2.305 | 0.021 * | robots | −0.798 | −2.159 | 0.031 *
attentive | 0.156 | 2.830 | 0.005 ** | checkin | −0.810 | −2.120 | 0.034 *
highlight | 0.154 | 2.084 | 0.037 * | garbage | −0.831 | −2.540 | 0.011 *
can | 0.151 | 1.624 | 0.104 | decision | −0.832 | −2.580 | 0.010 *
little | 0.149 | 2.193 | 0.028 * | subject | −0.860 | −1.879 | 0.060
weigh | 0.145 | 2.063 | 0.039 * | exclusive | −0.861 | −2.231 | 0.026 *
also | 0.144 | 2.505 | 0.012 * | about | −0.868 | −2.312 | 0.021 *
better | 0.141 | 2.228 | 0.026 * | painful | −0.890 | −3.779 | 0.000 ***
land | 0.140 | 1.660 | 0.097 | minuscule | −0.914 | −2.125 | 0.034 *
remain | 0.133 | 1.636 | 0.102 | remodeled | −0.916 | −1.892 | 0.059
nice | 0.132 | 2.050 | 0.040 * | fall | −0.937 | −1.827 | 0.068
always | 0.128 | 2.917 | 0.004 ** | support | −0.964 | −3.005 | 0.003 **
very | 0.128 | 4.446 | 0.000 *** | sensitivity | −0.965 | −1.703 | 0.089
real | 0.122 | 1.627 | 0.104 | rude | −0.966 | −3.639 | 0.000 ***
something | 0.121 | 2.093 | 0.036 * | list | −1.026 | −2.695 | 0.007 **
friendly | 0.112 | 2.552 | 0.011 * | perfume | −1.033 | −2.171 | 0.030 *
deal | 0.112 | 2.262 | 0.024 * | irresponsible | −1.038 | −2.682 | 0.007 **
entertainment | 0.086 | 1.864 | 0.062 | remark | −1.056 | −2.025 | 0.043 *
without | −0.077 | −2.075 | 0.038 * | tablets | −1.058 | −1.737 | 0.082
normal | −0.085 | −1.603 | 0.109 | commitments | −1.114 | −2.531 | 0.011 *
passenger | −0.086 | −1.682 | 0.093 | weak | −1.127 | −1.954 | 0.051
hours | −0.096 | −2.349 | 0.019 * | die | −1.206 | −1.898 | 0.058
appear | −0.108 | −1.761 | 0.078 | judge | −1.244 | −2.504 | 0.012 *
none | −0.109 | −1.909 | 0.056 | veil | −1.270 | −2.356 | 0.019 *
when | −0.110 | −2.130 | 0.033 * | delete | −1.331 | −4.930 | 0.000 ***
more | −0.110 | −3.449 | 0.001 ** | stingy | −1.352 | −2.093 | 0.036 *
delay | −0.116 | −2.198 | 0.028 * | strict | −1.361 | −1.604 | 0.109
between | −0.117 | −2.215 | 0.027 * | victims | −1.389 | −1.797 | 0.072
count | −0.119 | −1.880 | 0.060 | pathetic | −1.419 | −3.099 | 0.002 **
because | −0.120 | −2.180 | 0.029 * | christmas | −1.472 | −3.581 | 0.000 ***
almost | −0.124 | −2.094 | 0.036 * | indignation | −1.513 | −1.971 | 0.049 *
case | −0.125 | −1.637 | 0.102 | sugar | −1.518 | −2.108 | 0.035 *
customer | −0.129 | −1.647 | 0.100 | deadly | −1.539 | −3.084 | 0.002 **
minute | −0.130 | −1.872 | 0.061 | intolerable | −1.596 | −2.119 | 0.034 *
neither | −0.136 | −2.506 | 0.012 * | tending | −1.654 | −1.632 | 0.103
last | −0.150 | −2.391 | 0.017 * | pure | −1.681 | −1.669 | 0.095
nothing | −0.151 | −3.067 | 0.002 ** | incompetence | −1.688 | −3.277 | 0.001 **
receive | −0.157 | −1.659 | 0.097 | converted | −1.902 | −3.186 | 0.001 **
want | −0.174 | −1.999 | 0.046 * | load | −1.928 | −2.348 | 0.019 *
speak | −0.175 | −1.696 | 0.090 | rare | −1.965 | −2.478 | 0.013 *
request | −0.191 | −1.692 | 0.091 | modification | −1.992 | −2.221 | 0.026 *
lose | −0.192 | −2.588 | 0.010 * | roadkill | −2.124 | −3.442 | 0.001 **
solve | −0.194 | −2.055 | 0.040 * | anniversary | −2.195 | −2.200 | 0.028 *
minimum | −0.202 | −1.822 | 0.068 | disparate | −2.521 | −1.873 | 0.061
conditions | −0.203 | −1.919 | 0.055 | falling | −2.825 | −1.685 | 0.092
answer | −0.204 | −1.684 | 0.092 | capital | −3.231 | −2.503 | 0.012 *
think | −0.217 | −1.624 | 0.105 | molehill | −3.295 | −2.290 | 0.022 *
Significance levels: * p < 0.05; ** p < 0.01; *** p < 0.001.
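Once such a regression has been fitted, the significant labels reported in Table 3 can be filtered programmatically. The short sketch below (assuming a statsmodels results object and a parallel list of label names, both hypothetical) keeps coefficients whose p-value falls under the chosen level and separates positive from negative sentiment labels by the sign of B.

```python
# Sketch: extract significant labels from a fitted rating-on-labels OLS, as in Table 3.
# 'results' is assumed to be a statsmodels regression results object fitted with an
# added constant, and 'label_names' the corresponding list of label (tag) names.
def significant_labels(results, label_names, alpha=0.05):
    """Return (label, B, t, p) tuples for labels significant at the given level."""
    rows = []
    # Index 0 is the intercept when sm.add_constant() was used, so labels start at 1.
    for name, b, t, p in zip(label_names, results.params[1:],
                             results.tvalues[1:], results.pvalues[1:]):
        if p < alpha:
            rows.append((name, b, t, p))
    return rows

# positive = [row for row in significant_labels(results, label_names) if row[1] > 0]
# negative = [row for row in significant_labels(results, label_names) if row[1] < 0]
```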
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
