A Recommender System for Mobility-as-a-Service Plans Selection
Round 1
Reviewer 1 Report
Dear Authors, thanks for your contribution on MaaS topics.
I think that this paper has two main problems that could be solved in a revised version:
+First, the quality of all figures should be improved, as they are really difficult to read and understand.
+Second, I suggest extending the references with recent work. I suggest this new entry, but consider adding more:
The Ws of MaaS: Understanding mobility as a service from a literature review, by Daniela Arias-Molinares, Juan C. García-Palomares, IATSS Research, Volume 44, Issue 3, 2020, ISSN 0386-1112. https://doi.org/10.1016/j.iatssr.2020.02.001.
Finally, given the relation of this work to the authors' previous work in your reference [8], this relation should be detailed more fully inside the paper.
Author Response
Please see the attachment
Author Response File: Author Response.pdf
Reviewer 2 Report
New types of mobility beyond public transport and private cars (e.g., shared bicycles, electric scooters, car-sharing) allow for more seamless traveling, address travelers’ needs in a personalized manner, optimize resource usage and thus improve sustainability. Mobility-as-a-Service (MaaS) bundles different types of mobility services into plans and offers them to customers. MaaS Operators deploy mobility apps and back-end platforms that offer travelers a single point for MaaS plan selection, route planning, booking and payment. However, since travelers are accustomed to using single transport services, they should be aided in selecting MaaS plans that are relevant to their specific characteristics, habits and preferences. The authors present a hybrid knowledge-based recommender system to recommend MaaS plans to users, leveraging constraint satisfaction techniques and cosine similarity. An experiment showed positive results when compared to other recommendation methods. In a pilot study, the majority of the participants chose an actual MaaS plan from the top 3 recommendations.
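(For concreteness, the cosine-similarity step mentioned above could look roughly like the following minimal sketch; the mode features, user profile and plan vectors are hypothetical and not taken from the paper.)

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two feature vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical feature encoding: [bus, metro, bike_share, car_share, e_scooter]
user_profile = np.array([0.9, 0.7, 0.3, 0.1, 0.0])
maas_plans = {
    "Urban Basic":   np.array([1.0, 1.0, 0.0, 0.0, 0.0]),
    "Active Plus":   np.array([0.5, 0.5, 1.0, 0.0, 1.0]),
    "All Inclusive": np.array([1.0, 1.0, 1.0, 1.0, 1.0]),
}

# Rank candidate plans by similarity to the user profile, most similar first.
ranked = sorted(maas_plans.items(),
                key=lambda item: cosine_similarity(user_profile, item[1]),
                reverse=True)
for name, vector in ranked:
    print(f"{name}: {cosine_similarity(user_profile, vector):.3f}")
```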
The paper is clear and well written (although there is some duplication). However, a main issue with the paper is, in my opinion, a lack of contribution. The authors rather straightforwardly apply well-known CSP and similarity methods to a problem that seems very similar to other recommender problems, such as tourism packages and product bundles. There is no explanation of why this problem is different from those other problems, and, hence, why it cannot be solved by existing recommender methods or systems. Regarding the method, the knowledge that is utilized by the recommender system has an unclear synthesis (this is not aided by the fact that the relevant figures are illegible). The evaluation is comprehensive, but it is unclear why the different methods were evaluated in pair-wise evaluations. Figures 1, 3-7, 9, 10, 14, 16 are *very* low quality (mostly illegible, really).
With regard to the method, it is unclear where the constraints come from. E.g., what is the rationale for CH_2, CS_2: would it not be possible for another transport mode, of which the user was not aware, to be more suitable for their purposes (e.g., shared electric scooters, bikes)? Won't this just reinforce the user's prior modes of transport (which may not have included "novel" ones)? E.g., why not determine suitability based on distance, cost, safety, amount of traffic, ...? (Since figs. 3 and 4 are illegible, the examples from the text are the only ones to go on.) I am not saying that this approach would be better, but the rationale for these constraints is not as straightforward as it may appear. (Was it perhaps a user focus group, or a literature study?) Secondly, the difference between "hard" and "soft" constraints is unclear, as it seems that both strictly rule out plans - I believe the difference is that the latter include *ranges* of values instead of a discrete value (hence, "soft" implies "interval" rather than "fuzzy"). In this vein, why is it necessary to have two stages for ruling out plans, since all plans need to adhere to all constraints (if this is for optimization, it is unclear how that would optimize things)?
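To make the hard/soft distinction questioned above concrete, here is a minimal two-stage filtering sketch; the plan attributes and constraints are invented for illustration and only mirror the paper's CH/CS naming. Note that, as written, both stages strictly rule plans out, which is exactly the point of the question.

```python
# Hypothetical MaaS plans and user constraints (not the authors' actual data).
plans = [
    {"name": "Urban Basic",   "modes": {"bus", "metro"},                "monthly_cost": 60},
    {"name": "Active Plus",   "modes": {"bus", "bike", "scooter"},      "monthly_cost": 85},
    {"name": "All Inclusive", "modes": {"bus", "metro", "car", "bike"}, "monthly_cost": 140},
]
user = {"required_modes": {"bus", "metro"}, "budget_range": (50, 100)}

def satisfies_hard(plan):
    """'Hard' constraint on discrete values: the plan must contain every required mode."""
    return user["required_modes"] <= plan["modes"]

def satisfies_soft(plan):
    """'Soft' (interval) constraint: the plan's cost must fall within a range
    rather than match a single discrete value."""
    low, high = user["budget_range"]
    return low <= plan["monthly_cost"] <= high

# Stage 1: hard constraints; Stage 2: soft constraints. Both eliminate plans outright.
candidates = [p for p in plans if satisfies_hard(p)]
candidates = [p for p in candidates if satisfies_soft(p)]
print([p["name"] for p in candidates])  # only 'Urban Basic' survives both stages
```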
The paper presents a comprehensive evaluation that gives a positive view of the proposed method. However, it is unclear why the evaluation always takes the form of a comparison with another approach, since this clearly introduces bias. Why not simply have users rank each separate method, i.e., the generated MaaS plans, and then compare the rankings? E.g., it seems possible that "price-desc", in a comparison between "price-desc" and "price-asc", may actually score better than "csp-sim" does in any comparison, so it is unclear what the added value is of such an evaluation. (If the alternatives are of low enough quality, it seems that users will score the option higher than they would in a stand-alone setting.) Finally, the "data cleaning" raises a lot of questions. The authors mention that "cases of participants that conducted the survey in less than 1.5 or more than 12 minutes were filtered out from the initial dataset". I can guess why the former would be ruled out, but why the latter? (I have never seen this reported in other work.) Also, the authors mention "examining the various answers to both Likert scale and direct questions by removing the relevant unbalanced answers". This certainly seems very strange - what is meant by unbalanced - simply removing very low/high scores?
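As an illustration of the suggested alternative, a stand-alone rating protocol could look like the sketch below: each method's recommendation list is rated independently on a Likert scale and the methods are then compared by their mean ratings. The ratings are invented; only the method names come from the paper.

```python
from statistics import mean

# Hypothetical stand-alone ratings (1-5 Likert) given by five users to the
# recommendation list produced by each method, rated independently rather
# than in pair-wise comparisons.
ratings = {
    "csp-sim":    [5, 4, 4, 5, 3],
    "price-asc":  [3, 3, 4, 2, 3],
    "price-desc": [2, 2, 3, 1, 2],
}

# Rank the methods by their mean stand-alone rating, best first.
for method in sorted(ratings, key=lambda m: mean(ratings[m]), reverse=True):
    print(f"{method}: mean rating {mean(ratings[method]):.2f}")
```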
The authors previously presented this work in a conference; the paper should really outline concrete differences with the prior work (AFAICT, the main difference is a fleshing out of the CSP, and the more comprehensive evaluation).
MINOR
- in fig. 2, what are "transformation scripts"?
- on p. 5 (line 211) the authors _allude_ to data-driven learning, but up until this point it has not been mentioned.
- "the similarity mechanism either receives direct user feedback or in the case of an existing user, his/her past data (i.e. past subscriptions) are considered". Is this either-or?
- definitions: instead of writing vn, cm, etc., use subscripts (v_n, c_m) - it really makes the notation a lot more legible
- what is "maximum level of this particular transport mode"
- unclear: "the max/min mode levels in MaaS plans depends on the application at hand"
- instead of "work demonstrated in [17]", the authors may consider a style such as "work by Reiterer et al. [17]"
- unclear sentence: "in particular the provided modal allowances for the under-investigation cities"
- "facilitate end-users to select MasS" -> "facilitate end-users to select MaaS"
Author Response
Please see the attachment
Author Response File: Author Response.pdf
Reviewer 3 Report
The proposed article addresses an up-to-date and socially relevant issue. Being the result of a European project, MaaS4EU, it could have a more in-depth theoretical framework, namely one related to the social value of the project.
However, the main flaw of the article results from the use of a set of low-resolution graphics and schematics, which consulting the project's website did not allow us to overcome. This quality issue strongly affected the present evaluation.
It is necessary to improve the graphic quality to allow a proper evaluation of this article.
After this central observation, some smaller questions can also be asked. In lines 236 and 237, contrary to what is presented in lines 233 and 234, the example is not completely understandable. The same can be said for lines 248 and 249.
In lines 294 and 296 it is stated that "For all other responses the feature value will vary between these two extremes"; the question is which decay function was used, and why?
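For illustration, two candidate decay functions that could map intermediate responses onto feature values between the two extremes; these are hypothetical choices, not necessarily what the authors used.

```python
import math

def linear_decay(response, r_min=1, r_max=5):
    """Feature value decays linearly from 1 (at r_min) to 0 (at r_max)."""
    return (r_max - response) / (r_max - r_min)

def exponential_decay(response, r_min=1, rate=0.7):
    """Feature value decays exponentially with the distance from r_min."""
    return math.exp(-rate * (response - r_min))

# Compare how the two choices spread intermediate Likert responses (1-5).
for r in range(1, 6):
    print(r, round(linear_decay(r), 2), round(exponential_decay(r), 2))
```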
Lines 349 to 353 refer to the use of modal-use information according to behavior in the previous n months. Was the issue of seasonality considered in any way, e.g., use in winter versus summer months, or in periods with and without school activities?
Please redefine Table 1; as it stands, it is muddled. The same applies to Figure 12, which must be brought in line with Figure 17.
Finally, what can be concluded from the result presented in Figure 13? Shouldn't these answers be cross-tabulated with the respondents' sociodemographic characteristics? Not much can be concluded from this result as it stands.
Author Response
Please see the attachment
Author Response File: Author Response.pdf
Round 2
Reviewer 3 Report
Dear authors,
Thanks for the corrections you made to the article. The main doubts have been clarified. However, the graphic quality of the figures is not suitable for publication. Please redo the figures.
Kind regards
Author Response
Point 1: Thanks for the corrections you made to the article. The main doubts have been clarified. However, the graphic quality of the figures is not suitable for publication. Please redo the figures.
Response 1: Thank you for the comment. We have updated the figures and will send the raw files to the editor, since the MDPI system keeps lowering the quality of the figures.