Algorithmic Discriminations and New Forms of Protections: An Analysis of the Italian Case
Round 1
Reviewer 1 Report
The article can be published after minor editing of a few spelling errors (lines 130, 147, Figure 1) and some corrections to the English.
I also suggest changing the design of Figure 2.
Minor editing is needed for a few spelling errors (lines 130, 147, Figure 1), and some sentences should be revised into proper English.
Author Response
Dear Reviewer, we are grateful for your comments and suggestions.
We have corrected the typos you indicated. We have also changed the design of Figure 2.
Finally, we have minimized the self-citations in our essay and added further references to the bibliography.
Reviewer 2 Report
This paper is almost ready for publication. I find the theme relevant and timely, given the rapid development of AI-managed systems. However, there are some issues worth considering in finalising the paper.
At the very beginning of the introduction, it would help the reader if you briefly mentioned what this paper is about.
In Table 1, the figure for women/not minimum hourly wage should be 45.2.
In rows 359-62 you refer to subjective vs. objective criteria. It would be important to mention here that subjective criteria may be biased, e.g. when a customer rating is used as a criterion.
Table 1 and Figures 1-3: you could carry out statistical testing of the differences in the descriptive analysis, so that the reader can assess the relevance of the differences.
In the table, one educational level is "lyceum"; later you use "high school". The latter is more easily understood by readers.
When referring to males, it seems that you use the term "man" several times in the text instead of "men". Please check.
The OECD Employment Outlook 2023, published in July, is under the theme AI and the Labour Market. I recommend checking this. The Italian case, explained in more detail in this paper, is also mentioned in Chapter 6.2 of the OECD Employment Outlook.
Minor editing/checks needed.
Author Response
Dear Reviewer, we are grateful for your comments and suggestions.
As you suggested, at the very beginning of the Introduction, we have briefly indicated what this paper is about.
We have corrected the figure in Table 1.
We have mentioned that subjective criteria may be biased compared to objective ones.
We have carried out t-tests of the statistical differences and added a comment after Table 1.
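For context, a test of this kind takes only a few lines of Python. The sketch below is purely illustrative: the variable names and wage values are hypothetical, not the authors' data, and it assumes a comparison of hourly wages between two groups using Welch's two-sample t-test via scipy.

```python
from scipy import stats

# Hypothetical hourly-wage samples for two groups of platform workers
# (illustrative values only; not the data reported in the paper)
wages_women = [6.8, 7.1, 6.5, 7.0, 6.9, 7.2]
wages_men = [7.6, 7.9, 7.4, 8.0, 7.5, 7.8]

# Welch's two-sample t-test, which does not assume equal group variances
t_stat, p_value = stats.ttest_ind(wages_women, wages_men, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```

A p-value below the chosen significance level (e.g. 0.05) would indicate that the difference between the group means is unlikely to be due to chance alone.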
We have replaced "lyceum" with "high school".
We have checked the text and replaced the incorrect term "man" with "men".
We have mentioned the Employment Outlook (OECD, 2023) on the impact of artificial intelligence on the labour market, highlighting the uncertainty surrounding its current and, especially, future impact, as well as the most appropriate policy actions to promote the legitimate use of AI.
On the Italian case, as indicated in the 2023 OECD Report, we have expanded our discussion of the judgment of the Court of Bologna of 31 December 2020 in the Deliveroo case, in which the work platform was found to have engaged in indirect discrimination. We have also discussed Law 128/2019.
We have also minimized the self-citations in our essay and added further references to the bibliography.
Reviewer 3 Report
Dear Authors
Hope you are well. I congratulate you on your manuscript. The issue is very pertinent and topical.
Although opinions on the impact of AI differ, there is general agreement among economic studies that AI will increase inequality. One possible example of this is a further shift in advantage from labour to capital, weakening labour institutions along the way. On the other hand, AI can help detect gender-biased language, which might otherwise have discouraged some applicants: future postings could be made more gender neutral, increasing the number of female applicants who get past the initial screenings. AI can also support less-biased decision making. I therefore suggest you add to the conclusion section that such discrimination can occur because AI algorithms are often trained on data sets that are skewed or not fully representative of the groups they serve, and in part because they are built by humans who have their own natural biases. On this basis, your study could offer some recommendations.
Best wishes
Author Response
Dear Reviewer, we are grateful for your comment and suggestion.
Following your suggestion, we have added a sentence to the conclusion. We clarify that the data provided to an algorithm are not neutral and that this can lead to various forms of discrimination; this is how an algorithm can amplify discrimination in the labour market. Consequently, more human control is needed over the data fed into algorithms, and fully automated decisions should be prohibited.
We have also minimized the self-citations in our essay and added further references to the bibliography.
Round 2
Reviewer 1 Report
The authors have applied all the changes requested in the first revision.
Reviewer 2 Report
Comments very well taken on board!
Please check the language in the additions.
Reviewer 3 Report
In my opinion the article can be published.