Applying AI to Social Science and Social Science to AI

A special issue of Social Sciences (ISSN 2076-0760).

Deadline for manuscript submissions: closed (15 January 2024)

Special Issue Editors


Guest Editor
1. Department of Computer Science and Artificial Intelligence, Universidad de Granada, 18011 Granada, Spain
2. UCL Department of Experimental Psychology, University College London, London WC1H 0AP, UK
Interests: machine learning; big data; Internet of Things; XAI (explainable artificial intelligence); association rules

Guest Editor
Department of Developmental and Educational Psychology, University of Granada, 18071 Granada, Spain
Interests: cyberbullying; developmental psychology

Guest Editor
UCL Department of Experimental Psychology, University College London, London WC1H 0AP, UK
Interests: causal reasoning; counterfactual reasoning; evidential reasoning; responsibility attribution

Special Issue Information

Dear Colleagues,

Artificial Intelligence (AI) models human behavior, decision making, and reasoning, and the study of human reasoning has in turn inspired the development of new AI models. AI practitioners have also sought to build more understandable models and to provide clearer indicators of how a model arrives at a decision; this emerging field is called Explainable AI (XAI). Meanwhile, social scientists have begun to outline what makes for a better XAI explanation and how we can evaluate people’s reactions to these explanations.

This Special Issue aims to showcase research on the theoretical and practical applications of AI to the social sciences and vice versa. A subset of papers will be selected from the 15th International Conference on Flexible Query Answering Systems, held in Palma de Mallorca, Spain. External submissions are also welcome.

We are seeking submissions on, but not limited to, the following topics; other submissions involving AI and the social sciences will also be considered.

  1. Blame attribution to machines;
  2. Causal inference and machine learning;
  3. The risks and benefits of applications of AI;
  4. AI and privacy;
  5. Bias in AI;
  6. Human–machine trust in AI systems;
  7. Citizen perceptions of AI and its impact;
  8. Explainable Artificial Intelligence (XAI);
  9. Evaluation models for XAI;
  10. Technical research into the representation, acquisition, and use of ethical knowledge by AI systems;
  11. Technical research into solutions for AI challenges such as bias, fairness, explainability, accountability, responsibility, and risk;
  12. ChatGPT in social science research.

Dr. Carlos Fernandez-Basso
Dr. Jesica Gómez Sánchez
Prof. Dr. David Lagnado
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a double-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Social Sciences is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • machine learning
  • explainable AI
  • social science
  • human–machine trust

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (1 paper)

9 pages, 274 KiB  
Review
Are TikTok Algorithms Influencing Users’ Self-Perceived Identities and Personal Values? A Mini Review
by Claudiu Gabriel Ionescu and Monica Licu
Soc. Sci. 2023, 12(8), 465; https://doi.org/10.3390/socsci12080465 - 21 Aug 2023
Cited by 9
Abstract
The use of TikTok is more widespread now than ever, and it has a big impact on users’ daily lives, with self-perceived identity and personal values being topics of interest in light of the algorithmically curated content. This mini-review summarizes current findings related to the TikTok algorithm, and the impact it has on self-perceived identity, personal values, or related concepts of the Self. We pass through the contents of algorithmic literacy and emphasize its importance along with users’ attitudes toward algorithms. In the first part of our results, we show conceptual models of algorithms like the crystal framework, platform spirit, and collective imaginaries. In the second part, we talk about the degree of impact a social media algorithm may exert over an individual’s sense of self, understanding how the algorithmized self and domesticated algorithm are trying to sum up the dual development of this relationship. In the end, with the concept of Personal Engagement and the role of cognitive biases, we summarize the current findings and discuss the questions that still need to be addressed. Performing research on the topic of social media, especially TikTok, poses ethical, cultural, and regulatory challenges for researchers. That is why we will discuss the main theoretical frameworks that were published with their attached current studies and their impact on the current theoretical models as well as the limitations within these studies. Finally, we discuss further topics of interest related to the subject and possible perspectives, as well as recommendations regarding future research in areas like impact on personal values and identity, cognitive biases, and algorithmic literacy.
(This article belongs to the Special Issue Applying AI to Social Science and Social Science to AI)