Digital Analysis in Digital Humanities

A special issue of Future Internet (ISSN 1999-5903). This special issue belongs to the section "Big Data and Augmented Intelligence".

Deadline for manuscript submissions: closed (30 September 2024) | Viewed by 12748

Special Issue Editors


Dr. Francesca Fallucchi
Guest Editor
1. Department of Innovation and Information Engineering, Università degli Studi "Guglielmo Marconi", 00193 Roma, Italy
2. Leibniz Institute for Educational Media | Georg Eckert Institute, 38118 Braunschweig, Germany
Interests: digital analysis and digital humanities; machine learning; natural language processing; knowledge discovery and data linking of structured and unstructured open linked data; semantic technology and knowledge management of big data

Prof. Dr. Ernesto William De Luca
Guest Editor
Leibniz Institute for Educational Media | Georg Eckert Institute, 38118 Braunschweig, Germany
Interests: data and web mining; human-machine interaction; information processing; user and data modeling; semantic web and information retrieval

Special Issue Information

Dear Colleagues,

The aim of this Special Issue on “Digital Analysis in Digital Humanities” is to gather results from the interdisciplinary area of innovative data analytics methods and customized technologies for digital humanities research and analysis.

This Special Issue calls for submissions of novel research results on innovative methods and technologies with a clear reference to data collection, data processing, data analysis, and data visualization that can be exploited in the fields of digital humanities and cultural heritage.

This Special Issue also invites submissions that concentrate on the well-founded idea that digital analysis enables the digital humanities domain to digitally transform its research analysis and results, becoming more innovative and forward-looking in its decision making.

In addition, this Special Issue aims to emphasize the role of humanists in data analysis by demonstrating the successful and challenging application of human interaction to interpret quantitative analyses and to define new advanced and customized solutions.

The Special Issue thus intends to focus on any aspect of “Digital Analysis in Digital Humanities” relating to computer science and humanities research interests, as well as to recent research trends.

Dr. Francesca Fallucchi
Prof. Dr. Ernesto William De Luca
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Future Internet is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • digital humanities
  • digital analysis
  • digital libraries
  • information technology
  • data mining
  • machine learning
  • natural language processing

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)


Research

31 pages, 5283 KiB  
Article
Beyond Lexical Boundaries: LLM-Generated Text Detection for Romanian Digital Libraries
by Melania Nitu and Mihai Dascalu
Future Internet 2024, 16(2), 41; https://doi.org/10.3390/fi16020041 - 25 Jan 2024
Cited by 4 | Viewed by 2489
Abstract
Machine-generated content reshapes the landscape of digital information; hence, ensuring the authenticity of texts within digital libraries has become a paramount concern. This work introduces a corpus of approximately 60 k Romanian documents, including human-written samples as well as texts generated using six distinct Large Language Models (LLMs) and three different generation methods. Our robust experimental dataset covers five domains, namely books, news, legal, medical, and scientific publications. The exploratory text analysis revealed differences between human-authored and artificially generated texts, exposing the intricacies of lexical diversity and textual complexity. Since Romanian is a less-resourced language requiring dedicated detectors on which out-of-the-box solutions do not work, this paper introduces two techniques for discerning machine-generated texts. The first method leverages a Transformer-based model to categorize texts as human- or machine-generated, while the second method extracts and examines linguistic features, such as the top textual complexity indices identified via the Kruskal–Wallis mean rank and burstiness, which are then fed into a machine-learning model based on an extreme gradient-boosting decision tree. The methods show competitive performance, with the first technique outperforming the second in two out of five domains, reaching an F1 score of 0.96. Our study also includes a text similarity analysis between human-authored and artificially generated texts, coupled with a SHAP analysis to understand which linguistic features contribute most to the classifier's decision.
(This article belongs to the Special Issue Digital Analysis in Digital Humanities)
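
The feature-based detector described above can be pictured with a brief sketch: hand-crafted complexity features and a burstiness score are ranked with a Kruskal–Wallis test and passed to an extreme gradient-boosting classifier. The burstiness proxy, the feature selection, and the hyperparameters below are illustrative assumptions, not the authors' actual pipeline.

# Hypothetical sketch of a feature-based detector: complexity indices plus a
# burstiness score, ranked by Kruskal-Wallis and fed to a gradient-boosted tree.
import numpy as np
from scipy.stats import kruskal
from xgboost import XGBClassifier

def burstiness(text: str) -> float:
    """Variance-to-mean ratio of sentence lengths, one simple burstiness proxy."""
    lengths = [len(s.split()) for s in text.split(".") if s.strip()]
    if len(lengths) < 2:
        return 0.0
    return float(np.var(lengths) / (np.mean(lengths) + 1e-9))

def top_features_by_kruskal(X: np.ndarray, y: np.ndarray, k: int = 20) -> np.ndarray:
    """Rank feature columns by the Kruskal-Wallis H statistic across the two classes."""
    stats = [kruskal(X[y == 0, j], X[y == 1, j]).statistic for j in range(X.shape[1])]
    return np.argsort(stats)[::-1][:k]

# X: matrix of textual complexity indices (one row per document); y: 0 = human, 1 = LLM.
# selected = top_features_by_kruskal(X, y)
# clf = XGBClassifier(n_estimators=300, max_depth=6).fit(X[:, selected], y)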

12 pages, 1610 KiB  
Article
Generating Synthetic Resume Data with Large Language Models for Enhanced Job Description Classification
by Panagiotis Skondras, Panagiotis Zervas and Giannis Tzimas
Future Internet 2023, 15(11), 363; https://doi.org/10.3390/fi15110363 - 9 Nov 2023
Cited by 3 | Viewed by 4411
Abstract
In this article, we investigate the potential of synthetic resumes as a means for the rapid generation of training data and their effectiveness in data augmentation, especially in categories marked by sparse samples. The widespread implementation of machine learning algorithms in natural language processing (NLP) has notably streamlined the resume classification process, delivering time and cost efficiencies for hiring organizations. However, the performance of these algorithms depends on the abundance of training data. While selecting the right model architecture is essential, it is also crucial to ensure the availability of a robust, well-curated dataset. For many categories in the job market, data sparsity remains a challenge. To deal with this challenge, we employed the OpenAI API to generate both structured and unstructured resumes tailored to specific criteria. These synthetically generated resumes were cleaned, preprocessed, and then used to train two distinct models: a transformer model (BERT) and a feedforward neural network (FFNN) that incorporated Universal Sentence Encoder 4 (USE4) embeddings. Both models were evaluated on the multiclass resume classification task; when trained on an augmented dataset containing 60 percent real data (from the Indeed website) and 40 percent synthetic data from ChatGPT, the transformer model achieved exceptional accuracy, while the FFNN, predictably, achieved lower accuracy. These findings highlight the value of augmenting real-world data with ChatGPT-generated synthetic resumes, especially in the context of limited training data. The suitability of the BERT model for such classification tasks further reinforces this narrative.
(This article belongs to the Special Issue Digital Analysis in Digital Humanities)
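
As a rough illustration of the augmentation strategy described above, the sketch below prompts an LLM for synthetic resumes in a sparse job category and mixes them with real samples before training a classifier. The prompt wording, model name, and mixing ratio are assumptions (using the openai Python client, v1.x), not the authors' exact setup.

# Hypothetical sketch: generate synthetic resumes for an under-represented job
# category and mix them with real data; not the authors' exact pipeline.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def synthetic_resume(category: str) -> str:
    """Ask the model for one plausible resume in the given job category."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": f"Write a realistic, anonymised resume for a {category} role.",
        }],
    )
    return response.choices[0].message.content

# Example: pad an under-represented category with synthetic samples.
# synthetic = [synthetic_resume("data engineer") for _ in range(200)]
# train_texts = real_resumes + synthetic
# train_labels = real_labels + ["data engineer"] * len(synthetic)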

16 pages, 1368 KiB  
Article
New RFI Model for Behavioral Audience Segmentation in Wi-Fi Advertising System
by Shueh-Ting Lim, Lee-Yeng Ong and Meng-Chew Leow
Future Internet 2023, 15(11), 351; https://doi.org/10.3390/fi15110351 - 26 Oct 2023
Cited by 1 | Viewed by 1900
Abstract
In this technological era, businesses increasingly place advertisements via Wi-Fi advertising to expose their brands and products to the public. Wi-Fi advertising offers a platform for businesses to leverage their marketing strategies and achieve desired goals, provided they have a thorough understanding of their audience's behaviors. This paper formulates a new RFI (recency, frequency, and interest) model that analyzes audience behavior towards an advertisement. The audience's interest is measured based on the relationship between their total view duration on an advertisement and the overall clicks it receives. With the help of a clustering algorithm that performs dynamic segmentation, audience behavior patterns are then interpreted by segmenting the audience based on their engagement behaviors. In the experiments, two different Wi-Fi advertising attributes are tested to show that the new RFI model can effectively interpret audience engagement behaviors using the proposed dynamic characteristics range table. Weakly and strongly engaged behavioral characteristics of the segmented audience patterns, such as those of a one-time audience, are interpreted successfully with the dynamic characteristics range table.
(This article belongs to the Special Issue Digital Analysis in Digital Humanities)
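
A minimal sketch of an RFI-style segmentation is given below, assuming a per-impression log with audience identifiers, timestamps, view durations, and clicks; the column names, the interest formula, and the use of k-means are illustrative assumptions rather than the paper's exact formulation.

# Hypothetical sketch of recency/frequency/interest features plus clustering.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def rfi_table(log: pd.DataFrame) -> pd.DataFrame:
    """Aggregate a per-impression log into one RFI row per audience member."""
    now = log["timestamp"].max()
    grouped = log.groupby("audience_id")
    return pd.DataFrame({
        "recency": (now - grouped["timestamp"].max()).dt.days,  # days since last exposure
        "frequency": grouped.size(),                            # number of ad exposures
        # Interest: relate total view duration to the clicks it produced.
        "interest": grouped["view_duration"].sum() / (grouped["clicks"].sum() + 1),
    })

# segments = KMeans(n_clusters=4, n_init=10).fit_predict(
#     StandardScaler().fit_transform(rfi_table(log)))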

14 pages, 2745 KiB  
Article
Digital Qualitative and Quantitative Analysis of Arabic Textbooks
by Francesca Fallucchi, Bouchra Ghattas, Riem Spielhaus and Ernesto William De Luca
Future Internet 2022, 14(8), 237; https://doi.org/10.3390/fi14080237 - 29 Jul 2022
Cited by 4 | Viewed by 2747
Abstract
Digital Humanities (DH) provide a broad spectrum of functionalities and tools that enable the enrichment of both quantitative and qualitative research methods in the humanities. It has been widely recognized that DH can help in curating and analysing large amounts of data. However, digital tools can also support research processes in the humanities that are interested in detailed analyses of how empirical sources are patterned. Following a methodological differentiation between close and distant reading with regard to textual analysis, this article describes the Edumeres Toolbox, a digital tool for textbook analysis. The Edumeres Toolbox is an outcome of the continuous interdisciplinary exchange between computer scientists and humanist researchers, whose expertise is crucial to convert information into knowledge by means of (critical) interpretation and contextualization. This paper presents a use case in order to describe the various functionalities of the Edumeres Toolbox and their use for the analysis of a collection of Arabic textbooks. In doing so, it shows how the interaction between humanist researchers and computer scientists in this digital process produces innovative research solutions and how the tool enables users to discover structural and linguistic patterns and to develop innovative research questions. Finally, the paper describes challenges recognized by humanist researchers in using digital tools in their work, which still require in-depth research and practical efforts from both parties to improve the tool's performance.
(This article belongs to the Special Issue Digital Analysis in Digital Humanities)
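
In the spirit of the distant-reading analysis described above, the following minimal sketch counts the most frequent terms per textbook in a plain-text corpus. It is a generic illustration, not part of the Edumeres Toolbox; the file layout and tokenisation are assumptions.

# Hypothetical distant-reading sketch: per-document term frequencies for a corpus.
from collections import Counter
from pathlib import Path
import re

def term_frequencies(corpus_dir: str, top_n: int = 20) -> dict[str, list[tuple[str, int]]]:
    """Return the top_n most frequent tokens for every .txt file in corpus_dir."""
    results = {}
    for path in Path(corpus_dir).glob("*.txt"):
        tokens = re.findall(r"\w+", path.read_text(encoding="utf-8").lower())
        results[path.stem] = Counter(tokens).most_common(top_n)
    return results

# freqs = term_frequencies("arabic_textbooks/")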
