
Algorithmic Music and Sound Computing

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Acoustics and Vibrations".

Deadline for manuscript submissions: closed (20 November 2024) | Viewed by 31844

Special Issue Editor


Dr. Rocco Zaccagnino
Guest Editor
Dipartimento di Informatica, Università degli Studi di Salerno, 84084 Fisciano, Italy
Interests: nature-inspired systems; methodologies and computational models; theoretical computer science; formal languages; combinatorics on words; computational intelligence; Lyndon-based factorizations; bio-inspired unconventional models

Special Issue Information

Dear Colleagues,

In recent decades, the use of algorithmic techniques in music and, more generally, in the manipulation of sound material has grown considerably.

Among these, techniques for equipping machines with intelligence have begun to encompass the reproduction of human creativity; as a consequence, musical composition has become a challenging task for the AI community.

Music information retrieval, music plagiarism detection, and automatic music composition are further examples of problems for which algorithmic approaches have yielded effective solutions.

Artificial intelligence and machine learning techniques open promising paths to new solutions for music- and sound-related problems.

This Special Issue aims to collect contributions that explore novel approaches, improve existing solutions, or survey the state of the art.

Topics of interest include, but are not limited to, algorithmic music composition, music similarity, music information retrieval, and voice identification.

Dr. Rocco Zaccagnino
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (11 papers)


Research

24 pages, 3882 KiB  
Article
Open-Set Recognition of Pansori Rhythm Patterns Based on Audio Segmentation
by Jie You and Joonwhoan Lee
Appl. Sci. 2024, 14(16), 6893; https://doi.org/10.3390/app14166893 - 6 Aug 2024
Viewed by 704
Abstract
Pansori, a traditional Korean form of musical storytelling, is characterized by performances involving a vocalist and a drummer. It is well known for the singer’s expressive narrative (aniri) and delicate gestures with a fan in hand. The classical Pansori repertoires mostly tell stories of love, satire, and humor, as well as some social lessons. These performances, which can extend from three to five hours, necessitate that the vocalist adheres to precise rhythmic structures. The distinctive rhythms of Pansori are crucial for conveying both the narrative and musical expression effectively. This paper explores the challenge of open-set recognition, aiming to efficiently identify unknown Pansori rhythm patterns while applying the methodology to diverse acoustic datasets, such as sound events and genres. We propose a lightweight deep learning-based encoder–decoder segmentation model, which employs a 2-D log-Mel spectrogram as input for the encoder and produces a frame-based 1-D decision along the temporal axis. This segmentation approach, processing 2-D inputs to classify frame-wise rhythm patterns, proves effective in detecting unknown patterns within time-varying sound streams encountered in daily life. Throughout the training phase, both center and supervised contrastive losses, along with cross-entropy loss, are minimized. This strategy aims to create a compact cluster structure within the feature space for known classes, thereby facilitating the recognition of unknown rhythm patterns by allocating ample space for their placement within the embedded feature space. Comprehensive experiments utilizing various datasets—including Pansori rhythm patterns (91.8%), synthetic datasets of instrument sounds (95.1%), music genres (76.9%), and sound datasets from DCASE challenges (73.0%)—demonstrate the efficacy of our proposed method in detecting unknown events, as evidenced by the AUROC metrics.
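
As a rough illustration of the input representation described above, the sketch below computes a 2-D log-Mel spectrogram and shows the shape of a frame-wise decision sequence along the temporal axis. The signal, mel settings, and the five-class decision step are placeholder assumptions, not the authors' configuration.

```python
import numpy as np
import librosa

sr = 22050
y = np.random.randn(sr * 3).astype(np.float32)   # stand-in for a Pansori excerpt

# 2-D log-Mel spectrogram: the encoder's input in the paper's pipeline
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=2048,
                                     hop_length=512, n_mels=64)
log_mel = librosa.power_to_db(mel)               # shape: (n_mels, n_frames)

# An encoder-decoder would map this image to one label per time frame;
# the random logits below only demonstrate the shapes involved.
n_frames = log_mel.shape[1]
frame_logits = np.random.randn(n_frames, 5)      # 5 hypothetical known rhythm classes
frame_labels = frame_logits.argmax(axis=1)       # 1-D decision along the temporal axis
print(log_mel.shape, frame_labels.shape)
```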

15 pages, 386 KiB  
Article
Detecting Selected Instruments in the Sound Signal
by Daniel Kostrzewa, Paweł Szwajnoch, Robert Brzeski and Dariusz Mrozek
Appl. Sci. 2024, 14(14), 6330; https://doi.org/10.3390/app14146330 - 20 Jul 2024
Viewed by 803
Abstract
Detecting instruments in a music signal is often used in database indexing, song annotation, and creating applications for musicians and music producers. Therefore, effective methods that automatically solve this issue need to be created. In this paper, the mentioned task is solved using mel-frequency cepstral coefficients (MFCC) and various architectures of artificial neural networks. The authors’ contribution to the development of automatic instrument detection covers the methods used, particularly the neural network architectures and the voting committees created. All these methods were evaluated, and the results are presented and discussed in the paper. The proposed automatic instrument detection methods show that the best classification quality was obtained for an extensive model, which is the so-called committee of voting classifiers.
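
To make the pipeline concrete, here is a minimal sketch of MFCC extraction feeding a voting committee of classifiers. The feature summary (mean MFCC vector), the committee members, and the synthetic data are assumptions for illustration; the paper's own architectures differ.

```python
import numpy as np
import librosa
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

def mfcc_features(y, sr=22050, n_mfcc=13):
    """Summarize a clip as its mean MFCC vector (an illustrative choice)."""
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).mean(axis=1)

# Synthetic stand-ins for labeled clips (1 = target instrument present).
rng = np.random.default_rng(0)
X = np.stack([mfcc_features(rng.standard_normal(22050)) for _ in range(40)])
y_labels = rng.integers(0, 2, size=40)

# Majority vote across several differently shaped classifiers.
committee = VotingClassifier(
    estimators=[("mlp1", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)),
                ("mlp2", MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)),
                ("lr", LogisticRegression(max_iter=500))],
    voting="hard")
committee.fit(X, y_labels)
print(committee.predict(X[:5]))
```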

16 pages, 746 KiB  
Article
Generating Fingerings for Piano Music with Model-Based Reinforcement Learning
by Wanxiang Gao, Sheng Zhang, Nanxi Zhang, Xiaowu Xiong, Zhaojun Shi and Ka Sun
Appl. Sci. 2023, 13(20), 11321; https://doi.org/10.3390/app132011321 - 15 Oct 2023
Viewed by 2577
Abstract
The piano fingering annotation task refers to assigning finger labels to notes in piano sheet music. Good fingering helps improve the smoothness and musicality of piano performance. In this paper, we propose a method for automatically generating piano fingering using a model-based reinforcement learning algorithm. We treat fingering annotation as a partial constraint combinatorial optimization problem and establish an environment model for the piano performance process based on prior knowledge. We design a reward function based on the principle of minimal motion and use reinforcement learning algorithms to decide the optimal fingering combinations. Our innovation lies in establishing a more realistic environment model and adopting a model-based reinforcement learning approach, compared to model-free methods, to enhance the utilization of samples. We also propose a music score segmentation method to parallelize the fingering annotation task. The experimental section shows that our method achieves good results in eliminating physically impossible fingerings and reducing the amount of finger motion required in piano performance.
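
The "minimal motion" principle lends itself to a compact reward sketch: penalize hand displacement and assign negative-infinite reward to physically impossible stretches. The finger-span table and weighting below are invented for illustration and are not the authors' model.

```python
# Max comfortable semitone stretch between finger pairs (illustrative values;
# fingers numbered 1 = thumb ... 5 = little finger).
FINGER_SPAN = {
    (1, 2): 10, (1, 3): 12, (1, 4): 14, (1, 5): 16,
    (2, 3): 4, (2, 4): 6, (2, 5): 8, (3, 4): 4, (3, 5): 6, (4, 5): 4,
}

def reward(prev_key, prev_finger, key, finger):
    """Higher is better; -inf marks a physically impossible transition."""
    interval = abs(key - prev_key)
    pair = tuple(sorted((prev_finger, finger)))
    if prev_finger != finger and interval > FINGER_SPAN.get(pair, 0):
        return float("-inf")          # hard constraint: unreachable stretch
    return -interval                  # soft cost: prefer minimal hand motion

print(reward(60, 1, 64, 3))  # C4 with thumb -> E4 with middle finger: -4
print(reward(60, 2, 72, 3))  # an octave between fingers 2 and 3: -inf
```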

14 pages, 283 KiB  
Article
Unleashing Creative Synergies: A Mixed-Method Case Study in Music Education Classrooms
by Mário Anibal Cardoso, Elsa Maria Gabriel Morgado and Levi Leonido
Appl. Sci. 2023, 13(17), 9842; https://doi.org/10.3390/app13179842 - 31 Aug 2023
Cited by 2 | Viewed by 1747
Abstract
Algorithmic music composition has been gaining prominence and recognition as an innovative approach to music education, providing students with opportunities to explore creativity, computational thinking, and musical knowledge. This study aims to investigate the impact of integrating algorithmic music composition in the classroom, examining its influence on student engagement, musical knowledge, and creative expression, as well as to enhance computational thinking skills. A mixed-method case study was conducted in three Basic Music Education classrooms in the north of Portugal, involving 71 participants (68 students and 3 music teachers). The results reveal: (i) both successes and challenges in integrating computational thinking concepts and practices; (ii) pedagogical benefits of integrating programming platforms, where programming concepts overlapped with music learning outcomes; and (iii) positive impact on participants’ programming self-confidence and recognition of programming’s importance. Integrating algorithmic music composition in the classroom positively influences student engagement, musical knowledge, and creative expression. The use of algorithmic techniques provides a novel and engaging platform for students to explore music composition, fostering their creativity, critical thinking, and collaboration skills. Educators can leverage algorithmic music composition as an effective pedagogical approach to enhance music education, allowing students to develop a deeper understanding of music theory and fostering their artistic expression. Future research should contribute to the successful integration of digital technologies in the Portuguese curriculum by further exploring the long-term effects and potential applications of algorithmic music composition in different educational contexts.
28 pages, 1994 KiB  
Article
TruMuzic: A Deep Learning and Data Provenance-Based Approach to Evaluating the Authenticity of Music
by Kuldeep Gurjar, Yang-Sae Moon and Tamer Abuhmed
Appl. Sci. 2023, 13(16), 9425; https://doi.org/10.3390/app13169425 - 19 Aug 2023
Viewed by 1989
Abstract
The digitalization of music has led to increased availability of music globally, and this spread has further raised the possibility of plagiarism. Numerous methods have been proposed to analyze the similarity between two pieces of music. However, these traditional methods are either focused on good processing speed at the expense of accuracy, or they are not able to properly identify the correct features and the related feature weights needed for achieving accurate comparison results. Therefore, to overcome these issues, we introduce a novel model for detecting plagiarism between two given pieces of music, with a focus on the accuracy of the similarity comparison. In this paper, we make the following three contributions. First, we propose the use of provenance data along with musical data to improve the accuracy of the model’s similarity comparison results. Second, we propose a deep learning-based method to classify the similarity level of a given pair of songs. Finally, using linear regression, we find the optimized weights of extracted features following the ground truth data provided by music experts. We used the main dataset, containing 3800 pieces of music, to evaluate the proposed method’s accuracy; we also developed several additional datasets with their own established ground truths. The experimental results show that our method, which we call ‘TruMuzic’, improves the overall accuracy of music similarity comparison by 10% compared to the other state-of-the-art methods from recent literature.
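
The weight-fitting step, regressing expert similarity judgments onto per-feature similarity scores, can be sketched as follows. The three feature names, the dataset, and the recovered weights are all synthetic stand-ins, not TruMuzic's actual features.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Each row holds per-feature similarity scores for one song pair
# (e.g., melody, rhythm, chroma -- hypothetical feature names); the target
# is an expert-assigned overall similarity score.
rng = np.random.default_rng(1)
pair_features = rng.uniform(0, 1, size=(200, 3))        # 200 song pairs, 3 features
true_w = np.array([0.6, 0.3, 0.1])                      # unknown "expert" weighting
expert_scores = pair_features @ true_w + rng.normal(0, 0.02, 200)

model = LinearRegression().fit(pair_features, expert_scores)
print("learned feature weights:", model.coef_)          # close to [0.6, 0.3, 0.1]
```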

18 pages, 1544 KiB  
Article
Low Complexity Deep Learning Framework for Greek Orthodox Church Hymns Classification
by Lazaros Alexios Iliadis, Sotirios P. Sotiroudis, Nikolaos Tsakatanis, Achilles D. Boursianis, Konstantinos-Iraklis D. Kokkinidis, George K. Karagiannidis and Sotirios K. Goudos
Appl. Sci. 2023, 13(15), 8638; https://doi.org/10.3390/app13158638 - 27 Jul 2023
Viewed by 1059
Abstract
The Byzantine religious tradition includes Greek Orthodox Church hymns, which significantly differ from other cultures’ religious music. Since the deep learning revolution, audio and music signal processing tasks have often been approached as computer vision problems. This work trains from scratch three different novel convolutional neural networks on a hymns dataset to perform hymn classification for mobile applications. The audio data are first transformed into Mel-spectrograms and then fed as input to the model. To study our models’ performance in more detail, two state-of-the-art (SOTA) deep learning models were trained on the same dataset. Our approach outperforms the SOTA models in terms of both accuracy and model characteristics. Additional statistical analysis was conducted to validate the results obtained.
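
A low-complexity CNN of the general kind described, taking a one-channel Mel-spectrogram "image" and emitting hymn-class logits, might look like the sketch below. The layer sizes and class count are assumptions, not the paper's architectures.

```python
import torch
import torch.nn as nn

class HymnCNN(nn.Module):
    """A deliberately small CNN suited to on-device (mobile) inference."""
    def __init__(self, n_classes=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1))        # global pooling keeps the head tiny
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                    # x: (batch, 1, n_mels, n_frames)
        return self.classifier(self.features(x).flatten(1))

logits = HymnCNN()(torch.randn(4, 1, 64, 128))
print(logits.shape)                          # torch.Size([4, 8])
```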

21 pages, 9057 KiB  
Article
Generative Music with Partitioned Quantum Cellular Automata
by Eduardo Reck Miranda and Hari Shaji
Appl. Sci. 2023, 13(4), 2401; https://doi.org/10.3390/app13042401 - 13 Feb 2023
Cited by 2 | Viewed by 2793
Abstract
Cellular automata (CA) are abstract computational models of dynamic systems that change some features with space and time. Music is the art of organising sounds in space and time, and it can be modelled as a dynamic system. Hence, CA are of interest to composers working with generative music. The art of generating music with CA hinges on the design of algorithms to evolve patterns of data and methods to render those patterns into musical forms. This paper introduces methods for creating original music using partitioned quantum cellular automata (PQCA). PQCA are an approach to implementing CA on quantum computers. Quantum computers leverage properties of quantum mechanics to perform computations differently from classical computers, with alleged advantages. The paper begins with some explanations of background concepts, including CA, quantum computing, and PQCA. Then, it details the PQCA systems that we have been developing to generate music and discusses practical examples. PQCA-generated materials for Qubism, a professional piece of music composed for London Sinfonietta, are included. The PQCA systems presented here were run on real quantum computers rather than simulations thereof. The rationale for doing so is also discussed.
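
The generic "evolve a pattern, render it as notes" pipeline can be illustrated classically. The sketch below evolves an elementary cellular automaton (rule 110) and maps live cells to MIDI pitches; it is a classical stand-in for intuition only, not the authors' PQCA, which run on quantum hardware.

```python
import numpy as np

def step(row, rule=110):
    """One update of an elementary CA with wrap-around neighborhoods."""
    out = np.zeros_like(row)
    for i in range(len(row)):
        neighborhood = (row[i - 1] << 2) | (row[i] << 1) | row[(i + 1) % len(row)]
        out[i] = (rule >> neighborhood) & 1   # look up the rule's output bit
    return out

row = np.zeros(16, dtype=int)
row[8] = 1                                    # a single live cell as the seed
scale = [60, 62, 64, 65, 67, 69, 71, 72]      # C major pitches (assumed mapping)
for t in range(8):
    notes = [scale[i % len(scale)] for i, c in enumerate(row) if c]
    print(f"step {t}: play MIDI notes {notes}")
    row = step(row)
```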

10 pages, 7323 KiB  
Article
Q1Synth: A Quantum Computer Musical Instrument
by Eduardo Reck Miranda, Peter Thomas and Paulo Vitor Itaboraí
Appl. Sci. 2023, 13(4), 2386; https://doi.org/10.3390/app13042386 - 13 Feb 2023
Cited by 1 | Viewed by 4166
Abstract
This paper introduces Q1Synth, an unprecedented musical instrument that produces sounds from (a) quantum state vectors representing the properties of a qubit, and (b) its measurements. The instrument is presented on a computer screen (or mobile device, such as a tablet or smartphone) as a Bloch sphere, which is a visual representation of a qubit. The performer plays the instrument by rotating this sphere using a mouse. Alternatively, a gesture controller can be used, e.g., a VR glove. While the sphere is rotated, a continuously changing sound is produced. The instrument has a ‘measure key’. When the performer activates this key, the instrument generates a program (also known as a quantum circuit) to create the current state vector. Then, it sends the program to a quantum computer over the cloud for processing, that is, measuring, in quantum computing terminology. The computer subsequently returns the measurement, which is also rendered into sound. Currently, Q1Synth uses three different techniques to make sounds: frequency modulation (FM), subtractive synthesis, and granular synthesis. The paper explains how Q1Synth works and details its implementation. A setup developed for a musical performance, Spinnings, with three networked Q1Synth instruments is also reported. Q1Synth and Spinnings are examples of how creative practices can open the doors to new application pathways for quantum computing technology. Additionally, they illustrate how such emerging technology is leading to new approaches to musical instrument design and musical creativity.
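
The core mapping can be sketched in a few lines: a qubit state parameterized by Bloch-sphere angles, Born-rule measurement probabilities, and a mapping of the angles to FM synthesis parameters. The parameter ranges are invented; Q1Synth's actual mappings and its cloud measurement step are not reproduced here.

```python
import numpy as np

theta, phi = 1.2, 0.7                        # Bloch sphere angles set by the performer
state = np.array([np.cos(theta / 2),
                  np.exp(1j * phi) * np.sin(theta / 2)])   # qubit state vector

p0, p1 = np.abs(state) ** 2                  # Born rule: measurement probabilities
outcome = np.random.choice([0, 1], p=[p0, p1])  # what a quantum computer would return

carrier = 220 + 440 * (theta / np.pi)        # hypothetical FM carrier mapping
mod_index = 5 * (phi / (2 * np.pi))          # hypothetical modulation-index mapping
print(f"|0>: {p0:.2f}, |1>: {p1:.2f}, measured {outcome}, "
      f"carrier {carrier:.1f} Hz, FM index {mod_index:.2f}")
```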

18 pages, 4070 KiB  
Article
An HMM-Based Approach for Cross-Harmonization of Jazz Standards
by Maximos Kaliakatsos-Papakostas, Konstantinos Velenis, Leandros Pasias, Chrisoula Alexandraki and Emilios Cambouropoulos
Appl. Sci. 2023, 13(3), 1338; https://doi.org/10.3390/app13031338 - 19 Jan 2023
Cited by 3 | Viewed by 1619
Abstract
This paper presents a methodology for generating cross-harmonizations of jazz standards, i.e., for harmonizing the melody of a jazz standard (Song A) with the harmonic context of another (Song B). Specifically, the melody of Song A, along with the chords that start and end its sections (chord constraints), are used as a basis for generating new harmonizations with chords and chord transitions taken from Song B. This task involves potential incompatibilities between the components drawn from the two songs that take part in the cross-harmonization. In order to tackle such incompatibilities, two methods are introduced that are integrated in the Hidden Markov Model and the Viterbi algorithm. First, a rudimentary approach to chord grouping is presented that allows interchangeable utilization of chords belonging to the same group, depending on melody compatibility. Then, a “supporting” harmonic space of chords and probabilities is employed, which is learned from the entire dataset of the available jazz standards; this space provides local solutions when there are insurmountable conflicts between the melody and constraints of Song A and the harmonic context of Song B. Statistical and expert evaluation allow an analysis of the methodology, providing valuable insight about future steps.
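
A toy Viterbi decode conveys the core mechanics: states are chords drawn from "Song B", transitions reflect Song B's harmonic habits, and emissions score melody compatibility with "Song A". All probabilities below are invented, and the paper's chord grouping and supporting harmonic space are omitted.

```python
import numpy as np

chords = ["Cmaj7", "Dm7", "G7"]
log_trans = np.log(np.array([[0.2, 0.5, 0.3],    # chord transitions (assumed,
                             [0.1, 0.2, 0.7],    # "learned" from Song B)
                             [0.6, 0.2, 0.2]]))
log_emit = np.log(np.array([[0.7, 0.2, 0.1],     # melody-compatibility scores for
                            [0.1, 0.6, 0.3],     # 4 melody notes x 3 chords (assumed)
                            [0.2, 0.2, 0.6],
                            [0.8, 0.1, 0.1]]))

n_steps, n_states = log_emit.shape
delta = np.full((n_steps, n_states), -np.inf)    # best log-score ending in each state
back = np.zeros((n_steps, n_states), dtype=int)  # backpointers for path recovery
delta[0] = np.log(1.0 / n_states) + log_emit[0]
for t in range(1, n_steps):
    scores = delta[t - 1][:, None] + log_trans   # (from-state, to-state)
    back[t] = scores.argmax(axis=0)
    delta[t] = scores.max(axis=0) + log_emit[t]

path = [int(delta[-1].argmax())]
for t in range(n_steps - 1, 0, -1):
    path.append(int(back[t][path[-1]]))
print([chords[s] for s in reversed(path)])       # the decoded harmonization
```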

20 pages, 1440 KiB  
Article
Induced Emotion-Based Music Recommendation through Reinforcement Learning
by Roberto De Prisco, Alfonso Guarino, Delfina Malandrino and Rocco Zaccagnino
Appl. Sci. 2022, 12(21), 11209; https://doi.org/10.3390/app122111209 - 4 Nov 2022
Cited by 7 | Viewed by 8714
Abstract
Music is widely used for mood and emotion regulation in our daily life. As a result, many research works on music information retrieval and affective human-computer interaction have been proposed to model the relationships between emotion and music. However, most of these works focus on context-sensitive recommendation that considers the listener’s emotional state, and few results have been obtained in studying systems for inducing future emotional states. This paper proposes Moodify, a novel music recommendation system based on reinforcement learning (RL) capable of inducing emotions in the user to support the interaction process in several usage scenarios (e.g., games, movies, smart spaces). Given a target emotional state, and starting from the assumption that an emotional state is entirely determined by a sequence of recently played music tracks, the proposed RL method is designed to learn how to select the list of music pieces that best “match” the target emotional state. Unlike previous works in the literature, the system is conceived to induce an emotional state starting from a current emotion rather than capturing the current emotion and suggesting songs thought to be suitable for that mood. We deployed Moodify as a prototype web application, named MoodifyWeb. Finally, we enrolled 40 people to experiment with MoodifyWeb, employing one million music playlists from the Spotify platform. This preliminary evaluation study aimed to analyze MoodifyWeb’s effectiveness and overall user satisfaction. The results showed high ratings for user satisfaction, system responsiveness, and appropriateness of the recommendations (up to 4.30, 4.45, and 4.75 on a 5-point Likert scale, respectively), and participants rated the recommendations better than they had expected before using MoodifyWeb (6.45 on a 7-point Likert scale).
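
The RL formulation can be caricatured with tabular Q-learning: states are coarse emotion labels, actions are candidate tracks, and reward signals arrival at the target emotion. The emotion dynamics and track tags below are invented; Moodify's actual state space (sequences of recently played tracks) is richer.

```python
import numpy as np

emotions = ["sad", "calm", "happy"]
track_effect = {0: "sad", 1: "calm", 2: "happy"}   # hypothetical emotion tag per track
target = emotions.index("happy")

Q = np.zeros((len(emotions), len(track_effect)))   # state x action value table
alpha, gamma, eps = 0.1, 0.9, 0.2
rng = np.random.default_rng(2)

state = emotions.index("sad")
for _ in range(2000):
    if rng.random() < eps:                          # epsilon-greedy exploration
        action = int(rng.integers(len(track_effect)))
    else:
        action = int(Q[state].argmax())
    # assumed dynamics: a played track shifts the listener toward its tag
    next_state = emotions.index(track_effect[action])
    reward = 1.0 if next_state == target else 0.0
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max()
                                 - Q[state, action])
    # restart an episode once the target emotion is reached
    state = emotions.index("sad") if reward else next_state

print("best first track from 'sad':",
      track_effect[int(Q[emotions.index('sad')].argmax())])
```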

13 pages, 1567 KiB  
Article
Annotated-VocalSet: A Singing Voice Dataset
by Behnam Faghih and Joseph Timoney
Appl. Sci. 2022, 12(18), 9257; https://doi.org/10.3390/app12189257 - 15 Sep 2022
Cited by 2 | Viewed by 4166
Abstract
There are insufficient datasets of singing files that are adequately annotated. One of the available datasets that includes a variety of vocal techniques (n = 17), several singers (m = 20), and many WAV files (p = 3560) is the VocalSet dataset. However, although several categories, including techniques, singers, tempo, and loudness, are in the dataset, they are not annotated. Therefore, this study aims to annotate VocalSet to make it a more powerful dataset for researchers. The annotations generated for the VocalSet audio files include the fundamental frequency contour, note onset, note offset, transitions between notes, note F0, note duration, MIDI pitch, and lyrics. This paper describes the generated dataset and explains our approaches to creating and testing the annotations. Moreover, four different methods to define the onset/offset are compared.
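
Annotations of this kind (F0 contour, onsets, note-level pitch) can be derived automatically with standard tools; the sketch below uses librosa's pYIN and onset detector on a synthetic tone standing in for a VocalSet WAV file. It illustrates the annotation categories, not the authors' exact procedure.

```python
import numpy as np
import librosa

sr = 22050
t = np.linspace(0, 2.0, int(sr * 2.0), endpoint=False)
y = 0.5 * np.sin(2 * np.pi * 220 * t).astype(np.float32)   # a 220 Hz "sung" tone

# pYIN yields a frame-wise F0 contour (NaN in unvoiced frames).
f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                             fmax=librosa.note_to_hz("C6"), sr=sr)
# Onset detection approximates note-start annotations.
onsets = librosa.onset.onset_detect(y=y, sr=sr, units="time")

midi = librosa.hz_to_midi(np.nanmedian(f0))     # note-level MIDI pitch estimate
print(f"median F0 {np.nanmedian(f0):.1f} Hz -> MIDI {midi:.1f}, onsets: {onsets}")
```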
