Musical Interactions (Volume II)

A special issue of Multimodal Technologies and Interaction (ISSN 2414-4088).

Deadline for manuscript submissions: closed (28 February 2022) | Viewed by 16509

Special Issue Editor


Dr. Insook Choi
Guest Editor
School of Arts and Media, University of Salford, Salford M5 4WT, UK
Interests: human computer interaction; multimodal interfaces; interdisciplinary research; cognition; AI; simulation and modelling; multimedia performance
Special Issues, Collections and Topics in MDPI journals

Special Issue Information

Dear Colleagues,

I am happy to announce the call for a second Special Issue on Musical Interaction in the journal Multimodal Technologies and Interaction. This call was prompted by the positive response to the first issue, which heightened the community’s recognition of the importance of this topic. My congratulations to all authors on their high-quality papers and on the first issue’s success. It is notable that all contributing authors are based in interdisciplinary research environments around the world. Thanks also to the reviewers for their meticulous recommendations; throughout the cycle, the review process was exceptionally rigorous because the nature of the research topic encouraged new ways of thinking and expression. It was a truly engaging cycle.

The first issue of Musical Interaction included wide-ranging yet connected topics: from mobile devices to supercomputers, from prototyping pedagogy to the production of musical events, from tactile sensing of music to communicating musical control by eye gaze, and from literature surveys to development perspectives. By building on the diverse paths demonstrated by the first issue’s authors, the second issue can expand horizons toward possible impacts on diverse research domains and application areas. New horizons will deepen our understanding of the anchor point of musical interaction and of how it can influence the ways devices and technologies are engaged to mediate human activities and experiences in daily life.

Music is a structured sonic event for listening, which revolves around literature (in the form of musical scores with implicit performance practice and music theory), performance repertoire (a canon of compositions written for an instrument or ensemble), and instrument design (for example, viola da gamba vs. violin). Both making music and listening to music are events that require actions, and these actions collectively shape a musical experience. It is this nature of music that the call focuses on, which is why the research topic is designated Musical Interaction rather than Interactive Music. The aim is to carry the concept of musical experience into research and use cases of multimodal technologies and interaction. What is missing in the current research landscape around related topics is the concurrent development of literature, repertoire, and instrument design, through which coherence can be achieved. This contextual shift requires an interdisciplinary research team.

The multimodality of musical performance and listening experience is well recognized in research, including in multimedia modeling, music information retrieval, music therapy, enactive interfaces, and new interfaces for musical expression. While these investigations are well structured, musical experiences are always contextualized by listeners’ situated encounters with episodic events and memory. Many situated interactions arise from the application of technology in daily life, where the qualities of some experiences are less desirable than others. The proliferation of embedded systems for multimedia and action sensing spans mobiles and wearables, personal data appliances, and equipment for medicine, sports, and wellness, for training as well as gameplay and socialization. Across this spectrum, where is it desirable to apply musical interactions?

To substantiate a possible ecology of Musical Interaction within diverse research domains, and to project where the future paths may further emerge, we welcome researchers from fields including but not limited to AI, HCI, Music, Design and Engineering, Neuroscience, and Cognitive Science. We invite your inquiries and articles, and we invite you to participate in generating insights and better understandings of multisensory experience and multimodality through musical interaction, including broader implications beyond the domain of music.

Dr. Insook Choi
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Multimodal Technologies and Interaction is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • human–computer interaction
  • AI and AI critique
  • multimodal interaction, integration, and signal processing
  • measuring and assessing interactivity
  • simulation and modeling
  • playful interfaces
  • music-supported therapy
  • user-related studies
  • music computation

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (3 papers)


Research

37 pages, 9873 KiB  
Article
Customizing and Evaluating Accessible Multisensory Music Experiences with Pre-Verbal Children—A Case Study on the Perception of Musical Haptics Using Participatory Design with Proxies
by Emma Frid, Claudio Panariello and Claudia Núñez-Pacheco
Multimodal Technol. Interact. 2022, 6(7), 55; https://doi.org/10.3390/mti6070055 - 17 Jul 2022
Cited by 10 | Viewed by 4153
Abstract
Research on Accessible Digital Musical Instruments (ADMIs) has highlighted the need for participatory design methods, i.e., actively including users as co-designers and informants in the design process. However, very little work has explored how pre-verbal children with Profound and Multiple Learning Disabilities (PMLD) can be involved in such processes. In this paper, we apply in-depth qualitative and mixed methodologies in a case study with four students with PMLD. Using Participatory Design with Proxies (PDwP), we assess how these students can be involved in the customization and evaluation of the design of a multisensory music experience intended for a large-scale ADMI. Results from an experiment focused on communication of musical haptics highlighted the diversity of interaction strategies employed by the children, accessibility limitations of the current multisensory experience design, and the importance of using a multifaceted variety of qualitative and quantitative methods to arrive at more informed conclusions when applying a design-with-proxies methodology.
(This article belongs to the Special Issue Musical Interactions (Volume II))

21 pages, 2867 KiB  
Article
Brain Melody Interaction: Understanding Effects of Music on Cerebral Hemodynamic Responses
by Jessica Sharmin Rahman, Sabrina Caldwell, Richard Jones and Tom Gedeon
Multimodal Technol. Interact. 2022, 6(5), 35; https://doi.org/10.3390/mti6050035 - 4 May 2022
Cited by 2 | Viewed by 4774
Abstract
Music elicits strong emotional reactions in people, regardless of their gender, age, or cultural background. Understanding the effects of music on brain activity can enhance existing music therapy techniques and lead to improvements in medical and affective computing research. We explore the effects of three different music genres on people’s cerebral hemodynamic responses. Functional near-infrared spectroscopy (fNIRS) signals were collected from 27 participants while they listened to 12 different pieces of music. The signals were pre-processed to reflect oxyhemoglobin (HbO2) and deoxyhemoglobin (HbR) concentrations in the brain. K-nearest neighbor (KNN), random forest (RF), and one-dimensional (1D) convolutional neural network (CNN) classifiers were used to classify the signals, with music genre and the participants’ subjective responses as labels. The highest accuracies in distinguishing the three music genres were achieved by the deep learning models (73.4% for music genre classification and 80.5% for predicting participants’ subjective ratings of the emotional content of the music). This study provides strong motivation for using fNIRS signals to detect people’s emotional states while they listen to music. It could also support personalised music recommendations based on people’s brain activity to improve their emotional well-being.
(This article belongs to the Special Issue Musical Interactions (Volume II))
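For readers unfamiliar with the baseline classifiers this abstract mentions, the KNN step can be sketched as follows. This is a minimal illustration on synthetic feature vectors, not the authors' actual pipeline; the data, dimensionality, and function names are invented for the example.

```python
import numpy as np

def knn_predict(train_X, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training samples under Euclidean distance (a KNN baseline)."""
    dists = np.linalg.norm(train_X - query, axis=1)
    nearest = train_y[np.argsort(dists)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]

# Synthetic stand-ins for per-trial HbO2/HbR summary features,
# drawn from two well-separated mock "genre" classes.
rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 0.1, size=(20, 4))  # class 0: low mean change
X1 = rng.normal(0.5, 0.1, size=(20, 4))  # class 1: higher mean change
train_X = np.vstack([X0, X1])
train_y = np.array([0] * 20 + [1] * 20)

print(knn_predict(train_X, train_y, np.full(4, 0.5)))  # query near class-1 mean
```

In the paper's setting, each feature vector would instead summarize a participant's pre-processed fNIRS channels, and the deep learning models replace this distance-based vote.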

19 pages, 5757 KiB  
Article
Music and Time Perception in Audiovisuals: Arousing Soundtracks Lead to Time Overestimation No Matter Their Emotional Valence
by Alessandro Ansani, Marco Marini, Luca Mallia and Isabella Poggi
Multimodal Technol. Interact. 2021, 5(11), 68; https://doi.org/10.3390/mti5110068 - 29 Oct 2021
Cited by 4 | Viewed by 6787
Abstract
One of the most tangible effects of music is its ability to alter our perception of time. Research on waiting times and on time estimation of musical excerpts has attested to this effect. Nevertheless, contrasting results exist regarding the influence of several musical features on time perception. Concerning emotional valence, there is some evidence that positive-affect music fosters time underestimation, whereas negative-affect music leads to overestimation; for arousal, the results are contrasting. Furthermore, to the best of our knowledge, a systematic investigation has not yet been conducted within the audiovisual domain, wherein music might improve the interaction between the user and the audiovisual media by shaping the recipients’ time perception. Through the current between-subjects online experiment (n = 565), we analyzed the influence that four soundtracks (happy, relaxing, sad, scary), differing in valence and arousal, exerted on the time estimation of a short movie, as compared to a no-music condition. The results reveal that (1) the mere presence of music led to time overestimation as opposed to its absence, and (2) the soundtracks perceived as more arousing (i.e., happy and scary) led to time overestimation regardless of their valence. The findings are discussed in terms of psychological and phenomenological models of time perception.
(This article belongs to the Special Issue Musical Interactions (Volume II))
