A Hardware-Oriented Algorithm for Real-Time Music Key Signature Recognition
Abstract
1. Introduction
2. Materials and Methods
2.1. Theoretical Foundations
- the length of each vector is equal to the normalized multiplicity of a given pitch class, i.e., $k_i = m_i / m_{\max}$, where $m_i$ denotes the multiplicity of the $i$-th pitch class and $m_{\max}$ is the largest of the twelve multiplicities,
- the direction of the vector is determined by the position of its pitch class on the circle of fifths, i.e., $\varphi_n = n \cdot 30^\circ$, where $n$ is the index of the pitch class on the circle of fifths ($\varphi_\mathrm{C} = 0^\circ$, $\varphi_\mathrm{G} = 30^\circ$, $\varphi_\mathrm{D} = 60^\circ$, and so on).
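To make the construction concrete, the following is a minimal Python sketch of the music signature, assuming 30° spacing of the pitch classes along the circle of fifths and plain occurrence counts as multiplicities (the paper's multiplicities may also take note durations into account); the names `CIRCLE_OF_FIFTHS` and `music_signature` are illustrative, not taken from the paper:

```python
import math
from collections import Counter

# Pitch classes in circle-of-fifths order, 30 degrees apart on the circle.
CIRCLE_OF_FIFTHS = ["C", "G", "D", "A", "E", "B", "F#", "Db", "Ab", "Eb", "Bb", "F"]

def music_signature(notes):
    """Build the music signature of a fragment: one vector (x, y) per pitch class.

    Vector length    = normalized multiplicity m_i / m_max of the pitch class,
    vector direction = position of the pitch class on the circle of fifths.
    """
    counts = Counter(notes)                      # multiplicities of the pitch classes
    m_max = max(counts.values())                 # step (1b) of the algorithm below
    signature = {}
    for n, pc in enumerate(CIRCLE_OF_FIFTHS):
        k = counts[pc] / m_max                   # normalized multiplicity, step (1c)
        phi = math.radians(30 * n)               # angle on the circle of fifths
        signature[pc] = (k * math.sin(phi), k * math.cos(phi))
    return signature

# Example: in a fragment such as ["F", "Bb", "F", "C", "F"], F is the most frequent
# pitch class, so its vector has length 1 and the other two have length 1/3.
print(music_signature(["F", "Bb", "F", "C", "F"]))
```

Ordering the pitch classes by fifths rather than chromatically means that the vectors of closely related keys point in similar directions, which is what the directed axes of the next subsection exploit.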
2.2. A Hardware-Oriented Algorithm for Real-Time Music Key Signature Recognition
- (1) Generation of the music signature corresponding to a given fragment of the considered piece of music:
  - (a) For the given fragment, calculate the multiplicities of the individual pitch classes: C, D♭, D, E♭, E, F, F♯, G, A♭, A, B♭, and B.
  - (b) Determine the maximum value of the multiplicities obtained in (1a): $m_{\max} = \max_i(m_i)$.
  - (c) Create the vector of the normalized pitch classes: $K = [k_\mathrm{C}, k_{\mathrm{D}\flat}, \ldots, k_\mathrm{B}]$, where $k_i = m_i / m_{\max}$.
  - (d) Create the music signature based on the vector $K$ obtained in (1c).
- (2) Finding the main directed axis $YZ$ of the music signature created in step (1):
  - (a) Determine the characteristic values $[YZ]$ for all directed axes inscribed in the circle of fifths.
  - (b) Find the maximum characteristic value $[YZ]$; it indicates the main directed axis of the music signature.
  - (c) If the value obtained in (2b) is associated with more than one directed axis, increase the length of the analyzed fragment of music by a single note and return to the first step of the algorithm (1a).
- (3) Determination of the key signature corresponding to the analyzed fragment of music:
  - (a) Find the tone pointed to by the main directed axis obtained in step (2).
  - (b) Moving clockwise along the circle of fifths, determine the tone located next to the tone found in (3a).
  - (c) Read the key signature associated with the tone found in (3b).
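A sketch of steps (2) and (3), continuing the Python fragment from Section 2.1 (it reuses `CIRCLE_OF_FIFTHS` and `music_signature`). The characteristic value $[YZ]$ is not defined in this excerpt; the code assumes it is the sum of the projections of the twelve signature vectors onto the direction of the axis, which is an assumption for illustration, not the paper's definition. The helper names (`characteristic_value`, `ACCIDENTALS`, `key_signature`) are likewise illustrative:

```python
import math  # continues the sketch above: reuses CIRCLE_OF_FIFTHS and music_signature

def characteristic_value(signature, n):
    """Assumed formulation: sum of the projections of all signature vectors onto
    the directed axis pointing towards the n-th tone of the circle of fifths."""
    phi = math.radians(30 * n)
    ax, ay = math.sin(phi), math.cos(phi)
    return sum(x * ax + y * ay for x, y in signature.values())

def main_directed_axes(signature):
    """Indices of the directed axes sharing the maximum characteristic value (step 2)."""
    values = [characteristic_value(signature, n) for n in range(12)]
    best = max(values)
    return [n for n, v in enumerate(values) if math.isclose(v, best)]

# Standard major-key signatures: positive = sharps, negative = flats (step 3c).
ACCIDENTALS = {"C": 0, "G": 1, "D": 2, "A": 3, "E": 4, "B": 5,
               "F#": 6, "Db": -5, "Ab": -4, "Eb": -3, "Bb": -2, "F": -1}

def key_signature(notes):
    """Steps (1)-(3): number of accidentals, or None when the result is ambiguous."""
    axes = main_directed_axes(music_signature(notes))
    if len(axes) != 1:
        return None  # step (2c): the caller extends the fragment by one note and retries
    next_tone = CIRCLE_OF_FIFTHS[(axes[0] + 1) % 12]  # step (3b): next tone clockwise
    return ACCIDENTALS[next_tone]                      # step (3c)
```

A result of `None` plausibly corresponds to the "?" entries reported for the shortest fragments in the comparison of methods below; step (2c) then lengthens the fragment by one note and the procedure is repeated.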
3. Results
4. Discussion
Author Contributions
Funding
Conflicts of Interest
References
- Chew, E. Towards a Mathematical Model of Tonality. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 2000.
- Chew, E. Out of the Grid and Into the Spiral: Geometric Interpretations of and Comparisons with the Spiral-Array Model. Comput. Musicol. 2008, 15, 51–72.
- Shepard, R. Geometrical approximations to the structure of musical pitch. Psychol. Rev. 1982, 89, 305–333.
- Chuan, C.-H.; Chew, E. Polyphonic Audio Key Finding Using the Spiral Array CEG Algorithm. In Proceedings of the 2005 IEEE International Conference on Multimedia and Expo, Amsterdam, The Netherlands, 6 July 2005; pp. 21–24.
- Chuan, C.-H.; Chew, E. Audio Key Finding: Considerations in System Design and Case Studies on Chopin’s 24 Preludes. EURASIP J. Adv. Audio Signal Process. 2006, 2007, 1–15.
- Chen, T.-P.; Su, L. Functional harmony recognition of symbolic music data with multi-task recurrent neural networks. In Proceedings of the 19th ISMIR Conference, Paris, France, 23–27 September 2018; pp. 90–97.
- Mauch, M.; Dixon, S. Approximate note transcription for the improved identification of difficult chords. In Proceedings of the 11th International Society for Music Information Retrieval Conference, Utrecht, The Netherlands, 9–13 August 2010; pp. 135–140.
- Osmalskyj, J.; Embrechts, J.-J.; Piérard, S.; Van Droogenbroeck, M. Neural networks for musical chords recognition. In Proceedings of the Journées d’Informatique Musicale 2012 (JIM 2012), Mons, Belgium, 9–11 May 2012; pp. 39–46.
- Sigtia, S.; Boulanger-Lewandowski, N.; Dixon, S. Audio chord recognition with a hybrid recurrent neural network. In Proceedings of the 16th International Society for Music Information Retrieval Conference, Malaga, Spain, 26–30 October 2015; pp. 127–133.
- Bernardes, G.; Cocharro, D.; Guedes, C.; Davies, M.E. Harmony generation driven by a perceptually motivated tonal interval space. Comput. Entertain. 2016, 14, 1–21.
- Tymoczko, D. The geometry of musical chords. Science 2006, 313, 72–74.
- Grekow, J. Audio Features Dedicated to the Detection of Arousal and Valence in Music Recordings. In Proceedings of the IEEE International Conference on Innovations in Intelligent Systems and Applications, Gdynia, Poland, 3–5 July 2017; pp. 40–44.
- Grekow, J. Comparative Analysis of Musical Performances by Using Emotion Tracking. In Proceedings of the 23rd International Symposium, ISMIS 2017, Warsaw, Poland, 26–29 June 2017; pp. 175–184.
- Chacon, C.E.C.; Lattner, S.; Grachten, M. Developing tonal perception through unsupervised learning. In Proceedings of the 15th International Society for Music Information Retrieval Conference, Taipei, Taiwan, 27–31 October 2014.
- Herremans, D.; Chew, E. MorpheuS: Generating structured music with constrained patterns and tension. IEEE Trans. Affect. Comput. 2019, 10, 520–523.
- Roig, C.; Tardon, L.J.; Barbancho, I.; Barbancho, A.M. Automatic melody composition based on a probabilistic model of music style and harmonic rules. Knowl.-Based Syst. 2014, 71, 419–434.
- Anglade, A.; Benetos, E.; Mauch, M.; Dixon, S. Improving Music Genre Classification Using Automatically Induced Harmony Rules. J. New Music Res. 2010, 39, 349–361.
- Korzeniowski, F.; Widmer, G. End-to-end musical key estimation using a convolutional neural network. In Proceedings of the 25th European Signal Processing Conference (EUSIPCO), Kos, Greece, 28 August–2 September 2017; pp. 966–970.
- Pérez-Sancho, C.; Rizo, D.; Iñesta, J.M.; de León, P.J.P.; Kersten, S.; Ramirez, R. Genre classification of music by tonal harmony. Intell. Data Anal. 2010, 14, 533–545.
- Rosner, A.; Kostek, B. Automatic music genre classification based on musical instrument track separation. J. Intell. Inf. Syst. 2018, 50, 363–384.
- Mardirossian, A.; Chew, E. SKeFIS—A symbolic (MIDI) key-finding system. In Proceedings of the 1st Annual Music Information Retrieval Evaluation eXchange, ISMIR, London, UK, 11–15 September 2005.
- Papadopoulos, H.; Peeters, G. Local key estimation from an audio signal relying on harmonic and metrical structures. IEEE Trans. Audio Speech Lang. Process. 2012, 20, 1297–1312.
- Peeters, G. Musical key estimation of audio signal based on HMM modeling of chroma vectors. In Proceedings of the 9th International Conference on Digital Audio Effects, Montreal, QC, Canada, 18–20 September 2006; pp. 127–131.
- Weiss, C. Global key extraction from classical music audio recordings based on the final chord. In Proceedings of the Sound and Music Computing Conference, Stockholm, Sweden, 30 July–3 August 2013; pp. 742–747.
- Albrecht, J.; Shanahan, D. The Use of Large Corpora to Train a New Type of Key-Finding Algorithm: An Improved Treatment of the Minor Mode. Music Percept. Interdiscip. J. 2013, 31, 59–67.
- Dawson, M. Connectionist Representations of Tonal Music: Discovering Musical Patterns by Interpreting Artificial Neural Networks; AU Press, Athabasca University: Athabasca, AB, Canada, 2018.
- Gomez, E.; Herrera, P. Estimating the tonality of polyphonic audio files: Cognitive versus machine learning modeling strategies. In Proceedings of the 5th International Conference on Music Information Retrieval, Barcelona, Spain, 10–14 October 2004; pp. 92–95.
- Kania, D.; Kania, P. A key-finding algorithm based on music signature. Arch. Acoust. 2019, 44, 447–457.
- Krumhansl, C.L. Cognitive Foundations of Musical Pitch; Oxford University Press: New York, NY, USA, 1990; pp. 77–110.
- Krumhansl, C.L.; Kessler, E.J. Tracing the dynamic changes in perceived tonal organization in a spatial representation of musical keys. Psychol. Rev. 1982, 89, 334–368.
- Temperley, D. Bayesian models of musical structure and cognition. Musicae Sci. 2004, 8, 175–205.
- Temperley, D.; Marvin, E. Pitch-Class Distribution and Key Identification. Music Percept. 2008, 25, 193–212.
- Bernardes, G.; Davies, M.; Guedes, C. Automatic musical key estimation with mode bias. In Proceedings of the International Conference on Acoustics, Speech and Signal Processing (ICASSP), New Orleans, LA, USA, 5–9 March 2017; pp. 316–320.
Characteristic values of the directed axes of the music signature of the analyzed fragment (opposite directions of the same axis have values of opposite sign):

| Directed Axis | Axis 1 | Axis 2 | Axis 3 | Axis 4 | Axis 5 | Axis 6 |
|---|---|---|---|---|---|---|
| [YZ] | −2.5 | −2.7 | −1.5 | 0.3 | 1.5 | 2 |
| [ZY] (opposite direction) | 2.5 | 2.7 | 1.5 | −0.3 | −1.5 | −2 |
| Directed Axis (Tone Indicated) | Number of ♯ | Number of ♭ |
|---|---|---|
| F | 0 | 0 |
| C | 1 | - |
| G | 2 | - |
| D | 3 | - |
| A | 4 | - |
| E | 5 | - |
| B | 6 | 6 |
| F♯ | - | 5 |
| D♭ | - | 4 |
| A♭ | - | 3 |
| E♭ | - | 2 |
| B♭ | - | 1 |
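In a hardware- or software-oriented implementation, steps (3a)-(3c) collapse into a constant lookup equivalent to the table above. A possible encoding as a Python dictionary, consistent with the sketch after Section 2.2 (the signed convention, positive for sharps and negative for flats, and the name `AXIS_TONE_TO_SIGNATURE` are illustrative choices):

```python
# Key signature read directly from the tone indicated by the main directed axis;
# positive = number of sharps, negative = number of flats (B: 6 sharps or 6 flats).
AXIS_TONE_TO_SIGNATURE = {
    "F": 0, "C": 1, "G": 2, "D": 3, "A": 4, "E": 5,
    "B": 6,
    "F#": -5, "Db": -4, "Ab": -3, "Eb": -2, "Bb": -1,
}
```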
| Pitch Class \ Number of Notes | 2 | 3 | 5 | 7 | 9 | 11 | 13 | 15 |
|---|---|---|---|---|---|---|---|---|
|  |  |  |  |  |  | 0.29 | 0.29 | 0.25 |
|  |  |  |  |  | 0.17 | 0.14 | 0.29 | 0.38 |
|  |  |  |  | 0.2 | 0.17 | 0.14 | 0.14 | 0.13 |
|  |  |  | 0.25 | 0.2 | 0.17 | 0.14 | 0.14 | 0.13 |
| F | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| F♯ |  |  |  |  |  |  |  |  |
| G |  |  | 0.25 | 0.2 | 0.17 | 0.14 | 0.14 | 0.13 |
| A |  |  |  | 0.2 | 0.17 | 0.14 | 0.14 | 0.13 |
| B♭ | 1 | 0.33 | 0.25 | 0.2 | 0.33 | 0.29 | 0.43 | 0.38 |
| B |  |  |  |  |  |  |  |  |
| Method \ Number of Notes | 2 | 3 | 5 | 7 | 9 | 11 | 13 | 15 |
|---|---|---|---|---|---|---|---|---|
| The proposed algorithm | ? | ? | ? | ? | 2 | 2 | 2 | 2 |
| Krumhansl–Kessler | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| Temperley | 4 | 4 | 4 | 1 | 1 | 1 | 2 | 2 |
| Albrecht–Shanahan | 1 | 1 | 1 | 1 | 2 | 1 | 2 | 2 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).