New Trends in Numerics and Dynamics of Artificial Neural Networks: Theory and Applications

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Network Science".

Deadline for manuscript submissions: closed (31 December 2023) | Viewed by 14795

Special Issue Editors


Prof. Dr. Miguel Atencia
Guest Editor
Escuela de Ingenierías Industriales, Universidad de Málaga, 29071 Málaga, Spain
Interests: recurrent neural networks; dynamical systems; numerical optimization; time series; natural language processing

Prof. Dr. Daniela Danciu
Guest Editor
Department of Automatic Control and Electronics, University of Craiova, A.I. Cuza, No. 13, RO-200585 Craiova, Romania
Interests: stability; dynamical neural networks; nonlinear systems; distributed parameter systems

Special Issue Information

Dear Colleagues,

Numerical analysis is, together with computer algebra, one of the two pillars of all computational algorithms. The accuracy of machine learning algorithms for classification, regression, and prediction rests on the theoretical properties of the underlying numerical methods. The list of examples is overwhelming: principal component analysis is based upon numerical linear algebra; optimization with Hopfield networks stems from concepts rooted in dynamical systems; backpropagation requires numerical optimizers. Conversely, research on computational intelligence techniques has led to advances in many numerical methods, with stochastic gradient descent being primus inter pares.
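To make this interplay concrete, the following is a minimal sketch (illustrative names, plain NumPy, not tied to any particular paper in this issue) of stochastic gradient descent applied to a least-squares problem, the optimizer family underlying backpropagation training:

```python
import numpy as np

def sgd_least_squares(X, y, lr=0.01, epochs=100, seed=0):
    """Stochastic gradient descent for L(w) = (1/2n) * sum_i (x_i.w - y_i)^2,
    taking one noisy gradient step per sample."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(y)):    # reshuffle samples every epoch
            grad = (X[i] @ w - y[i]) * X[i]  # gradient of a single summand
            w -= lr * grad                   # step along the noisy gradient
    return w

# Toy usage: recover w = (2, -1) from noisy linear observations.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = X @ np.array([2.0, -1.0]) + 0.01 * rng.normal(size=200)
print(sgd_least_squares(X, y))  # approximately [ 2. -1.]
```

The per-sample update is what distinguishes the method numerically: convergence holds in expectation, at the cost of gradient noise that full-batch descent avoids.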

In this Special Issue, we aim to foster the synergy between these two fields by encouraging the analysis and design of numerical methods for, in, and from machine learning algorithms. We welcome contributions that show how learning results are soundly based on numerical foundations, as well as ground-breaking numerical methods that provide the basis for efficient practical algorithms, at least at the proof-of-concept stage.

The scope of the issue is deliberately broad, including but not limited to numerical techniques from linear algebra, dynamical systems, kernel methods, optimization, spectral methods, and stochastic formulations, as well as algorithms within neural networks, support vector machines, recurrent networks, and clustering methods.

Prof. Dr. Miguel Atencia
Prof. Dr. Daniela Danciu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • numerical linear algebra
  • numerical methods for dynamical systems
  • numerical optimization
  • geometric numerical integration
  • iterative methods
  • convergence
  • machine learning
  • neural networks 
  • classification 
  • time series forecasting 
  • dimensionality reduction

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (5 papers)

Research

19 pages, 663 KiB  
Article
Generalization of Neural Networks on Second-Order Hypercomplex Numbers
by Stanislav Pavlov, Dmitry Kozlov, Mikhail Bakulin, Aleksandr Zuev, Andrey Latyshev and Alexander Beliaev
Mathematics 2023, 11(18), 3973; https://doi.org/10.3390/math11183973 - 19 Sep 2023
Viewed by 1445
Abstract
The vast majority of existing neural networks operate by rules set within the algebra of real numbers. However, as theoretical understanding of the fundamentals of neural networks and their practical applications grows stronger, new problems arise which require going beyond this algebra. Various tasks come to light when the original data naturally have complex-valued formats. This situation encourages researchers to explore whether neural networks based on complex numbers can provide benefits over those limited to real numbers. Multiple recent works have been dedicated to developing the architecture and building blocks of complex-valued neural networks. In this paper, we generalize such models by considering the other types of second-order hypercomplex numbers: dual and double numbers. We developed basic operators for these algebras, such as convolution, activation functions, and batch normalization, and rebuilt several real-valued networks to use them. We also developed a general methodology for dual- and double-valued gradient calculations based on Wirtinger derivatives for complex-valued functions. For classical computer vision (CIFAR-10, CIFAR-100, SVHN) and signal processing (G2Net, MusicNet) classification problems, our benchmarks show that the transition to the hypercomplex domain can help reach higher metric values compared to the original real-valued models.
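As a quick illustration of the algebras this paper generalizes to (a reader's sketch, not the authors' code): second-order hypercomplex numbers a + b·u differ only in the value of u², which is -1 for complex, 0 for dual, and +1 for double (split-complex) numbers.

```python
from dataclasses import dataclass

@dataclass
class Hypercomplex2:
    """A number a + b*u where u*u = unit2: -1 complex, 0 dual, +1 double."""
    a: float
    b: float
    unit2: float

    def __mul__(self, other):
        # (a1 + b1 u)(a2 + b2 u) = (a1 a2 + u^2 b1 b2) + (a1 b2 + b1 a2) u
        return Hypercomplex2(
            self.a * other.a + self.unit2 * self.b * other.b,
            self.a * other.b + self.b * other.a,
            self.unit2,
        )

# Dual numbers (u^2 = 0) propagate derivatives exactly:
# (3 + eps)^2 = 9 + 6*eps, i.e. f(3) = 9 and f'(3) = 6 for f(x) = x^2.
x = Hypercomplex2(3.0, 1.0, 0.0)
y = x * x
print(y.a, y.b)  # 9.0 6.0
```

The dual case hints at why these algebras are numerically interesting: the ε-coefficient of f(x + ε) carries the exact derivative of f.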

14 pages, 634 KiB  
Article
Imaginary Finger Movements Decoding Using Empirical Mode Decomposition and a Stacked BiLSTM Architecture
by Tat’y Mwata-Velu, Juan Gabriel Avina-Cervantes, Jorge Mario Cruz-Duarte, Horacio Rostro-Gonzalez and Jose Ruiz-Pinales
Mathematics 2021, 9(24), 3297; https://doi.org/10.3390/math9243297 - 18 Dec 2021
Cited by 17 | Viewed by 3038
Abstract
Motor Imagery Electroencephalogram (MI-EEG) signals are widely used in Brain-Computer Interfaces (BCI). MI-EEG signals of large limb movements have been explored in recent research because they deliver classification rates relevant for BCI systems. However, the smaller and noisier signals corresponding to imagined hand-finger movements are less frequently used because they are difficult to classify. This study proposes a method for decoding imagined finger movements of the right hand. For this purpose, MI-EEG signals from the C3, Cz, P3, and Pz sensors were carefully selected for processing in the proposed framework. A method based on Empirical Mode Decomposition (EMD) is used to tackle the problem of noisy signals, while the sequence classification is performed by a stacked Bidirectional Long Short-Term Memory (BiLSTM) network. The proposed method was evaluated using k-fold cross-validation on a public dataset, obtaining an accuracy of 82.26%.
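As a hedged sketch of the classification stage only (channel count, hidden size, segment length, and class count are illustrative assumptions, not the authors' exact configuration), a stacked BiLSTM classifier in PyTorch could look as follows:

```python
import torch
import torch.nn as nn

class StackedBiLSTM(nn.Module):
    """Two stacked bidirectional LSTM layers followed by a linear classifier.
    Input shape: (batch, time, channels), e.g. EEG segments from a few sensors."""
    def __init__(self, n_channels=4, hidden=64, n_classes=5):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden,
                            num_layers=2, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * hidden, n_classes)  # 2x: forward + backward

    def forward(self, x):
        out, _ = self.lstm(x)          # (batch, time, 2*hidden)
        return self.fc(out[:, -1, :])  # classify from the last time step

# Toy usage: a batch of 8 segments, 128 samples each, 4 sensors (C3, Cz, P3, Pz).
logits = StackedBiLSTM()(torch.randn(8, 128, 4))
print(logits.shape)  # torch.Size([8, 5])
```

In the paper's pipeline, the network input would presumably be EMD-derived intrinsic mode functions rather than raw signals.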

18 pages, 3671 KiB  
Article
Learning Neural Representations and Local Embedding for Nonlinear Dimensionality Reduction Mapping
by Sheng-Shiung Wu, Sing-Jie Jong, Kai Hu and Jiann-Ming Wu
Mathematics 2021, 9(9), 1017; https://doi.org/10.3390/math9091017 - 30 Apr 2021
Viewed by 1592
Abstract
This work explores neural approximation for nonlinear dimensionality reduction mapping based on internal representations of graph-organized regular data supports. The given training observations are assumed to be a sample from a high-dimensional space containing an embedded low-dimensional manifold. An approximating function with adaptable built-in parameters is optimized subject to the training observations by the proposed learning process, and is verified by transforming novel testing observations to images in the low-dimensional output space. Optimized internal representations sketch graph-organized supports of distributed data clusters and their representative images in the output space. On this basis, the approximating function can operate at testing time without retaining the original massive training observations. The neural approximating model contains multiple modules, each of which activates a non-zero output for mapping in response to an input inside its corresponding local support. Graph-organized data supports have lateral interconnections that represent neighboring relations, infer the minimal path between the centroids of any two data supports, and induce distance constraints for mapping all centroids to images in the output space. Following the distance-preserving principle, this work proposes Levenberg-Marquardt learning for optimizing the images of centroids in the output space subject to the given distance constraints, and further develops local embedding constraints for mapping during the execution phase. Numerical simulations show that the proposed neural approximation is effective and reliable for nonlinear dimensionality reduction mapping.
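A minimal sketch of the distance-preserving step, under the assumption that the target distances between centroids (e.g., minimal path lengths on the support graph) are already available: SciPy's least_squares with method='lm' (Levenberg-Marquardt) adjusts the low-dimensional images of the centroids until their pairwise Euclidean distances match the targets. All names and sizes are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def embed_centroids(D_target, dim=2, seed=0):
    """Place n centroids in R^dim so that pairwise distances match D_target
    (an n x n symmetric matrix), via Levenberg-Marquardt least squares."""
    n = D_target.shape[0]
    iu = np.triu_indices(n, k=1)  # each unordered pair once

    def residuals(flat):
        Y = flat.reshape(n, dim)
        diff = Y[:, None, :] - Y[None, :, :]
        D = np.sqrt((diff ** 2).sum(-1))
        return D[iu] - D_target[iu]  # distance mismatch per pair

    # method='lm' needs at least as many residuals as variables:
    # n(n-1)/2 >= n*dim, which holds here for n >= 5 and dim = 2.
    y0 = np.random.default_rng(seed).normal(size=n * dim)
    return least_squares(residuals, y0, method='lm').x.reshape(n, dim)

# Toy usage: distances of six points on a 2x3 grid are reproduced in 2D
# (up to a rigid motion, which leaves all pairwise distances unchanged).
grid = np.array([[x, y] for y in (0.0, 1.0) for x in (0.0, 1.0, 2.0)])
D = np.linalg.norm(grid[:, None] - grid[None, :], axis=-1)
Y = embed_centroids(D)
print(np.abs(np.linalg.norm(Y[:, None] - Y[None, :], axis=-1) - D).max())
```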

28 pages, 15757 KiB  
Article
Control Method of Flexible Manipulator Servo System Based on a Combination of RBF Neural Network and Pole Placement Strategy
by Dongyang Shang, Xiaopeng Li, Meng Yin and Fanjie Li
Mathematics 2021, 9(8), 896; https://doi.org/10.3390/math9080896 - 17 Apr 2021
Cited by 38 | Viewed by 3382
Abstract
Gravity and flexibility cause fluctuations of the rotation angle in the servo system of flexible manipulators. These fluctuations seriously affect the motion accuracy of end-effectors. Therefore, this paper adopts a control method combining an RBF (Radial Basis Function) neural network and a pole placement strategy to suppress the rotation angle fluctuations. The RBF neural network identifies the uncertain terms caused by the manipulator's flexibility and the time-varying characteristics of the dynamic parameters, while the pole placement strategy optimizes the parameters of the PD (Proportional-Derivative) controller to improve response speed and stability. Firstly, a dynamic model of flexible manipulators considering gravity is established based on the assumed mode method and Lagrange's principle. Then, the system's control characteristics are analyzed, and the pole placement strategy optimizes the PD controller parameters. Next, the control method based on the RBF neural network is proposed, and its stability is demonstrated by Lyapunov stability theory. Finally, numerical analysis and control experiments prove the effectiveness of the proposed control method: the means and standard deviations of the rotation angle error are reduced, showing that the method can effectively reduce the rotation angle error and improve motion accuracy.
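To make the pole placement ingredient concrete (a sketch under a deliberately simplified rigid second-order joint model, not the paper's full flexible dynamics): for a plant J·θ'' + c·θ' = u under PD control u = Kp·e + Kd·e', matching the closed-loop characteristic polynomial J·s² + (c + Kd)·s + Kp to a desired pole pair fixes both gains.

```python
import numpy as np

def pd_gains(J, c, poles):
    """PD gains placing the closed-loop poles of J*th'' + c*th' = u,
    with u = Kp*e + Kd*e', at the two desired locations."""
    p1, p2 = poles
    Kp = J * np.real(p1 * p2)        # match the s^0 coefficient
    Kd = -J * np.real(p1 + p2) - c   # match the s^1 coefficient
    return Kp, Kd

# Toy usage: place the poles at -4 +/- 3j (well damped, fast response).
Kp, Kd = pd_gains(J=0.5, c=0.1, poles=(-4 + 3j, -4 - 3j))
print(Kp, Kd)                         # 12.5 3.9
print(np.roots([0.5, 0.1 + Kd, Kp]))  # recovers [-4.+3.j -4.-3.j]
```

In the paper's scheme, the RBF network compensates the uncertain terms that such a simplified model ignores.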

19 pages, 2287 KiB  
Article
Unpredictable Oscillations for Hopfield-Type Neural Networks with Delayed and Advanced Arguments
by Marat Akhmet, Duygu Aruğaslan Çinçin, Madina Tleubergenova and Zakhira Nugayeva
Mathematics 2021, 9(5), 571; https://doi.org/10.3390/math9050571 - 7 Mar 2021
Cited by 16 | Viewed by 2493
Abstract
This is the first time that the method for investigating unpredictable solutions of differential equations has been extended to unpredictable oscillations of neural networks with a generalized piecewise constant argument, which is both delayed and advanced. The existence and exponential stability of the unique unpredictable oscillation are proven. According to the theory, the presence of unpredictable oscillations is strong evidence for Poincaré chaos; consequently, the paper is a contribution to chaos applications in neuroscience. The model is inspired by chaotic time-varying stimuli, which allow studying the distribution of chaotic signals in neural networks: unpredictable inputs create an excitation wave of neurons that transmit chaotic signals. The technique of analysis includes the ideas used for differential equations with a piecewise constant argument. The results are illustrated by examples and simulations, which are carried out in MATLAB Simulink to demonstrate the simplicity of the diagrammatic approach.
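As a hedged sketch of the model class (an assumed Hopfield-type form, not the paper's exact system, and simplified to the purely delayed argument γ(t) = ⌊t⌋, since the advanced part of a generalized argument requires an implicit solve), a direct Euler simulation reads:

```python
import numpy as np

def simulate_hopfield_pca(A, B, g, x0, t_end=20.0, dt=1e-3):
    """Euler integration of x'(t) = -A x(t) + B tanh(x(gamma(t))) + g(t)
    with the piecewise constant argument gamma(t) = floor(t)."""
    steps = int(t_end / dt)
    x = np.array(x0, dtype=float)
    held = x.copy()      # value x(floor(t)), frozen on each unit interval
    interval = 0
    traj = np.empty((steps, x.size))
    for k in range(steps):
        t = k * dt
        if int(t) != interval:   # entered a new interval [n, n+1)
            interval = int(t)
            held = x.copy()
        x = x + dt * (-A @ x + B @ np.tanh(held) + g(t))
        traj[k] = x
    return traj

# Toy usage: two neurons driven by a bounded, non-periodic stimulus.
A = np.diag([1.0, 1.2])
B = np.array([[0.0, 0.4], [-0.3, 0.0]])
g = lambda t: np.array([0.5 * np.sin(t), 0.5 * np.cos(np.sqrt(2.0) * t)])
print(simulate_hopfield_pca(A, B, g, x0=[0.1, -0.2])[-1])
```

The incommensurate frequencies in g stand in for the chaotic, unpredictable stimuli studied in the paper.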
