Communication

A Neural Probabilistic Graphical Model for Learning and Decision Making in Evolving Structured Environments

Edmondo Trentin
Dipartimento di Ingegneria dell’Informazione e Scienze Matematiche (DIISM), Università di Siena, 53100 Siena, Italy
Mathematics 2022, 10(15), 2646; https://doi.org/10.3390/math10152646
Submission received: 11 June 2022 / Revised: 24 July 2022 / Accepted: 25 July 2022 / Published: 28 July 2022
(This article belongs to the Special Issue Neural Networks and Learning Systems II)

Abstract

A difficult and open problem in artificial intelligence is the development of agents that can operate in complex environments which change over time. The present communication introduces the formal notions, the architecture, and the training algorithm of a machine capable of learning and decision-making in evolving structured environments. These environments are defined as sets of evolving relations among evolving entities. The proposed machine relies on a probabilistic graphical model whose time-dependent latent variables obey a Markov assumption. The likelihood of such variables given the structured environment is estimated via a probabilistic variant of the recursive neural network.

1. Introduction

The development of artificial intelligence (AI) systems capable of keeping up with evolving environments, just like “biological systems can adapt to new environments”, was posed as Challenge One for AI by Rodney Brooks [1]. To date, the challenge is still open. The relevance of machines capable of coping with evolving environments was further pointed out in the modern framework of evolving computation [2,3]. Evolving environments are met in most machine learning applications under real-world circumstances. This is the consequence of two main factors:
  • the world is not stationary, but it changes over time;
  • a learning machine may be applied under different conditions [4].
Examples are found in fault diagnosis for complex dynamical systems [5], modeling of climate and financial data [6,7], and detection of fraud [8] or spam [9]. As pointed out in [10], coping with these evolving environments requires approaches that overcome a fundamental assumption of traditional machine learning, namely that data are drawn from a fixed distribution. In this respect, the foremost contribution of this communication is to cope, for the first time in a formal manner, with environments that change over time according to time-dependent, unknown probabilistic laws. To this aim, the machine sought is expected to realize a statistical model of the environment that probabilistically accounts for the environmental changes.
To this end, several learning machines suitable for sequential data processing may appear promising, e.g., hidden Markov models (HMM) [11] or recurrent neural networks [12]. Nonetheless, in our view, the general notion of a structured environment is better expressed in terms of a relation, or set of relations, over individual entities. Each entity may be represented as a variable-size, entity-specific set of attributes that capture certain characteristics, or perspectives, of the world. Attributes may be either discrete or real-valued. Given these premises, traditional learning machines are unsuitable. The second contribution of the present communication is a new machine that fits the aforementioned relational scenario. To the best of our knowledge, this is the first approach to modeling and learning structured environments that evolve over time.
An evolving structured environment is then expressed by means of a time-dependent graph process defined over the entities. The entities existing in the environment at time $t$ are represented by the vertices of the graph at $t$. The edges of the graph encapsulate the binary relation(s) over the entities at time $t$. Labels in the form of real-valued attributes may, in turn, be attached to the edges. These labels account for the evolving type and characteristics of the relation(s) that hold within the environment. We assume that any decision made by the machine at time $t$ depends not only on the environment at time $t$, but also on the evolution of the environment that has occurred so far. To fix ideas, a toy example is given by an autonomous robotic platform that is required to move and assist human workers in setting up the booths and furniture within the exhibition spaces of a large trade fair pavilion. The partitioning of the building into booths, conference rooms, etc., changes from time to time, as do the amount, positioning, and type of furniture. The different rooms and pieces of furniture are the entities to be modeled in the present framework, and their mutual positioning is the relation among the entities. The features, or labels, to be associated with such entities may represent the geometry, size, weight, and specific function of the corresponding items or rooms. In order to navigate autonomously within such an evolving environment, the moving platform must be capable of keeping track of the changes, progressively adapting to the changeable spaces, and making suitable decisions accordingly.
We use a probabilistic framework, such that the environment can be described in terms of probabilistic laws over time-varying random quantities, e.g., the likelihood of the entities being related to each other at time $t$. Intrinsically, the machine we are introducing is a novel probabilistic graphical model [13] that represents a particular joint probability density, including a set of latent variables obeying a first-order Markov assumption and a set of observable, time-varying random graphs drawn from properly defined probability density functions (pdf). In the light of our previous experience with neural networks for density estimation [14,15,16], the short-term modeling of these pdfs is realized via a probabilistic radial basis function (RBF) [17] variant of recursive neural networks (RNN) [18,19], hereafter denoted by RBF-RNN. Conversely, the latent variables and their conditional probabilistic dependencies are modeled via a long-term hidden Markov chain, as in HMMs. The potential of combining short-term predictors with long-term models of evolving environments was shown empirically in [20]. Moreover, suitable change detection tests based on the HMM likelihood were proven successful in detecting drifts in evolving environments [21]. The present approach can be seen as an instance of the online Bayesian learning setup for evolving environments [22], extended to graphical or relational data, where the time-dependent RBF-RNNs play the role of the parameterized probabilistic observation model, and the Markov chain implicitly encapsulates the probabilistic transition model.
The underlying assumption is that, by observing the evolution of an environment for a long enough period of time, it is possible to learn the fundamental probabilistic laws that rule its behavior and its evolution, such that suitable models of such laws may later be used to make statistically sound predictions, e.g., to make educated decisions. As in just-in-time classifiers for recurrent concept drift in evolving environments [23], the present approach exploits the notion of recurrent, underlying concepts that reappear after an unpredictable amount of time, and that the machine shall be able to detect in order to react with the adequate probabilistic response. In the present framework, these concepts are the latent states of nature ruling the pdf of the observations.
Before introducing the machine formally, we need to extend the definitions of random variables and pdf to the present, structured setup. Eventually, learning (in terms of estimating proper statistical parametric models) will rely on a training sample of streams of environmental observations over time. Decision making among the actions $A_1, \ldots, A_k$ in the environment $\varepsilon$ at any future time $t$ will take place within the Bayesian framework, relying on the action-conditional joint pdfs estimated at $t$ via action-specific machines which observe the evolution of $\varepsilon$.

2. Materials and Methods

The present formalization requires assumptions on the type of evolving environments we can cope with. Still, it forms a well-defined ground on which to develop proper algorithms. First, the novel notion of the random environment is introduced, broadening the definition of random graph [24] to a large extent. Let $\Omega$ be a sample space, and let $\mathcal{V}$ denote any discrete or continuous-valued universe. The latter is the set of entities. A random environment (RE) on $\mathcal{V}$ and $\Omega$ is a function $\varepsilon : \Omega \to \{ (V, R) \mid V \subseteq \mathcal{V},\, R \subseteq V \times V \}$. Labels (namely, real-valued vectors associated with the entities or with the elements of $R$) can be accommodated in the definition in a straightforward manner. Let $\mathcal{E} = \{ (V, R) \mid V \subseteq \mathcal{V},\, R \subseteq V \times V \}$ be the space of RE outcomes. A pdf for REs over $\mathcal{V}$ is defined as any function $p : \mathcal{E} \to \mathbb{R}$ such that (1) $p(\varepsilon) \geq 0$ for all $\varepsilon \in \mathcal{E}$, and (2) $\int_{\mathcal{E}} p(\varepsilon)\, d\varepsilon = 1$. The Lebesgue-measurability of the space of labeled graphs defined on measurable domains is shown in [25], which makes the writing $\int_{\mathcal{E}} p(\varepsilon)\, d\varepsilon$ meaningful. It is seen that traditional notions of probability theory, such as conditional pdf, joint pdf, statistical independence, etc., are readily extended to REs. An environment is any outcome $\varepsilon$ of an RE, i.e., drawn from the corresponding pdf $p(\varepsilon)$. This definition covers all cases of labeled/unlabeled entities, as well as environments having variable size (e.g., the number of entities making up the environment is not necessarily pre-defined, and it may change over time) and/or variable topology (e.g., pairwise relations between entities may evolve, too).
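For concreteness, a minimal Python sketch of an RE outcome follows. It is an illustrative assumption, not part of the formal model: entities are integer identifiers, and the labels attached to entities and to the elements of $R$ are NumPy vectors.

```python
# Hedged sketch of one outcome (V, R) of a random environment,
# with real-valued labels on entities and on the elements of R.
# All names (Environment, entity_labels, ...) are illustrative.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Environment:
    entity_labels: dict = field(default_factory=dict)  # entity id -> label vector
    edge_labels: dict = field(default_factory=dict)    # (id, id) in R -> label vector

    @property
    def entities(self):
        return set(self.entity_labels)

# An observed evolution is a time-indexed sequence of such outcomes:
eps_1 = Environment({0: np.array([1.0, 0.2]), 1: np.array([0.4, 0.9])},
                    {(0, 1): np.array([0.5])})
Y = [eps_1]  # Y = (eps_1, ..., eps_T), extended as new snapshots arrive
```

Note how the sketch accommodates variable size and topology: each snapshot may hold a different entity set and a different relation.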
Loosely speaking, a stochastic environment process $X(t)$ is any function that maps discrete or continuous time $t \in \mathcal{T}$ onto a corresponding RE. Since by definition REs are random variables, and since we can bijectively represent $X(t)$ as an indexed family $\{ X(t) \mid t \in \mathcal{T} \}$, it is seen that a stochastic environment process is a special case of random process according to the classic definition of the latter (see, for instance, [26], Section 1.9, page 41). Consequently, an extension of the traditional hidden Markov model (HMM) to REs is given by proposing the novel definition of HMM over environments ($\epsilon$-HMM) as a pair of stochastic processes: a hidden Markov chain, that is, a traditional discrete-time random process, and an observable stochastic environment process which is a probabilistic function of the states of the former.
Let us assume that the evolution of an environment over time $t = 1, \ldots, T$ has been observed and represented as a sequence $Y = \varepsilon_1, \ldots, \varepsilon_T$ generated by a hidden stochastic environment process. Also, let $Y$ be the outcome of a sequence $W = \omega_1, \ldots, \omega_L$ of latent states of nature, e.g., drifting concepts. No prior segmentation of $Y$ into subsequences $Y_1, \ldots, Y_L$ corresponding to the individual states of nature is known in advance. We propose a hybrid neural/Markovian realization [27] of $\epsilon$-HMMs for the probabilistic graphical modeling of $p(Y \mid W)$. Using a standard HMM notation [28], an $\epsilon$-HMM $H$ is formally defined as $H = (S, \pi, A, B_\varepsilon)$ where: $S = \{ S_1, \ldots, S_Q \}$ is a set of states (namely, the $Q$ different values that the hidden Markov chain can assume); $\pi = \{ P(S_i \mid t = 0),\, S_i \in S \}$ is the probability distribution of the initial states ($t$ being a discrete time index); $A$ is the $Q \times Q$ matrix of the transition probabilities, whose $ij$-th entry is $a_{ij} = P(S_j \text{ at time } t+1 \mid S_i \text{ at time } t)$; finally, $B_\varepsilon$ is a set of pdfs over REs, called emission probabilities, describing the state-specific statistical distributions of the REs: $B_\varepsilon = \{ b_i(\cdot) \mid b_i(\varepsilon) = p(\varepsilon \mid S_i),\, S_i \in S,\, \varepsilon \in \mathcal{E} \}$.
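A minimal sketch of the $\epsilon$-HMM container $H = (S, \pi, A, B_\varepsilon)$ may help fix the notation; the `density` interface of the state-specific emission models is an assumption made here for illustration, standing in for the RBF-RNNs introduced below.

```python
# Hedged sketch of an epsilon-HMM H = (S, pi, A, B_eps).
import numpy as np

class EpsilonHMM:
    def __init__(self, Q, emission_models):
        self.Q = Q                          # number of hidden states S_1..S_Q
        self.pi = np.full(Q, 1.0 / Q)       # initial-state probabilities
        self.A = np.full((Q, Q), 1.0 / Q)   # transition matrix, rows sum to one
        self.B = emission_models            # one pdf over REs per state

    def emission(self, q, env):
        # b_q(eps) = p(eps | S_q), supplied by the q-th emission model
        return self.B[q].density(env)
```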
The observable component of the $\epsilon$-HMM is realized via neural models of the emission probabilities [29]. For each state of the $\epsilon$-HMM, a neural network is introduced that estimates the corresponding emission probability given the current status of the environment being observed. Relying on the formalism we introduced in [30], for each $\iota = 1, \ldots, Q$ we assume the existence of $d \in \mathbb{N}$ and of functions $\phi_\iota : \mathcal{E} \to \mathbb{R}^d$ and $p_\iota : \mathbb{R}^d \to \mathbb{R}$ such that $b_\iota(\varepsilon_t) = p_\iota(\phi_\iota(\varepsilon_t))$. It is seen that there are countless pairs of such functions $\phi(\cdot)$ and $\hat{p}(\cdot)$, the simplest choice being $\phi(\varepsilon) = p(\varepsilon)$, $\hat{p}(x) = x$. The function $\phi_\iota(\cdot)$ is hereafter referred to as the encoding for the $\iota$-th state of the $\epsilon$-HMM, whereas the function $p_\iota(\cdot)$ is called the emission associated with the corresponding state. We assume parametric forms $\phi_\iota(\varepsilon \mid \theta_{\phi_\iota})$ and $p_\iota(x \mid \theta_{p_\iota})$ for the encoding and for the emission, respectively, and we let $\theta_\iota = (\theta_{\phi_\iota}, \theta_{p_\iota})$. The function $\phi_\iota(\varepsilon \mid \theta_{\phi_\iota})$ is realized via an encoding network, suitable for mapping structured environments $\varepsilon$ into real vectors $x$, as in the supervised training of traditional RNNs over graphs [18,19]. Consequently, we identify the parameters $\theta_{\phi_\iota}$ with the weights of the encoding network. An RBF network is then used to model the emission $p_\iota(x_t \mid \theta_{p_\iota})$, where the parameters of the RBF play the role of $\theta_{p_\iota}$. All the parameters of the $\epsilon$-HMM can be estimated from examples by means of a global optimization algorithm, presented in the next section, aimed at maximizing the likelihood of the $\epsilon$-HMM given time-varying sequences of empirical observations of the evolving environment.
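The following sketch shows the factorization $b_\iota(\varepsilon) = p_\iota(\phi_\iota(\varepsilon))$ under simplifying assumptions that are ours, not the paper's: the encoding network is abstracted as a callable `encode`, and the RBF uses isotropic Gaussian kernels with convex mixing weights, so that the output is a valid density over the encoding space.

```python
# Hedged sketch of one state's emission: encode the environment into
# R^d, then evaluate a constrained RBF (a Gaussian mixture) on the code.
import numpy as np

def gaussian(x, mu, sigma2):
    # isotropic Gaussian kernel K_i(x) = G(x; mu_i, sigma2_i * I)
    d = x.size
    diff = x - mu
    return (2 * np.pi * sigma2) ** (-d / 2) * np.exp(-0.5 * diff @ diff / sigma2)

def emission(env, encode, c, mus, sigma2s):
    x = encode(env)  # phi(eps | theta_phi): code vector in R^d
    # c_i in (0, 1) and sum(c) = 1, so the mixture integrates to one
    return sum(ci * gaussian(x, mi, si) for ci, mi, si in zip(c, mus, sigma2s))
```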

3. Results

The main result of the present communication is the following gradient-ascent global training technique, based on the maximum-likelihood (ML) criterion. Using standard notation [28], the likelihood $L$ of the $\epsilon$-HMM given the sequence $Y$ is given by $L = \sum_{\iota \in F} \alpha_{\iota,T}$, where $\alpha_{\iota,T}$ is the forward term for state $\iota$ at time $T$, and the sum extends over the set $F$ of all final states [31] of the $\epsilon$-HMM. Let $q_{\iota,t}$ denote the event that the Markov chain is in state $S_\iota$ at time $t$. Then, the forward terms $\alpha_{\iota,t} = P(q_{\iota,t}, \varepsilon_1, \ldots, \varepsilon_t)$ and the backward terms $\beta_{\iota,t} = P(\varepsilon_{t+1}, \ldots, \varepsilon_T \mid q_{\iota,t})$ can be computed recursively as usual [28]. In turn, the forward-backward algorithm [28] can be used for ML estimation of the parameters of the underlying Markov chain, i.e., the initial and transition probabilities. For a generic RBF-RNN parameter $\theta \in \theta_\iota$, instead, gradient ascent over $L$ entails a learning rule of the form $\Delta\theta = \eta \frac{\partial L}{\partial \theta}$, where $\eta \in \mathbb{R}^+$ is the learning rate. We can rewrite $\frac{\partial L}{\partial \theta}$ as:
$$\frac{\partial L}{\partial \theta} \;=\; \sum_{q=1}^{Q} \sum_{t=1}^{T} \frac{\beta_{q,t}\,\alpha_{q,t}}{b_q(\varepsilon_t)} \, \frac{\partial b_q(\varepsilon_t)}{\partial \theta} \qquad (1)$$
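As an illustration of Equation (1), a hedged sketch follows: it computes the standard forward and backward trellis terms and the coefficients $\alpha_{q,t}\beta_{q,t}/b_q(\varepsilon_t)$ that weight each emission derivative. The array layout is an assumption; `b[q, t]` stands for the RBF-RNN output $b_q(\varepsilon_t)$.

```python
# Hedged sketch of the trellis quantities entering Equation (1).
import numpy as np

def forward(pi, A, b):                      # b has shape (Q, T)
    Q, T = b.shape
    alpha = np.zeros((Q, T))
    alpha[:, 0] = pi * b[:, 0]              # initialization
    for t in range(1, T):                   # alpha_{j,t} = (sum_i alpha_{i,t-1} a_ij) b_j(eps_t)
        alpha[:, t] = (alpha[:, t - 1] @ A) * b[:, t]
    return alpha

def backward(A, b):
    Q, T = b.shape
    beta = np.ones((Q, T))                  # beta_{i,T} = 1
    for t in range(T - 2, -1, -1):          # beta_{i,t} = sum_j a_ij b_j(eps_{t+1}) beta_{j,t+1}
        beta[:, t] = A @ (b[:, t + 1] * beta[:, t + 1])
    return beta

def eq1_coefficients(alpha, beta, b):
    # dL/dtheta = sum_{q,t} (alpha_{q,t} beta_{q,t} / b_q(eps_t)) * d b_q(eps_t)/dtheta
    return alpha * beta / b
```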
The usual computations over the trellis of a standard HMM [28] can be used to obtain the quantities on the right-hand side of Equation (1), except for $\partial b_q(\varepsilon_t) / \partial \theta$ (where $b_q(\varepsilon_t)$ is the output from the corresponding RBF-RNN at time $t$). Hereafter, we focus on the computation of this partial derivative. Since (i) each state of the $\epsilon$-HMM has its own RBF-RNN, (ii) in HMMs the emission probabilities for different states are mutually independent [28], and (iii) in HMMs the individual observations (i.e., environments) in the input sequence are assumed to be mutually independent given the state [28], we can simplify the notation by dropping the indexes $\iota$ and $t$, and we resort to the calculation of the derivatives of the generic emission $p(\phi(\varepsilon \mid \theta_\phi) \mid \theta_p)$. For any free parameter $\theta$ in the RBF-RNNs, an explicit formulation of $\partial p(\phi(\varepsilon \mid \theta_\phi) \mid \theta_p) / \partial \theta$ can be obtained as follows. Eventually, the resulting formulation shall be put in place of $\partial b_q(\varepsilon_t) / \partial \theta$ in Equation (1) and the latter, in turn, into the overall learning rule $\Delta\theta = \eta \frac{\partial L}{\partial \theta}$. There are three categories of RBF-RNN parameters that the algorithm shall estimate:
  • the hidden-to-output connection weights $c_1, \ldots, c_n$ of the RBF. In order to ensure the satisfaction of the axioms of probability, these weights must range over $(0, 1)$ and sum to one (a code sketch of this step is given after the list). This is guaranteed by introducing $n$ unconstrained variables $\gamma_1, \ldots, \gamma_n$ such that
    $$c_i = \frac{\varsigma(\gamma_i)}{\sum_{j=1}^{n} \varsigma(\gamma_j)} \qquad (2)$$
    where $\varsigma(x) = 1/(1 + e^{-x})$. The generic variable $\gamma_i$ is estimated via ML, which guarantees the satisfaction of the axioms. The quantity $\partial p(\phi(\varepsilon \mid \theta_\phi) \mid \theta_p) / \partial c_i$ is computed by applying the chain rule to $\partial p(\phi(\varepsilon \mid \theta_\phi) \mid \theta_p) / \partial \gamma_i$ for $i = 1, \ldots, n$.
  • The mean vector $\mu_i$ and covariance matrix $\Sigma_i$ of the generic $i$-th Gaussian kernel $K_i(x) = G(x; \mu_i, \Sigma_i)$ in the RBF. For each component $j = 1, \ldots, d$ of the encoding space, ML parametric estimation of Gaussian mixture models [11] is applied in order to compute the quantities $\partial p(\phi(\varepsilon \mid \theta_\phi) \mid \theta_p) / \partial \mu_{ij}$ and $\partial p(\phi(\varepsilon \mid \theta_\phi) \mid \theta_p) / \partial \sigma_{ij}$.
  • The parameters $U = \{ v_1, \ldots, v_u \}$ of the encoding network. For each $v \in U$, the quantity $\partial p(\phi(\varepsilon \mid \theta_\phi) \mid \theta_p) / \partial v$ is obtained via the chain rule as $\partial p(\phi(\varepsilon \mid \theta_\phi) \mid \theta_p) / \partial v = \left( \partial p(\phi(\varepsilon \mid \theta_\phi) \mid \theta_p) / \partial y \right) \left( \partial y / \partial v \right)$, where $y$ represents the output from the neuron that is fed from $v$. The partial derivative $\partial y / \partial v$ is computed as usual. As for $\partial p(\phi(\varepsilon \mid \theta_\phi) \mid \theta_p) / \partial y$, the backpropagation through structures (BPTS) algorithm can be applied [18,19]. First, the computation of this derivative is straightforward for the connection weights $v$ between hidden and output neurons; this yields the initialization of the $\delta$'s to be backpropagated via BPTS. Then, for the hidden weights $v = v_{\ell m}$, where $\ell$ and $m$ are the hidden neurons connected by $v$, the derivative $\partial p(\phi(\varepsilon \mid \theta_\phi) \mid \theta_p) / \partial v_{\ell m}$ is finally obtained via standard BPTS, relying on the aforementioned initialization of the $\delta$'s.
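As anticipated in the first bullet, a minimal sketch of the reparameterization in Equation (2) follows; all names are illustrative.

```python
# Hedged sketch of Equation (2): unconstrained gammas are squashed by
# the logistic function and normalized, so the RBF mixing weights c
# lie in (0, 1) and sum to one, satisfying the axioms of probability.
import numpy as np

def mixing_weights(gamma):
    s = 1.0 / (1.0 + np.exp(-gamma))   # varsigma(gamma_i)
    return s / s.sum()                 # c_i = varsigma(gamma_i) / sum_j varsigma(gamma_j)

c = mixing_weights(np.array([0.3, -1.2, 2.0]))
assert np.isclose(c.sum(), 1.0) and np.all((0 < c) & (c < 1))
# Gradients with respect to gamma follow via the chain rule, as stated
# in the text, so ML updates never leave the constraint set.
```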
After the training process has been completed, an estimate of the likelihood at time $t$ of an $\epsilon$-HMM given an evolving random environment $\varepsilon$ can be obtained from the forward terms on the $t$-th column of the trellis, as in plain HMMs [28]. In turn, action-specific $\epsilon$-HMMs shall be trained and applied, one for each possible action $A_1, \ldots, A_k$, in those tasks that require deciding which action to undertake at any time $t$ in the environment $\varepsilon$. In this setup, the $i$-th $\epsilon$-HMM models the corresponding action-conditional pdf, namely $p(\varepsilon_1, \ldots, \varepsilon_t \mid A_i)$. Provided that the prior probability of each action is known, or can be estimated via the usual frequentist approach of Bayesian decision theory [32], the action is chosen, as usual, via Bayes' decision rule as the action $A_{\max}$ that maximizes $p(\varepsilon_1, \ldots, \varepsilon_t \mid A_i) P(A_i)$, $i = 1, \ldots, k$.
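A hedged sketch of this decision step follows; the `likelihood` interface, returning $p(\varepsilon_1, \ldots, \varepsilon_t \mid A_i)$ from the forward terms, is an assumption made here for illustration.

```python
# Hedged sketch of Bayes' decision rule over k action-specific
# epsilon-HMMs: pick the action maximizing likelihood times prior.
import numpy as np

def decide(models, priors, env_sequence):
    scores = [m.likelihood(env_sequence) * p   # p(eps_1..eps_t | A_i) P(A_i)
              for m, p in zip(models, priors)]
    return int(np.argmax(scores))              # index of A_max
```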

4. Summary and Conclusions

Most environments may be described as relations over entities whose attributes are random variables. In turn, evolving environments implicitly entail time sequences of varying relations. The goal of this communication was to lay the foundation of an ad hoc paradigm for learning and modeling the statistical properties of such sequences. The goal was pursued by combining probabilistic graphical models, encoding neural networks, and constrained RBFs within a unifying framework. An ML adaptation algorithm for the parameters of the overall model was devised under an implicit assumption of recurrent concept drift, where the concepts are represented by the latent variables. We expect applications in real-world scenarios, as well as extensions of the technique to the incremental real-time adaptation of the parameters.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: artificial intelligence
HMM: hidden Markov model
pdf: probability density function
RBF: radial basis function
RNN: recursive neural network
RBF-RNN: probabilistic radial basis function variant of recursive neural networks
RE: random environment
ϵ-HMM: HMM over environments
ML: maximum likelihood
BPTS: backpropagation through structures

References

  1. Selman, B.; Brooks, R.A.; Dean, T.; Horvitz, E.; Mitchell, T.M.; Nilsson, N.J. Challenge problems for artificial intelligence. In Proceedings of the Thirteenth National Conference on Artificial Intelligence AAAI’96, Portland, OR, USA, 4–8 August 1996; Volume 2, pp. 1340–1345.
  2. Benuskova, L.; Kasabov, N. Computational Neurogenetic Modeling, 1st ed.; Springer: Berlin/Heidelberg, Germany, 2007.
  3. Angelov, P.; Filev, D.P.; Kasabov, N. Evolving Intelligent Systems: Methodology and Applications; Wiley-IEEE Press: Hoboken, NJ, USA, 2010.
  4. Shafiullah, N.M.; Pinto, L. One After Another: Learning Incremental Skills for a Changing World. arXiv 2022, arXiv:2203.11176.
  5. Sayed Mouchaweh, M. Fault Diagnosis of Hybrid Dynamic and Complex Systems; Springer: Berlin/Heidelberg, Germany, 2018.
  6. Overpeck, J.; Meehl, G.; Bony, S.; Easterling, D. Climate data challenges in the 21st century. Science 2011, 331, 700–702.
  7. Cont, R. Statistical Modeling of High-Frequency Financial Data. IEEE Signal Process. Mag. 2011, 28, 16–25.
  8. Mohammadi, M.; Yazdani, S.; Khanmohammadi, M.H.; Maham, K. Financial Reporting Fraud Detection: An Analysis of Data Mining Algorithms. Int. J. Financ. Manag. Account. 2020, 4, 1–12.
  9. Dada, E.G.; Bassi, J.S.; Chiroma, H.; Abdulhamid, S.M.; Adetunmbi, A.O.; Ajibuwa, O.E. Machine learning for email spam filtering: Review, approaches and open research problems. Heliyon 2019, 5, 180–192.
  10. Polikar, R.; Alippi, C. Guest Editorial Learning in Nonstationary and Evolving Environments. IEEE Trans. Neural Netw. Learn. Syst. 2014, 25, 9–11.
  11. Bishop, C.M. Pattern Recognition and Machine Learning; Springer: New York, NY, USA, 2006.
  12. Salem, F.M. Recurrent Neural Networks—From Simple to Gated Architectures, 1st ed.; Springer: Berlin/Heidelberg, Germany, 2022.
  13. Freno, A.; Trentin, E. Hybrid Random Fields—A Scalable Approach to Structure and Parameter Learning in Probabilistic Graphical Models; Springer: Berlin/Heidelberg, Germany, 2011; Volume 15.
  14. Trentin, E.; Lusnig, L.; Cavalli, F. Parzen neural networks: Fundamentals, properties, and an application to forensic anthropology. Neural Netw. 2018, 97, 137–151.
  15. Trentin, E. Soft-Constrained Neural Networks for Nonparametric Density Estimation. Neural Process. Lett. 2018, 48, 915–932.
  16. Trentin, E. Asymptotic Convergence of Soft-Constrained Neural Networks for Density Estimation. Mathematics 2020, 8, 572.
  17. Ghosh, J.; Nag, A. An Overview of Radial Basis Function Networks. In Radial Basis Function Networks 2: New Advances in Design; Howlett, R.J., Jain, L.C., Eds.; Springer: Heidelberg, Germany, 2001; pp. 1–36.
  18. Sperduti, A.; Starita, A. Supervised neural networks for the classification of structures. IEEE Trans. Neural Netw. 1997, 8, 714–735.
  19. Scarselli, F.; Gori, M.; Tsoi, A.C.; Hagenbuchner, M.; Monfardini, G. The graph neural network model. IEEE Trans. Neural Netw. 2009, 20, 61–80.
  20. Salazar, D.S.P.; Adeodato, P.J.L.; Arnaud, A.L. Continuous Dynamical Combination of Short and Long-Term Forecasts for Nonstationary Time Series. IEEE Trans. Neural Netw. Learn. Syst. 2014, 25, 241–246.
  21. Alippi, C.; Ntalampiras, S.; Roveri, M. A Cognitive Fault Diagnosis System for Distributed Sensor Networks. IEEE Trans. Neural Netw. Learn. Syst. 2013, 24, 1213–1226.
  22. Nakada, Y.; Wakahara, M.; Matsumoto, T. Online Bayesian Learning With Natural Sequential Prior Distribution. IEEE Trans. Neural Netw. Learn. Syst. 2014, 25, 40–54.
  23. Alippi, C.; Boracchi, G.; Roveri, M. Just-In-Time Classifiers for Recurrent Concepts. IEEE Trans. Neural Netw. Learn. Syst. 2013, 24, 620–634.
  24. Erdös, P.; Rényi, A. On Random Graphs. Publ. Math. Debrecen 1959, 6, 290–297.
  25. Hammer, B.; Micheli, A.; Sperduti, A. Universal Approximation Capability of Cascade Correlation for Structures. Neural Comput. 2005, 17, 1109–1159.
  26. Ross, S. Stochastic Processes; Wiley: New York, NY, USA, 1996.
  27. Bongini, M.; Freno, A.; Laveglia, V.; Trentin, E. Dynamic Hybrid Random Fields for the Probabilistic Graphical Modeling of Sequential Data: Definitions, Algorithms, and an Application to Bioinformatics. Neural Process. Lett. 2018, 48, 733–768.
  28. Rabiner, L.R. A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition. Proc. IEEE 1989, 77, 257–286.
  29. Castelli, I.; Trentin, E. Combination of supervised and unsupervised learning for training the activation functions of neural networks. Pattern Recognit. Lett. 2014, 37, 178–191.
  30. Bongini, M.; Rigutini, L.; Trentin, E. Recursive Neural Networks for Density Estimation Over Generalized Random Graphs. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 5441–5458.
  31. Bengio, Y. Neural Networks for Speech and Sequence Recognition; International Thomson Computer Press: London, UK, 1996.
  32. Duda, R.O.; Hart, P.E.; Stork, D.G. Pattern Classification, 2nd ed.; John Wiley & Sons: New York, NY, USA, 2001.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
