1. Introduction
Reinforcement Learning (RL), a class of Machine Learning (ML) techniques, aims to provide human-level adaptive behavior by constructing an optimal control policy [1]. Generally speaking, the main underlying objective is to learn, via trial and error, from the previous interactions of an autonomous agent with its surrounding environment. The optimal control (action) policy is obtained via RL algorithms through the feedback that the environment provides to the agent after each of its actions [2,3,4,5,6,7,8,9]. Policy optimality is pursued by increasing the accumulated reward over time. Most successful RL applications, e.g., the games of Go and Poker, robotics, and autonomous driving, typically involve several autonomous agents. This naturally falls within the context of Multi-Agent RL (MARL), a relatively long-established domain that has recently been revitalized by advancements in single-agent RL approaches. In the MARL domain, which is the focus of this manuscript, multiple decision-making agents interact (cooperate and/or compete) in a shared environment to achieve a common or conflicting goal. Research Questions: In this paper, we aim to answer the following research questions:
How to tackle the overfitting, high sensitivity to parameter selection, and sample inefficiency issues of MARL that are typically associated with DNN-based solutions?
How to properly handle a change in the reward model when learning the underlying value function, and how to capture the uncertainty of the Successor Representation (SR)?
How can multi-agent adaptive Kalman Temporal Difference (KTD) be adapted to work within the SR formulation?
How to find a trade-off between exploration and exploitation in MARL?
Challenges: To address the aforementioned research questions, we faced the following challenges:
Learning localized reward functions and dealing with the lack of prior knowledge on observation noise covariance and observation mapping function.
Selecting the KF parameters for learning the reward function, as performance is highly dependent on these values.
Encoding continuous states into feature vectors and projecting the reward function as a linear function of the extracted features.
Adapting the KTD approach to the SR learning procedure.
Capturing the uncertainty associated with the SR and calculating the value function based on the learned SR values and the reward function.
Balancing the exploration/exploitation trade-off, i.e., whether to select actions with known associated rewards or to explore new actions with unknown rewards.
Before introducing the contributions and novelties of the paper, a brief literature review is provided next.
Literature Review: Traditionally, RL algorithms are classified as (i) Model-Free (MF) approaches [4,10,11], where sample trajectories are exploited for learning the value function, and (ii) Model-Based (MB) techniques [12], where reward functions are estimated by leveraging search trees or dynamic programming [13]. MF methods generally do not adapt quickly to local changes in the reward function. MB techniques, on the other hand, can adapt quickly to changes in the environment, but at a high computational cost [14,15,16]. To address these adaptation problems, Successor Representation (SR) approaches [17,18] have been proposed as an alternative RL category. The SR method provides the flexibility of MB algorithms with a computational efficiency comparable to that of MF algorithms. In SR-based methods, both the immediate reward expected after each action and the discounted expected future state occupancy (the SR) are learned. The value function of each successor state is then factorized into the SR and the immediate reward. This factorization requires only the reward function to be relearned for new tasks, allowing rapid policy evaluation when reward conditions change. In scenarios with a limited number of states, the SR and the reward function (and, thus, the value function) associated with each state can be readily computed. Such computation of the value function, however, is infeasible for MARL problems, where we deal with a large number of continuous states [19]. In other words, conventional approaches developed for single-agent scenarios, such as single-agent SR, Q-learning, or policy gradient, cannot be directly adopted in MARL to compute the value function. The main problem is that, from a single agent's perspective, the environment tends to become non-stationary as each agent's policy changes during the training process. In the context of deep Q-learning [20], this leads to stabilization issues, as it is difficult to properly reuse previous localized experiences. From the perspective of policy gradient methods, observations typically exhibit high variance when coordinating multiple agents.
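For reference, in the tabular single-agent case, the SR and the resulting factorization of the value function take the standard form [17]

M^\pi(s, s') = E\big[ \textstyle\sum_{t=0}^{\infty} \gamma^t \, 1\{ s_t = s' \} \,\big|\, s_0 = s \big], \qquad V^\pi(s) = \textstyle\sum_{s'} M^\pi(s, s') \, R(s'),

so that a change in the reward function R only requires relearning R, while the occupancy term M^\pi is reused. The feature-based, multi-agent counterpart of this factorization is developed in Section 4.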
To leverage SR-based solutions for MARL, value function approximation is unavoidable, and one can use either linear or non-linear estimation approaches [21,22]. In both categories, a set of adjustable parameters defines the value of the approximated function. Non-linear function approximators, such as Deep Neural Networks (DNNs) [21,23,24,25], have enabled the application of RL methods to complex multi-agent scenarios. While DNN approaches such as Deep Q-Networks (DQN) [26] and Deep Deterministic Policy Gradient (DDPG) [27] have achieved superior results, they suffer from major disadvantages, including overfitting, high sensitivity to parameter selection, sample inefficiency, and the high number of episodes required for training. Linear function approximators, on the other hand, transform the approximation problem into a weight calculation problem in order to fuse several local estimators. Convergence can be analyzed when linear function approximators are utilized, as they are better understood than their non-linear counterparts [28,29]. Cerebellar Model Articulation Controllers (CMACs) [30] and Radial Basis Functions (RBFs) [31] are usually used as linear estimators in this context. It has been shown, however, that the function approximation process can be better represented via gradual-continuous transitions [32]. Although the computation of the RBFs' parameters is usually based on prior knowledge of the problem at hand, these parameters can also be adapted by leveraging observed transitions in order to improve the autonomy of the approach. In this context, cross-entropy and gradient descent methods [33] can be utilized for the adaptation task. The stability of the gradient descent-based approach was later improved by exploiting a restrictive method in [32], which is adopted in this manuscript.
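As a minimal illustration of linear value function approximation with RBF features (a generic sketch with arbitrary, made-up dimensions and parameters, not the specific construction used later in Section 3):

```python
import numpy as np

def rbf_features(state, means, cov_inv):
    """Gaussian RBF feature vector phi(s); one feature per RBF center."""
    diffs = state - means                          # (num_rbf, state_dim)
    quad = np.einsum('ij,jk,ik->i', diffs, cov_inv, diffs)
    return np.exp(-0.5 * quad)

# Hypothetical setup: 2-D continuous state, 9 RBF centers on a grid.
rng = np.random.default_rng(0)
means = np.stack(np.meshgrid(np.linspace(-1, 1, 3),
                             np.linspace(-1, 1, 3)), -1).reshape(-1, 2)
cov_inv = np.linalg.inv(0.3 * np.eye(2))           # shared RBF covariance
theta = rng.normal(size=means.shape[0])            # linear weights to be learned

state = np.array([0.2, -0.4])
value_estimate = theta @ rbf_features(state, means, cov_inv)   # V(s) ≈ theta^T phi(s)
```

The weight vector plays the role of the adjustable parameters mentioned above, while the RBF means and covariances can themselves be adapted from observed transitions.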
After specifying the value function's structure, the following methodologies can be used to train the value function approximator: (i) bootstrapping methods, e.g., the Fixed-Point Kalman Filter (FPKF) [34]; (ii) residual techniques such as Kalman Temporal Difference (KTD) and Gaussian Process Temporal Difference (GPTD) [35], which is a special form of KTD; and (iii) projected fixed-point methods such as Least Squares Temporal Difference (LSTD) [36]. Among these methodologies, KTD [37] is a prominent technique as, based on the selected structure, it provides both an uncertainty measure and a Minimum Mean Square Error (MMSE) approximation of the value function. In particular, uncertainty is beneficial for achieving higher sample efficiency. The KTD approach, however, requires prior knowledge of the filter's parameters (e.g., the noise covariances of the process and measurement models), which are not readily available in realistic circumstances. Parameter estimation is a well-studied problem within the context of Kalman Filtering (KF), where several adaptive schemes have been developed over the years, including, but not limited to, Multiple Model Adaptive Estimation (MMAE) methods [38,39,40] and innovation-based adaptive schemes [41]. When the system's mode is changing, the latter adapt faster; their efficiency was shown in [42], where different averaging and weighting patterns were compared. MMAE methods have already been utilized in RL problems; for instance, Reference [43] proposed a multiple model KTD coupled with a model selection mechanism to address issues related to parameter uncertainty. Existing multiple model methodologies are, however, not easily generalizable to the MARL problem.
In the methods proposed in [16,44,45,46], while classical TD learning is coupled with DNNs, the uncertainty of the value function and that of the SR is not studied. To deal with uncertainty, a proper combination of exploitation and exploration should be used to prevent the agent from becoming overconfident about its knowledge and relying fully on exploitation. Alternatively, an agent can explore other possible actions, which may lead to improved results and a reduction in uncertainty. Although finding an optimal trade-off between exploitation and exploration is computationally intractable, it has been shown that exploration can benefit from uncertainty in two separate ways, i.e., through randomness added to the value function, and via shifting towards uncertain action selection [1]. Consequently, the approximated value function's uncertainty is beneficial information for resolving the conflict between exploration and exploitation [1,47]. It was shown in [47] that the sensitivity of the framework to the model's parameters can be diminished via uncertainty incorporation within the KTD method. Therefore, the time and memory required to find/learn the best model are reduced compared to DNN-based methods [16,44,45,46]. The reduced sensitivity to parameter settings enhances the reproducibility of the approach, leading to the regeneration of more consistent outputs over multiple learning epochs. Consequently, the risk of obtaining unacceptable results in real scenarios decreases [48]. Geerts et al. [18] leveraged the KTD framework to estimate the SR for problems with discrete state-spaces; however, information related to the uncertainty of the estimated SR is not considered in the action selection procedure. We started our research on signal processing-based RL solutions by introducing MM-KTD [4,5], a multiple model Kalman temporal difference approach for single-agent environments with continuous state-spaces. The AKF-SR was then proposed in [49], an adaptive KF-based successor representation approach developed for single-agent scenarios. This paper targets extending our previous works to multi-agent scenarios with heterogeneous and continuous state-spaces.
Contributions: The paper proposes a Multi-Agent Adaptive Kalman Temporal Difference (MAK-TD) framework and its SR-based variant, the Multi-Agent Adaptive Kalman Successor Representation (MAK-SR) framework. The MAK-TD/SR frameworks consider the continuous nature of the state-space associated with high-dimensional multi-agent environments and exploit KTD to address parameter uncertainty. By leveraging the KTD framework, the SR learning procedure is modeled as a filtering problem. Intuitively speaking, the goal is to take advantage of the inherent benefits of the KF, i.e., online second-order learning, uncertainty estimation, and handling of non-stationarity. RBF-based estimation is then utilized within the MAK-TD/SR frameworks so that continuous states can be encoded into feature vectors and the reward function can be projected as a linear function of the extracted feature vectors. For learning localized reward functions, we resort to MMAE as a remedy for the lack of prior knowledge on the observation noise covariance and the observation mapping function. Targeting the identified research questions and addressing the aforementioned challenges, the paper makes the following key contributions:
Within the MARL domain, the MAK-TD framework is proposed to compensate for the lack of information about a key unknown filter parameter, namely the measurement noise covariance. For learning the optimal policy and simultaneously enhancing the sample efficiency of the proposed MAK-TD, an off-policy Q-learning approach is implemented.
MAK-TD is extended to MAK-SR by incorporating the SR learning process into the filtering problem via the KTD formulation, which allows the uncertainty of the learned SR to be approximated. Moreover, adopting KTD reduces the memory/time required to learn the SR while reducing the model's sensitivity to parameter selection (i.e., improving reliability) in comparison to DNN-based algorithms.
A coupled gradient descent and MMAE-based approach is adopted in the development of the MAK-SR framework to form a KF-based approximation of the reward function. Via the utilized MMAE formulation, sensitivity to prior knowledge of key KF parameters is reduced.
For establishing a trade-off between exploration and exploitation, an innovative active learning mechanism is implemented to incorporate the uncertainty of the value function obtained from SR learning. Such a mechanism efficiently enhances performance in terms of cumulative reward.
Novelty: The novelty of the proposed frameworks lies in the integration of Kalman temporal difference, multiple-model adaptive estimation, and the successor representation for MARL problems. Through such an integration, issues related to overfitting and high sensitivity to parameter selection are addressed, and changes in the reward model can be accommodated. Furthermore, to establish a trade-off between exploration and exploitation, an innovative active learning mechanism is implemented to use the obtained uncertainty of the value function. Such a mechanism efficiently enhances performance in terms of cumulative reward.
A multi-agent extension of the OpenAI Gym benchmark, a two-dimensional world with continuous space [50], is utilized to simulate cooperative, competitive, and mixed interaction settings. The proposed MAK-TD/SR frameworks are evaluated through a comprehensive set of experiments and simulations, illustrating their superior performance compared to their counterparts. The remainder of the paper is organized as follows: In Section 2, the basics of RL and MARL are briefly discussed. The proposed MAK-TD framework is presented in Section 3, and its SR-based variant, the MAK-SR framework, is introduced in Section 4. Experimental results based on the multi-agent RL benchmark are presented in Section 5 and discussed in Section 6. Section 7, finally, concludes the paper.
3. The MAK-TD Framework
As stated previously, the MAK-TD framework is a Kalman-based off-policy learning solution for multi-agent networks. More specifically, by exploiting the TD approach represented in Equation (3), the optimal value function associated with the ith agent can be approximated from its one-step estimation as follows:
By changing the order of the variables, the reward at each time step can be represented (modeled) as a noisy observation, i.e.,
where the observation noise is modeled as a zero-mean normal distribution with an associated variance. By considering the local state-space of each agent, we use localized basis functions to approximate each agent's value function. Therefore, the following value function can be formed for Agent i:
where the value function is expressed in terms of a vector of basis functions, the policy associated with Agent i, and a vector of weights. Substituting Equation (11) into Equation (10) results in
which can be simplified into the following linear observation model:
with
In other words, Equation (13) is the localized measurement (reward) of the ith agent, which is a linear model of the weight vector. To approximate the localized weight vector, we first leverage the observed reward, which is obtained by transitioning from one state to the next. Second, given that the noise variance of the measurement is not known a priori, we exploit MMAE adaptation by representing it with M different candidate values. Consequently, a combination of M KFs is used to estimate the weight vector based on each of its candidate values, i.e.,
where superscript j refers to the jth matched KF, for which a specific candidate value is assigned to model the covariance of the observation noise. The posterior distribution associated with each of the M matched KFs is calculated based on its likelihood function. All the matched a posteriori distributions are then combined according to their corresponding weights to form the overall posterior distribution given by
where the combination weight denotes the jth KF's normalized observation likelihood associated with the ith agent. Exploiting Equation (18), the weight and its error covariance are then updated as follows:
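Since the updates referenced above follow standard Kalman/MMAE recursions, a rough sketch of one such MMAE step is provided below. The observation mapping H_k = \phi(s_k, a_k) - \gamma \phi(s_{k+1}, a_{k+1}), the variable names, and the Gaussian moment-matching fusion are assumptions made for illustration and do not reproduce the exact equations above:

```python
import numpy as np

def mmae_ktd_update(theta, P, phi, phi_next, r, gamma, R_candidates, Q):
    """One MMAE step: run M matched KF updates (one per candidate
    measurement-noise variance R_j) on the linear reward observation
    r ≈ (phi - gamma*phi_next)^T theta, then fuse them by their
    normalized Gaussian innovation likelihoods."""
    H = (phi - gamma * phi_next)[None, :]           # 1 x d observation mapping
    P_pred = P + Q                                  # random-walk state model
    thetas, Ps, likelihoods = [], [], []
    for R_j in R_candidates:
        S = float(H @ P_pred @ H.T) + R_j           # innovation covariance
        K = (P_pred @ H.T) / S                      # Kalman gain, d x 1
        innov = r - float(H @ theta)                # reward innovation
        thetas.append(theta + (K * innov).ravel())
        Ps.append(P_pred - K @ H @ P_pred)
        likelihoods.append(np.exp(-0.5 * innov**2 / S) / np.sqrt(2 * np.pi * S))
    w = np.array(likelihoods)
    w = w / w.sum()                                 # normalized likelihoods
    theta_new = sum(w_j * th for w_j, th in zip(w, thetas))
    # fuse covariances, accounting for the spread of the individual means
    P_new = sum(w_j * (P_j + np.outer(th - theta_new, th - theta_new))
                for w_j, P_j, th in zip(w, Ps, thetas))
    return theta_new, P_new
```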
To finalize the computation based on Equations (13)–(20), the localized measurement mapping function is required. As this mapping is formed by the basis functions, its adaptation necessitates the adaptation of the basis functions. The vector of basis functions shown in Equation (11) is formed as follows:
where the dimension of this vector equals the number of basis functions. Each basis function is represented by an RBF, which is defined by its mean and covariance parameters as follows:
where the two parameters denote the mean and covariance of the corresponding RBF. Generally speaking, the state-action feature vector can be represented as follows:
where the dimension of the state-action feature vector depends on the number of actions associated with the ith agent. The state-action feature vector in Equation (24) is generated from the state feature vector by placing it in the block corresponding to the selected action, while the feature values for all other actions are set to zero, i.e.,
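A minimal sketch of this block placement (with hypothetical sizes; the state features themselves come from the RBFs of Equations (22) and (23)):

```python
import numpy as np

def state_action_features(phi_state, action, num_actions):
    """Place the state feature vector phi(s) in the block of the selected
    action; all other action blocks are zero."""
    d = phi_state.shape[0]
    phi_sa = np.zeros(d * num_actions)
    phi_sa[action * d:(action + 1) * d] = phi_state
    return phi_sa

# e.g., 10 state features (RBF outputs plus a bias) and 5 discrete actions
phi_s = np.random.rand(10)
phi_sa = state_action_features(phi_s, action=2, num_actions=5)   # length 50
```

With 10 state features and the five discrete actions used in the experiments, this yields feature vectors of length 50, consistent with Section 5.2.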
Due to the large number of parameters associated with the measurement mapping function, the multiple model approach is impractical here. Alternatively, Restricted Gradient Descent (RGD) [32] is employed, where the goal is to minimize the following loss function:
The gradient of the objective function with respect to the parameters of each basis function is then calculated using the chain rule as follows:
where the partial derivatives are computed by leveraging Equations (11), (23) and (26). Therefore, the means and covariances of the RBFs can be adapted using the calculated partial derivatives as follows:
where the two step sizes denote the adaptation rates. Based on [32], for the sake of stability, only one of the updates shown in Equations (29) and (30) is applied at each step. To be more precise, when the size of the covariance is decreasing, the covariances of the RBFs are updated using Equation (30); otherwise, their means are updated using Equation (29). Using this approach, unbounded expansion of the RBF covariances is avoided.
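A rough sketch of this restricted update for a single Gaussian RBF is given below; the squared reward-prediction error used as the loss, the trace-based shrinkage test, and all variable names are illustrative assumptions rather than the exact criterion of Equations (26)–(30):

```python
import numpy as np

def gaussian_rbf(x, mu, Sigma_inv):
    d = x - mu
    return float(np.exp(-0.5 * d @ Sigma_inv @ d))

def restricted_rbf_update(x, error, theta_j, mu, Sigma, lr_mu=1e-3, lr_sigma=1e-4):
    """Apply EITHER the mean update or the covariance update (never both),
    depending on whether the proposed covariance step would shrink Sigma."""
    Sigma_inv = np.linalg.inv(Sigma)
    phi_j = gaussian_rbf(x, mu, Sigma_inv)
    d = x - mu
    # gradients of phi_j w.r.t. mu and Sigma for a Gaussian RBF
    dphi_dmu = phi_j * (Sigma_inv @ d)
    dphi_dSigma = 0.5 * phi_j * (Sigma_inv @ np.outer(d, d) @ Sigma_inv)
    # chain rule through an assumed loss 0.5*error^2, error = r - theta^T phi
    dL_dmu = -error * theta_j * dphi_dmu
    dL_dSigma = -error * theta_j * dphi_dSigma
    Sigma_step = -lr_sigma * dL_dSigma
    if np.trace(Sigma_step) < 0:          # covariance would shrink: update Sigma
        Sigma = Sigma + Sigma_step
    else:                                 # otherwise: update the mean instead
        mu = mu - lr_mu * dL_dmu
    return mu, Sigma
```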
One advantage that the proposed learning framework offers over other optimization-based techniques (e.g., gradient descent-based methods) is the calculation of the uncertainty of the weights, which is directly related to the uncertainty of the value function. This information can then be used at each step to select the actions leading to the largest reduction in the weights' uncertainty. Using the information form of the KF (information filter [59]), the information of the weights is updated as follows:
In Equation (31), the second term represents the information received from the measurement. The action is obtained by maximizing the information of the weights, i.e.,
The second equality in Equation (32) holds because the involved quantity is a scalar. The resulting behavior policy in Equation (32) differs from that in [37], where a random policy was proposed that favored actions with less certainty in the value function. Although reducing the value function's uncertainty through action selection is an intelligent approach, it is less sample-efficient due to the random nature of such policies. A short sketch of this information-driven action selection is given below; Algorithm 1 then summarizes the proposed MAK-TD framework.
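As a hedged illustration, one instantiation consistent with the above description is to select the action whose state-action feature vector carries the largest value-function variance \phi^T P \phi, i.e., the measurement expected to reduce the weight uncertainty the most (the exact criterion of Equation (32) may differ):

```python
import numpy as np

def select_action_max_uncertainty(phi_state, P, num_actions):
    """Behaviour-policy sketch: pick the action whose block-sparse state-action
    feature vector phi(s,a) has the largest value-function variance phi^T P phi,
    i.e., the measurement expected to shrink the weight uncertainty the most."""
    d = phi_state.shape[0]
    scores = []
    for a in range(num_actions):
        phi_sa = np.zeros(d * num_actions)
        phi_sa[a * d:(a + 1) * d] = phi_state       # same block layout as Equation (25)
        scores.append(float(phi_sa @ P @ phi_sa))   # scalar uncertainty of Q(s,a)
    return int(np.argmax(scores))
```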
Algorithm 1: The Proposed MAK-TD Framework
1: Learning Phase:
2: Set for and
3: Repeat (for each episode):
4: Initialize
5: Repeat (for each agent i):
6: While do:
7:
8: Take action , observe
9: Calculate via Equations (22) and (23)
10:
11:
12:
13: for do:
14:
15:
16:
17: end for
18: Compute the value of c and by using and Equation (19)
19:
20:
21: RBFs Parameters Update:
22:
23: if then:
24: Update via Equation (29)
25: else:
26: Update via Equation (30)
27: end if
28: end while
29: Testing Phase:
30: Repeat (for each trial episode):
31: While do:
32: Repeat (for each agent):
33:
34: Take action , and observe
35: Calculate Loss for all agents
36: End While
4. The MAK-SR Framework
In the previous section, the MAK-TD framework was proposed as a multiple-model Kalman-based off-policy learning solution for multi-agent networks. To learn the value function, a fixed model for the reward function is considered, which could restrict its application to more complex MARL problems. SR-based algorithms are appealing solutions to tackle this issue, as the focus is instead on learning the immediate reward and the SR, i.e., the expected discounted future state occupancy. In the existing SR-based approaches that use standard temporal difference methods, the uncertainty about the approximated SR is not captured. To address this issue, we extend the MAK-TD framework and design its SR-based variant in this section. In other words, MAK-TD is extended to MAK-SR by incorporating the SR learning procedure into the filtering problem using the KTD formulation to estimate the uncertainty of the learned SR. Moreover, by applying KTD, we benefit from a decrease in the memory and time spent on SR learning, as well as from reduced sensitivity of the framework's performance to its parameters (i.e., improved reliability), when compared to DNN-based algorithms.
Exact computation of the SR and the reward function is typically not possible within multi-agent settings, as we are dealing with a large number of continuous states. Therefore, we follow the approach developed in Section 3 and approximate the SR and the reward function via basis functions. For the state-action feature vector, a feature-based SR, which encodes the expected occupancy of the features, is defined as follows:
We consider that the immediate reward function for each state-action pair can be linearly factorized as
where the reward weight vector is to be learned. The state-action value function (Equation (8)), therefore, can be computed as follows:
The SR matrix can be approximated as a linear function of the same feature vector as follows:
The TD learning of the SR can then be performed as follows:
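For reference, a commonly used feature-based SR formulation of this kind, written in generic notation that may differ from the paper's Equations (33)–(37), is

\psi^\pi(s,a) = E\big[ \textstyle\sum_{t=0}^{\infty} \gamma^t \, \phi(s_t, a_t) \,\big|\, s_0 = s,\, a_0 = a \big], \qquad r(s,a) \approx \phi(s,a)^\top w, \qquad Q^\pi(s,a) = \psi^\pi(s,a)^\top w,

with the corresponding TD update

\psi(s_k, a_k) \leftarrow \psi(s_k, a_k) + \alpha \big[ \phi(s_k, a_k) + \gamma\, \psi(s_{k+1}, a_{k+1}) - \psi(s_k, a_k) \big].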
Having defined the estimation structure of the SR and the reward function, a suitable method must be selected to learn (approximate) the weight vector of the reward and the weight matrix of the SR for Agent i. The proposed multi-agent MAK-SR algorithm contains two main components: KTD-based SR weight learning and the radial basis function update. For the latter, we apply the method developed in Section 3 to approximate the vector of basis functions by representing each of them as an RBF. The gradient of the loss function (26) with respect to the parameters of the RBFs is calculated using the chain rule, and the means and covariances of the RBFs are updated via Equations (29) and (30).
For KTD-based SR weight learning, the SR can be obtained from its one-step approximation using the TD method of Equation (37). In this regard, the state-action feature vector at time step k can be considered as a noisy measurement from the system as follows:
where the measurement noise follows a zero-mean normal distribution with a corresponding covariance. Considering Equations (36) and (38) together, the feature vector can be approximated as
The SR weight matrix is then mapped to a column vector by concatenating its columns. Using the vec-trick property of the Kronecker product, denoted by ⊗, we can rewrite Equation (39) as follows:
where the Kronecker factor is an identity matrix of appropriate dimension. More specifically, Equation (40) represents the localized measurements linearly in terms of the vectorized SR weights, which require estimation. Therefore, we use the following linear state model:
to complete the required state-space representation for the KF-based implementation. The noise associated with the state model (Equation (41)) follows a zero-mean normal distribution with a corresponding covariance. Via the KF's recursive equations, we use the new localized observations to estimate the vectorized SR weights and their covariance matrix. After this step, the vector is reshaped back into a matrix in order to reconstruct the SR weight matrix. Equation (35) is finally used to form the associated state-action value function. A short illustration of the vec-trick rewriting is given below; Algorithm 2 then summarizes the proposed MAK-SR framework.
Algorithm 2: The Proposed MAK-SR Framework
1: Learning Phase:
2: Initialize: for
3: Parameters: for
4: Repeat (for each episode):
5: Initialize
6: Repeat (for each agent i):
7: While do:
8: Reshape into to construct the 2-D matrix
9:
10: Take action , observe and
11: Calculate via Equations (23) and (25)
12: Update reward weights vector: perform MMAE to update
13: Update SR weights vector: perform KF on Equations (40) and (41) to update
14: Update RBFs parameters: perform RGD on the loss function to update and
15: end while
It is worth mentioning that, unlike DNN-based networks for multi-agent scenarios, the proposed multiple-model frameworks require far less memory due to their sequential data processing nature. In other words, storing whole episodes of information for all the agents is not needed, as only the last measured data (assuming a one-step Markov decision process) is leveraged given the sequential nature of the incorporated filters. Finally, note that the proposed MAK-SR and MAK-TD frameworks are designed for systems with a finite number of actions. One direction for future research is to extend the proposed MAK-SR framework to applications where the action-space is infinite-dimensional. This may occur in continuous control problems [54,60], where the number of possible actions at each state is infinite.
5. Experimental Results
The performance of the proposed MAK-SR and MAK-TD frameworks is evaluated in this section, where a multi-agent extension of the OpenAI Gym benchmark is utilized. Figure 1 illustrates snapshots of the environment used for the evaluation of the proposed approaches. More specifically, a two-dimensional world is implemented to simulate competitive, cooperative, and/or mixed interaction scenarios [50]. The utilized benchmark is currently one of the most standard environments for testing different multi-agent algorithms, where discrete time, a discrete action space, and continuous observations are the basics of the environment. Such a multi-agent environment forms a natural curriculum, in that the environment difficulty is determined by the skills of the cooperating or competing agents. The environment does not have a stable equilibrium, therefore allowing the participating agents to become smarter irrespective of their intelligence level. At each step, the implemented environment provides observations and rewards once the agents have performed their actions. The proposed platforms are implemented on a computer with a 3.79 GHz AMD Ryzen 9 12-core processor. The frameworks are evaluated via several experiments, which are implemented through the OpenAI Gym multi-agent RL benchmarks. The parameters related to the proposed MAK-SR and MAK-TD are set randomly. In the DNN-based baselines, the learning rate is set to a fixed value, and the models are trained with mini-batches of size 128 using the Adam optimizer. The DDPG and MADDPG baselines are based on the Actor-Critic approach. Their critic networks receive an observation as input consisting of the current state, next state, gained reward, and the action taken by the agents at each step in the environment. For MADDPG, based on the received state data (current and next state) and the actions taken by all the agents, the future return is approximated considering all the agents' policies.
In what follows, we discuss the different multi-agent environments exploited in this work, as well as the experimental assumptions considered during the testing of the proposed methods. Finally, the results of the experiments are presented and explained.
5.1. Environments
In the considered multi-agent environments, we do not impose any assumption or requirement of identical observation or action spaces across agents. Furthermore, agents are not restricted to follow the same policy while playing the game. In the environments, different numbers of agents and landmarks can be placed to establish different interactions, such as cooperative, competitive, or mixed strategies. The objective in each environment is to keep the agents in the game as long as possible. Each test can be fully cooperative, where agents communicate to maximize a shared return, or fully competitive, where the agents compete to achieve different goals. The mixed scenario for the predator–prey environments (a variant of the classical predator–prey problem) is defined such that a group of slower agents must cooperate against another group of faster agents to maximize their return. Each agent takes a step by choosing one of five available actions, i.e., no movement, left, right, up, and down, transitioning to a new state and receiving a reward from the environment. Moreover, each agent receives a list of observations in each state, which contains the agent's position and velocity, the relative positions of landmarks (if available), and its relative position to the other agents in the environment. This is how an agent knows the position and general status of the other agents (friends and adversaries), enabling its decision-making process. As shown in Figure 1, each environment has its own margins. An agent that leaves the area is punished, the game is reset, and a random configuration is initiated to start the next state, which begins immediately. The red agents play the predator role and receive points for intercepting (hunting) a prey (small green agents). The green agents, which are faster than the red agents (predators), receive points for each interception with the red ones. As their job is to follow the prey, the predators are punished proportionally to their distance to the prey (green agents); conversely, the green agents are rewarded for keeping the maximum distance from the predators. The proposed MAK-TD/SR frameworks are evaluated against DQN [26], DDPG [27], and MADDPG [57]. We evaluate the algorithms in terms of loss, returned discounted reward, and the number of collisions between agents.
5.2. Experimental Assumptions
In the proposed frameworks, we exploit RBFs whose configuration depends on each agent's observation size, together with a bias parameter. The size of the observation vector at each local agent (the localized observation vector), which represents the number of global and local measurements available locally, varies across different scenarios based on the type and the number of agents present/active in the environment. Irrespective of the size of the localized observation vectors, the size of the localized feature vectors, which accounts for the five available actions, is set to 50. The means and covariances of the RBFs are initialized randomly for all the agents in all the environments. For example, consider a Predator–Prey scenario with 2 preys optimizing their actions against one predator. In this toy example (discussed for clarification purposes), considering 9 RBFs together with localized observation vectors of size 12 for the predator and 10 for the preys, the mean vectors associated with the predator and the preys have correspondingly different dimensions. Consequently, for this Predator–Prey scenario, the randomly initialized RBF parameter set contains, for each of the three agents, a mean of the corresponding size and a covariance built from identity matrices of the appropriate dimensions. Based on Equation (25), the vector of basis functions is constructed from the state-action feature vectors of Equation (24). In all the scenarios, the time step is chosen to be 10 milliseconds and a fixed discount factor is used. The transition matrix is initialized accordingly, and a small value is considered for the process noise covariance. The covariance matrix associated with the noise of the measurement model is selected from a predefined candidate set. For initializing the weights, we sample from a zero-mean Gaussian initialization distribution. With the aforementioned initial parameters, each experiment is initiated randomly and consists of 1000 learning episodes together with 1000 test episodes. Given the small number of available learning episodes, the proposed MAK-TD/SR frameworks outperformed their counterparts across different metrics, including sample efficiency, cumulative reward, cumulative steps, and speed of value function convergence.
5.3. Results
Initially, the agents are trained over different numbers of episodes, after which 10 iterations, each of 1000 episodes, are run for testing in order to compute the different metrics evaluating the performance and efficiency of the proposed MAK-TD/SR frameworks. First, to evaluate the stability of the incorporated RBFs, a Monte Carlo (MC) study is conducted where 10 RBFs are used across all the environments. The results are averaged over multiple realizations leveraging MC sampling, as shown in Table 1, Table 2 and Table 3.
Figure 2b shows the rewards gained by all the agents in a Predator–Prey environment. It is worth mentioning that the average number of steps taken by all the agents in the defined environments is also reported in Table 3, showing the remarkable results of MAK-SR in contrast with the other algorithms. Results related to the cumulative distance walked by the agents (computed by multiplying the number of steps by the per-step distance) are also shown in Figure 3 for different environments, confirming the superiority of the MAK-SR framework over the other solutions. The loss function associated with each of the five implemented methods is shown in Figure 4.
6. Discussion
The results shown in Section 5 illustrate the inherent stability of the utilized RBFs and of the proposed MAK-TD and MAK-SR frameworks. Capitalizing on the results of Table 1, Table 2 and Table 3, MAK-SR can be considered the most sample-efficient approach. It is worth noting that although MAK-SR outperforms MAK-TD, we included both, as the learned representation is not transferable between optimal policies in SR learning. For such scenarios, MAK-TD is an alternative solution providing, more or less, similar performance to that of MAK-SR. To be more precise, when solving a previously unseen MDP, a learned SR representation can only be used for initialization. In other words, the agents still have to adjust the SR representation to the policy, which is only optimal within the existing MDP. This limitation motivates us to present MAK-TD as another trusted solution.
As can be seen from Table 1, the average loss associated with the proposed MAK-SR is better than that of MAK-TD. Both frameworks, however, outperform their counterparts, which can be attributed to their improved sample selection efficiency. This advantage can also be seen for the Predator–Prey environment in Figure 2a. The calculated losses mostly take small values soon after the beginning of the experiments, indicating the stability of the implemented frameworks. As can be seen, the other approaches cannot reach the level of performance that is achieved by MAK-SR and MAK-TD with such a low number of training episodes in this experiment. The three DNN-based approaches can reach such an efficiency only with a much greater amount of experience (more than 10,000 experiments) and require much more memory space to save the batches of information.
As can be seen in Table 2 and Figure 2b, the rewards gained by MAK-SR are also better than those of MAK-TD and are much higher than those of the other approaches. This can be considered exceptional given the limited experience utilized. For all other environments, this improvement in the gained reward can be seen in Figure 5, where the reward curves of the five discussed algorithms are shown for the four experimental environments. As expected, the performance of each model improves over time as it is trained through different training episodes. The proposed MAK-SR and MAK-TD provide exceptional performance given the small number of training episodes utilized in these experiments. The DQN, DDPG, and MADDPG baselines, however, fail to achieve the same performance level.
Evaluating the reliability of the proposed learning frameworks is important to verify their applicability in real-world scenarios. A reliable learning procedure should provide consistency in its performance and generate reproducible results over multiple runs of the model [48]. Generally speaking, the performance of RL-based solutions, particularly DNN-based approaches, is highly variable because of their dependence on a large number of tunable parameters. Hyperparameters, implementation details, and environmental factors are among these parameters [61]. This can result in the unreliability of DNN-based RL algorithms in real-world scenarios compared to the proposed frameworks, which are less dependent on parameter selection and fine-tuning. To better illustrate the reliability of the proposed frameworks, another experiment is conducted where the initial parameters in each run are generated randomly. More specifically, we repeated each test 10 times, each consisting of 1000 learning episodes together with 1000 test episodes. A reliable RL algorithm should be consistent in reproducing its performance across different training sessions, i.e., the reproducibility feature. As can be seen from Figure 6, for all four test scenarios (i.e., cooperative, competitive, and mixed strategies), the DNN-based methods (DQN, DDPG, and MADDPG) exhibit higher variance, illustrating their sensitivity to the underlying parameters, which can be attributed to reduced reliability. As can be seen from Figure 6, MAK-SR outperforms the other approaches in terms of the received rewards. In both the MAK-SR and MAK-TD algorithms, the positive effect of using uncertainty in the action selection procedure is noticeable. The ability to produce stable performance across different episodes is another aspect of the reliability of RL models. The stability of the different models can also be compared through Figure 6. It can be seen that the proposed MAK-SR algorithm is more stable than its counterparts, as fewer sudden changes occur during different episodes.
With regard to potential future work, on the one hand, the proposed frameworks can be implemented and applied to higher-dimensional MARL environments, e.g., large-scale IoT applications such as indoor localization in unconstrained environments. One interesting scenario here is to consider a heterogeneous network of multiple agents using different tracking/localization algorithms, with application to Contact Tracing (CT). Another direction for future research is to focus on the optimization of the current SR-based solution. In its current form, the SR weight matrix is approximated by mapping it into a one-dimensional vector and applying a KF leveraging the KTD framework. For applications in higher dimensions, this vectorized approach can result in potential information loss; as such, more complex approximation techniques should be developed while being mindful of the potential computational overhead.
7. Conclusions
The paper proposed the MAK-TD framework and its SR-based variant, the MAK-SR framework, as efficient alternatives to DNN-based MARL solutions. The main objective of these developments is to address the sample inefficiency, memory problems, and lack of prior information issues of DNN-based MARL techniques. The novelty of the proposed frameworks lies in the integration of Kalman temporal difference, multiple-model adaptive estimation, and the successor representation for MARL problems. Through such an integration, the aforementioned issues related to overfitting and high sensitivity to parameter selection are addressed, and changes in the reward model are accommodated. More specifically, by leveraging the KTD framework, the SR learning procedure is modeled as a KF problem, and RBFs are used to encode the continuous space into feature vectors. For learning localized reward functions, we resort to MMAE to deal with the lack of prior knowledge on the underlying parameters. Additionally, by learning the value function as the inner product of the SR and the weight vector of the reward function, the models can deal with changes in the reward function. Finally, an innovative active learning mechanism is implemented to use the obtained uncertainty of the value function and establish a trade-off between exploration and exploitation. The proposed MAK-TD/SR frameworks are evaluated via several experiments across four different environments, which are implemented through the OpenAI Gym multi-agent RL benchmarks. In these experiments, different numbers of agents in cooperative, competitive, and mixed (cooperative-competitive) scenarios are utilized. For evaluation purposes, we examined the average loss, average cumulative reward, number of steps, and reproducibility/stability aspects of reliability, computed over multiple realizations. Based on the results, the proposed MAK-TD/SR frameworks outperformed their counterparts across different evaluation metrics. For example, for the competition scenario, the MAK-SR achieved a substantially lower total average loss than its DNN-based counterparts, which achieved total average losses of 10,158.18, 10,710.37, and 107.39 for MADDPG, DDPG, and DQN, respectively. Finally, MAK-SR and MAK-TD require much less time and space to find the best policy, while the three DNN-based approaches can reach such an efficiency only with a much greater amount of experience (more than 10,000 experiments) and need much more memory space to save the batches of information.