1. Introduction
In animals and human beings, learning may often be viewed as a sequence of choices between multiple possible responses. Even in simple repetitive experiments under strictly controlled conditions, the sequence of choices is typically variable, which suggests that the choice of response is governed by probability. It is also useful to identify systematic changes in the sequence of choices that reflect trial-to-trial changes in outcomes. From this perspective, much of the analysis of learning describes the trial-by-trial evolution of choice probabilities, that is, a stochastic process.
The idea of describing the behavior observed in a basic learning experiment by a stochastic model is therefore not new (for details, see [1,2]). However, after 1950, two crucial characteristics emerged, mainly in the work of Bush, Estes, and Mosteller. First, one of the most important features of the suggested models is the inclusive character of the learning process. Second, such models can be analyzed in a way that makes their statistical features explicit.
Symmetries have appeared in mathematical formulations many times, and they have proved important for solving problems and advancing research. High-quality research that uses nontrivial mathematics and related geometries can be found in the context of important problems from a wide range of fields.
In learning theory and mathematical biology, the solution of the following equation is of great importance:

f(x) = x f(αx + 1 − α) + (1 − x) f(βx),  x ∈ [0, 1], (1)

where f is an unknown function and α, β ∈ (0, 1) are the learning-rate parameters that measure the effectiveness of the responses in a two-choice situation.
In 1976, Istrăţescu [3] used the above functional equation to examine the behavior of predatory animals that prey on two distinct types of prey. This behavior was described by Markov transitions that convert the state x to αx + 1 − α with probability x and to βx with probability 1 − x.
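To make these transitions concrete, the following minimal simulation tracks the evolution of the state x over repeated trials; the values α = 0.3 and β = 0.6 and the starting state are illustrative choices, not taken from the experiments cited above:

```python
import random

def simulate(x, alpha, beta, trials, seed=0):
    """Track the state x under the two Markov transitions: with
    probability x it moves to alpha*x + 1 - alpha, and with
    probability 1 - x it moves to beta*x."""
    rng = random.Random(seed)
    path = [x]
    for _ in range(trials):
        if rng.random() < x:
            x = alpha * x + 1 - alpha  # first response occurred
        else:
            x = beta * x               # second response occurred
        path.append(x)
    return path

# One sample path of ten trials starting from x = 0.5.
print(simulate(0.5, 0.3, 0.6, trials=10))
```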
Bush and Wilson [1] used such operators to examine the movement of a fish in a two-choice situation. They observed that under such behavior there are four possible events: right-reward, right-nonreward, left-reward, and left-nonreward.
It is widely assumed that being rewarded on one side increases the probability of choosing that side on the following trial. The reasoning for non-rewarded trials is less apparent, however. According to an extinction or reinforcement model (see Table 1), the probability of choosing an unrewarded side on the next trial decreases. In contrast, a model based on habit formation or secondary reinforcement (see Table 2) suggests that merely choosing a side increases the probability of selecting that side on subsequent trials, as the sketch below illustrates.
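The two competing assumptions correspond to different one-trial update rules for a nonrewarded choice. A minimal sketch, in which the parameter θ and the linear form of both rules are illustrative assumptions rather than the specific operators of [1]:

```python
def extinction_update(x, theta=0.2):
    """Extinction/reinforcement view: a nonrewarded choice makes the
    chosen side (selected with probability x) less likely next trial."""
    return x - theta * x

def habit_update(x, theta=0.2):
    """Habit-formation/secondary-reinforcement view: merely choosing a
    side makes it more likely next trial, rewarded or not."""
    return x + theta * (1 - x)

x = 0.5
print(extinction_update(x), habit_update(x))  # 0.4 versus 0.6
```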
In 2015, Berinde and Khan [4] generalized the above idea by proposing the following functional equation:

f(x) = x f(T₁(x)) + (1 − x) f(T₂(x)),  x ∈ [0, 1], (2)

where T₁, T₂ : [0, 1] → [0, 1] are given contraction mappings with T₁(1) = 1 and T₂(0) = 0.
Recently, Turab and Sintunavarat [5] utilized the above ideas and suggested a functional equation of the same type, where f is an unknown function and the associated mappings satisfy modified boundary conditions. That functional equation was used to study a specific kind of psychological resistance of dogs enclosed in a small box.
Several other studies on human and animal behavior in probability-learning scenarios have produced different results (see [6,7,8,9,10,11,12]).
Here, following the above work and the four possible events (right-reward, right-nonreward, left-reward, left-nonreward) discussed by Bush and Wilson [1], we propose the following general functional equation:

f(x) = Ψ(x) f(T₁(x)) + [x − Ψ(x)] f(T₂(x)) + Ψ(1 − x) f(T₃(x)) + [(1 − x) − Ψ(1 − x)] f(T₄(x)) (4)

for all x ∈ [0, 1], where f is an unknown function and T₁, T₂, T₃, T₄ : [0, 1] → [0, 1] are given mappings. In addition, Ψ : [0, 1] → [0, 1] is a non-expansive mapping with Ψ(0) = 0 and Ψ(x) ≤ x for all x ∈ [0, 1], so that the four coefficients in (4) are nonnegative and sum to one.
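As a quick plausibility check, the right-hand side of (4) can be evaluated numerically. In the sketch below, the displayed form of (4) follows the reconstruction above, and the concrete choices of Ψ and T₁, …, T₄ are purely illustrative assumptions:

```python
def rhs(f, x, psi, T1, T2, T3, T4):
    """Evaluate the right-hand side of Equation (4) at x. Since
    psi(0) = 0 and psi(t) <= t, the four weights below are
    nonnegative and sum to one, as probabilities should."""
    w1, w2 = psi(x), x - psi(x)
    w3, w4 = psi(1 - x), (1 - x) - psi(1 - x)
    return (w1 * f(T1(x)) + w2 * f(T2(x))
            + w3 * f(T3(x)) + w4 * f(T4(x)))

# Illustrative mappings: psi is non-expansive with psi(0) = 0, and
# T1, ..., T4 are linear contractions of [0, 1] into itself.
psi = lambda t: t / 2
T1, T2 = (lambda x: 0.3 * x + 0.7), (lambda x: 0.4 * x)
T3, T4 = (lambda x: 0.5 * x), (lambda x: 0.2 * x)
f = lambda x: x  # a trial function with f(0) = 0
print(rhs(f, 0.5, psi, T1, T2, T3, T4))
```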
Our objective is to prove existence, uniqueness, and Hyers–Ulam (HU)- and Hyers–Ulam–Rassias (HUR)-type stability results for Equation (4) by using an appropriate fixed-point method. Following that, we provide two examples to demonstrate the significance of our findings.
The following result will be required in what follows.
Theorem 1 ([13]). Let (X, d) be a complete metric space and W : X → X be a Banach contraction mapping (BCM), that is,

d(Wx, Wy) ≤ κ d(x, y)

for some κ < 1 and for all x, y ∈ X. Then W has exactly one fixed point. Furthermore, the Picard iteration (PI) {xₙ} in X, defined by xₙ = Wxₙ₋₁ for all n ∈ ℕ, where x₀ ∈ X, converges to the unique fixed point of W.
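Theorem 1 is constructive: the Picard iteration itself computes the fixed point. A minimal sketch, using the standard example that cos is a contraction on [−1, 1]:

```python
import math

def picard(W, x0, tol=1e-12, max_iter=1000):
    """Picard iteration x_n = W(x_{n-1}), stopping once successive
    iterates agree to within tol; for a BCM this always terminates."""
    x = x0
    for _ in range(max_iter):
        x_next = W(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# cos is a contraction on [-1, 1]; the iteration converges to its
# unique fixed point, approximately 0.739085.
print(picard(math.cos, 1.0))
```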
2. Main Results
Let I = [0, 1]. We denote by B the class of all continuous real-valued functions f : I → ℝ such that f(0) = 0 and

sup_{x≠y} |f(x) − f(y)| / |x − y| < ∞.

One can verify that (B, ‖·‖) is a normed space (for details, see [4,12]), where ‖·‖ is given by

‖f‖ = sup_{x,y∈I, x≠y} |f(x) − f(y)| / |x − y| (6)

for all f ∈ B.
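The norm (6) is a Lipschitz-type norm and is easy to approximate numerically. A small sketch, where the grid size n is an arbitrary assumption and the grid maximum only bounds the supremum from below:

```python
import itertools

def lip_norm(f, n=200):
    """Grid estimate of the norm (6): the supremum of
    |f(x) - f(y)| / |x - y| over distinct points of [0, 1]."""
    xs = [i / n for i in range(n + 1)]
    return max(abs(f(x) - f(y)) / (y - x)
               for x, y in itertools.combinations(xs, 2))

# For f(x) = x**2 (which satisfies f(0) = 0), the exact norm is
# sup |x + y| = 2; the grid estimate approaches it from below.
print(lip_norm(lambda x: x ** 2))
```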
For computational convenience, we write (4) as

f(x) = Ψ(x) f(T₁(x)) + [x − Ψ(x)] f(T₂(x)) + Ψ(1 − x) f(T₃(x)) + [(1 − x) − Ψ(1 − x)] f(T₄(x)) (7)

for all x ∈ I, where f ∈ B is an unknown function. In addition, T₁, T₂, T₃, T₄ : I → I are BCMs with contractive coefficients λ₁, λ₂, λ₃, λ₄, respectively, satisfying the boundary conditions (8).
The primary goal of this section is to use fixed-point techniques to establish the existence and uniqueness of solutions of (7). We begin with the result stated below.
Theorem 2. Consider the probabilistic functional Equation (7) with (8). Suppose that λ₁, λ₂, λ₃, λ₄ ∈ [0, 1) are such that κ < 1, where κ is the constant defined in (9). Assume that there is a nonempty subset Y of B such that (Y, ‖·‖) is a Banach space (BS), where ‖·‖ is given in (6), and the mapping W from Y to Y defined for each f ∈ Y by

(Wf)(x) = Ψ(x) f(T₁(x)) + [x − Ψ(x)] f(T₂(x)) + Ψ(1 − x) f(T₃(x)) + [(1 − x) − Ψ(1 − x)] f(T₄(x)) (10)

for all x ∈ I is a self-mapping. Then W is a BCM with the metric d induced by ‖·‖.

Proof. Let d be the metric induced by ‖·‖ on Y. Then (Y, d) is a complete metric space. We deal with the operator W from Y to Y defined in (10). In addition, Wf is continuous and ‖Wf‖ < ∞ for all f ∈ Y; therefore, W is a self-operator on Y. Furthermore, it is clear that a solution of (7) is equivalent to a fixed point of W. Since W is a linear mapping, for f₁, f₂ ∈ Y we obtain Wf₁ − Wf₂ = Wg, where g = f₁ − f₂. Thus, to evaluate ‖Wf₁ − Wf₂‖, it suffices to estimate ‖Wg‖. To this end, let x, y ∈ I with x ≠ y. Expanding (10) and regrouping, we bound |(Wg)(x) − (Wg)(y)| by terms of two kinds: terms in which a coefficient multiplies a difference g(Tᵢ(x)) − g(Tᵢ(y)), and terms in which a coefficient difference multiplies a value of g. Our aim is to use the definition of the norm (6) here. Therefore, by utilizing (8) together with the non-expansiveness of Ψ, the coefficient-difference terms are bounded by multiples of ‖g‖ |x − y|. As T₁, T₂, T₃, T₄ are contraction mappings with the contractive coefficients λ₁, λ₂, λ₃, λ₄, respectively, the remaining terms are bounded by λᵢ-multiples of ‖g‖ |x − y|. Hence,

‖Wf₁ − Wf₂‖ ≤ κ ‖f₁ − f₂‖,

where κ is defined in (9). This gives d(Wf₁, Wf₂) ≤ κ d(f₁, f₂). As κ < 1, this implies that W is a BCM with the metric d induced by ‖·‖. □
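The contraction estimate can be probed numerically. The sketch below reuses the reconstructed operator (10) with hypothetical mappings, deliberately chosen with small contractive coefficients so that the contraction property is easy to observe, and compares the norm (6) of Wf₁ − Wf₂ with that of f₁ − f₂:

```python
import itertools, math

def lip_norm(f, n=100):
    """Grid estimate of the norm (6) on [0, 1]."""
    xs = [i / n for i in range(n + 1)]
    return max(abs(f(x) - f(y)) / (y - x)
               for x, y in itertools.combinations(xs, 2))

# Hypothetical mappings: psi non-expansive with psi(0) = 0, and
# T1, ..., T4 linear contractions with small coefficients.
psi = lambda t: t / 2
Ts = [lambda x: 0.05 * x, lambda x: 0.10 * x,
      lambda x: 0.08 * x, lambda x: 0.12 * x]

def W(f):
    """The operator from (10) under the mappings above."""
    return lambda x: (psi(x) * f(Ts[0](x)) + (x - psi(x)) * f(Ts[1](x))
                      + psi(1 - x) * f(Ts[2](x))
                      + ((1 - x) - psi(1 - x)) * f(Ts[3](x)))

f1, f2 = (lambda x: x), (lambda x: math.sin(x))  # both vanish at 0
ratio = (lip_norm(lambda x: W(f1)(x) - W(f2)(x))
         / lip_norm(lambda x: f1(x) - f2(x)))
print(ratio)  # well below 1 for these coefficients
```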
Theorem 3. Consider the probabilistic Equation (7) with (8). Suppose that λ₁, λ₂, λ₃, λ₄ ∈ [0, 1) and κ < 1, where κ is defined in (9). Assume that there is a nonempty subset Y of B such that (Y, ‖·‖) is a BS, where ‖·‖ is given in (6), and the mapping W from Y to Y defined for each f ∈ Y by (10) is a self-mapping. Then the probabilistic Equation (7) with (8) has a unique solution in Y. Furthermore, the iteration {fₙ} in Y defined by

fₙ = W fₙ₋₁ (11)

for all n ∈ ℕ, where f₀ ∈ Y, converges to the unique solution of (7).

Proof. From Theorem 2, it is clear that W, defined for each f ∈ Y by (10), is a BCM with the metric d induced by ‖·‖. Thus, by utilizing the Banach fixed-point theorem (Theorem 1), we obtain the conclusion of this theorem. □
A similar estimation approach has been applied to a group control system (for details, see [14]).
We now consider a special case. If T₁, T₂, T₃, T₄ are given contraction mappings with contractive coefficients λ₁, λ₂, λ₃, λ₄, respectively, then Theorems 2 and 3 yield the following results.

Corollary 1. Consider the probabilistic Equation (7) associated with (8). Assume that κ < 1, where κ is defined in (9), and that there is a nonempty subset Y of B such that (Y, ‖·‖) is a BS, where ‖·‖ is given in (6), and the mapping W from Y to Y defined for each f ∈ Y by (12) for all x ∈ I is a self-mapping. Then W is a BCM with the metric d induced by ‖·‖.

Corollary 2. Consider the probabilistic Equation (7) associated with (8). Assume that κ < 1 and that there is a nonempty subset Y of B such that (Y, ‖·‖) is a BS, where ‖·‖ is given in (6), and the mapping W from Y to Y defined for each f ∈ Y by (12) is a self-mapping. Then the probabilistic Equation (7) with (8) has a unique solution in Y. Furthermore, the iteration {fₙ} in Y given by (13) for all n ∈ ℕ, where f₀ ∈ Y, converges to the unique solution of (7).

The conditions imposed in Theorems 2 and 3 are sufficient, but not necessary, for the main results to hold. In the following results, we use different conditions to prove the main conclusions.
Theorem 4. Consider the probabilistic Equation (7) with (8). Assume that there exist constants for which condition (14) holds, and that μ < 1, where μ is the constant defined in (15) in terms of the contractive coefficients λ₁, λ₂, λ₃, λ₄. Suppose that there is a nonempty subset Y of B such that (Y, ‖·‖) is a BS, where ‖·‖ is given in (6), and the mapping W from Y to Y defined for each f ∈ Y by (16) for all x ∈ I is a self-mapping. Then W is a BCM with the metric d induced by ‖·‖.

Proof. Let d be the metric induced by ‖·‖ on Y. Then (Y, d) is a complete metric space. We deal with the operator W from Y to Y defined in (16). In addition, Wf is continuous and ‖Wf‖ < ∞ for all f ∈ Y; therefore, W is a self-operator on Y. Furthermore, it is clear that a solution of (7) is equivalent to a fixed point of W. Since W is a linear mapping, for f₁, f₂ ∈ Y we obtain Wf₁ − Wf₂ = Wg, where g = f₁ − f₂. Thus, to evaluate ‖Wf₁ − Wf₂‖, it suffices to estimate ‖Wg‖. To this end, let x, y ∈ I with x ≠ y. Here, we use the norm (6) together with condition (14). As T₁, T₂, T₃, T₄ are contraction mappings with the contractive coefficients λ₁, λ₂, λ₃, λ₄, respectively, we obtain

‖Wf₁ − Wf₂‖ ≤ μ ‖f₁ − f₂‖,

where μ is defined in (15). This gives d(Wf₁, Wf₂) ≤ μ d(f₁, f₂). As μ < 1, this implies that W is a BCM with the metric d induced by ‖·‖. □
Theorem 5. Consider the probabilistic functional Equation (7) with (8). Suppose that (14) holds and μ < 1, where μ is defined in (15). Assume that there is a nonempty subset Y of B such that (Y, ‖·‖) is a BS, where ‖·‖ is given in (6), and the mapping W from Y to Y defined for each f ∈ Y by (16) is a self-mapping. Then the probabilistic Equation (7) with (8) has a unique solution in Y. Furthermore, the iteration {fₙ} in Y defined by (17) for all n ∈ ℕ, where f₀ ∈ Y, converges to the unique solution of (7).

Proof. From Theorem 4, it is clear that W, defined for each f ∈ Y by (16), is a BCM with the metric d induced by ‖·‖. Thus, by utilizing the Banach fixed-point theorem (Theorem 1), we obtain the conclusion of this theorem. □
We now consider a special case. If T₁, T₂, T₃, T₄ are given contraction mappings with contractive coefficients λ₁, λ₂, λ₃, λ₄, respectively, then Theorems 4 and 5 yield the following results.

Corollary 3. Consider the probabilistic functional Equation (7) associated with (8). Assume that (14) holds and the contraction constant defined in (18) is less than 1. Suppose that there is a nonempty subset Y of B such that (Y, ‖·‖) is a BS, where ‖·‖ is given in (6), and the mapping W from Y to Y defined for each f ∈ Y by (19) for all x ∈ I is a self-mapping. Then W is a BCM with the metric d induced by ‖·‖.

Corollary 4. Consider the probabilistic Equation (7) associated with (8). Assume that (14) holds and the contraction constant defined in (18) is less than 1. Suppose that there is a nonempty subset Y of B such that (Y, ‖·‖) is a BS, where ‖·‖ is given in (6), and the mapping W from Y to Y defined for each f ∈ Y by (19) is a self-mapping. Then the functional Equation (7) with (8) has a unique solution in Y. Furthermore, the iteration {fₙ} in Y defined by (20) for all n ∈ ℕ, where f₀ ∈ Y, converges to the unique solution of (7).

Remark 1. Our proposed probabilistic Equation (7) is a generalization of the functional equations discussed in [6,8].
We now offer the following examples to show the significance of our results.
Example 1. Consider the probabilistic functional Equation (21) for all x ∈ [0, 1], obtained from (7) by particular choices of the mappings T₁, T₂, T₃, T₄ and Ψ. It is easy to see that these mappings satisfy the boundary conditions (8). Moreover, T₁, T₂, T₃, T₄ are contraction mappings with contractive coefficients λ₁, λ₂, λ₃, λ₄, respectively, and Ψ is a non-expansive mapping with Ψ(0) = 0. If the constant κ defined in (9) is less than 1 and there is a nonempty subset Y of B such that (Y, ‖·‖) is a BS and the mapping W associated with (21) is a self-mapping on Y for all x ∈ [0, 1], then all constraints of Theorem 2 are fulfilled, and therefore we obtain the existence of a solution to the functional Equation (21). If we take the identity function as the initial approximation f₀, then by Theorem 3 the iteration (11) converges to the unique solution of (21).

Example 2. Consider the probabilistic functional Equation (22) for all x ∈ [0, 1], likewise obtained from (7) by particular choices of the mappings T₁, T₂, T₃, T₄ and Ψ. It is easy to see that these mappings satisfy the boundary conditions (8). Moreover, T₁, T₂, T₃, T₄ are contraction mappings with contractive coefficients λ₁, λ₂, λ₃, λ₄, respectively, and Ψ is a non-expansive mapping with Ψ(0) = 0. Furthermore, condition (14) holds, and there is a nonempty subset Y of B such that (Y, ‖·‖) is a BS and the mapping W associated with (22) is a self-mapping on Y. Then all hypotheses of Theorem 4 are fulfilled, and therefore we obtain the existence of a solution to the functional Equation (22). If we take f₀ ∈ Y as an initial approximation, then by Theorem 5 the iteration (17) converges to the unique solution of (22).

4. Conclusions
The predator–prey analogy is among the most appealing paradigms for two-choice scenarios arising in mathematical biology. In such models, a predator has two possible prey choices, and a resolution occurs when the predator becomes attached to a particular type of prey. In this paper, we proposed a general functional equation that covers numerous learning-theory models in the existing literature. We also discussed the existence, uniqueness, and stability results for the suggested functional equation. The functional equations that appeared in [3,4,8] focused on just two events, while our proposed functional Equation (4) covers all four possible events discussed by Bush and Wilson in [1]. In addition, in [3,4,12], the authors used the boundary conditions T₁(1) = 1 and T₂(0) = 0 to prove their main results, whereas in Theorem 4 we did not employ such assumptions. Therefore, our method is novel and can be applied to many mathematical models arising in mathematical psychology and learning theory.
To conclude, we propose the following open problem for interested readers.
Question: Can the conclusions of Theorems 2 and 3 be proved by another method?