Article

Algebraic Recognition Approach in IoT Ecosystem

by Anvar Kabulov 1,2, Islambek Saymanov 1,2,3,*, Akbarjon Babadjanov 4 and Alimdzhan Babadzhanov 5

1 School of Mathematics and Natural Sciences, New Uzbekistan University, Mustaqillik Ave. 54, Tashkent 100007, Uzbekistan
2 Applied Mathematics and Intelligent Technologies Faculty, National University of Uzbekistan, Tashkent 100174, Uzbekistan
3 College of Engineering, Central Asian University, Milliy Bog’ Street 264, Tashkent 111221, Uzbekistan
4 Department of Information Systems, University of Maryland, Baltimore County, 1000 Hilltop Circle, Baltimore, MD 21250, USA
5 Department of Algorithmization, Engineering Federation of Uzbekistan, Tashkent 100003, Uzbekistan
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(7), 1086; https://doi.org/10.3390/math12071086
Submission received: 13 February 2024 / Revised: 14 March 2024 / Accepted: 26 March 2024 / Published: 4 April 2024
(This article belongs to the Section Computational and Applied Mathematics)

Abstract:
The solution to the problem of identifying objects in the IoT ecosystem of the Aral region is analyzed. The problem of constructing a correct algorithm with linear closure operators of a model for calculating estimates for identifying objects in the IoT ecosystem of the Aral region is considered. An algorithm operator is developed that is correct for the problem Z, is the sum of q operators of the estimate calculation model, and is described by a set of 3 · n · m · q numerical parameters, where n is the number of specified features, m is the number of reference objects, and q is the number of recognized objects. Within the framework of the algebraic approach, several variants of linear combinations of recognition operators are constructed whose use gives the correct answer on the control material, and this is proven in the form of theorems. The constructed correct recognition algorithms are the easiest to use, since they involve no optimization procedure, and make it possible to quickly identify incoming information flows in the IoT ecosystem of the Aral region.

1. Introduction

In recent years, solutions to applied problems of classification, recognition, and prediction have advanced considerably. In many real cases, the solution scheme remains the same: the set of possible solutions is divided into subsets in such a way that solutions that are close in some metric fall into one subset [1,2]. Subsequently, solutions that fall into the same subset are not distinguished, and all objects corresponding to these solutions are included in one class [3]. The information gained from past experience is presented in the following form: various objects are described in some way, and their descriptions are divided into a finite number of non-overlapping classes [4]. When a new object appears, a decision is made to assign it to one class or another [5]. It is proposed to choose a generalized algorithm such that it achieves extreme forecasting quality [6]. Let us consider algorithmic models for solving classification problems. Among these models, we can identify the ones most frequently encountered when solving applied problems.
The work [7] discusses practical precedent-based recognition algorithms based on logical or algebraic correction of various heuristic recognition algorithms. The recognition problem is solved in two stages. First, algorithms of a certain group are applied independently to recognize an arbitrary object, then an appropriate corrector is applied to calculate the final collective solution. The general concepts of the algebraic approach, descriptions of practical algorithms for logical and algebraic correction, and the results of their practical comparison are given.
In [8], the problem of classification (recognition) based on precedents was studied. The issues of increasing the recognition ability and learning speed of logical correctors—recognition procedures based on constructing correct sets of elementary classifiers—have been studied. The concept of a correct set of elementary classifiers of a general form is introduced, and on this basis, a qualitatively new model of a logical corrector is constructed and studied. This model uses a wider class of correcting functions than previously constructed models of logical correctors.
In ref. [9], the supervised classification problem with a large number of classes was studied and the ECOC (error-correcting output codes) scheme was optimized. First, an initial binary matrix was randomly formed: the number of rows was equal to the number of classes, and each of the columns corresponded to the union of several classes into two macroclasses. In the ECOC approach, the binary classification problem was solved for the recognized object and each such union. The object belonged to the class whose code string was closest. A generalization of the ECOC approach was given, offering a solution to the discrete optimization problem when searching for optimal unions, the use of the probabilities of correct classification in dichotomous problems, and the degree of information content of dichotomies. If the algorithms for solving dichotomous problems are correct, then the algorithm for recognizing the original problem is also correct.
In ref. [10], the complexity of the logical analysis of integer data was studied. For special problems of searching for frequent and infrequent elements in data, on the solution of which the training of logical classification procedures is based, the asymptotics of the typical number of solutions were given. The technical basis for obtaining these estimates was the methods for obtaining similar estimates for the intractable discrete problem of constructing (enumerating) dead-end coverings of an integer matrix, formulated in the work as the problem of finding “minimal” infrequent elements. The new results mainly concern the study of metric (quantitative) properties of frequently occurring elements.
The purpose of the research in this article is to develop effective object recognition methods to solve the identification problem in the IoT ecosystem of the Aral region. The objective of the research is to develop correct recognition algorithms based on an algebraic approach in solving the problem of identifying [11,12,13,14,15] the flow of information in the IoT ecosystem of the Aral region, which is described in detail in work [15] of the authors of this paper.
The IoT ecosystem of the Aral region covers a network of sensors for determining groundwater levels and salinity of water and soil and transmitting information via communication channels to the system server for identification and further processing. This article solves the applied problem of identification in the ecosystem of the Aral region, where formulas for recognizing correct algorithms are specifically given. Moreover, the correctness is proven in the theorems of this work in terms of the algebraic approach.
The public benefit of the results of this study is that the results are used in the ecosystem of the Aral region in monitoring the use of water resources and calculating salinity levels for agricultural use in an interactive mode.
The problems of recognizing disjoint classes are considered in refs. [8,16,17,18]. The methodology used to develop and test the algebraic recognition approach was proposed and described in the works of academician Yu.I. Zhuravlev and his students. In this work, based on the idea of the algebraic approach of academician Yu.I. Zhuravlev, a much simpler and significantly more efficient computational nearest neighbor algorithm and an average distance algorithm are proposed for solving these problems using algebraic methods. Methods have been developed that make it possible to encode the recognition operator more economically, which reduces the required memory, and to use the constructed correct algorithms more efficiently for solving applied problems. The algebraic approach shows that each algorithm A can be represented as A = B × C, the composition of two operators: a recognition operator B and a decision rule C. Within the framework of the algebraic approach, several variants of linear combinations of recognition operators are constructed whose use gives the correct answer on the control material, and this is proven in the form of theorems.
Thus, taking into account the analysis of the works of scientists in the field of recognition related to the topic of this article, this work investigates the problem of constructing a correct algorithm with linear closure operators for a model for calculating estimates. An algorithm operator was developed that is correct for the problem Z, represents the sum of q operators from the model for calculating estimates, and is described by a set of 3 · n · m · q numerical parameters (where n is the number of predetermined features, m is the number of reference objects, and q is the number of recognized objects). An operator belonging to the linear closure of a model of the estimate calculation type was constructed [19,20,21,22]. The completeness of the linear closure of this model was proven for all problems in which for each class there is at least one marked pair $(u, v)$, and this correct algorithm is written out explicitly.

2. Models Based on the Calculation of Estimates

In the work [23], so-called parametric recognition algorithms were considered: collections of algorithms in which each algorithm is encoded in a one-to-one way by a set of numerical parameters. In these models, the proximity between parts of previously classified objects and the object to be classified was analyzed [24]. Based on a set of estimates, a general estimate for the object was developed and, according to the introduced decision rule, the membership of the recognized object in one or another class was determined [25,26].
In this article, as the initial model (A), a model related to the model for calculating estimates was considered, supplemented by some recognition algorithms of a simple type: the nearest neighbor algorithm, the average distance algorithm, etc.
A feature of the algorithms of this class is that for calculating estimates that determine the belonging of a recognized object, there are simple analytical formulas that replace complex enumeration procedures that arise when calculating proximity estimates using a system of support sets [27].
In these models, the division of the algorithm into recognition operators and decision rules was carried out in a natural way [28,29,30,31,32].
We will consider only algorithms represented in the form A = B · C, where B is an arbitrary recognition operator. It turns out that the essential part of the algorithm is the operator B; the decision rule C can be made standard for all algorithms and programs. Any recognizing voting operator maps a task Z to a numeric matrix of votes or scores $B(Z) = \|G_{ij}\|_{q \times l}$, $G_{ij} = G_j(S_i)$; moreover, the value $G_{ij}$ has a clear, meaningful interpretation: it can be considered as the degree of membership, expressed by a number, of the examined object $S_i$ in the class $K_j$. After introducing appropriate normalizations, the value $G_{ij}$ can also be considered as the value of the membership function of the element $S_i$ in the set $K_j$.
Before introducing the submodel used in what follows, let us write how the algebra of recognizing operators is constructed [33,34]. Let $Z = (J_0, \tilde{S}^q)$ be a fixed recognition problem with classes $K_1, \ldots, K_l$, let $B_1$ and $B_2$ be recognizing operators, and let c be a real number:
$B_i(Z) = \|G_{uv}^{(i)}\|_{q \times l}, \quad i = 1, 2.$
Then, the sum and product of the operators $B_1$ and $B_2$, as well as the multiplication of an operator by a real number, are defined as follows:
$(B_1 + B_2)(Z) = \|G_{uv}^{(1)} + G_{uv}^{(2)}\|_{q \times l}$
$(B_1 \times B_2)(Z) = \|G_{uv}^{(1)} \times G_{uv}^{(2)}\|_{q \times l}$
$(c B_i)(Z) = \|c \cdot G_{uv}^{(i)}\|_{q \times l}, \quad i = 1, 2.$
Obviously, all these operations are commutative and associative; moreover, multiplication by a number is distributive with respect to addition. Due to these properties, if $\mathcal{B}$ is the original set of operators and $a(\mathcal{B})$ is the closure of the family $\mathcal{B}$ with respect to the introduced operations (the algebraic closure), then the elements of $a(\mathcal{B})$ can be represented as operator polynomials
$\sum b_{i_1 \ldots i_k} B_{i_1} \cdot B_{i_2} \cdots B_{i_k}.$
Here, the constants $b_{i_1 \ldots i_k}$ and the original operators $B_{i_k}$ play the roles of the coefficients and variables of ordinary polynomials.
Note that if we have an algorithm for applying each of the original operators $B_{i_k}$ to any problem Z, then it is easy to construct an algorithm for applying an operator polynomial to Z.
For any operator polynomial, the algorithm to be applied to the problem is constructed in a similar way: the original operators are applied to the problem, then the resulting matrices are multiplied by the corresponding scalars and finally added, as in the example just considered. The evaluation matrix obtained by applying an algorithm from the closure $a(\mathcal{B})$ does not allow as simple an interpretation as the matrix of the number of votes in voting algorithms [16,17,18]. The new recognizing operators are a formal extension of the original space of meaningful operators. Such formal extensions are often used in mathematics; thus, the field of complex numbers is a formal extension of the field of real numbers, and the process of formal Galois extensions is well known. In a Galois extension, an algebraic equation of any degree is solved elementarily. However, for a long time, no physical interpretation of the Galois extension was found [35]. Only in recent years has it been established that the elements of Galois extensions are naturally interpreted in connection with problems of error correction in information transmission.
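To make the operator algebra concrete, the following minimal Python sketch (not from the paper; the matrix values are invented for illustration) combines the score matrices of two hypothetical base operators exactly as the definitions above prescribe: addition and multiplication act elementwise on the $q \times l$ matrices, and scalar multiplication scales every entry.

```python
import numpy as np

# Hypothetical score matrices produced by two base recognition operators
# B1 and B2 on a problem with q = 3 objects and l = 2 classes.
G1 = np.array([[0.9, 0.1],
               [0.2, 0.8],
               [0.5, 0.5]])
G2 = np.array([[0.7, 0.3],
               [0.4, 0.6],
               [0.6, 0.4]])

def op_sum(Ga, Gb):
    """(B1 + B2)(Z): elementwise sum of the score matrices."""
    return Ga + Gb

def op_prod(Ga, Gb):
    """(B1 x B2)(Z): elementwise product of the score matrices."""
    return Ga * Gb

def op_scale(c, Ga):
    """(c * B)(Z): multiplication of an operator by a real number."""
    return c * Ga

# A degree-2 operator polynomial, e.g. 2*B1 + 0.5*(B1 x B2),
# evaluated on the problem Z via the matrices above:
G_poly = op_sum(op_scale(2.0, G1), op_scale(0.5, op_prod(G1, G2)))
```

Applying an operator polynomial thus reduces to a few elementwise matrix operations, which is what makes the algebraic closure computationally cheap.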
There is currently no meaningful interpretation for the algebraic extension of the space of operators. However, with the help of these extensions, difficult extremal problems are relatively easily solved, including the problem of synthesizing an error-free algorithm for a given recognition problem [36].
The degree of an operator polynomial is introduced similarly to the degree of an ordinary polynomial in many variables: among the terms of
$\sum b_{i_1 \ldots i_k} B_{i_1} \cdot B_{i_2} \cdots B_{i_k}$
the term with the largest k is chosen, and the degree of the polynomial is set equal to this k. Based on the definition of the degree of polynomials, it is easy to distinguish within the extension $a(\mathcal{B})$ a system of nested extensions
$a^0(\mathcal{B}) = \mathcal{B}, \quad a^1(\mathcal{B}) = \{\textstyle\sum b_i B_i\}, \quad \ldots, \quad a^k(\mathcal{B}) = \text{the totality of all polynomials of degree not higher than } k;$
obviously, $a^0(\mathcal{B}) \subseteq a^1(\mathcal{B}) \subseteq \cdots \subseteq a^k(\mathcal{B}) \subseteq \cdots \subseteq a(\mathcal{B})$. The set $a^k(\mathcal{B})$ is called the k-th power extension of the original space of operators $\mathcal{B}$. Of particular importance is the set $a^1(\mathcal{B})$; it consists of all possible linear forms of the original operators, represented as
$\sum b_i B_i, \quad B_i \in \mathcal{B}.$
When forming the elements of $a^1(\mathcal{B})$, the product of operators is not used [37,38]. This extension therefore extends the original space using only the operations "+" and "multiplication by a number"; accordingly, the set $a^1(\mathcal{B})$ is also commonly denoted $L(\mathcal{B})$ and is called the linear extension of the set $\mathcal{B}$. Now let a set of algorithms $\mathcal{A}$ be given, let each $A \in \mathcal{A}$ be represented as $A = B \times C$ with B from the initial model of operators $\mathcal{B}$, and fix a threshold decision rule $C(d_1, d_2)$. We introduce the families of algorithms $L(\mathcal{B}) \cdot C(d_1, d_2), \ldots, a^k(\mathcal{B}) \cdot C(d_1, d_2), \ldots, a(\mathcal{B}) \cdot C(d_1, d_2)$, called, respectively, the linear extension $L(\mathcal{A})$, the algebraic extension of the k-th degree $a^k(\mathcal{A})$, and the algebraic extension $a(\mathcal{A})$ of the family of algorithms $\mathcal{A}$.
We see that the constructed sets of algorithms consist of the corresponding operators followed by the fixed threshold decision rule $C(d_1, d_2)$ applied to the result of the operator's action.
Let some set of recognition problems $\mathcal{Z}$ be given and let the initial set of recognizing operators $\mathcal{B}$ be chosen in some way. Suppose also that within the framework of $\mathcal{B}$, not for every problem $Z \in \mathcal{Z}$ does there exist an operator $B(Z)$ and an algorithm $B(Z) \times C(Z)$ that gives an error-free solution to the problem Z. Then, the scheme for constructing a correct recognition algorithm consists of the following stages:
Stage 1. Some extension $a^k(\mathcal{A})$ is chosen in which the existence of an error-free solution is guaranteed for each problem Z from $\mathcal{Z}$. In this case, it is natural to find, if possible, the smallest degree of extension k [39]. The implementation of the first stage is usually called in the literature the study of the completeness of the extension. The theorems proved in this case are usually analogous to existence theorems.
Stage 2. In the chosen extension $a^k(\mathcal{A})$, for an admissible problem Z, either an error-free (correct) algorithm is constructed or, if the former involves large computational difficulties, an algorithm whose accuracy on Z is sufficiently acceptable. The exact calculation of the minimum degree k is a laborious task, which at present can be solved only for relatively narrow classes of problems $\mathcal{Z}$ and original families of algorithms $\mathcal{A}$. Therefore, in the studies carried out, an upper estimate for the minimum degree of extension is constructed that guarantees the completeness of the extension. For the voting model, such an estimate is
$K = \dfrac{\ln q + \ln l + \ln(d_1 + d_2) + \ln d_2 - \ln d_1}{\ln\frac{1}{1 - \frac{1}{q}}}$
built in [16].
Here, q is the number of recognizable objects, l is the number of classes, and $d_1, d_2$ are the parameters of the threshold decision rule.
For most real problems, this estimate is overestimated, and therefore, the algorithm built in the second stage has computational redundancy. Of particular importance is the fact that the above estimate is constructed for problems with intersecting classes, while the majority of real problems are problems with non-intersecting classes. Later, we will see that for a wide class of problems it is sufficient to consider the degree of 1, that is, to use only a linear closure.
Algorithms of the class for calculating estimates allow solving recognition problems of all types: assigning an object to one of the given classes, automatic classification, choosing a feature system to describe recognition objects, and evaluating their effectiveness.

3. Algebraic Methods for Solving Recognition Problems with Non-Crossing Classes

In [12], algebraic methods for solving recognition problems with a finite number of intersecting classes were developed. For each recognition problem Z, in terms of the algebra over families of heuristic algorithms, a correct algorithm was constructed, i.e., an algorithm that correctly classifies a given finite sample of objects for each of the classes. Due to the fact that the problem with intersecting classes was considered, the algorithm constructed in the above works is rather cumbersome [40,41,42,43,44,45]. The description of the algorithm itself requires a large amount of memory (the amount of memory grows proportionally to $q^2 \cdot l^2$), where q is the number of recognition objects in a given task and l is the number of classes. It turns out that if we consider only problems with non-intersecting classes, then similar algebraic methods can be used to obtain a much simpler description and a much more effective computational algorithm.
Basic concepts and notation. Let a set of admissible objects S be given, and let it be known that S can be represented as the union of a finite number of disjoint subsets $K_j$, $j = \overline{1, l}$, called classes:
$S = K_1 \cup \cdots \cup K_l, \quad K_u \cap K_v = \varnothing, \quad u \neq v.$
Objects S are descriptions of some real objects by means of the values of a finite number of predefined features $1, 2, \ldots, n$. Each feature i can be associated with its set of values $M_i$, which we will assume to be a metric space with distance $\rho_i(x, y)$. This paper considers the following sets $M_1, \ldots, M_n$ and metrics $\rho_1, \ldots, \rho_n$.
1. $M_i$ is a finite or infinite interval, a half-interval, or a finite segment of the numerical axis. Then, $\rho_i(x, y) = |x - y|$. In this case, the feature i is called numerical.
2. $M_i = \{0, 1\}$; then $\rho_i(x, y) = |x - y|$, and the feature i is called binary.
3. $M_i = \{0, 1, \ldots, k_i - 1\}$ is a finite set of integers; then the metric is defined by the following Table 1.
There are zeros on the diagonal of the table; in addition, the table is symmetric with respect to the main diagonal.
Not every table of the specified type defines a metric. A necessary and sufficient condition for the latter is the fulfillment of the triangle inequality $\rho(u, v) \leq \rho(u, t) + \rho(t, v)$ for any u, v, t such that $0 \leq u, v, t \leq k_i - 1$.
These inequalities ensure that the triangle axiom holds. Features of this kind are called graded or scalable.
4. $M_i = \{a_{i1}, \ldots, a_{ik_i}\}$ is a finite non-numeric set. Then
$\rho_i(x, y) = \begin{cases} 1, & \text{if } x \neq y \\ 0, & \text{if } x = y. \end{cases}$
The feature in this case is called named.
In the future, we will consider only numerical, binary, graded, and named features.
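The four feature-type metrics above can be sketched in Python as follows; this is an illustrative fragment, not part of the original model, and the sample grade table is invented (it is symmetric, has a zero diagonal, and satisfies the triangle inequality).

```python
def rho_numerical(x, y):
    """Numerical feature: rho_i(x, y) = |x - y|."""
    return abs(x - y)

def rho_binary(x, y):
    """Binary feature over {0, 1}: rho_i(x, y) = |x - y|."""
    return abs(x - y)

# Invented 3-grade table: zero diagonal, symmetric, triangle inequality holds.
GRADE_TABLE = [[0, 1, 2],
               [1, 0, 1],
               [2, 1, 0]]

def rho_graded(x, y, table=GRADE_TABLE):
    """Graded feature: distance read from a symmetric table with zero diagonal."""
    return table[x][y]

def rho_named(x, y):
    """Named feature: 1 if x != y, 0 if x == y."""
    return 0 if x == y else 1
```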
An object S is thus a collection of feature values $a_1(S), \ldots, a_n(S)$; special cases will be noted separately.
Consider the predicates $P_j(S) = (S \in K_j)$, $j = \overline{1, l}$. It is easy to notice that for each S, exactly one of these predicates is equal to 1 and the rest are equal to 0.
The task of recognition Z is to use some information $J_0$ about the sets $S, K_1, \ldots, K_l$ to compute the values of the predicates $P_j(S)$ for each of a finite number of objects $S_1, \ldots, S_q \in S$. In view of the previous remark, this is the same as specifying, for each $S_i \in \tilde{S}^q = \{S_1, \ldots, S_q\}$, the number t of the predicate such that $P_t(S_i) = 1$. The latter distinguishes the recognition problem with non-overlapping classes from the general recognition problem.
For a complete formalization of the description of task Z, it is necessary to determine the initial information J 0 .
In this paper, we restrict ourselves to only one type: $J_0$ consists of an enumeration of reference objects, for each of which the number of the class containing this object is indicated.
For non-overlapping classes, the information $J_0$ can also be given in the form of a learning table $T_{nml}$, where n is the number of features, m the number of objects, and l the number of classes:
$J_0 = T_{nml} = \left(\begin{array}{cccc|c} a_{11} & a_{12} & \cdots & a_{1n} & \\ \vdots & & & \vdots & K_1 \\ a_{m_1 1} & a_{m_1 2} & \cdots & a_{m_1 n} & \\ \hline a_{m_{j-1}+1,1} & a_{m_{j-1}+1,2} & \cdots & a_{m_{j-1}+1,n} & \\ \vdots & & & \vdots & K_j \\ a_{m_j 1} & a_{m_j 2} & \cdots & a_{m_j n} & \\ \hline a_{m_{l-1}+1,1} & a_{m_{l-1}+1,2} & \cdots & a_{m_{l-1}+1,n} & \\ \vdots & & & \vdots & K_l \\ a_{m1} & a_{m2} & \cdots & a_{mn} & \end{array}\right)$
We will use such standard information presented in the form of a table. We will always assume that
$S_1, \ldots, S_{m_1}$ belong to $K_1$,
$S_{m_{j-1}+1}, \ldots, S_{m_j}$ belong to $K_j$,
$S_{m_{l-1}+1}, \ldots, S_m$ belong to $K_l$.
We introduce notation that is important for what follows:
$\tilde{K}_j = \{S_{m_{j-1}+1}, \ldots, S_{m_j}\}, \quad C\tilde{K}_j = \{S_1, \ldots, S_m\} \setminus \tilde{K}_j, \quad j = 1, 2, \ldots, l, \quad m_0 = 0.$
The set K ˜ j consists of all reference objects belonging to the class K j .
In recognition problems with intersecting classes, algorithms A were considered such that $A(Z) = A(J_0, \tilde{S}^q) = \|\alpha_{ij}\|$, where $\alpha_{ij} = P_j(S_i)$.
These algorithms calculate, for each object, the information vector
$\tilde{\alpha}(S_i) = (\alpha_{i1}, \alpha_{i2}, \ldots, \alpha_{il}) = (P_1(S_i), P_2(S_i), \ldots, P_l(S_i))$
and are called correct for the problem Z.
For problems with intersecting classes, arbitrary binary vectors can appear as information vectors.
For problems with non-intersecting classes, only those vectors containing exactly one unit coordinate, with the remaining coordinates equal to zero, are informational. The last remark will be essential in what follows.
Let A be an arbitrary algorithm that translates a recognition problem Z with l classes, $Z = (J_0, \tilde{S}^q)$, into the matrix of answers $\|\beta_{ij}\|_{q \times l}$: $A(J_0, \tilde{S}^q) = \|\beta_{ij}\|_{q \times l}$, $\beta_{ij} \in \{0, 1, \Delta\}$.
The equalities $\beta_{ij} = 1$, $\beta_{ij} = 0$, and $\beta_{ij} = \Delta$ mean, respectively, that the algorithm A has computed for the object $S_i$ that $S_i \in K_j$, that $S_i \notin K_j$, or that it refuses to determine whether $S_i$ belongs to the class $K_j$. If $\beta_{ij} \in \{0, 1\}$, this still does not mean that $\beta_{ij} = P_j(S_i)$; that is, the algorithm can also make errors in addition to refusals.
Such algorithms are called incorrect for the problem Z. Obviously, correct algorithms are a special case of incorrect ones.
For arbitrary incorrect algorithms (and hence for correct ones), the following holds.
Theorem 1.
Each algorithm A can be represented as $A = B \times C$ (multiplication meaning sequential execution), and if $A(Z) = \|\beta_{ij}\|_{q \times l}$, $\beta_{ij} \in \{0, 1, \Delta\}$, then $B(Z) = \|a_{ij}\|$ is a numerical matrix and $C(\|a_{ij}\|_{q \times l}) = \|\beta_{ij}\|_{q \times l}$.
Theorem 1 [16] shows that each algorithm A can be divided into two successive stages. In the first stage, the task Z is converted into a numerical matrix of standard size: q rows and l columns, the number of rows being equal to the number of recognized objects and the number of columns to the number of classes.
In the second stage, according to this numerical matrix, answers are finally formed to the questions about the membership of the objects $S_1, \ldots, S_q$ in the classes $K_1, \ldots, K_l$.
The value $a_{ij}$ is naturally interpreted as the measure of membership of the object $S_i$ in the class $K_j$. The operator B is called the recognition operator, and C the decision rule.
In what follows, only threshold decision rules are considered.
$C^*(\|a_{ij}\|_{q \times l}) = \|C^*(a_{ij})\|_{q \times l}$
The rule is applied element by element. Let a be a number and let $d_1, d_2$ be numbers (thresholds) with $0 < d_1 < d_2$; then
$C^*(a) = \begin{cases} 1, & \text{if } a > d_2 \\ 0, & \text{if } a < d_1 \\ \Delta, & \text{if } d_1 \leq a \leq d_2. \end{cases}$
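The threshold decision rule described above can be sketched in Python as follows (an illustration, not the authors' implementation; the refusal mark Δ is represented by a sentinel string, and the thresholds and score matrix are invented).

```python
DELTA = "Δ"  # refusal mark: the algorithm declines to decide

def threshold_rule(a, d1, d2):
    """Threshold decision rule C*: 1 above d2, 0 below d1, refusal between."""
    if a > d2:
        return 1
    if a < d1:
        return 0
    return DELTA

def apply_rule(scores, d1, d2):
    """Apply C* elementwise to a q x l score matrix, giving the answer matrix."""
    return [[threshold_rule(a, d1, d2) for a in row] for row in scores]

# Illustrative 2 x 2 score matrix with thresholds d1 = 0.1, d2 = 0.7:
answers = apply_rule([[0.9, 0.2],
                      [0.5, 0.05]], d1=0.1, d2=0.7)
```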

4. Basic Model of Recognizers (B)

Simple Heuristic Operators

Let, according to the accepted notation, the initial information $J_0$ be presented in the form of a table $T_{nml}$ (see Section 3).
Also let the set of features $\{1, 2, \ldots, n\}$ be divided into the subsets
$\hat{M}_1 = \{1, 2, \ldots, n_1\}, \quad \hat{M}_2 = \{n_1 + 1, \ldots, n_2\}, \quad \hat{M}_3 = \{n_2 + 1, \ldots, n_3\}, \quad \hat{M}_4 = \{n_3 + 1, \ldots, n\}$
of numerical, binary, graded, and named features, respectively.
For each of the features in $\hat{M}_1$, the metric $\rho_i(x, y) = |x - y|$ is introduced.
For each of the features included in $\hat{M}_2$ and $\hat{M}_4$, the metric
$\rho_i(x, y) = \begin{cases} 1, & x \neq y \\ 0, & x = y \end{cases}$
is introduced.
For each of the graded features included in $\hat{M}_3$, either the metric $\rho_i(x, y) = |x - y|$ is specified, or a metric is defined using a table specifying the value $\rho_i(x, y)$ for each pair x, y (see the definition of the metric in Section 3).
Let $S' = (a_1, \ldots, a_n)$ and $S'' = (b_1, \ldots, b_n)$ be admissible objects. Let us put
$\rho_1(S', S'') = \min_{i = \overline{1, n}} \rho_i(a_i, b_i)$
$\rho_2(S', S'') = \frac{1}{n} \sum_{i=1}^{n} \rho_i^2(a_i, b_i)$
$\rho_3^{\alpha\beta}(S', S'') = \alpha \cdot \rho_1(S', S'') + \beta \cdot \rho_2(S', S''), \quad \alpha + \beta = 1, \quad 0 \leq \alpha \leq 1.$
We have introduced two fixed metrics $\rho_1, \rho_2$ and a family of metrics $\rho_3^{\alpha\beta}$, depending on the parameter $\alpha$, in the space of admissible objects. Any of the introduced metrics can be used in the operator described below.
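The object-level metrics just introduced can be sketched in Python as follows (an illustration under the definitions as written; the per-feature metrics are passed in as callables, and the sample objects used below are invented).

```python
def rho1(S1, S2, feature_rhos):
    """rho_1: minimum of the per-feature distances, as defined in the text."""
    return min(r(a, b) for r, a, b in zip(feature_rhos, S1, S2))

def rho2(S1, S2, feature_rhos):
    """rho_2: mean of the squared per-feature distances."""
    n = len(S1)
    return sum(r(a, b) ** 2 for r, a, b in zip(feature_rhos, S1, S2)) / n

def rho3(S1, S2, feature_rhos, alpha):
    """rho_3^{alpha,beta} = alpha*rho_1 + beta*rho_2 with beta = 1 - alpha."""
    beta = 1.0 - alpha
    return alpha * rho1(S1, S2, feature_rhos) + beta * rho2(S1, S2, feature_rhos)

# Three numerical features, so each per-feature metric is |x - y|:
rhos = [lambda x, y: abs(x - y)] * 3
```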
The nearest neighbor operator $B_{\min}^1$ is defined as follows:
(a)
in the chosen metric ρ (one of the three introduced above), the distances $\rho(S, S_t) = \rho_t$, $t = m_{j-1} + 1, \ldots, m_j$, from the recognized object S to the objects included in the set $\tilde{K}_j$ are calculated;
(b)
the value $\min_t \rho_t = p_j(S)$, $j = 1, 2, \ldots, l$, $t = m_{j-1} + 1, \ldots, m_j$, is calculated;
(c)
the values are formed in the following way:
$G_j(S) = \frac{1}{p_j(S) + 1}, \quad j = 1, 2, \ldots, l, \quad B_{\min}^1(S) = (G_1(S), \ldots, G_j(S), \ldots, G_l(S)).$
The greater the value $G_j(S)$, the closer in the chosen metric the recognized object is to the object of $\tilde{K}_j$ nearest to it, that is, to the closest object included in the learning table and belonging to the class $K_j$. Obviously, if the operator $B_{\min}^1$ is consecutively presented with a set of recognizable objects $S_1, \ldots, S_q$, it translates it into the matrix $\|G_j(S_i)\|_{q \times l}$. Thus,
$B_{\min}^1(T_{nml}, \tilde{S}^q) = B_{\min}^1(Z) = \begin{pmatrix} G_1(S_1) & \cdots & G_j(S_1) & \cdots & G_l(S_1) \\ \cdots & \cdots & \cdots & \cdots & \cdots \\ G_1(S_i) & \cdots & G_j(S_i) & \cdots & G_l(S_i) \\ \cdots & \cdots & \cdots & \cdots & \cdots \\ G_1(S_q) & \cdots & G_j(S_q) & \cdots & G_l(S_q) \end{pmatrix}$
The system of operators $B_{\min}^1$ described here differs somewhat from the operators commonly called nearest neighbor operators; however, we have retained the operator's basic scheme.
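The nearest neighbor operator can be sketched in Python as follows (illustrative, not the authors' implementation; the reference list, class labels, and metric are hypothetical inputs).

```python
def b_min(S, references, class_index, rho, l):
    """Nearest neighbor operator B_min^1 for a single recognized object S.

    references: list of reference objects from the learning table;
    class_index[t]: class number (0..l-1) of reference t;
    rho: any object metric, e.g. one of rho_1, rho_2, rho_3.
    Returns the score row (G_1(S), ..., G_l(S)).
    """
    p = [float("inf")] * l
    for S_t, j in zip(references, class_index):
        # p_j(S): distance from S to its nearest reference in class K_j
        p[j] = min(p[j], rho(S, S_t))
    return [1.0 / (p_j + 1.0) for p_j in p]  # G_j(S) = 1 / (p_j(S) + 1)
```

Applying `b_min` to each of $S_1, \ldots, S_q$ row by row yields the $q \times l$ matrix $B_{\min}^1(Z)$.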

5. Average Distance Operator $B_{sr}^1$

Clause
(a)
of the definition of this operator exactly repeats the corresponding clause of the definition of the operator $B_{\min}^1$.
(b)
The value $\frac{1}{m_j - m_{j-1}} \sum_{t = m_{j-1}+1}^{m_j} \rho_t = \rho_j(S)$ is calculated.
(c)
As in the definition of $B_{\min}^1$, the value $G_j(S) = \frac{1}{\rho_j(S) + 1}$, $j = 1, 2, \ldots, l$, is calculated.
As for the operator $B_{\min}^1$, it is easy to see that
$B_{sr}^1(Z) = B_{sr}^1(T_{nml}, \tilde{S}^q) = \begin{pmatrix} G_1(S_1) & \cdots & G_j(S_1) & \cdots & G_l(S_1) \\ \cdots & \cdots & \cdots & \cdots & \cdots \\ G_1(S_i) & \cdots & G_j(S_i) & \cdots & G_l(S_i) \\ \cdots & \cdots & \cdots & \cdots & \cdots \\ G_1(S_q) & \cdots & G_j(S_q) & \cdots & G_l(S_q) \end{pmatrix}$
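Analogously, the average distance operator can be sketched as follows (again an illustration; it differs from the nearest neighbor sketch only in replacing the minimum with the class-wise mean distance).

```python
def b_sr(S, references, class_index, rho, l):
    """Average distance operator B_sr^1: G_j(S) = 1 / (rho_j(S) + 1),
    where rho_j(S) is the mean distance from S to the references of K_j."""
    sums = [0.0] * l
    counts = [0] * l
    for S_t, j in zip(references, class_index):
        sums[j] += rho(S, S_t)
        counts[j] += 1
    return [1.0 / (sums[j] / counts[j] + 1.0) for j in range(l)]
```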

Voting Type Operators

Let the objects $S_{u_1}, \ldots, S_{u_t}$ in the training information belong to the class $K_j$, that is, $\tilde{K}_j = \{S_{u_1}, \ldots, S_{u_t}\}$.
In our notation, $S_{u_i} = (a_{u_i 1}, \ldots, a_{u_i n})$.
As before, we denote by $\rho_1, \ldots, \rho_n$ the metrics in the sets $M_1, \ldots, M_n$ of values of the features $1, 2, \ldots, n$. Let us introduce the parameters $\varepsilon_{rv} \geq 0$, $r = u_1, \ldots, u_t$, $v = 1, 2, \ldots, n$.
These parameters will further define the proximity function for the recognized object and the set $\tilde{K}_j$. The parameters $p_{rv} > 0$, $r = u_1, \ldots, u_t$, $v = 1, 2, \ldots, n$, are also introduced.
The parameter $p_{rv}$ is the weight of the v-th feature in the reference object $S_r$. There are no other parameters. The operator is defined as follows. In the set of pairs $\{(r, v) : r = u_1, \ldots, u_t,\ v = 1, \ldots, n\}$, a subset $\Omega$ is distinguished, which in what follows is called the support set of the operator.
If the recognized object is $S = (a_1, \ldots, a_n)$, then the proximity function $B(\Omega, S, \tilde{K}_j)$ is introduced as follows: if for all $(r, v) \in \Omega$ the inequalities $\rho_v(a_{rv}, a_v) \leq \varepsilon_{rv}$ hold, then $B(\Omega, S, \tilde{K}_j) = 1$.
Otherwise, $B(\Omega, S, \tilde{K}_j) = 0$.
In other words, the proximity function is equal to 0 if there is a pair $(r, v)$ in $\Omega$ such that $\rho_v(a_{rv}, a_v) > \varepsilon_{rv}$. The number of votes $G_j(S)$ for S is calculated in the following way: $G_j(S) = \sum_{(r, v) \in \Omega} p_{rv} \cdot B(\Omega, S, \tilde{K}_j)$.
Obviously, if the proximity function B for a set $\Omega$ is 0, then the number of votes from this set for the recognized object is 0. If $B = 1$, then the number of votes is equal to $G_j(S) = \sum_{(r, v) \in \Omega} p_{rv}$.
In this case, the number of votes is equal to the sum of the weights of the feature–reference pairs $(r, v)$ over all pairs included in $\Omega$.
If a problem with non-intersecting classes is considered, then the sets $\tilde{K}_j$, $j = 1, 2, \ldots, l$, in the learning information do not intersect, and therefore, the parameters $\varepsilon_{rv}, p_{rv}$ are determined independently for each class $K_j$.
Thus, we have completely defined the operator that translates the training information and the object S into a numeric string $(G_1(S), \ldots, G_l(S))$.
Similarly, the set of recognizable objects $S_1, \ldots, S_q$ is translated by the operator into a numerical matrix of votes $\|G_j(S_i)\|_{q \times l}$.
For problems with intersecting classes, in place of the parameters $\varepsilon_{rv}, p_{rv}$, one should consider $\varepsilon_{rv}^d, p_{rv}^d$ and introduce them independently for each class. Thus, to the same pair $(r, v)$, $S_r \in J_0(n, m, l)$, $1 \leq v \leq n$, different parameters $\varepsilon_{rv}^1, \ldots, \varepsilon_{rv}^l, p_{rv}^1, \ldots, p_{rv}^l$ can be assigned, but in the calculation of $G_j(S)$ only $\varepsilon_{rv}^j, p_{rv}^j$ participate.
The constructed voting model is an essential generalization of the basic model. Indeed, to obtain the basic model from it, it is necessary to set the parameters $\varepsilon_{rv}$ equal for different r and identical v, and to set $p_{rv} = p_r \cdot \gamma_v$.
The fact that one support set is considered in this model, in contrast to the basic model, is not a limitation, since sums of operators can be considered; in this way, one can obtain an operator operating on an arbitrary system of support sets.
We will mainly explore this voting model. Let us show that, already in the linear closure, under natural assumptions it is possible to construct an algorithm that is correct for any preassigned control sample.
The construction of the operator of this algorithm is very simple, and, accordingly, with a sufficiently large training sample the algorithm gives the correct answer almost everywhere. In real calculations, it is not necessary to store all the parameters $p_{rv}, \varepsilon_{rv}$; it is enough to restrict ourselves to only a small part of them.
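The voting score for one class can be sketched in Python as follows (illustrative; $\rho_v$ is taken as the absolute difference, i.e., numerical features, and all parameter containers below are hypothetical).

```python
def voting_score(S, refs_j, eps, weights, omega):
    """Voting score G_j(S) for one class K_j over a single support set Omega.

    refs_j[r]: reference object (tuple of numerical feature values) of K_j;
    eps[r][v], weights[r][v]: proximity thresholds and feature weights;
    omega: the support set, a collection of pairs (r, v).
    """
    # Proximity function B(Omega, S, K_j): 1 iff every pair in Omega is close.
    for r, v in omega:
        if abs(refs_j[r][v] - S[v]) > eps[r][v]:
            return 0.0  # one violated pair zeroes the whole vote
    # Otherwise, the number of votes is the sum of the pair weights.
    return sum(weights[r][v] for r, v in omega)
```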

6. Completeness of Linear Closure of the Second Model Voting

We will consider problems with non-overlapping classes K 1 , , K l and assume that the information J 0 is given in the form of a learning table T n m l . The task of recognition will be to classify the final sample S 1 , S 2 , , S q , S i = b i 1 , b i n .
As before, in what follows, we will assume that in the recognized sample the objects S q j − 1 + 1 , …, S q j belong to the class K ˜ j , q 0 = 0 , q l = q .
The purpose of this section is to single out a set of basic operators of the considered model for calculating estimates, construct their linear closure, and prove that for each object of the class K ˜ j from the sample S ˜ q these operators form a sufficiently large estimate G j and sufficiently small estimates G u for u ≠ j .
All basic operators, as well as operators from the linear closure, will be constructed explicitly.
We will first need a standard condition relating the training information T n m l and the recognized sample S ˜ q ; namely, we will assume that:
for any two different objects S u , S v from the collection S ˜ q there is, among the objects included in T n m l and belonging to the class K j , an object S t and a feature r such that ρ r ( a t r , b u r ) ≠ ρ r ( a t r , b v r ) .
In this case, it is customary to say that objects from the system S ˜ q are pairwise non-isomorphic.
When proving the completeness of a linear closure, we will rely on the notion of a marked pair. Since in this paper only problems with non-overlapping classes are considered, the notion of a marked pair will be somewhat changed.
Definition 1.
A pair ( S u , j ), S u ∈ S ˜ q , S u ∈ K j , 1 ≤ j ≤ l , is called marked in the operator B if, for B ( Z ) = ‖ a t v ‖ q · l , we have a u j ≥ 1 and | a t v | < δ for all t , v such that S t ∉ K v .
From the definition of a marked pair, it can be seen that such a pair appears in the operator B if the estimate a u j = G j ( S u ) for the object S u by the class K j is large enough, and all estimates a t v = G v ( S t ) for the case when the object S t does not belong to the class K v are sufficiently small in absolute value.
Let operators B 1 , …, B w of some model be given such that each pair ( S u , j ), S u ∈ K j , is marked by at least one operator B v .
Theorem 2.
There is a linear combination B ˜ = ∑ i = 1 w a i B i ∈ L { B } such that the algorithm A ˜ = B ˜ · C , where C is the threshold decision rule, is correct for problem Z .
Proof of Theorem 2.
Recall that the threshold decision rule C is applied elementwise: C ( ‖ a t v ‖ q · l ) = ‖ C ( a t v ) ‖ q · l , where
C ( a ) = 1 , if a > C 2 ; 0 , if a < C 1 ; Δ , if C 1 ≤ a ≤ C 2 ; with 0 ≤ C 1 ≤ C 2 .
Since every pair ( S u , j ) such that S u ∈ K j , S u ∈ S ˜ q is marked by some operator B v , 1 ≤ v ≤ w , this operator gives the estimate G u j = G j ( S u ) ≥ 1 .
All other operators of the system either mark or do not mark this pair. Operators that do not mark the pair ( S u , j ) give an estimate that does not exceed in absolute value an arbitrarily small quantity δ ; therefore, one can choose δ such that
δ < C 1 / ( w ( C 1 + C 2 ) )
where w is the number of operators in the system and C 1 , C 2 are the parameters of the decision rule C .
Let us put
B ˜ = ( C 1 + C 2 ) ( B 1 + ⋯ + B w )
Consider what estimate the operator B ˜ constructs for an object S u from the class K j . The pair ( S u , j ) is marked, which means that some operator B v constructs an estimate not less than 1. Each of the remaining operators either also constructs an estimate for this pair not less than 1, or an estimate whose absolute value is less than δ , where δ satisfies inequality Equation (22). In the worst case, all other operators construct small negative estimates for S u by K j . Then, denoting by G ˜ u j the estimate of S u for K j in the operator B ˜ , from Equation (23) it is easy to obtain the inequality:
G ˜ u j ≥ ( C 1 + C 2 ) ( 1 − ( ( w − 1 ) / w ) · C 1 / ( C 1 + C 2 ) ) > ( C 1 + C 2 ) ( 1 − C 1 / ( C 1 + C 2 ) ) = C 2
Applying the decision rule, we get C ( G ˜ u j ) = 1 , and the algorithm A ˜ establishes the inclusion S u ∈ K j .
This is true for any pair S u , j such that S u S ˜ q , S ˜ u K j , j = 1 , , l since all such pairs are marked by some operator from the system B 1 , , B w .
Now, let S u ∈ S ˜ q , S u ∉ K r , and let G ˜ u r be the estimate built by the operator B ˜ for the object S u by the class K r . Since S u does not belong to K r , the pair ( S u , r ) is not marked in any of the operators B 1 , …, B w . Consequently, each of these operators constructs an estimate for S u by the class K r that does not exceed δ in absolute value, and from the definition of the operator B ˜ and the bound on δ it follows that | G ˜ u r | < ( C 1 + C 2 ) · w · C 1 / ( ( C 1 + C 2 ) · w ) = C 1 .
It can be seen from the definition of the decision rule C that the algorithm A ˜ = B ˜ · C establishes the non-inclusion S u ∉ K r ; this is true for any pair ( S u , r ) such that S u ∈ S ˜ q , S u ∉ K r , r = 1 , 2 , …, l .
The theorem has been proven.
The purpose of further constructions is to find a system of operators B 1 , …, B w that, for an arbitrary problem Z = ( T m n l , S ˜ q ) , mark every pair ( S u , j ) such that S u ∈ S ˜ q , S u ∈ K j , j = 1 , 2 , …, l , and do not mark any pair ( S u , r ) with S u ∈ S ˜ q , S u ∉ K r , r = 1 , …, l .
If we manage to find such a system B 1 , …, B w , then, by Theorem 2, the corresponding algorithm correctly solves problem Z . The correct algorithm for Z can be written as:
A = ( ( C 1 + C 2 ) ∑ i = 1 w B i ) · C
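A minimal sketch of this correct algorithm, combining the scaled sum of operator outputs with the threshold decision rule C ( C 1 , C 2 ) ; the function names and the matrix-of-lists representation are assumptions:

```python
DELTA = "Δ"  # uncertainty mark of the threshold rule

def threshold_rule(a, c1, c2):
    """Threshold decision rule C(C1, C2): 1 above C2, 0 below C1, Δ in between."""
    if a > c2:
        return 1
    if a < c1:
        return 0
    return DELTA

def correct_algorithm(operator_matrices, c1, c2):
    """A = ((C1 + C2) * (B_1 + ... + B_w)) . C, applied elementwise to the
    q x l vote matrices produced by the operators."""
    q, l = len(operator_matrices[0]), len(operator_matrices[0][0])
    summed = [[(c1 + c2) * sum(M[i][j] for M in operator_matrices)
               for j in range(l)] for i in range(q)]
    return [[threshold_rule(a, c1, c2) for a in row] for row in summed]
```

With one object and two classes, an operator pair whose summed, scaled estimates clear C 2 in the first column and stay below C 1 in the second yields the answer row [1, 0].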
Consider a sample S ˜ q = { S 1 , …, S q } , S i = ( b i 1 , …, b i n ) , and a system of operators B j such that all parameters ε u v , p u v , u = 1 , …, m j − 1 , m j + 1 , …, m , v = 1 , 2 , …, n are chosen the same: ε u v = 0 , p u v = ε > 0 , where ε is a sufficiently small number.
In other words, this means that the weights of all pairs ( u , v ) with S u ∉ K ˜ j are assumed to be small; no restrictions are imposed yet on the parameters ε u v , p u v , v = 1 , 2 , …, n , u = m j − 1 + 1 , …, m j .
Let B be an arbitrary operator from B j and S = a 1 , , a n an arbitrary admissible object.
Let also B S = G 1 j S , , G j j S , , G l j S .
Lemma 1.
Then 0 ≤ G t j ( S ) < m · n · ε , t = 1 , 2 , …, j − 1 , j + 1 , …, l .
Proof of Lemma 1.
When forming the value G t j ( S ) , only pairs ( u , v ) are considered such that u = m t − 1 + 1 , …, m t , v = 1 , …, n , that is, only reference objects belonging to the class K t are considered.
From these pairs, in turn, pairs are selected that are included in the reference set Ω t .
Let us denote the number of pairs from K t ∩ Ω t by n t . Obviously, n t ≤ n ( m t − m t − 1 ) < n · m .
Since p u v = ε , when comparing with the value of the feature on the recognized object S , the value G t j ( S ) is incremented either by 0, if ρ v ( a t v , a u v ) > ε u v = 0 , or by the value p u v = ε , if ρ v ( a t v , a u v ) ≤ ε u v = 0 . The statement of the lemma follows. □
From the last assertion, as well as from the inequality above, one easily obtains the following:
Corollary. Let B ∈ B j , B ( Z ) = B ( T m n l , S ˜ q ) = ‖ a r t j ‖ q · l .
Then the elements a r t j with t ≠ j satisfy the inequalities:
0 ≤ a r t j < n · m · ε , r = 1 , 2 , …, q
The proof of the corollary is obtained if we consistently apply Lemma 1 to recognizable objects S 1 , , S q from the sample S ˜ q .
We see that the operator B from the family B constructs numerical matrices in which the elements of all columns except the j-th can be made arbitrarily small with an appropriate choice of the value of ε . If it is required that a r t j < δ , it is enough to put ε = δ n · m .
Consider now a pair ( u , v ) such that 1 ≤ v ≤ n and S u ∈ K ˜ j . The pair ( u , v ) corresponds to the element a u v in the learning table. For brevity, we will assume that the control objects belonging to K ˜ j form the class Q j , and the remaining objects form the class C Q j . Consider the values ρ v ( a u v , b t v ) , t = 1 , …, q , that is, the distances from the value of the v -th feature on the object S u in the learning table T m n l to the value of that feature on S t from S ˜ q .
Let us arrange the objects from S ˜ q in ascending order of value p v a u v , b t v , t = 1 , q ¯ , objects with equal values of elements, arrange among themselves in an arbitrary way.
We get:
S r 1 , S r 2 , , S r i , , S r q
0 p v a u v , b r 1 v p v a u v , b r 2 v p v a u v , b r q v
Let us assign to the objects of the sequence the sign “+” if they belong to the class Q j and the sign “−” if they belong to the class C Q j . The result might be, for example, the following sequence:
S + r 1 , S + r 2 , S r 3 , S r 4 , S + r 5 , , etc .
Definition 2.
A pair ( u , v ) is called stationary if in the constructed sequence of signs “+”, “−” exactly one change of sign takes place, and if the change of sign occurs on the elements S r i , S r i + 1 , then:
ρ v ( a u v , b r i v ) < ρ v ( a u v , b r i + 1 v )
Otherwise, the pair u , v is called non-stationary.
Let us first consider the case when for each number j = 1 , 2 , …, l there is at least one stationary pair ( u , v ) . In this case, the basic operators from the family B j are relatively easy to define, in different ways for the following two cases:
(A)
The sequence has the form S + r 1 , …, S + r i , S − r i + 1 , …, S − r q , that is, relative to a u v all objects of the class Q j are closer than all objects of the class C Q j . Then the operator B j ∈ B j is defined as follows:
  • The support set Ω j is composed of the one stationary pair ( u , v ) , 1 ≤ v ≤ n , u ∈ { m j − 1 + 1 , …, m j } .
  • ε u v = ( 1 / 2 ) ( ρ v ( a u v , b r i v ) + ρ v ( a u v , b r i + 1 v ) ) ; the other ε r w = 0 .
  • p u v = N ; otherwise p u w = ε j , where ε j is a fairly small value.
(B)
The sequence looks like S − r 1 , …, S − r i , S + r i + 1 , …, S + r q . The operator is sought in the form B j 1 − B j 2 ; B j 1 , B j 2 ∈ B j . As before, in both operators the support set Ω j is composed of the one stationary pair ( u , v ) , p u v = N , and the other p r w = ε . But:
In the operator B j 1 : ε u v = ρ v ( a u v , b r q v ) + 1 .
In the operator B j 2 : ε u v = ( 1 / 2 ) ( ρ v ( a u v , b r i v ) + ρ v ( a u v , b r i + 1 v ) ) .
Thus, we have completely defined the operators B 1 ∈ B 1 , …, B l ∈ B l .
Let the result of applying the operator B j to the problem Z = ( T m n l , S ˜ q ) be denoted by ‖ b r t j ‖ q · l .
Lemma 2.
If S r ∈ Q j (that is, S r ∈ S ˜ q ∩ K j ), then b r j j = N ; if S r ∈ C Q j , then b r j j = 0 .
Proof of Lemma 2.
1. Consider the first case of defining the operator B j . Then, there is a stationary pair ( u , v ) for which the sequence described above has the form:
S + r 1 , …, S + r w , S − r w + 1 , …, S − r q
where
Q = { S r 1 , …, S r w } , C Q = { S r w + 1 , …, S r q } , ρ v ( a u v , b r w v ) < ρ v ( a u v , b r w + 1 v )
When defining the operator B j , the quantity ε u v is chosen in such a way that the following inequalities are satisfied:
ρ v ( a u v , b r w v ) < ε u v < ρ v ( a u v , b r w + 1 v )
From this inequality, it is easy to see that for each object S i from Q the proximity function over the support set Ω j = { ( u , v ) } is equal to 1. Therefore, b i j j = N , i = r 1 , …, r w .
Similarly, the proximity function over Ω j = { ( u , v ) } for objects from C Q is equal to 0, and therefore b i j j = 0 , i = r w + 1 , …, r q .
2. Consider the second case in the definition of the operator B j . In this case, there is a stationary pair ( u , v ) , and the sequence corresponding to it has the form:
S − r 1 , …, S − r w , S + r w + 1 , …, S + r q ; Q = { S r w + 1 , …, S r q } , C Q = { S r 1 , …, S r w }
In the operator B j 1 , the value ε u v is chosen in such a way that the proximity function over the support set Ω j = { ( u , v ) } is equal to 1 for all objects from the sample S ˜ q .
Therefore, if B j 1 ( Z ) = ‖ b r t j , 1 ‖ q · l , then b r j j , 1 = N , r = 1 , …, q .
In the operator B j 2 , with B j 2 ( Z ) = ‖ b r t j , 2 ‖ q · l , the value ε u v is chosen in such a way that the proximity function over the support set Ω j = { ( u , v ) } is equal to 1 for objects from C Q and is equal to 0 for objects from Q . That is why
b r j j , 2 = N for S r ∈ C Q ; b r j j , 2 = 0 for S r ∈ Q
From the last equalities, and also from the fact that B j = B j 1 − B j 2 , it follows that:
b r j j = N for S r ∈ Q ; b r j j = 0 for S r ∈ C Q
The Lemma is proven.
Consider now the operator: B = B 1 + + B l and put B Z = B T m n l , S ˜ q = b r t q · l .
By the corollary to Lemma 1, when the operator B j is applied, the elements of all columns with the exception of the j -th do not exceed n · m · ε .
Objects S r ∈ K j ∩ S ˜ q obtain the estimate N .
Objects S r ∉ K j , S r ∈ S ˜ q obtain the estimate 0.
Because b r t = b r t 1 + ⋯ + b r t l by the definition of the operator B , then:
if S r ∈ K j ∩ S ˜ q , then N ≤ b r j ≤ N + m · n · ( l − 1 ) · ε ;
if S r ∈ C K j ∩ S ˜ q , then 0 ≤ b r j ≤ m · n · l · ε .
Having appropriately chosen the values N , ε , we obtain the following theorem.
Theorem 3.
If in the problem Z for each number j = 1 , …, l there is a stationary pair, then the algorithm A = ( ( C 1 + C 2 ) ∑ i = 1 l B i ) · C ( C 1 , C 2 ) is correct for problem Z .
Since the operator ( C 1 + C 2 ) ∑ i = 1 l B i we have constructed belongs to the linear closure of a model of the estimate-calculation type, we have thereby proved the completeness of the linear closure of this model for all problems in which for each class there is at least one stationary pair ( u , v ) ; moreover, we have written this correct algorithm A explicitly.
The verification of the fact that such a stationary pair really exists is not difficult.
For this, it is enough, for each pair ( u , v ) , where u = m j − 1 + 1 , …, m j , v = 1 , 2 , …, n , to calculate all ρ v ( a u v , b i v ) , i = 1 , 2 , …, q , and check whether all inequalities of the 1st or of the 2nd group are fulfilled simultaneously. If all inequalities of one of the two groups are simultaneously satisfied, then the pair ( u , v ) is stationary, and B j can be constructed in the same way as was performed in the proof of the theorem.
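The verification procedure just described can be sketched directly: sort the control objects by their distance to a u v and check that the resulting +/− sequence has exactly one change of sign, strict in distance at the change point. This is a sketch under the assumption that the feature metric is |a − b|; the function name is ours:

```python
def is_stationary(a_uv, control_vals, in_class, dist=lambda x, y: abs(x - y)):
    """Check Definition 2 for the pair (u, v): after sorting control objects
    by distance to a_uv, the class labels must show exactly one sign change,
    with a strict distance inequality at the change point."""
    order = sorted(range(len(control_vals)),
                   key=lambda i: dist(a_uv, control_vals[i]))
    signs = [in_class[i] for i in order]
    changes = [k for k in range(len(signs) - 1) if signs[k] != signs[k + 1]]
    if len(changes) != 1:
        return False          # no change or several changes of sign
    k = changes[0]
    return dist(a_uv, control_vals[order[k]]) < dist(a_uv, control_vals[order[k + 1]])
```

Objects with equal distances may be ordered arbitrarily in the text; the strict inequality at the change point makes the check independent of that ordering.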
The conditions for the existence of a stationary pair essentially mean the following:
There is an object S u , S u ∈ K j , and a feature v in the learning table T m n l such that either:
1. The distance by the v -th feature from S u to every object of the control sample that belongs to K j is strictly less than all such distances to objects of the control sample that do not belong to K j ; or
2. The distance by the v -th feature from S u to every object of the control sample that does not belong to K j is strictly less than all such distances to objects of the control sample that belong to K j .
Consider the 2nd case, when the construction of a correct algorithm in a linear closure is quite simple. As before, we arrange the objects of the control set S ˜ q in a sequence by increasing distance from the value of some feature u on a reference object S w , S w ∈ K ˜ j , 1 ≤ u ≤ n . Over the elements of the constructed sequence we put the sign “+” if S r i ∈ K j and the sign “−” if S r i ∈ C K j ; the sequence is denoted by π ( w , u ) . Let us find in this sequence the last element S + r p in order; this element must also be such that the next element S − r p + 1 has the property:
ρ u ( a w u , b r p u ) < ρ u ( a w u , b r p + 1 u )
The set of elements of the sequence π ( w , u ) following the element S + r p is denoted by M − ( w , u ) .
We introduce the set M j − = ∪ S w ∈ K ˜ j ∪ u = 1 n M − ( w , u ) .
Similarly, in each sequence we select the first element S + r t in order, and moreover, such that for the previous element S − r p the inequality holds:
ρ u ( a w u , b r p u ) < ρ u ( a w u , b r t u )
The set of elements of the sequence π ( w , u ) beginning with the element S + r t is denoted by M + ( w , u ) . We introduce the set
M j + = ∪ S w ∈ K ˜ j ∪ u = 1 n M + ( w , u )
Definition 3.
The problem Z = ( J 0 , S ˜ q ) is called monotonic if for each j = 1 , 2 , …, l one of the two equalities is satisfied:
S ˜ q ∩ C K ˜ j = M j −
S ˜ q ∩ K ˜ j = M j +
The meaning of the monotonicity condition is as follows. An arbitrary reference object S w belonging to K ˜ j and an arbitrary feature with number u are chosen; in the control sample S ˜ q we select all objects that do not belong to the class K ˜ j and that, relative to the selected feature, are farther from S w than all objects of S ˜ q belonging to the class K ˜ j .
The set of all such objects not belonging to the class K ˜ j is denoted by M − ( w , u ) . The first condition of Definition 3 means that if all the sets M − ( w , u ) are united, then all elements of the control sample that do not belong to the class K ˜ j are obtained.
Construction of the operators B j , j = 1 , 2 , …, l , for a monotonic problem Z .
1. The choice of the support set Ω j . Since the problem Z is monotonic, one of the two conditions of Definition 3 is satisfied; the choice of the support set is the same in both cases, so we will assume that we have:
S ˜ q ∩ C K j = M j − = ∪ S w ∈ K ˜ j ∪ u = 1 n M − ( w , u )
The sets M − ( w , u ) form a cover of the set S ˜ q ∩ C K j , but this cover may be redundant; in other words, some sets M − ( w , u ) may be removed so that what remains is still a cover. Removing such extra sets, we construct an irreducible cover for S ˜ q ∩ C K j .
Let the constructed irreducible cover have the form:
S ˜ q ∩ C K j = M − ( w 1 , u 1 ) ∪ ⋯ ∪ M − ( w k , u k )
Then, Ω j = { ( w 1 , u 1 ) , …, ( w k , u k ) } .
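The reduction to an irreducible cover can be sketched as a greedy removal of redundant sets M − ( w , u ) ; representing the cover as a dictionary keyed by ( w , u ) is our assumption:

```python
def irreducible_cover(cover_sets, universe):
    """Greedily drop sets whose removal still leaves a cover of the universe;
    what remains is an irreducible cover (no set can be removed further)."""
    kept = dict(cover_sets)  # (w, u) -> set of covered control objects
    for key in list(kept):
        rest = set().union(*[s for k, s in kept.items() if k != key])
        if universe <= rest:      # the remaining sets still cover everything
            del kept[key]
    return kept
```

The result depends on the removal order, which matches the text: an irreducible cover need not be unique or minimum, only non-redundant.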
2. Choice of the parameters ε ˜ j .
All parameters of this group, with the exception of the parameters ε w i u i , i = 1 , 2 , …, k , are chosen so large that the inequalities ρ v ( a r v , b i v ) < ε r v , i = 1 , 2 , …, q are satisfied; for example, ε r v = max i = 1 , …, q ρ v ( a r v , b i v ) + 1 .
The parameters ε w i u i are chosen so that the following inequalities hold: in the sequence π ( w i , u i ) we find the last element S + p and the element S − t following it; then ρ u i ( a w i u i , b p u i ) < ε w i u i < ρ u i ( a w i u i , b t u i ) ; for example, ε w i u i = ( 1 / 2 ) ( ρ u i ( a w i u i , b p u i ) + ρ u i ( a w i u i , b t u i ) ) .
This choice of the parameters ε w i u i applies when the relation S ˜ q ∩ C K j = M j − is satisfied. If this relation is not satisfied, but the relation S ˜ q ∩ K j = M j + holds, then the operator B j is sought in the form of a difference B j 1 − B j 2 , and the values ε w i u i are chosen differently for the operators that make up the difference.
3. Choice of the parameters p ˜ , γ ˜ . The parameters γ ( S i ) = γ i are set equal to a sufficiently small value δ (the exact value of δ will be indicated later) for all S i ∉ K ˜ j , and also for those S i whose numbers are not contained in the set { w 1 , …, w k } . For the rest of the S i , we set γ ( S i ) = γ i = N .
Similarly, p w 1 u 1 = ⋯ = p w k u k = N , and the rest p u v = δ . The values of N are chosen to be sufficiently large; the exact values for N are given later. The definition of the operator B j is complete for the case S ˜ q ∩ C K j = M j − .
Theorem 4.
The operator for calculating estimates defined above marks in the control sample all objects from the class K j and only them. In other words, if B ( Z ) = ‖ G i j ‖ q · l and the objects S u 1 , …, S u k ∈ K j while the remaining objects of the control sample do not belong to K j , then G u t j ≥ N , t = 1 , …, k , and G v j < δ for v ∉ { u 1 , …, u k } .
Proof of Theorem 4.
The threshold parameters ε u v and the support set were chosen in such a way that for each object S u p = ( b u p 1 , …, b u p n ) from K j the relation ρ v ( a u v , b u p v ) ≤ ε u v was fulfilled; thus, over the constructed support set the proximity function is equal to 1, and the total estimate is not less than ∑ ( u , v ) p u v · γ u v ≥ N · N = N 2 > N .
If an object S i of the control set does not belong to K j , then (by the definition of a monotonic problem) there is a pair ( u , v ) such that in the sequence π ( u , v ) this element occurs after all elements of K j , and the values ε u v are chosen so that the corresponding inequality ρ v ( a u v , b i v ) ≤ ε u v is violated. In this case, the proximity function for the object S i ∈ C K ˜ j is equal to 0, and G i j = 0 < δ .
The evaluation operator constructed in this way puts large estimates for control elements from K ˜ j , and small estimates for elements from C K ˜ j . The last assertion easily implies the correctness of the algorithm composed of the previously defined operator and the threshold decision rule.
The Theorem has been proven. □
The proof for the 2nd case of the monotonic problem is carried out according to the same principle (see the proof for the stationary case); only the operator is sought as the difference of two operators.
The simple cases presented above are an illustration for the proof of the main theorem on the correctness of the linear closure. The proof of this theorem is technically more complicated; it is divided into a series of steps, yet each individual step implements a construction of the same type as was used in the last theorems.
When proving the main theorem, in essence, it is the linear closure that is used.
The proof of the linear closure correctness theorem will be carried out in two stages. In the first stage, we introduce one additional constraint on the control objects relative to the training sample.
Practical problems do satisfy this constraint, and in the first stage the correctness theorem will be proved under it.
Definition 4.
Control objects S 1 , …, S q are called consistent with the objects S 1 , …, S m if for each pair S u , S v with S u ∈ K j , S v ∉ K j there is at least one object S r ∈ K j and a feature w such that ρ w ( a r w , b u w ) < ρ w ( a r w , b v w ) .
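Definition 4 admits a direct computational check; the following sketch assumes the feature metric |a − b| and a list-of-rows data layout, and the function name is ours:

```python
def is_consistent(train, train_cls, control, control_cls,
                  dist=lambda x, y: abs(x - y)):
    """Definition 4: for every control pair S_u in K_j, S_v not in K_j, some
    training object S_r of class j and feature w must put S_u strictly closer
    to S_r than S_v is."""
    n = len(train[0])
    for u, Su in enumerate(control):
        for v, Sv in enumerate(control):
            if control_cls[u] == control_cls[v]:
                continue
            j = control_cls[u]
            ok = any(
                dist(Sr[w], Su[w]) < dist(Sr[w], Sv[w])
                for r, Sr in enumerate(train) if train_cls[r] == j
                for w in range(n)
            )
            if not ok:
                return False
    return True
```

A control object that is farther from every class-j training object, in every feature, than some outsider makes the check fail, which is exactly the pathological case discussed next.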
The lack of consistency in the control sample means that the control contains an object S v ∉ K j that in all features is closer to all objects of the class K j than some object S u ∈ K j . In this case, any learning information is such that the closer object S v would be assigned to the class K j with greater preference than the more distant object S u , and thereby a mistake would be made.
We will first consider classes in which there are no such pathological cases, that is, the situation when the control objects are consistent with the training ones.
Let, as before, the objects S 1 , …, S m , S i = ( a i 1 , …, a i n ) , i = 1 , …, m , form the learning information J 0 , and let the objects S m j − 1 + 1 , …, S m j , m 0 = 0 , m l = m , belong to the class K j and not belong to other classes.
The objects S 1 , …, S q , S i = ( b i 1 , …, b i n ) , i = 1 , …, q , form a control sample, and the objects S q j − 1 + 1 , …, S q j , j = 1 , …, l , belong to the class K j and do not belong to other classes.
As before, we denote:
{ S 1 , …, S m } ∩ K j = K ˜ j , { S 1 , …, S m } ∖ K ˜ j = C K ˜ j
{ S 1 , …, S q } ∩ K j = K j 1 , { S 1 , …, S q } ∖ K j 1 = C K j 1
Formation of the operators B j , j = 1 , …, l .
1. For all pairs ( u , v ) such that S u ∈ C K ˜ j , v = 1 , …, n , we set ε u v = 0 , p u v = γ u v = δ .
2. Consider an arbitrary control object S t K j 1 .
By the definition of a control sample consistent with the training one, for each S r ∈ C K j 1 there is a pair ( S k , w ) , S k ∈ K ˜ j , 1 ≤ w ≤ n , such that ρ w ( a k w , b t w ) < ρ w ( a k w , b r w ) . For this pair ( k , w ) we set:
p k w = N , γ k w = N , ε k w = ( 1 / 2 ) ( ρ w ( a k w , b t w ) + ρ w ( a k w , b r w ) )
Obviously, ρ w ( a k w , b t w ) < ε k w < ρ w ( a k w , b r w ) .
Let us include in the support set all such pairs ( k , w ) obtained for all S r ∈ C K j 1 .
We have thus defined an auxiliary operator B j t , 1 ≤ j ≤ l , S t ∈ K j 1 , and we set
B j = S t K j 1 B j t , j = 1 , 2 , , l
Formation of operator B. B = j = 1 l B j
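The per-object construction of B j t described in steps 1 and 2 can be sketched as follows. The helper name build_Bjt and the data layout are our assumptions; only the support pairs with their midpoint thresholds ε k w are returned:

```python
def build_Bjt(train, train_cls, j, S_t, outsiders, dist=lambda x, y: abs(x - y)):
    """For a control object S_t of class j: for each control object S_r
    outside K_j, find a class-j training object S_k and feature w with
    dist(a_kw, b_tw) < dist(a_kw, b_rw), and set the threshold eps_kw to the
    midpoint, so S_t passes the proximity test while S_r fails it."""
    support = {}  # (k, w) -> eps_kw
    for S_r in outsiders:
        for k, S_k in enumerate(train):
            if train_cls[k] != j:
                continue
            sep = next((w for w in range(len(S_k))
                        if dist(S_k[w], S_t[w]) < dist(S_k[w], S_r[w])), None)
            if sep is not None:
                d_t, d_r = dist(S_k[sep], S_t[sep]), dist(S_k[sep], S_r[sep])
                support[(k, sep)] = (d_t + d_r) / 2  # midpoint threshold
                break
    return support
```

Consistency of the control sample (Definition 4) guarantees that a separating pair exists for every outsider, so the loop always finds one.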
We need to study into what numerical matrix the operator B translates the problem Z with the training sample S 1 , …, S m and the control sample S 1 , …, S q . The evaluation of the elements of this matrix will be performed sequentially, following the steps of the construction of the operator B . The first step is the formation of the operators B j t .
Let B j t Z = G α β j , t q · l .
Lemma 3.
If β ≠ j , then 0 ≤ G α β j , t ≤ δ 2 · n · m .
Proof of Lemma 3.
The formation of estimates with respect to the class K β , β ≠ j , is carried out over pairs ( u , v ) with S u ∈ C K ˜ j , and for each such pair the term included in the estimate does not exceed p u v · γ u v = δ · δ = δ 2 .
The total number of such pairs ( u , v ) , S u ∈ C K ˜ j , is obviously n · ( m − ( m j − m j − 1 ) ) < n · m .
From this, it easily follows that G α β j , t ≤ δ 2 · n · m .
The Lemma is proven.
Lemma 4.
If β = j and S α ∈ C K j , then G α β j , t = 0 .
Proof of Lemma 4.
In the support set of the operator B j t , for each element S α ∈ C K j there is a pair ( k , w ) , S k ∈ K ˜ j , 1 ≤ w ≤ n , such that ρ w ( a k w , b t w ) < ρ w ( a k w , b α w ) .
By the choice of the parameter ε k w , as shown above, we obtain ρ w ( a k w , b α w ) > ε k w .
According to the definition of the proximity function for the support set of the operator B j t , for the object S α it is equal to 0. Therefore, G α j j , t = 0 , which proves the Lemma. □
Lemma 5.
If S t ∈ K j 1 , then G t j j , t ≥ N 2 .
Proof of Lemma 5.
For each pair ( k , w ) included in the support set of the operator B j t , the parameter ε k w is chosen so that ρ w ( a k w , b t w ) < ε k w . Due to this inequality, the proximity function over the support set of the operator B j t is equal to 1; therefore, G t j j , t = ∑ ( k , w ) p k w · γ k w ≥ N · N = N 2 .
The Lemma is proven.
Let B j ( Z ) = G α β ( j ) q · l , j = 1 , l ¯ .
By definition of the operator B j .
G α β ( j ) = G α β ( j , q j 1 + 1 ) + + G α β ( j , q j )
Using the matrix G α β ( j , t ) q · l and Equation (41), we obtain the following inequalities:
G α j ( j ) ≥ N 2 , α = q j − 1 + 1 , …, q j ; G α j ( j ) = 0 , α = 1 , 2 , …, q j − 1 , q j + 1 , …, q
and 0 ≤ G α β ( j ) ≤ q · n · m · δ 2 for the other α , β .
The matrix ‖ G α β ( j ) ‖ q · l thus has the following structure: in the j -th column, the rows α = q j − 1 + 1 , …, q j contain entries not less than N 2 , the remaining rows of that column contain zeros, and all entries outside the j -th column are bounded by the small quantity indicated above.
Because B = j = 1 l B j .
And G α β q · l is the matrix into which the operator transforms the problem, then G α β = j = 1 l G α β ( j ) , α = 1 , q ¯ , β = 1 , l ¯ .
Lemmas 3–5 clearly imply the inequalities G α β ≥ N 2 for β = j , α = q j − 1 + 1 , …, q j , j = 1 , …, l ; q 0 = 0 , q l = q .
For all other α , β : 0 ≤ G α β ≤ n · m · q · l · δ 2 .
Let C ( C 1 , C 2 ) be a threshold decision rule; then, we can choose the values δ , N in such a way that the algorithm A = B · C ( C 1 , C 2 ) will give correct answers for all elements S 1 , …, S q across all classes K 1 , …, K l .
Recall that the decision rule C ( C 1 , C 2 ) is applied to numerical matrices element by element, that is, C ( ‖ G α β ‖ q · l ) = ‖ C ( G α β ) ‖ q · l , and C ( G α β ) = 1 if G α β > C 2 ; 0 if G α β < C 1 ; Δ if C 1 ≤ G α β ≤ C 2 ; where 0 < C 1 < C 2 .
From the definition of the decision rule and the above inequalities, it is clear that the parameter N must be chosen so that the inequality N 2 > C 2 holds.
Therefore, one can put N = √ C 2 + 1 .
With this choice of the parameter N, each of the objects S q j 1 + 1 , , S q j will be assigned by the algorithm A to the class K j .
It follows from the definition of the threshold decision rule and the inequalities above that the parameter δ can be chosen so that the inequality n · m · q · l · δ 2 < C 1 holds.
Therefore, it suffices to take as δ any positive quantity satisfying δ < √ ( C 1 / ( n · m · q · l ) ) .
In this case, each of the objects S 1 , …, S q j − 1 , S q j + 1 , …, S q will not be assigned to the class K j , j = 1 , …, l , q 0 = 0 , q l = q .
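The sufficient conditions derived above (N² > C₂ for acceptance and n · m · q · l · δ² < C₁ for rejection) can be packaged as a small parameter-selection helper; this is a sketch with a safety margin of one half, and the function name is ours:

```python
import math

def choose_parameters(c1, c2, n, m, q, l):
    """Pick N and delta satisfying N**2 > C2 and n*m*q*l*delta**2 < C1,
    the sufficient conditions for the correctness of A = B . C(C1, C2)."""
    N = math.sqrt(c2) + 1.0                       # guarantees N**2 > C2
    delta = 0.5 * math.sqrt(c1 / (n * m * q * l)) # half the admissible bound
    assert N ** 2 > c2 and n * m * q * l * delta ** 2 < c1
    return N, delta
```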
We have thus proven the following theorem.
Theorem 5.
The algorithm A = B · C ( C 1 , C 2 ) with the threshold decision rule C and the operator
B = ∑ j = 1 l B j , B j = ∑ S t ∈ K j 1 B j t ,
where the B j t are operators for calculating estimates, is correct for the problem Z = ( { S 1 , …, S m } , ‖ α i j ‖ m · l , { S 1 , …, S q } ) .
Each of the operators B j t is specified by a set of numerical parameters ε u v , p u v , γ u v , where u = 1 , …, m , v = 1 , …, n ; m is the number of objects in the learning sample and n is the number of features participating in the description of objects; that is, by a set of 3 · n · m numerical parameters.
From all that has been said above, it follows that with the described choice of the parameters N , δ the matrix ‖ β i j ‖ q · l , into which the problem Z is converted by the algorithm A = B · C ( C 1 , C 2 ) , is as follows:
β i j = 1 for i = q j − 1 + 1 , …, q j (that is, exactly when S i ∈ K j ), and β i j = 0 otherwise.
Here, the operator B in terms of elementary operators has the form B = ∑ j = 1 l ∑ S t ∈ K j 1 B j t ; that is, the operator B belongs to the linear closure of the previously introduced families of estimation operators.
Significant memory is required to store the operator B ; however, when solving real applied problems, the operator code can be placed in the RAM of modern computers. For example, if n = 100 , m = 50 , and q = 100 , that is, the training material consists of 50 objects described by 100 features and 100 objects are to be recognized, it is required to store 3 · 100 · 50 · 100 = 1.5 × 10 6 numbers in memory.
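The parameter count 3 · n · m · q can be checked directly; for n = 100, m = 50, q = 100 it gives 1.5 × 10⁶ stored numbers:

```python
def parameter_count(n, m, q):
    """Total stored parameters: three numbers (eps, p, gamma) for each of the
    n * m * q (feature, training object, control object) triples."""
    return 3 * n * m * q

# n = 100 features, m = 50 training objects, q = 100 control objects:
# parameter_count(100, 50, 100) == 1_500_000, i.e. 1.5 * 10**6 numbers
```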
In the future, we will consider methods that make it possible to more economically encode the operator B, which will reduce the required memory and more efficiently use the constructed correct algorithms for solving applied problems.
Summarizing all the above, we can state that we have proved the theorem.
Theorem 6.
Algorithm
A = j = 1 l S t K j B j ( S t ) · C ( C 1 , C 2 )
with an operator from the linear closure of the estimate calculation model, is correct for problem Z ; the operator is the sum of q operators from the estimate calculation model and is described by a set of 3 · n · m · q numerical parameters.
The only condition for constructing a correct rule is the consistency of the control sample with the training sample.

7. Results

In [15], the use of Internet of Things technologies in the ecology of the Aral Sea region is proposed as a system (Figure 1). The system was developed as a website: the Django framework was used for the server side, and Vue.js with the Quasar GUI framework was used for the client side as a single-page application (SPA). To start the system, one opens the website ecoaral.uz in a modern browser, after which the main window of the system appears on the browser page.
In this article, to identify input information flows, we considered the so-called parametric recognition algorithms, i.e., such sets of algorithms in which each algorithm is one-to-one encoded by a set of numerical parameters. These models analyze the proximity between parts of previously classified objects and the object to be classified. Based on a set of assessments, a general assessment of the object is generated and, according to the introduced decision rule, the belonging of the recognized object to one or another class is determined.
In this article, as the initial model (A), we consider a model related to the model for calculating estimates, supplemented with some simple recognition algorithms such as the nearest neighbor algorithm, the average distance algorithm, etc.
The peculiarity of algorithms of this class is that in order to calculate estimates that determine the identity of a recognized object, there are simple analytical formulas that replace complex enumeration procedures that arise when calculating proximity estimates using a system of support sets.
In these models, the division of the algorithm into recognition operators and decision rules is carried out in a natural way.
We will only consider algorithms that can be represented in the form A = B · C , where B is an arbitrary recognition operator. It turns out that the essential part of the algorithm is the operator B ; the decision rule C can be made standard for all algorithms and programs. Any voting recognition operator maps the task Z into a numerical matrix of votes or ratings B ( Z ) = ‖ Γ i j ‖ q · l , Γ i j = Γ j ( S i ) .
Moreover, the value Γ i j has a clear, meaningful interpretation. This value can be considered as the degree of belonging of the examined object S i to class K j , expressed as a number. Let
( α ˜ , β ˜ ) = ∑ i = 1 6 ∑ j = 1 10 | α i − β j | → min
where α i are the sensor data and β j are the salinity class parameters. The sensor data are given in Table 2: HCO 3 (bicarbonate), Cl (chloride), SO 4 (sulfate), Ca (calcium), Mg (magnesium), Na (sodium). In Table 3 and Table 4, salinity is divided into five classes (non-saline, slightly saline, moderately saline, highly saline, and very highly saline), and each class consists of 10 points.
When the sensor receives data a i , the value | a i − b i | → min is checked; if the condition is met, it indicates which salinity class the reading belongs to. If the values | a i − b i | → min correspond to more than one salinity class, then the information was intercepted somewhere (see Figure 2).
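A sketch of this check; the function name, the dictionary of per-class reference points, and the tie tolerance are our assumptions, while the rule itself (|a_i − b_i| → min, with a multi-class minimum flagged as possible interception) follows the text:

```python
def classify_salinity(reading, class_points, tol=1e-9):
    """Assign a sensor reading to the salinity class whose reference point is
    nearest (|a - b| -> min); if the minimum distance is attained in more than
    one class, flag the reading as possibly intercepted/corrupted."""
    best = min(min(abs(reading - b) for b in pts) for pts in class_points.values())
    hits = [name for name, pts in class_points.items()
            if min(abs(reading - b) for b in pts) <= best + tol]
    return hits[0], len(hits) > 1  # (class label, interception flag)
```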
To solve the problem of comparative classification in an information system in electronic resources based on an algorithm for calculating estimates, a correct algorithm was used
A = i = 1 l S t K j B S t · C C 1 , C 2
The work of the recognition algorithm in this case is based on the analysis of the data structure and its importance for classification purposes. With this approach, reliability is assessed by the quality of the work of the recognition algorithm on control material. In other words, the algorithm carries out a classification, and its results are compared with those that are known a priori. Obviously, with a sufficiently large volume of control, the results reflect the quality of the algorithm. It was this approach that was incorporated into the software package, the application of which to the initial information made it possible to obtain the following: the relative weights of feature groups ρ i ( S , S ′ ) = min | a i − b i | , the vote formula G j ( S ) = 1 / ( ρ j ( S , S ′ ) + 1 ) , and the sum of operators B 1 + B 2 + B 3 + B 4 = G 1 + G 2 + G 3 + G 4 .
Example for C l :
$\rho_1(S, \tilde{S}) = |0.001 - 0.5| = 0.499$    $\rho_2(S, \tilde{S}) = |0.004 - 0.5| = 0.496$
$\rho_3(S, \tilde{S}) = |0.015 - 0.5| = 0.485$    $\rho_4(S, \tilde{S}) = |0.02 - 0.5| = 0.48$
$\rho_5(S, \tilde{S}) = |0.025 - 0.5| = 0.475$    $\rho_6(S, \tilde{S}) = |0.035 - 0.5| = 0.465$
$\rho_7(S, \tilde{S}) = |0.045 - 0.5| = 0.455$    $\rho_8(S, \tilde{S}) = |0.055 - 0.5| = 0.445$
$\rho_9(S, \tilde{S}) = |0.07 - 0.5| = 0.43$    $\rho_{10}(S, \tilde{S}) = |0.09 - 0.5| = 0.41$
$\rho_{11}(S, \tilde{S}) = |0.12 - 0.5| = 0.38$    $\rho_{12}(S, \tilde{S}) = |0.16 - 0.5| = 0.34$
$\rho_{13}(S, \tilde{S}) = |0.2 - 0.5| = 0.3$    $\rho_{14}(S, \tilde{S}) = |0.24 - 0.5| = 0.26$
$\rho_{15}(S, \tilde{S}) = |0.29 - 0.5| = 0.21$    $\rho_{16}(S, \tilde{S}) = |0.32 - 0.5| = 0.18$
$\rho_{17}(S, \tilde{S}) = |0.36 - 0.5| = 0.14$    $\rho_{18}(S, \tilde{S}) = |0.46 - 0.5| = 0.04$
$\rho_{19}(S, \tilde{S}) = |0.57 - 0.5| = 0.07$    $\rho_{20}(S, \tilde{S}) = |0.59 - 0.5| = 0.09$
$\rho_{21}(S, \tilde{S}) = |0.61 - 0.5| = 0.11$    $\rho_{22}(S, \tilde{S}) = |0.63 - 0.5| = 0.13$
$\rho_{23}(S, \tilde{S}) = |0.64 - 0.5| = 0.14$    $\rho_{24}(S, \tilde{S}) = |0.68 - 0.5| = 0.18$
$\rho_{25}(S, \tilde{S}) = |1.3 - 0.5| = 0.8$
$G_1(S, \tilde{S}) = \frac{1}{0.499 + 1} = 0.6671$    $G_2(S, \tilde{S}) = \frac{1}{0.496 + 1} = 0.6684$
$G_3(S, \tilde{S}) = \frac{1}{0.485 + 1} = 0.6734$    $G_4(S, \tilde{S}) = \frac{1}{0.48 + 1} = 0.67568$
$G_5(S, \tilde{S}) = \frac{1}{0.475 + 1} = 0.67797$    $G_6(S, \tilde{S}) = \frac{1}{0.465 + 1} = 0.68259$
$G_7(S, \tilde{S}) = \frac{1}{0.455 + 1} = 0.68729$    $G_8(S, \tilde{S}) = \frac{1}{0.445 + 1} = 0.69204$
$G_9(S, \tilde{S}) = \frac{1}{0.43 + 1} = 0.6993$    $G_{10}(S, \tilde{S}) = \frac{1}{0.41 + 1} = 0.7092$
$G_{11}(S, \tilde{S}) = \frac{1}{0.38 + 1} = 0.7246$    $G_{12}(S, \tilde{S}) = \frac{1}{0.34 + 1} = 0.7463$
$G_{13}(S, \tilde{S}) = \frac{1}{0.3 + 1} = 0.76923$    $G_{14}(S, \tilde{S}) = \frac{1}{0.26 + 1} = 0.79365$
$G_{15}(S, \tilde{S}) = \frac{1}{0.21 + 1} = 0.8264$    $G_{16}(S, \tilde{S}) = \frac{1}{0.18 + 1} = 0.8475$
$G_{17}(S, \tilde{S}) = \frac{1}{0.14 + 1} = 0.87719$    $G_{18}(S, \tilde{S}) = \frac{1}{0.04 + 1} = 0.9615$
$G_{19}(S, \tilde{S}) = \frac{1}{0.07 + 1} = 0.9346$    $G_{20}(S, \tilde{S}) = \frac{1}{0.09 + 1} = 0.9174$
$G_{21}(S, \tilde{S}) = \frac{1}{0.11 + 1} = 0.9009$    $G_{22}(S, \tilde{S}) = \frac{1}{0.13 + 1} = 0.8850$
$G_{23}(S, \tilde{S}) = \frac{1}{0.14 + 1} = 0.8772$    $G_{24}(S, \tilde{S}) = \frac{1}{0.18 + 1} = 0.8475$
$G_{25}(S, \tilde{S}) = \frac{1}{0.8 + 1} = 0.5556$
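As a mechanical sanity check of the Cl example, the whole list can be recomputed from the raw readings; this recomputation gives $G_{13} = 1/1.3 \approx 0.76923$ and, with $\rho_{25} = 0.8$, $G_{25} \approx 0.5556$:

```python
# Recomputing the Cl example: rho_k = |x_k - 0.5| and G_k = 1/(rho_k + 1).
# The x values are the readings appearing in the rho computations above.
x = [0.001, 0.004, 0.015, 0.02, 0.025, 0.035, 0.045, 0.055, 0.07, 0.09,
     0.12, 0.16, 0.2, 0.24, 0.29, 0.32, 0.36, 0.46, 0.57, 0.59,
     0.61, 0.63, 0.64, 0.68, 1.3]

rho = [abs(v - 0.5) for v in x]
G = [1.0 / (r + 1.0) for r in rho]

print(round(G[0], 4))    # G_1  -> 0.6671
print(round(G[12], 5))   # G_13 -> 0.76923
print(round(G[24], 4))   # G_25 -> 0.5556
```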

8. Conclusions

The solution to the problem of identifying objects in the IoT ecosystem of the Aral region was analyzed. The problem of constructing a correct algorithm with linear closure operators of a model for calculating estimates for identifying objects in the IoT ecosystem of the Aral region was considered.
Within the framework of the algebraic approach, several variants of linear combinations of recognition operators were constructed, the use of which gives the correct answer on the control material, and this was proven in the form of theorems.
An operator belonging to the linear closure of the estimate calculation model was constructed; it is the sum of q operators of the estimate calculation model and is described by a set of 3 · n · m · q numerical parameters, where n is the number of specified features, m is the number of reference objects, and q is the number of recognized objects. The completeness of the linear closure of this model was proven for all problems in which each class has at least one stationary pair u , v , and this correct algorithm A was written out explicitly.
The results obtained in this article, namely the proven theorems, made it possible to construct correct algorithms on control material based on a combination of linear recognition operators.
The constructed correct recognition algorithms are the easiest to use, since they require no optimization procedure, and they make it possible to quickly identify incoming information flows in the IoT ecosystem of the Aral region.

Author Contributions

Conceptualization, A.K. and I.S.; methodology, A.K.; software, I.S.; validation, A.K., I.S. and A.B. (Akbarjon Babadjanov); formal analysis, I.S.; investigation, A.K.; resources, I.S. and A.B. (Alimdzhan Babadzhanov); data curation, I.S.; writing—original draft preparation, A.K., I.S. and A.B. (Akbarjon Babadjanov); writing—review and editing, A.K., I.S., A.B. (Akbarjon Babadjanov) and A.B. (Alimdzhan Babadzhanov); visualization, I.S.; supervision, A.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were generated; data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. Scheme of functioning of the IoT environmental monitoring system.
Figure 2. Table of attributes belonging to classes.
Table 1. Set of integers.

Y \ X   | 0            | 1            | 2            | … | k_i − 1
0       | 0            | ρ_{01}       | ρ_{02}       | … | ρ_{0,(k_i−1)}
1       | ρ_{10}       | 0            | ρ_{12}       | … | ρ_{1,(k_i−1)}
2       | ρ_{20}       | ρ_{21}       | 0            | … | ρ_{2,(k_i−1)}
⋮       |              |              |              |   |
k_i − 1 | ρ_{(k_i−1),0} | ρ_{(k_i−1),1} | …           | … | 0
Table 2. Number of parameter votes.

Valid (HCO3) | Valid (Cl) | Valid (SO4) | Valid (Na) | Frequency | Percent | Valid Percent | Cumulative Percent
0.68 | 0.56 | 0.50 | 0.51 | 1 | 4.0 | 4.0 | 4.0
0.73 | 0.67 | 0.60 | 0.58 | 1 | 4.0 | 4.0 | 8.0
0.78 | 0.67 | 0.68 | 0.68 | 1 | 4.0 | 4.0 | 12.0
0.89 | 0.68 | 0.68 | 0.78 | 1 | 4.0 | 4.0 | 16.0
0.93 | 0.68 | 0.69 | 0.80 | 1 | 4.0 | 4.0 | 20.0
0.94 | 0.68 | 0.69 | 0.80 | 1 | 4.0 | 4.0 | 24.0
0.95 | 0.68 | 0.69 | 0.80 | 1 | 4.0 | 4.0 | 28.0
0.95 | 0.68 | 0.69 | 0.80 | 1 | 4.0 | 4.0 | 32.0
0.95 | 0.70 | 0.69 | 0.81 | 1 | 4.0 | 4.0 | 36.0
0.95 | 0.70 | 0.70 | 0.81 | 1 | 4.0 | 4.0 | 40.0
0.95 | 0.71 | 0.71 | 0.81 | 1 | 4.0 | 4.0 | 44.0
0.96 | 0.71 | 0.72 | 0.81 | 1 | 4.0 | 4.0 | 48.0
0.96 | 0.74 | 0.72 | 0.81 | 1 | 4.0 | 4.0 | 52.0
0.96 | 0.74 | 0.74 | 0.81 | 1 | 4.0 | 4.0 | 56.0
0.96 | 0.76 | 0.75 | 0.82 | 1 | 4.0 | 4.0 | 60.0
0.96 | 0.78 | 0.77 | 0.82 | 1 | 4.0 | 4.0 | 64.0
0.97 | 0.82 | 0.81 | 0.82 | 1 | 4.0 | 4.0 | 68.0
0.97 | 0.83 | 0.82 | 0.82 | 1 | 4.0 | 4.0 | 72.0
0.97 | 0.85 | 0.83 | 0.83 | 1 | 4.0 | 4.0 | 76.0
0.98 | 0.88 | 0.84 | 0.84 | 1 | 4.0 | 4.0 | 80.0
0.98 | 0.89 | 0.86 | 0.85 | 1 | 4.0 | 4.0 | 84.0
0.98 | 0.91 | 0.87 | 0.87 | 1 | 4.0 | 4.0 | 88.0
0.98 | 0.92 | 0.89 | 0.87 | 1 | 4.0 | 4.0 | 92.0
0.99 | 0.96 | 0.90 | 0.93 | 1 | 4.0 | 4.0 | 96.0
1.00 | 1.00 | 0.98 | 0.96 | 1 | 4.0 | 4.0 | 100.0
Total |      |      |      | 25 | 100 | 100 |
Table 3. Descriptive Statistics.

          | N  | Range | Minimum | Maximum | Mean   | Std. Deviation | Variance
ρ_i(HCO3) | 25 | 0.32  | 0.68    | 1.00    | 0.9323 | 0.08130        | 0.007
ρ_i(Cl)   | 25 | 0.44  | 0.56    | 1.00    | 0.7675 | 0.11096        | 0.012
ρ_i(SO4)  | 25 | 0.48  | 0.50    | 0.98    | 0.7528 | 0.10544        | 0.011
ρ_i(Na)   | 25 | 0.45  | 0.51    | 0.96    | 0.8007 | 0.09227        | 0.009
Valid     | 25 | 1.66  | 2.24    | 3.90    | 3.2533 | 0.34047        | 0.116
Table 4. Statistics.

        | ρ_i(HCO3) | ρ_i(Cl) | ρ_i(SO4) | ρ_i(Na) | Number of Votes
Valid   | 25        | 25      | 25       | 25      | 25
Missing | 0         | 0       | 0        | 0       | 0
