Article

Choquet-like Integrals with Multi-Neighborhood Approximation Numbers for Novel Covering Granular Reduction Methods

1
School of Mathematics and Data Science, Shaanxi University of Science and Technology, Xi’an 710021, China
2
Shaanxi Joint Laboratory of Artificial Intelligence, Shaanxi University of Science and Technology, Xi’an 710021, China
*
Author to whom correspondence should be addressed.
Mathematics 2023, 11(22), 4650; https://doi.org/10.3390/math11224650
Submission received: 8 October 2023 / Revised: 30 October 2023 / Accepted: 30 October 2023 / Published: 15 November 2023

Abstract: Covering granular reduction is an important issue in multi-covering information systems. The main existing methods for this problem rely on set operators, so how to solve it by quantitative analysis is an interesting topic. Furthermore, as a type of nonlinear fuzzy aggregation function (i.e., a quantitative tool), Choquet-like integrals with fuzzy measures are widely used in many fields. However, the fuzzy measures in Choquet-like integrals are typically specified subjectively rather than derived from data. In this work, we present two types of multi-neighborhood approximation numbers in multi-covering information systems, which are used to establish Choquet-like integrals. Furthermore, they are applied to the problem of granular reduction in multi-covering information systems. First, the notions of lower and upper multi-neighborhood approximation numbers are presented in a multi-covering information system, along with their properties. Moreover, some conditions under which multi-covering information systems induce the same lower and upper multi-neighborhood approximation numbers are given. Second, two covering granular reduction methods based on multi-neighborhood approximation numbers are presented for multi-covering information systems. Third, multi-neighborhood approximation numbers are used to establish Choquet-like integrals, which are applied to covering granular reduction. Finally, these methods are compared with existing methods through experiments that demonstrate their effectiveness and benefits.

1. Introduction

Granular computing [1] originated from the fuzzy information granulation theory proposed by Zadeh and is a method for knowledge representation and data mining. The core idea of granular computing is to divide information into different “granulations” according to certain relationships and then obtain the information in the data by processing these information granulations. In recent years, granular computing has been widely used in the fields of data mining [2], pattern recognition [3], and others [4]. Rough set theory is a classical granular computing model. Rough set theory, as a tool to organize, conceptualize, and analyze various types of data in data mining, was presented by Pawlak [5] in 1982. It is well known that the relations between objects are equivalence relations in Pawlak’s rough sets. However, this requirement is too restrictive in practice [6,7]. Therefore, by extending equivalence relations or partitions, rough sets have been extended to generalized rough sets, such as binary relation-based rough sets [8], neighborhood-based rough sets [9], and covering-based rough sets [10,11].
Covering-based rough sets [12,13,14] were proposed to handle covering data and have enriched Pawlak’s rough sets in many ways, such as covering approximation models [15,16], covering reduction problems [17], and covering axiomatic systems [18]. Furthermore, they have been used in many real applications, such as decision rule synthesis [19,20], knowledge reduction [21,22], and other fields [23,24]. In theory, covering-based rough set theory has been connected with many theories, such as lattice theory [25,26], matroid theory [27,28], and fuzzy set theory [29,30,31]. All these works have mainly been investigated in a covering approximation space. When coverings were extended to families of coverings, some researchers generalized these works to multi-covering information systems, such as multi-covering rough sets [32], variable precision multi-covering intuitionistic fuzzy rough sets [33], and attribute reduction methods in multi-covering intuitionistic fuzzy tables [34]. For multi-covering information systems, it is important to find those coverings which are not useful for data mining. To solve this problem, Wang et al. [17] studied data compression in multi-covering information systems for covering granular reduction. This is very interesting, as one can obtain from a multi-covering information system a new system with the same reducts as the original data. However, it is possible that after data compression, the dimension of the new information system remains large, and one cannot find reducible elements in the new information system. For example, we find that there is no reducible element in Example 9 of this paper using Wang’s method [17]. Therefore, a quantitative analysis method of covering granular reduction in multi-covering information systems is proposed in this paper. The motivations of this paper are as follows:
(1) Recently, quantitative analysis of covering-based rough sets has become an interesting topic, and different notions of upper approximation numbers have been presented for it. For example, Wang and Zhu presented the notion of the upper approximation number to establish a matroidal structure of covering-based rough sets in [35] as follows:
Suppose C is a covering of U. For all X ⊆ U, we define
f_C(X) = |{K ∈ C : K ∩ X ≠ ∅}|.
f_C(X) is called the upper approximation number of X with respect to C, where |{K ∈ C : K ∩ X ≠ ∅}| denotes the cardinality of {K ∈ C : K ∩ X ≠ ∅}.
In [36], Wang et al. defined another upper approximation number as follows:
Suppose (L, ∨, ∧, 0, 1) is a bounded lattice and C is a covering of L. For all x ∈ L,
f_C(x) = |{K ∈ C : K ∧ x ≠ 0}|
is called the upper approximation number of x.
All these definitions of upper approximation numbers are presented for a single covering of U (i.e., a covering approximation space (U, C)). Suppose Δ is a family of coverings of U. Then, we call (U, Δ) a multi-covering information system. We consider whether there exists a new definition of upper multi-neighborhood approximation numbers in a multi-covering information system, as well as a new definition of lower multi-neighborhood approximation numbers. Their corresponding applications in multi-covering information systems are also considered in this paper.
(2) Furthermore, (generalized) Choquet integrals [37,38,39] with fuzzy measures, as an important type of non-linear aggregation function, have been utilized in different areas, such as attribute reduction [40], face recognition [41], decision making [31], feature fusion [42], and others [43,44,45]. However, the corresponding fuzzy measures in (generalized) Choquet integrals are typically specified subjectively rather than derived from data. Therefore, according to different data characteristics, it is important to establish a type of fuzzy measure that suits the data itself. In this paper, we induce fuzzy measures from multi-neighborhood approximation numbers in generalized Choquet integrals (i.e., Choquet-like integrals), which are also used in covering granular reduction.
In this work, a pair of multi-neighborhood approximation numbers is defined as a measurement to quantify covering-based rough sets and multi-covering information systems. Then, several of their basic properties are presented, and some conditions under which multi-covering information systems induce the same lower and upper multi-neighborhood approximation numbers are given. Furthermore, multi-neighborhood approximation numbers are used to establish Choquet-like integrals. Finally, we propose covering granular reduction methods based on multi-neighborhood approximation numbers and the corresponding Choquet-like integrals in multi-covering information systems. Additionally, some experiments are given to demonstrate the effectiveness and benefits of our methods.
The rest of this work is organized as follows: Section 2 reviews several fundamental notions of covering-based rough sets. Section 3 gives the concepts of the lower and upper multi-neighborhood approximation numbers, as well as their properties. In Section 4, we present some conditions under which multi-covering information systems induce the same lower and upper multi-neighborhood approximation numbers. Section 5 introduces two types of covering granular reductions based on multi-neighborhood approximation numbers in multi-covering information systems. In Section 6, multi-neighborhood approximation numbers are used to establish Choquet-like integrals, which are applied to covering granular reduction. Finally, Section 7 concludes this paper and indicates further work.

2. Preliminaries

In this section, we present several basic definitions and propositions involved in covering-based rough sets and Choquet-like integrals based on overlap functions.

2.1. Covering-Based Rough Sets

A non-empty finite set is called a universe in this paper.
Definition 1
([10,46]). Suppose U is a universe and C is a family of subsets of U. If none of the subsets in C is empty and ⋃C = U, then C is called a covering of U. Moreover, (U, C) is called a covering approximation space.
For any covering C of U, all neighborhoods in it form a new covering. For every x ∈ U,
N_C(x) = ⋂{K ∈ C : x ∈ K}, Cov(C) = {N_C(x) : x ∈ U}.
Then, N_C(x) is called the neighborhood of x with respect to C, and Cov(C) is called the covering induced by C. Cov(C) is also a covering of U.
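As a small illustration (a sketch with toy data; the helper names `neighborhood` and `induced_covering` are ours, not from the paper), N_C(x) is the intersection of all blocks of C containing x, and Cov(C) collects these neighborhoods:

```python
# Toy sketch: N_C(x) is the intersection of all blocks of C that contain x,
# and Cov(C) is the set of all such neighborhoods.

def neighborhood(x, covering):
    """N_C(x): intersection of all K in C with x in K (x is in each, so non-empty)."""
    blocks = [K for K in covering if x in K]
    return set.intersection(*blocks)

def induced_covering(U, covering):
    """Cov(C) = {N_C(x) : x in U}, as a set of frozensets."""
    return {frozenset(neighborhood(x, covering)) for x in U}

U = {1, 2, 3, 4}
C = [{1, 2}, {2, 3}, {3, 4}, {1, 4}]
print(neighborhood(2, C))                                  # {2}
print(sorted(sorted(N) for N in induced_covering(U, C)))   # [[1], [2], [3], [4]]
```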
Suppose Δ is a family of coverings of U. We call (U, Δ) a multi-covering information system.
Definition 2
(Neighborhood [10,47]). Suppose (U, Δ) is a multi-covering information system. For any x ∈ U,
N_Δ(x) = ⋂{N_C(x) ∈ Cov(C) : C ∈ Δ}
is called the neighborhood of x with respect to Δ. When the family of coverings is clear, we omit the subscript Δ in the neighborhood.
For any x ∈ U, there is x ∈ N_Δ(x) ⊆ U. According to the above definition, we obtain a property of the neighborhood of a point.
Proposition 1
([10,47]). Suppose (U, Δ) is a multi-covering information system. For all x, y ∈ U, if y ∈ N_Δ(x), then N_Δ(y) ⊆ N_Δ(x).
Inspired by the above proposition, if y ∈ N_Δ(x) and x ∈ N_Δ(y), then N_Δ(x) = N_Δ(y). For a family of coverings, all neighborhoods form a new covering.
Definition 3
([10,47]). Suppose (U, Δ) is a multi-covering information system, and Cov(Δ) = {N_Δ(x) : x ∈ U}. We call Cov(Δ) the covering induced by Δ.
Cov(Δ) is also a covering of U. For any x ∈ U, N_Δ(x) is the minimal subset in Cov(Δ) that includes x; each element in Cov(Δ) cannot be written as the union of other elements in Cov(Δ).
Definition 4
(Approximations [10]). Suppose (U, Δ) is a multi-covering information system. For any X ⊆ U,
Δ̲(X) = {x ∈ U : N_Δ(x) ⊆ X}, and Δ̄(X) = {x ∈ U : N_Δ(x) ∩ X ≠ ∅}
are called the lower and upper neighborhood approximations of X with respect to Δ, respectively.
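In code, Definition 4 amounts to two set comprehensions; the sketch below uses a hypothetical neighborhood map rather than data from the paper:

```python
# Minimal sketch of Definition 4 (the neighborhood map nbr is hypothetical):
# the lower approximation keeps points whose neighborhood fits inside X;
# the upper approximation keeps points whose neighborhood meets X.

def lower_approx(X, U, nbr):
    return {x for x in U if nbr[x] <= X}    # N(x) ⊆ X

def upper_approx(X, U, nbr):
    return {x for x in U if nbr[x] & X}     # N(x) ∩ X ≠ ∅

nbr = {1: {1}, 2: {2, 3}, 3: {3}, 4: {3, 4}}
U = set(nbr)
X = {1, 2, 3}
print(sorted(lower_approx(X, U, nbr)))      # [1, 2, 3]
print(sorted(upper_approx(X, U, nbr)))      # [1, 2, 3, 4]
```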

2.2. Choquet-like Integrals Based on Overlap Functions

First, the definition of a fuzzy measure is listed in Definition 5.
Definition 5
([48]). Given a universe U and a set function m : P(U) → [0, 1], where P(U) is the power set of U, m is called a fuzzy measure on U if the following statements hold:
(1) 
m(∅) = 0, m(U) = 1;
(2) 
A, B ⊆ U and A ⊆ B imply m(A) ≤ m(B).
We recall the concept of an overlap function, which is a non-associative operator.
Definition 6
([49]). O : [0, 1]² → [0, 1] is called an overlap function if, for any a, b, c ∈ [0, 1], it satisfies the following conditions: (1) O(a, b) = O(b, a); (2) O(a, b) = 0 iff ab = 0; (3) O(a, b) = 1 iff ab = 1; (4) O(a, b) ≤ O(a, c) if b ≤ c; (5) O is continuous.
Some types of O are shown in Table 1.
Based on the notions of an overlap function and a fuzzy measure, the notion of a Choquet-like integral is proposed as follows:
Definition 7
([43]). Given an overlap function O and a real-valued function f : U → [0, 1] with U = {x_1, x_2, …, x_n}, a Choquet-like integral of f with respect to the fuzzy measure m and overlap function O is defined as:
∫^O f dm = Σ_{i=1}^{n} [O(m(X_(i)), f(x_(i))) − O(m(X_(i+1)), f(x_(i)))],
where {x_(1), x_(2), …, x_(n)} is a permutation of {x_1, x_2, …, x_n} such that f(x_(1)) ≤ f(x_(2)) ≤ … ≤ f(x_(n)), X_(i) = {x_(i), x_(i+1), …, x_(n)}, and X_(n+1) = ∅.
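The formula can be sketched as follows (helper names are ours; with the product overlap O(a, b) = ab, one admissible overlap function, the sum reduces to the classical Choquet integral, and the cardinality-based fuzzy measure below is a made-up example):

```python
# Sketch of Definition 7. With O(a, b) = a*b the expression telescopes to
# the classical Choquet integral; m below is a simple cardinality measure.

def choquet_like(f, m, U, O=lambda a, b: a * b):
    """Choquet-like integral of f w.r.t. fuzzy measure m and overlap O."""
    xs = sorted(U, key=f)                    # f(x_(1)) <= ... <= f(x_(n))
    total = 0.0
    for i in range(len(xs)):
        X_i = frozenset(xs[i:])              # X_(i) = {x_(i), ..., x_(n)}
        X_next = frozenset(xs[i + 1:])       # X_(n+1) is the empty set
        total += O(m(X_i), f(xs[i])) - O(m(X_next), f(xs[i]))
    return total

U = {1, 2, 3}
m = lambda A: len(A) / 3                     # a simple fuzzy measure
f = {1: 0.2, 2: 0.5, 3: 0.9}.__getitem__
print(round(choquet_like(f, m, U), 4))       # 0.5333 (= mean of f here)
```

With this particular measure the integral equals the arithmetic mean of f, which makes the result easy to check by hand.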

3. Main Properties of Multi-Neighborhood Approximation Numbers

In this section, the notions of lower and upper multi-neighborhood approximation numbers are presented in a multi-covering information system. Then, several properties of lower and upper multi-neighborhood approximation numbers are proposed. Moreover, some relationships between lower and upper multi-neighborhood approximation numbers are investigated.
Inspired by the definition of the lower and upper approximations in Definition 4, we present the definitions of lower and upper multi-neighborhood approximation numbers.
Definition 8.
Suppose (U, Δ) is a multi-covering information system. For every X ⊆ U,
f_Δ̲(X) = |{x ∈ U : N_Δ(x) ⊆ X}|, and f_Δ̄(X) = |{x ∈ U : N_Δ(x) ∩ X ≠ ∅}|
are called the lower and upper multi-neighborhood approximation numbers of X with respect to Δ, respectively.
Remark 1.
For Definition 8, we have the following two remarks:
(1) 
According to Definition 8, we can use multi-neighborhood approximation numbers to analyze and mine data from a more intuitive perspective. This goes beyond the previous set-based methods. For any P ⊆ Δ with P ≠ ∅, f_P̲(X) and f_P̄(X) are the lower and upper multi-neighborhood approximation numbers of X ⊆ U with respect to P, respectively.
(2) 
Definition 4 gives the notion of the neighborhood approximation operator in covering rough sets. Approximation is a core concept in rough set models; however, the main contribution of rough sets is knowledge representation rather than the models themselves. In this paper, Definition 8 is a quantitative representation of Definition 4. That is to say, multi-neighborhood approximation numbers are quantitative representations of neighborhood approximation operators. Hence, Definition 8 can be seen as a quantitative tool for knowledge representation. In the following studies, we find that multi-neighborhood approximation numbers satisfy almost all the properties of neighborhood approximation operators, especially the order relation between the lower and upper multi-neighborhood approximation numbers, i.e., “for every X ⊆ U, f_Δ̲(X) ≤ f_Δ̄(X)”. Therefore, they also have a certain approximation ability in knowledge representation.
Example 1.
Suppose S = (U, Δ) is a multi-covering information system, where U = {x_1, x_2, x_3, x_4, x_5, x_6} and Δ = {C_1, C_2, C_3}, with
C_1 = {K_1, K_2, K_3}, where K_1 = {x_1, x_2, x_3}, K_2 = {x_2, x_3, x_4}, K_3 = {x_3, x_4, x_5, x_6};
C_2 = {K_4, K_5, K_6}, where K_4 = {x_1, x_6}, K_5 = {x_2, x_3, x_4}, K_6 = {x_2, x_3, x_4, x_5};
C_3 = {K_7, K_8, K_9}, where K_7 = {x_1, x_2, x_3}, K_8 = {x_1, x_6}, K_9 = {x_2, x_3, x_4, x_5}.
We can calculate all N_{C_j}(x_i) (i = 1, …, 6; j = 1, 2, 3), which are listed in Table 2.
By Definition 2 and Table 2, all N_Δ(x_i) (i = 1, …, 6) are calculated in Table 3.
Suppose X = {x_1, x_2, x_3}. Then, the lower and upper multi-neighborhood approximation numbers of X are calculated according to Definition 8 and Table 3:
f_Δ̲(X) = |{x_1, x_2, x_3}| = 3, f_Δ̄(X) = |{x_1, x_2, x_3, x_4, x_5}| = 5.
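These numbers can be checked mechanically; the following sketch (helper names are ours, same data as Example 1 with x_i written as i) computes both approximation numbers:

```python
# Verification sketch for Example 1: N_Delta(x) intersects the blocks
# containing x across all three coverings.
from functools import reduce

def n_delta(x, delta):
    blocks = [K for C in delta for K in C if x in K]
    return reduce(set.intersection, blocks)

def lower_number(X, U, delta):
    """f_lower(X) = |{x in U : N_Delta(x) ⊆ X}|."""
    return sum(1 for x in U if n_delta(x, delta) <= X)

def upper_number(X, U, delta):
    """f_upper(X) = |{x in U : N_Delta(x) ∩ X ≠ ∅}|."""
    return sum(1 for x in U if n_delta(x, delta) & X)

U = {1, 2, 3, 4, 5, 6}
C1 = [{1, 2, 3}, {2, 3, 4}, {3, 4, 5, 6}]
C2 = [{1, 6}, {2, 3, 4}, {2, 3, 4, 5}]
C3 = [{1, 2, 3}, {1, 6}, {2, 3, 4, 5}]
delta = [C1, C2, C3]
X = {1, 2, 3}
print(lower_number(X, U, delta), upper_number(X, U, delta))  # 3 5
```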
By the properties of lower neighborhood approximations in Definition 4, we present seven properties of the lower multi-neighborhood approximation number in the following proposition.
Proposition 2.
Suppose (U, Δ) is a multi-covering information system. For every X, Y ⊆ U, the following statements about the lower multi-neighborhood approximation number hold:
(1) 
f_Δ̲(∅) = 0, f_Δ̲(U) = |U|;
(2) 
If X ⊆ Y, then f_Δ̲(X) ≤ f_Δ̲(Y);
(3) 
0 ≤ f_Δ̲(X) ≤ |U|;
(4) 
f_Δ̲(X) + f_Δ̲(Y) ≤ f_Δ̲(X ∪ Y) + f_Δ̲(X ∩ Y);
(5) 
For any C ∈ Δ, f_{C}̲(X) ≤ f_Δ̲(X);
(6) 
For any Δ′ ⊆ Δ, f_Δ′̲(X) ≤ f_Δ̲(X);
(7) 
f_Δ̲(X) + f_Δ̲(X^¬) ≤ |U|, where X^¬ is the complement of X in U.
Proof. 
(1): Since N_Δ(x) ≠ ∅ for every x ∈ U, f_Δ̲(∅) = |{x ∈ U : N_Δ(x) ⊆ ∅}| = |∅| = 0. Since N_Δ(x) ⊆ U for any x ∈ U, f_Δ̲(U) = |{x ∈ U : N_Δ(x) ⊆ U}| = |U|;
(2): For every x ∈ U, if N_Δ(x) ⊆ X, then N_Δ(x) ⊆ Y. Therefore, {x ∈ U : N_Δ(x) ⊆ X} ⊆ {x ∈ U : N_Δ(x) ⊆ Y}. Hence, f_Δ̲(X) ≤ f_Δ̲(Y);
(3): By Statement (2), f_Δ̲(∅) ≤ f_Δ̲(X) ≤ f_Δ̲(U). By Statement (1), 0 ≤ f_Δ̲(X) ≤ |U|;
(4): Since {x ∈ U : N_Δ(x) ⊆ X} ∩ {x ∈ U : N_Δ(x) ⊆ Y} = {x ∈ U : N_Δ(x) ⊆ X ∩ Y} and {x ∈ U : N_Δ(x) ⊆ X} ∪ {x ∈ U : N_Δ(x) ⊆ Y} ⊆ {x ∈ U : N_Δ(x) ⊆ X ∪ Y}, we have f_Δ̲(X) + f_Δ̲(Y) ≤ f_Δ̲(X ∪ Y) + f_Δ̲(X ∩ Y);
(5): Inspired by Definition 2, for every x ∈ U and C ∈ Δ, N_Δ(x) ⊆ N_C(x). Hence, |{x ∈ U : N_C(x) ⊆ X}| ≤ |{x ∈ U : N_Δ(x) ⊆ X}|, i.e., f_{C}̲(X) ≤ f_Δ̲(X);
(6): Inspired by Definition 2, for any x ∈ U and Δ′ ⊆ Δ, N_Δ(x) ⊆ N_Δ′(x). Hence, |{x ∈ U : N_Δ′(x) ⊆ X}| ≤ |{x ∈ U : N_Δ(x) ⊆ X}|, i.e., f_Δ′̲(X) ≤ f_Δ̲(X);
(7): Inspired by Statement (4), f_Δ̲(X) + f_Δ̲(X^¬) ≤ f_Δ̲(X ∪ X^¬) + f_Δ̲(X ∩ X^¬) = f_Δ̲(U) + f_Δ̲(∅) = |U|. This completes the proof. □
Example 2.
Suppose (U, Δ) is the multi-covering information system in Example 1, X = {x_1, x_2, x_3}, Y = {x_1, x_2, x_3, x_6}, and Δ′ = {C_1, C_2}. Then,
f_Δ̲(X) = 3, f_Δ̲(X^¬) = |{x_6}| = 1, f_Δ̲(Y) = |{x_1, x_2, x_3, x_6}| = 4, f_Δ̲(X ∪ Y) = 4, f_Δ̲(X ∩ Y) = 3, f_{C_1}̲(X) = 3, f_Δ′̲(X) = 3.
Hence, we have
(1) 
f_Δ̲(∅) = 0, f_Δ̲(U) = |U| = 6;
(2) 
f_Δ̲(X) ≤ f_Δ̲(Y);
(3) 
0 ≤ f_Δ̲(X) ≤ |U|;
(4) 
f_Δ̲(X) + f_Δ̲(Y) ≤ f_Δ̲(X ∪ Y) + f_Δ̲(X ∩ Y);
(5) 
f_{C_1}̲(X) ≤ f_Δ̲(X);
(6) 
f_Δ′̲(X) ≤ f_Δ̲(X);
(7) 
f_Δ̲(X) + f_Δ̲(X^¬) ≤ |U|.
Inspired by the properties of the lower multi-neighborhood approximation number, we present seven properties of the upper multi-neighborhood approximation number in the following proposition.
Proposition 3.
Suppose (U, Δ) is a multi-covering information system. For any X, Y ⊆ U, the following properties of the upper multi-neighborhood approximation number hold:
(1) 
f_Δ̄(∅) = 0, f_Δ̄(U) = |U|;
(2) 
If X ⊆ Y, then f_Δ̄(X) ≤ f_Δ̄(Y);
(3) 
0 ≤ f_Δ̄(X) ≤ |U|;
(4) 
f_Δ̄(X) + f_Δ̄(Y) = f_Δ̄(X ∪ Y) + f_Δ̄(X ∩ Y);
(5) 
For every C ∈ Δ, f_{C}̄(X) ≥ f_Δ̄(X);
(6) 
For every Δ′ ⊆ Δ, f_Δ̄(X) ≤ f_Δ′̄(X);
(7) 
For every x ∈ U, f_Δ̄({x}) ≥ 1.
Proof. 
(1): f_Δ̄(∅) = |{x ∈ U : N_Δ(x) ∩ ∅ ≠ ∅}| = |∅| = 0. Since N_Δ(x) ∩ U = N_Δ(x) ≠ ∅ for any x ∈ U, f_Δ̄(U) = |{x ∈ U : N_Δ(x) ∩ U ≠ ∅}| = |{x : x ∈ U}| = |U|;
(2): For every x ∈ U, N_Δ(x) ∩ Y ≠ ∅ if N_Δ(x) ∩ X ≠ ∅. Hence, {x ∈ U : N_Δ(x) ∩ X ≠ ∅} ⊆ {x ∈ U : N_Δ(x) ∩ Y ≠ ∅}. Therefore, f_Δ̄(X) ≤ f_Δ̄(Y);
(3): Inspired by Statement (2), f_Δ̄(∅) ≤ f_Δ̄(X) ≤ f_Δ̄(U). According to (1), 0 ≤ f_Δ̄(X) ≤ |U|;
(4): f_Δ̄(X) + f_Δ̄(Y) = |{x ∈ U : N_Δ(x) ∩ X ≠ ∅}| + |{x ∈ U : N_Δ(x) ∩ Y ≠ ∅}| = |{x ∈ U : N_Δ(x) ∩ (X ∪ Y) ≠ ∅}| + |{x ∈ U : N_Δ(x) ∩ (X ∩ Y) ≠ ∅}| = f_Δ̄(X ∪ Y) + f_Δ̄(X ∩ Y);
(5): Inspired by Definition 2, for every x ∈ U and C ∈ Δ, N_Δ(x) ⊆ N_C(x). Hence, |{x ∈ U : N_Δ(x) ∩ X ≠ ∅}| ≤ |{x ∈ U : N_C(x) ∩ X ≠ ∅}|, i.e., f_Δ̄(X) ≤ f_{C}̄(X);
(6): Inspired by Definition 2, for every x ∈ U and Δ′ ⊆ Δ, N_Δ(x) ⊆ N_Δ′(x). Hence, |{x ∈ U : N_Δ(x) ∩ X ≠ ∅}| ≤ |{x ∈ U : N_Δ′(x) ∩ X ≠ ∅}|, i.e., f_Δ̄(X) ≤ f_Δ′̄(X);
(7): Since x ∈ N_Δ(x) for every x ∈ U, f_Δ̄({x}) ≥ 1. This completes the proof. □
Example 3.
Suppose (U, Δ) is the multi-covering information system in Example 1, X = {x_1, x_2, x_3}, Y = {x_1, x_2, x_3, x_6}, and Δ′ = {C_1, C_2}. Then,
f_Δ̄(Y) = |{x_1, x_2, x_3, x_4, x_5, x_6}| = 6, f_Δ̄(X) = 5, f_Δ̄({x_3}) = 4, f_Δ̄(X ∪ Y) = 6, f_Δ̄(X ∩ Y) = 5, f_{C_1}̄(X) = 6, f_Δ′̄(X) = 5.
Therefore, we have
(1) 
f_Δ̄(∅) = 0, f_Δ̄(U) = |U| = 6;
(2) 
f_Δ̄(X) ≤ f_Δ̄(Y);
(3) 
0 ≤ f_Δ̄(X) ≤ |U|;
(4) 
f_Δ̄(X) + f_Δ̄(Y) = f_Δ̄(X ∪ Y) + f_Δ̄(X ∩ Y);
(5) 
f_{C_1}̄(X) ≥ f_Δ̄(X);
(6) 
f_Δ̄(X) ≤ f_Δ′̄(X);
(7) 
f_Δ̄({x_3}) ≥ 1.
The following results show two relationships between the lower and upper multi-neighborhood approximation numbers. Rough set theory mainly realizes the approximate representation of knowledge through the inclusion relation between the lower and upper approximation operators. Therefore, it is particularly important to study the order relationship between the lower and upper multi-neighborhood approximation numbers.
Proposition 4.
Suppose (U, Δ) is a multi-covering information system. For every X ⊆ U, f_Δ̲(X) ≤ f_Δ̄(X).
Proof. 
For every x ∈ U, if N_Δ(x) ⊆ X, then N_Δ(x) ∩ X = N_Δ(x) ≠ ∅, since N_Δ(x) is non-empty. Therefore, f_Δ̲(X) = |{x ∈ U : N_Δ(x) ⊆ X}| ≤ |{x ∈ U : N_Δ(x) ∩ X ≠ ∅}| = f_Δ̄(X). This completes the proof. □
Example 4.
Suppose (U, Δ) is the multi-covering information system in Example 1 and X = {x_1, x_2, x_3}. Since f_Δ̲(X) = 3 and f_Δ̄(X) = 5, f_Δ̲(X) ≤ f_Δ̄(X).
The following proposition gives the complementary relationship between the lower and upper multi-neighborhood approximation numbers.
Proposition 5.
Suppose (U, Δ) is a multi-covering information system. For any X ⊆ U, f_Δ̄(X) + f_Δ̲(X^¬) = |U|.
Proof. 
f_Δ̄(X) + f_Δ̲(X^¬) = |{x ∈ U : N_Δ(x) ∩ X ≠ ∅}| + |{x ∈ U : N_Δ(x) ⊆ X^¬}| = |{x ∈ U : N_Δ(x) ∩ X ≠ ∅}| + |{x ∈ U : N_Δ(x) ∩ X = ∅}| = |{x ∈ U : N_Δ(x) ∩ X ≠ ∅} ∪ {x ∈ U : N_Δ(x) ∩ X = ∅}| = |U|.
This completes the proof. □
Example 5.
Suppose (U, Δ) is the multi-covering information system in Example 1 and X = {x_1, x_2, x_3}. Since f_Δ̲(X^¬) = 1 and f_Δ̄(X) = 5, f_Δ̄(X) + f_Δ̲(X^¬) = |U| = 6.

4. Conditions under Which Multi-Covering Information Systems Produce the Same Multi-Neighborhood Approximation Numbers

In this section, we propose several conditions under which multi-covering information systems produce the same multi-neighborhood approximation numbers. Firstly, we consider conditions for a special multi-covering information system that contains only one covering, i.e., |Δ| = 1 in the multi-covering information system (U, Δ).
The notion of reducible elements in a covering is used to find those covering elements that are redundant for data mining through covering-based rough sets [10]. Suppose C is a covering of U and K ∈ C. K is called a reducible element in C if K can be expressed as a union of some elements in C ∖ {K}; otherwise, K is an irreducible element in C. If all elements in C are irreducible, then C is irreducible; otherwise, C is reducible. The following definition presents this concept.
Definition 9
([10]). Suppose C is a covering of U. Then, the family of all irreducible elements of C is called the reduct of C, denoted as Red(C).
As shown in [10], the reduct of C is also a covering and has no reducible elements. The relationships between f_{C}̲(X) and f_{Red(C)}̲(X), and between f_{C}̄(X) and f_{Red(C)}̄(X), are presented below for any X ⊆ U.
Proposition 6.
Suppose (U, {C}) is a multi-covering information system. For every X ⊆ U,
f_{C}̲(X) = f_{Red(C)}̲(X), and f_{C}̄(X) = f_{Red(C)}̄(X).
Proof. 
Inspired by Definition 9, we have N_C(x) = N_{Red(C)}(x) for all x ∈ U. Hence, f_{C}̲(X) = f_{Red(C)}̲(X) and f_{C}̄(X) = f_{Red(C)}̄(X). This completes the proof. □
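Red(C) can be computed directly: a block is reducible exactly when it equals the union of the other blocks it contains. The sketch below uses our own function name and toy data:

```python
# Sketch of computing Red(C): a block K is reducible when it equals the
# union of the other blocks contained in it, so deleting it does not
# change any neighborhood.

def reduct(covering):
    """Return the irreducible elements of a covering (as frozensets)."""
    blocks = [frozenset(K) for K in covering]
    keep = []
    for K in blocks:
        inside = [B for B in blocks if B != K and B <= K]
        union = frozenset().union(*inside) if inside else frozenset()
        if union != K:                 # K is not a union of other blocks
            keep.append(K)
    return keep

C = [{1}, {2}, {1, 2}, {1, 2, 3}]
print([sorted(K) for K in reduct(C)])  # [[1], [2], [1, 2, 3]]
```

Here {1, 2} is reducible because it equals {1} ∪ {2}, while {1, 2, 3} is irreducible.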
To find another condition for this special multi-covering information system, we present the following definition.
Definition 10.
Suppose C_1 and C_2 are two coverings of U. We call C_1 and C_2 companions if, for every x ∈ U, N_{C_1}(x) = N_{C_2}(x). If C_1 and C_2 are companions, we denote this by C_1 co C_2.
Based on Definition 10, the following proposition about companions is presented.
Proposition 7.
Suppose (U, {C_1}) and (U, {C_2}) are two multi-covering information systems. If C_1 co C_2, then for any X ⊆ U,
f_{C_1}̲(X) = f_{C_2}̲(X), and f_{C_1}̄(X) = f_{C_2}̄(X).
Proof. 
According to Definition 10, it is immediate. This completes the proof. □
Then, we find conditions for any multi-covering information system.
Suppose (U, Δ) is a multi-covering information system and C_1, C_2, C_3 ∈ Δ. We can obtain the following facts:
(I)  
C_1 co C_1;
(II)  
If C_1 co C_2, then C_2 co C_1;
(III) 
If C_1 co C_2 and C_2 co C_3, then C_1 co C_3.
Therefore, co is an equivalence relation on Δ. That is to say, we can classify Δ by this equivalence relation co and denote the partition with respect to co as Δ/co. We use [C] to denote the equivalence class of all elements in Δ that are co-equivalent to C.
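The partition Δ/co can be computed by grouping coverings with identical neighborhood maps; the following sketch uses toy coverings and our own helper names:

```python
# Sketch: group the coverings of Delta into co-equivalence classes by
# their induced neighborhood maps (toy data).

def neighborhood_map(U, C):
    """x -> N_C(x) for all x, as a tuple keyed by sorted(U)."""
    def n(x):
        blocks = [frozenset(K) for K in C if x in K]
        return frozenset.intersection(*blocks)
    return tuple(n(x) for x in sorted(U))

def co_classes(U, delta):
    """Delta/co: coverings with identical neighborhood maps are companions."""
    classes = {}
    for C in delta:
        classes.setdefault(neighborhood_map(U, C), []).append(C)
    return list(classes.values())

U = {1, 2, 3}
C1 = [{1, 2}, {2, 3}]
C2 = [{1, 2}, {2}, {2, 3}]   # same neighborhoods as C1, so C1 co C2
C3 = [{1, 2, 3}]             # one coarse block: different neighborhoods
print(len(co_classes(U, [C1, C2, C3])))  # 2 equivalence classes
```

A base of Δ in the sense of Definition 11 below can then be obtained by picking exactly one covering from each class.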
Proposition 8.
Suppose (U, Δ) is a multi-covering information system, X ⊆ U, Δ/co = {[C_i] : 1 ≤ i ≤ m}, and Δ_1 ⊆ Δ. If Δ_1 ∩ [C_i] ≠ ∅ for any 1 ≤ i ≤ m, then
f_{Δ_1}̲(X) = f_Δ̲(X), and f_{Δ_1}̄(X) = f_Δ̄(X).
Proof. 
Suppose f_{Δ_1}̲(X) ≠ f_Δ̲(X); then, there exists x ∈ U such that N_Δ(x) ⊆ X but N_{Δ_1}(x) ⊄ X. Therefore, there exists C ∈ Δ such that Δ_1 ∩ [C] = ∅, which contradicts Δ_1 ∩ [C_i] ≠ ∅ for any 1 ≤ i ≤ m. Therefore, f_{Δ_1}̲(X) = f_Δ̲(X). Suppose f_{Δ_1}̄(X) ≠ f_Δ̄(X). Then, there exists x ∈ U such that N_{Δ_1}(x) ∩ X ≠ ∅ but N_Δ(x) ∩ X = ∅. Hence, there exists C ∈ Δ such that Δ_1 ∩ [C] = ∅, which contradicts Δ_1 ∩ [C_i] ≠ ∅ for any 1 ≤ i ≤ m. Hence, f_{Δ_1}̄(X) = f_Δ̄(X). This completes the proof. □
Inspired by the above proposition, we present the definition of a base of Δ .
Definition 11.
Suppose (U, Δ) is a multi-covering information system, Δ/co = {[C_i] : 1 ≤ i ≤ m}, and Δ_1 ⊆ Δ. We call Δ_1 a base of Δ if, for any 1 ≤ i ≤ m, |Δ_1 ∩ [C_i]| = 1.
Based on Definition 11, Proposition 8 can be represented as shown in the following corollary.
Corollary 1.
Suppose (U, Δ) is a multi-covering information system, X ⊆ U, and Δ_1 ⊆ Δ. If Δ_1 is a base of Δ, then
f_{Δ_1}̲(X) = f_Δ̲(X), and f_{Δ_1}̄(X) = f_Δ̄(X).
Proof. 
According to Proposition 8 and Definition 11, it is immediate. This completes the proof. □

5. Multi-Neighborhood Approximation Numbers for Covering Granular Reductions in Multi-Covering Information Systems

In this section, two types of covering granular reductions based on multi-neighborhood approximation numbers are presented in multi-covering information systems. Moreover, corresponding methods are compared with others through an example.

5.1. Two New Types of Covering Granular Reductions in Multi-Covering Information Systems

In this subsection, the definition of a lower multi-neighborhood approximation number reduct is presented first.
Definition 12.
Suppose IS = (U, Δ) is a multi-covering information system and C ∈ Δ. Then, C is called L-superfluous in IS if f_{Δ∖{C}}̲(X) = f_Δ̲(X) for every X ⊆ U; otherwise, C is called L-indispensable in IS. The set of all L-indispensable elements in IS is called the L-core of IS, denoted as Core_L(IS). Assume P ⊆ Δ; then, P is referred to as a lower multi-neighborhood approximation number reduct of IS if P satisfies the following conditions:
(1) 
For every X ⊆ U, f_P̲(X) = f_Δ̲(X);
(2) 
For every C ∈ P, there exists X ⊆ U such that f_{P∖{C}}̲(X) ≠ f_P̲(X).
Method 1 shows the steps for computing all lower multi-neighborhood approximation number reducts in a multi-covering information system.
Method 1: The method for lower multi-neighborhood approximation number reducts of IS.
Input: A multi-covering information system IS = (U, Δ).
Output: All lower multi-neighborhood approximation number reducts of IS.
  • Step 1: Calculate N_C(x) for all x ∈ U and C ∈ Δ;
  • Step 2: Calculate N_Δ(x) for all x ∈ U according to Definition 2;
  • Step 3: Calculate |{X ⊆ U : f_P̲(X) = f_Δ̲(X)}| for all P ⊆ Δ;
  • Step 4: If |{X ⊆ U : f_P̲(X) = f_Δ̲(X)}| = 2^|U| and, for every C ∈ P, |{X ⊆ U : f_{P∖{C}}̲(X) = f_Δ̲(X)}| < 2^|U|, then P is a lower multi-neighborhood approximation number reduct of IS;
  • Step 5: Obtain all lower multi-neighborhood approximation number reducts of IS.
For Method 1, the time complexity of Steps 1 and 2 is O(|U||Δ||C|), where |C| = max_{C_i ∈ Δ} |C_i|; the time complexity of Steps 3–5 is O(2^{|Δ|+|U|}|U|). Hence, the time complexity of Method 1 is O(|U||Δ||C| + 2^{|Δ|+|U|}|U|). Inspired by Method 1, we consider the house evaluation issue in [17] using Method 1.
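Method 1 can be sketched as a brute-force search (our own function names; exponential, as the complexity bound above indicates, so only suitable for tiny systems; the data below are the coverings of Example 1, with x_i written as i and coverings indexed 0–2):

```python
# Brute-force sketch of Method 1: enumerate all P subset of Delta and keep
# the minimal ones that preserve every lower approximation number.
from itertools import chain, combinations
from functools import reduce

def n_delta(x, delta):
    """N_Delta(x): intersect every block containing x across all coverings."""
    blocks = [K for C in delta for K in C if x in K]
    return reduce(set.intersection, blocks)

def lower_numbers(U, delta):
    """Map every subset X of U (as a sorted tuple) to f_lower(X)."""
    nbrs = {x: n_delta(x, delta) for x in U}
    subsets = chain.from_iterable(combinations(sorted(U), r)
                                  for r in range(len(U) + 1))
    return {X: sum(1 for x in U if nbrs[x] <= set(X)) for X in subsets}

def lower_reducts(U, delta):
    """All minimal index sets P preserving every lower approximation number."""
    target = lower_numbers(U, delta)
    idx = range(len(delta))
    candidates = [set(c) for r in range(1, len(delta) + 1)
                  for c in combinations(idx, r)]
    keeps = [P for P in candidates
             if lower_numbers(U, [delta[i] for i in sorted(P)]) == target]
    return [P for P in keeps if not any(Q < P for Q in keeps)]

U = {1, 2, 3, 4, 5, 6}
C1 = [{1, 2, 3}, {2, 3, 4}, {3, 4, 5, 6}]
C2 = [{1, 6}, {2, 3, 4}, {2, 3, 4, 5}]
C3 = [{1, 2, 3}, {1, 6}, {2, 3, 4, 5}]
print(lower_reducts(U, [C1, C2, C3]))  # [{0, 1}, {0, 2}]
```

On this data both {C_1, C_2} and {C_1, C_3} preserve all lower multi-neighborhood approximation numbers, while no single covering does.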
Example 6.
Suppose U = {x_1, …, x_15} is a set of fifteen houses and E = {price, structure, surrounding} is a set of attributes, where {high, middle, low} is the value set of “price”, {reasonable, ordinary, unreasonable} is the value set of “structure”, and {quiet, noisy, quite noisy} is the value set of “surrounding”. Four specialists, {A, B, C, D}, evaluate the attributes of these houses, and their evaluation results for the same attribute may differ from one another. The evaluation results are listed below.
For the attribute “price”:
A: high = {x_1, x_2, x_4, x_10, x_15}, middle = {x_6, x_8, x_9, x_13, x_14}, low = {x_3, x_5, x_7, x_11, x_12};
B: high = {x_1, x_2, x_4, x_15}, middle = {x_6, x_8, x_9, x_10, x_13, x_14}, low = {x_3, x_5, x_7, x_11, x_12};
C: high = {x_1, x_2, x_8, x_10, x_15}, middle = {x_4, x_6, x_9, x_13, x_14}, low = {x_3, x_5, x_7, x_11, x_12};
D: high = {x_1, x_2, x_8, x_15}, middle = {x_4, x_6, x_9, x_10, x_13, x_14}, low = {x_3, x_5, x_7, x_11, x_12}.
For the attribute “surrounding”:
A: quiet = {x_3, x_5, x_6, x_9, x_13, x_14}, noisy = {x_1, x_2, x_4, x_7, x_8, x_10, x_11, x_12}, quite noisy = {x_15};
B: quiet = {x_3, x_5, x_6, x_9, x_13, x_14}, noisy = {x_2, x_4, x_7, x_8, x_10, x_11, x_12, x_15}, quite noisy = {x_1};
C: quiet = {x_3, x_5, x_6, x_9, x_13, x_14}, noisy = {x_2, x_4, x_7, x_8, x_10, x_11, x_12, x_15}, quite noisy = {x_1};
D: quiet = {x_3, x_5, x_6, x_9, x_13, x_14}, noisy = {x_1, x_4, x_7, x_8, x_10, x_11, x_12, x_15}, quite noisy = {x_2}.
For the attribute “structure”:
A: reasonable = {x_1, x_2, x_3, x_7, x_11, x_12}, ordinary = {x_5, x_6, x_9, x_13, x_14}, unreasonable = {x_4, x_8, x_10, x_15};
B: reasonable = {x_1, x_7, x_11, x_12}, ordinary = {x_3, x_5, x_6, x_9, x_13, x_14}, unreasonable = {x_2, x_4, x_8, x_10, x_15};
C: reasonable = {x_2, x_3, x_7, x_11, x_12, x_15}, ordinary = {x_6, x_9, x_13, x_14}, unreasonable = {x_1, x_4, x_5, x_8, x_10};
D: reasonable = {x_5, x_7, x_11, x_12, x_15}, ordinary = {x_3, x_5, x_6, x_9, x_13, x_14}, unreasonable = {x_1, x_2, x_4, x_8, x_10}.
Then for every attribute, we can obtain a covering, which embodies a kind of uncertainty.
For the attribute “price”, we have
C 1 = { { x 1 , x 2 , x 4 , x 8 , x 10 , x 15 } , { x 4 , x 6 , x 8 , x 9 , x 10 , x 13 , x 14 } , { x 3 , x 5 , x 7 , x 11 , x 12 } } .
For the attribute “surrounding”, we have
C 2 = { { x 3 , x 5 , x 6 , x 9 , x 13 , x 14 } , { x 1 , x 2 , x 4 , x 7 , x 8 , x 10 , x 11 , x 12 , x 15 } , { x 1 , x 2 , x 15 } } .
For the attribute “structure”, we have
C 3 = { { x 1 , x 2 , x 3 , x 5 , x 7 , x 11 , x 12 , x 15 } , { x 3 , x 5 , x 6 , x 9 , x 13 , x 14 } , { x 1 , x 2 , x 3 , x 4 , x 5 , x 8 , x 10 , x 15 } } .
Hence, I S = ( U , Δ ) is a multi-covering information system, where U = { x 1 , … , x 15 } and Δ = { C 1 , C 2 , C 3 } , as in [17].
Then, we can use Method 1 to compute all lower multi-neighborhood approximation number reducts in the multi-covering information system in Example 6. Assume P 1 = { C 1 } , P 2 = { C 2 } , P 3 = { C 3 } , P 4 = { C 1 , C 2 } , P 5 = { C 1 , C 3 } , P 6 = { C 2 , C 3 } . We calculate | { X ⊆ U : f P s ̲ ( X ) = f Δ ̲ ( X ) } | ( s = 1 , 2 , 3 , 4 , 5 , 6 ) , and all results are listed in Table 4.
We have 2^{ | U | } = 2^{15} = 32,768 . According to Step 4 in Method 1, P 4 is a lower multi-neighborhood approximation number reduct of I S .
Remark 2.
A covering is an abstract concept; many different types of coverings arise from different forms of data in real applications, and different coverings may lead to different ways of processing the data. In [11], we used every attribute to induce a partition of real data (from UCI data sets) and then aggregated all partitions into a covering. In this paper, we also use partitions to induce coverings, as in Example 6.
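To make this remark concrete, the following sketch aggregates the four evaluators' "price" partitions of Example 6 into one covering. The union-per-label aggregation rule is our assumption about how Example 6 was built, not a construction stated in this paper; it reproduces C 1 exactly.

```python
# Aggregating several evaluators' label classes into one covering block per
# label, illustrated on the "price" attribute of Example 6. The union-per-label
# rule is an assumption on our part; it reproduces C1 (and also C2) exactly.

U = set(range(1, 16))  # x1, ..., x15 represented by their indices

# "price" classes given by the four evaluators A, B, C, D (Example 6)
price = {
    "A": {"high": {1, 2, 4, 10, 15}, "middle": {6, 8, 9, 13, 14}, "low": {3, 5, 7, 11, 12}},
    "B": {"high": {1, 2, 4, 15}, "middle": {6, 8, 9, 10, 13, 14}, "low": {3, 5, 7, 11, 12}},
    "C": {"high": {1, 2, 8, 10, 15}, "middle": {4, 6, 9, 13, 14}, "low": {3, 5, 7, 11, 12}},
    "D": {"high": {1, 2, 8, 15}, "middle": {4, 6, 9, 10, 13, 14}, "low": {3, 5, 7, 11, 12}},
}

def aggregate(partitions):
    """One covering block per label: the union of that label's classes over
    all evaluators. Blocks may overlap, so the result is a covering of U
    rather than a partition."""
    labels = next(iter(partitions.values())).keys()
    cover = [set().union(*(p[lab] for p in partitions.values())) for lab in labels]
    assert set().union(*cover) == U  # sanity check: the blocks cover U
    return cover

C1 = aggregate(price)  # [{1,2,4,8,10,15}, {4,6,8,9,10,13,14}, {3,5,7,11,12}]
```

The same rule recovers C 2 from the "surrounding" classes; the listed C 3 , however, contains x 3 in its third block although no evaluator labels x 3 as unreasonable, so the aggregation actually used for "structure" may differ slightly.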
Inspired by Definition 12, the definition of a lower multi-neighborhood approximation number reduct is proposed in the following definition.
Definition 13.
Suppose I S = ( U , Δ ) is a multi-covering information system and C ∈ Δ . Then, C is called U-superfluous in I S if f Δ − { C } ¯ ( X ) = f Δ ¯ ( X ) for any X ⊆ U ; otherwise, C is called U-indispensable in I S . The set of all U-indispensable elements in I S is called the U-core of I S , denoted as C o r e U ( Δ ) . Assume P ⊆ Δ . Then, P is referred to as an upper multi-neighborhood approximation number reduct of I S if for any X ⊆ U , P satisfies the following conditions:
(1) 
f P ¯ ( X ) = f Δ ¯ ( X ) ;
(2) 
∀ C ∈ P , f P − { C } ¯ ( X ) ≠ f P ¯ ( X ) .
Method 2 shows us how to compute all upper approximation number reducts in a multi-covering information system.
Method 2: The algorithm for upper multi-neighborhood approximation number reducts of I S .
Input: A multi-covering information system I S = ( U , Δ ) .
Output: All upper multi-neighborhood approximation number reducts of I S .
  • Step 1: Calculate N C ( x ) for all x ∈ U and C ∈ Δ ;
  • Step 2: Calculate N Δ ( x ) for all x ∈ U according to Definition 2;
  • Step 3: Calculate | { X ⊆ U : f P ¯ ( X ) = f Δ ¯ ( X ) } | for all P ⊆ Δ ;
  • Step 4: If | { X ⊆ U : f P ¯ ( X ) = f Δ ¯ ( X ) } | = 2^{ | U | } and | { X ⊆ U : f P − { C } ¯ ( X ) = f Δ ¯ ( X ) } | < 2^{ | U | } for every C ∈ P , then P is an upper multi-neighborhood approximation number reduct of I S ;
  • Step 5: Obtain all upper multi-neighborhood approximation number reducts of I S .
For Method 2, the time complexity of Steps 1 and 2 is O ( | U | | Δ | | C | ) , where | C | = max C i ∈ Δ | C i | ; the time complexity of Steps 3–5 is O ( 2^{ | Δ | + | U | } | U | ) . Hence, the time complexity of Method 2 is O ( | U | | Δ | | C | + 2^{ | Δ | + | U | } | U | ) . Method 2 can also be used in Example 6.
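For readers who want to reproduce the counts reported below, here is a minimal brute-force sketch of Steps 1–4 on the small system of Example 1 (Tables 2 and 3). It assumes the definitions f P ̲ ( X ) = | { x ∈ U : N P ( x ) ⊆ X } | and f P ¯ ( X ) = | { x ∈ U : N P ( x ) ∩ X ≠ ∅ } | with N P ( x ) = ⋂ C ∈ P N C ( x ) ; these are our reading of the definitions earlier in the paper, consistent with Lemma 1 and with Tables 7 and 8.

```python
# Brute-force sketch of Methods 1 and 2 on the six-object system of Example 1
# (Tables 2 and 3). Assumed definitions (consistent with Lemma 1):
#   N_P(x)  = intersection of N_C(x) over C in P,
#   lower:  f_P(X) = |{x in U : N_P(x) is a subset of X}|,
#   upper:  f_P(X) = |{x in U : N_P(x) meets X}|.
from itertools import chain, combinations

U = [1, 2, 3, 4, 5, 6]  # x1, ..., x6

# N_{C_j}(x_i) taken from Table 2
N = {
    "C1": {1: {1, 2, 3}, 2: {2, 3}, 3: {3}, 4: {3, 4}, 5: {3, 4, 5, 6}, 6: {3, 4, 5, 6}},
    "C2": {1: {1, 6}, 2: {2, 3, 4}, 3: {2, 3, 4}, 4: {2, 3, 4}, 5: {2, 3, 4, 5}, 6: {1, 6}},
    "C3": {1: {1}, 2: {2, 3}, 3: {2, 3}, 4: {2, 3, 4, 5}, 5: {2, 3, 4, 5}, 6: {1, 6}},
}

def N_P(P):
    """Multi-neighborhood N_P(x) for a subfamily P of coverings."""
    return {x: set.intersection(*(set(N[C][x]) for C in P)) for x in U}

def f_lower(NP, X):
    return sum(1 for x in U if NP[x] <= X)

def f_upper(NP, X):
    return sum(1 for x in U if NP[x] & X)

def agree(P, f):
    """Step 3: the number of X subset of U with f_P(X) = f_Delta(X)."""
    ND, NP = N_P(["C1", "C2", "C3"]), N_P(P)
    subsets = chain.from_iterable(combinations(U, r) for r in range(len(U) + 1))
    return sum(1 for t in subsets if f(NP, set(t)) == f(ND, set(t)))
```

Here `agree(["C1", "C2"], f_lower)` returns 64 = 2^{|U|}, so P 4 = { C 1 , C 2 } passes Step 4, while a singleton such as { C 1 } returns fewer than 64.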
Example 7.
Suppose I S = ( U , Δ ) is the multi-covering information system in Example 6, where U = { x 1 , … , x 15 } and Δ = { C 1 , C 2 , C 3 } , and let P 1 = { C 1 } , P 2 = { C 2 } , P 3 = { C 3 } , P 4 = { C 1 , C 2 } , P 5 = { C 1 , C 3 } , P 6 = { C 2 , C 3 } . We use Method 2 to compute all upper multi-neighborhood approximation number reducts in this system. We calculate | { X ⊆ U : f P s ¯ ( X ) = f Δ ¯ ( X ) } | ( s = 1 , 2 , 3 , 4 , 5 , 6 ) , and all results are shown in Table 5.
Hence, P 4 is an upper multi-neighborhood approximation number reduct of I S .

5.2. A Comparison Analysis

In [17], Wang et al. presented a method to investigate data compression in multi-covering information systems. They induced a new multi-covering information system from the original one; by compression, one can obtain a relatively smaller image system that has the same reducts as the given original database. In [17], the definition of a reduct is introduced as follows:
Assume I S = ( U , Δ ) is a multi-covering information system and P ⊆ Δ . Then, P is referred to as a reduct of Δ if P satisfies the following conditions:
(1) 
⋂ P = ⋂ Δ ;
(2) 
∀ C ∈ P , ⋂ P ≠ ⋂ ( P − { C } ) .
Wang’s method [17] shows us how to compute all reducts defined by Wang et al. in a multi-covering information system.
Wang’s method [17]: The method for reducts of I S .
Input: A multi-covering information system I S = ( U , Δ ) .
Output: All reducts of I S .
  • Step 1: Define a mapping g : U → V such that ( V , g ( Δ ) ) is a g-induced multi-covering information system of ( U , Δ ) ;
  • Step 2: Find all P ⊆ Δ which satisfy the following conditions:
    (1) ⋂ g ( P ) = ⋂ g ( Δ ) ;
    (2) ∀ C ∈ P , ⋂ g ( P ) ≠ ⋂ g ( P − { C } ) ;
  • Step 3: Output each such P as a reduct of I S .
Example 8.
Continuing from Example 6, we have N Δ ( x 1 ) = N Δ ( x 2 ) = N Δ ( x 15 ) = { x 1 , x 2 , x 15 } , N Δ ( x 3 ) = N Δ ( x 5 ) = { x 3 , x 5 } , N Δ ( x 4 ) = N Δ ( x 8 ) = N Δ ( x 10 ) = { x 4 , x 8 , x 10 } , N Δ ( x 6 ) = N Δ ( x 9 ) = N Δ ( x 13 ) = N Δ ( x 14 ) = { x 6 , x 9 , x 13 , x 14 } , and N Δ ( x 7 ) = N Δ ( x 11 ) = N Δ ( x 12 ) = { x 7 , x 11 , x 12 } . Assume V = { y 1 , y 2 , y 3 , y 4 , y 5 } . In [17], Wang et al. defined a consistent function g : U V for Example 6 as follows:
g ( x ) = y 1 , x = x 1 , x 2 , x 15 ; y 2 , x = x 3 , x 5 ; y 3 , x = x 4 , x 8 , x 10 ; y 4 , x = x 6 , x 9 , x 13 , x 14 ; y 5 , x = x 7 , x 11 , x 12 .
Then g ( Δ ) = { g ( C 1 ) , g ( C 2 ) , g ( C 3 ) } , where
g ( C 1 ) = { { y 1 , y 3 } , { y 3 , y 4 } , { y 2 , y 5 } } ; g ( C 2 ) = { { y 2 , y 4 } , { y 1 , y 3 , y 5 } , { y 1 } } ; g ( C 3 ) = { { y 1 , y 2 , y 5 } , { y 2 , y 4 } , { y 1 , y 2 , y 3 } } .
Then, according to Step 2 in Wang’s method [17], P 4 is a reduct of I S .
All results are shown in Table 6.
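The compression step of Example 8 can be sketched as follows: objects with identical N Δ -neighborhoods are merged into one image point, and every covering block is mapped through the resulting consistent function g. The grouping criterion (equal N Δ -neighborhoods) is our reading of Example 8, not a statement of Wang's general construction [17].

```python
# Sketch of the compression in Example 8: objects with equal N_Delta-
# neighborhoods share one image point y_k, and every covering block is mapped
# through the resulting function g. The grouping criterion is our reading of
# Example 8, not Wang's general construction [17].
U = list(range(1, 16))
C1 = [{1, 2, 4, 8, 10, 15}, {4, 6, 8, 9, 10, 13, 14}, {3, 5, 7, 11, 12}]
C2 = [{3, 5, 6, 9, 13, 14}, {1, 2, 4, 7, 8, 10, 11, 12, 15}, {1, 2, 15}]
C3 = [{1, 2, 3, 5, 7, 11, 12, 15}, {3, 5, 6, 9, 13, 14}, {1, 2, 3, 4, 5, 8, 10, 15}]
Delta = [C1, C2, C3]

def N_Delta(x):
    # N_C(x) = intersection of the blocks of C containing x; N_Delta over all C
    per_cov = [set.intersection(*[K for K in C if x in K]) for C in Delta]
    return frozenset(set.intersection(*per_cov))

# group objects by their N_Delta-neighborhood; each class becomes one y_k
classes = {}
for x in U:
    classes.setdefault(N_Delta(x), []).append(x)
g = {x: f"y{k}"
     for k, xs in enumerate(sorted(classes.values()), 1)
     for x in xs}

def g_image(C):
    """Image covering g(C) on V."""
    return [frozenset(g[x] for x in K) for K in C]
```

With the labels y 1 , … , y 5 assigned in order of smallest member, g coincides with the consistent function of Example 8, and `g_image(C1)` returns { { y 1 , y 3 } , { y 3 , y 4 } , { y 2 , y 5 } } = g ( C 1 ) .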
Example 8 shows that our Methods 1 and 2 are effective. In [17], Wang et al. studied data compression in multi-covering information systems with their method; it is attractive in that one can obtain a relatively smaller image system which has the same reducts as the given original database. However, after data compression, the new information system may still be large, and no reducible element may be found in it. Finally, we give another example to show the advantage of our methods.
Example 9.
Continuing from Example 1, we use Method 1, Method 2, and Wang's method [17] to find reducts. Suppose P 1 = { C 1 } , P 2 = { C 2 } , P 3 = { C 3 } , P 4 = { C 1 , C 2 } , P 5 = { C 1 , C 3 } , P 6 = { C 2 , C 3 } .
(1) Method 1 is used to compute all lower multi-neighborhood approximation number reducts in the multi-covering information system in Example 1. We calculate | { X ⊆ U : f P s ̲ ( X ) = f Δ ̲ ( X ) } | ( s = 1 , 2 , 3 , 4 , 5 , 6 ) , and all results are shown in Table 7.
We have 2^{ | U | } = 2^{6} = 64 . According to Step 4 in Method 1, P 4 and P 5 are two lower multi-neighborhood approximation number reducts of I S .
(2) Method 2 is used to compute all upper multi-neighborhood approximation number reducts in the multi-covering information system in Example 1. We calculate | { X ⊆ U : f P s ¯ ( X ) = f Δ ¯ ( X ) } | ( s = 1 , 2 , 3 , 4 , 5 , 6 ) , and all results are shown in Table 8.
Hence, P 4 and P 5 are two upper multi-neighborhood approximation number reducts of I S .
(3) Wang’s method [17] is used to compute all reducts in the multi-covering information system in Example 1. By Table 3 in Example 1, we have N Δ ( x 1 ) = { x 1 } , N Δ ( x 2 ) = { x 2 , x 3 } , N Δ ( x 3 ) = { x 3 } , N Δ ( x 4 ) = { x 3 , x 4 } , N Δ ( x 5 ) = { x 3 , x 4 , x 5 } , and N Δ ( x 6 ) = { x 6 } . Assume V = { y 1 , y 2 , y 3 , y 4 , y 5 , y 6 } . Then, we can define a consistent function g : U → V for Example 1 as follows:
g ( x ) = y 1 , x = x 1 ; y 2 , x = x 2 ; y 3 , x = x 3 ; y 4 , x = x 4 ; y 5 , x = x 5 ; y 6 , x = x 6 ;
Then, g ( Δ ) = { g ( C 1 ) , g ( C 2 ) , g ( C 3 ) } , where
g ( C 1 ) = { { y 1 , y 2 , y 3 } , { y 2 , y 3 , y 4 } , { y 3 , y 4 , y 5 , y 6 } } ; g ( C 2 ) = { { y 1 , y 6 } , { y 2 , y 3 , y 4 } , { y 2 , y 3 , y 4 , y 5 } } ; g ( C 3 ) = { { y 1 , y 2 , y 3 } , { y 1 , y 6 } , { y 2 , y 3 , y 4 , y 5 } } .
Then, according to Step 2 in Wang’s method [17], Δ is a reduct of I S , i.e., there is no reducible element in Δ.
All results for Example 1 are shown in Table 9.
Using Wang’s method [17] in Example 9, we find that the dimension of the induced new information system ( U , g ( Δ ) ) is equal to that of the original system ( U , Δ ) ; that is, inducing the new information system does not reduce the dimension, and no reducible element can be found. However, our Methods 1 and 2 can find reducible elements in Example 9. In this paper, Methods 1 and 2 use multi-neighborhood approximation numbers to calculate reducts: according to the resulting numbers, one can find a reduct of a multi-covering information system. This provides a new viewpoint from which to study multi-covering information systems.

6. Corresponding Choquet Integrals for Covering Granular Reductions in Multi-Covering Information Systems

The existing covering granular reduction methods are mainly based on the intersection and union of sets, so the correlations among covering granular structures cannot be well reflected. As an important type of nonlinear integral, Choquet-like integrals can capture such correlations. Therefore, corresponding Choquet integrals for covering granular reductions in multi-covering information systems are presented in this section. Inspired by the well-known uniform fuzzy measure m ( X ) = | X | / | U | for X ⊆ U , we induce fuzzy measures from multi-neighborhood approximation numbers.
Lemma 1.
Suppose I S = ( U , Δ ) is a multi-covering information system. For any P ⊆ Δ with P ≠ ∅ , let f P ̲ ( X ) and f P ¯ ( X ) be the lower and upper multi-neighborhood approximation numbers of X ⊆ U with respect to P , respectively. Then, f P ̲ ( X ) / | U | and f P ¯ ( X ) / | U | are two fuzzy measures.
Proof. 
First, we prove that f P ̲ ( X ) / | U | is a fuzzy measure. By Statement (1) of Proposition 2, we know that f P ̲ ( ∅ ) = 0 and f P ̲ ( U ) = | U | . Hence, we have f P ̲ ( ∅ ) / | U | = 0 and f P ̲ ( U ) / | U | = 1 . By Statement (2) of Proposition 2, we know that if X ⊆ Y , then f P ̲ ( X ) ≤ f P ̲ ( Y ) . Hence, if X ⊆ Y , then f P ̲ ( X ) / | U | ≤ f P ̲ ( Y ) / | U | . Therefore, f P ̲ ( X ) / | U | is a fuzzy measure. By means of a similar proof, f P ¯ ( X ) / | U | is also a fuzzy measure. □
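As a quick sanity check of Lemma 1, the following brute-force sketch verifies the fuzzy-measure axioms (boundary conditions and monotonicity) for m ( X ) = f Δ ̲ ( X ) / | U | on the system of Example 1, again under our assumed definition f Δ ̲ ( X ) = | { x ∈ U : N Δ ( x ) ⊆ X } | with the neighborhoods of Table 3.

```python
# Brute-force verification of Lemma 1 on Example 1: m(X) = f_lower_Delta(X)/|U|
# satisfies m(empty set) = 0, m(U) = 1 and monotonicity, i.e., it is a fuzzy
# measure. f_lower is assumed to be |{x in U : N_Delta(x) subset of X}|.
from itertools import chain, combinations

U = [1, 2, 3, 4, 5, 6]
N_Delta = {1: {1}, 2: {2, 3}, 3: {3}, 4: {3, 4}, 5: {3, 4, 5}, 6: {6}}  # Table 3

def m(X):
    return sum(1 for x in U if N_Delta[x] <= X) / len(U)

subsets = [set(t) for t in chain.from_iterable(combinations(U, r) for r in range(7))]
assert m(set()) == 0.0 and m(set(U)) == 1.0                           # boundary conditions
assert all(m(X) <= m(Y) for X in subsets for Y in subsets if X <= Y)  # monotonicity
```

The check enumerates all 2^6 subsets, so it exercises every instance of the monotonicity condition rather than a sample.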
In this section, we mainly study Choquet integrals induced by lower multi-neighborhood approximation numbers.
Theorem 1.
Suppose I S = ( U , Δ ) is a multi-covering information system with U = { x 1 , x 2 , … , x n } , O is an overlap function, and w : U → [ 0 , 1 ] is a real-valued function such that w ( x i ) ∈ [ 0 , 1 ] and ∑ i = 1 n w ( x i ) = 1 . For any P ⊆ Δ with P ≠ ∅ , f P ̲ ( X ) is the lower multi-neighborhood approximation number of X ⊆ U with respect to P . If the fuzzy measure m P ( X ) = f P ̲ ( X ) / | U | , then a Choquet-like integral of w with respect to the fuzzy measure m P and the overlap function O is:
∫ O w d m P = ∑ i = 1 n [ O ( m P ( X ( i ) ) , w ( x ( i ) ) ) − O ( m P ( X ( i + 1 ) ) , w ( x ( i ) ) ) ] ,
where { x ( 1 ) , x ( 2 ) , … , x ( n ) } is a permutation of { x 1 , x 2 , … , x n } such that w ( x ( 1 ) ) ≤ w ( x ( 2 ) ) ≤ ⋯ ≤ w ( x ( n ) ) , X ( i ) = { x ( i ) , x ( i + 1 ) , … , x ( n ) } and X ( n + 1 ) = ∅ .
Proof. 
By Definition 7 and Lemma 1, ∫ O w d m P is a Choquet-like integral of w with respect to the fuzzy measure m P and the overlap function O . □
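The formula of Theorem 1 can be implemented directly. The sketch below is our own illustration, not code from the paper: `choquet_like` evaluates the sum for an arbitrary set function m and bivariate function O; with O ( a , b ) = a b and the additive uniform measure m ( X ) = | X | / n , it reduces to the classical discrete Choquet integral, here the arithmetic mean of the weights.

```python
# Direct implementation of the Choquet-like integral of Theorem 1 for an
# arbitrary set function m and bivariate function O. This is our illustration,
# not code from the paper. With O(a, b) = a*b and an additive measure it
# reduces to the classical discrete Choquet integral.

def choquet_like(w, m, O):
    """w: dict mapping element -> weight in [0, 1]; m: set function on
    frozensets; O: bivariate function (e.g., an overlap function)."""
    xs = sorted(w, key=w.get)                 # x_(1), ..., x_(n): w non-decreasing
    total = 0.0
    for i in range(len(xs)):
        X_i = frozenset(xs[i:])               # X_(i) = {x_(i), ..., x_(n)}
        X_next = frozenset(xs[i + 1:])        # X_(i+1); X_(n+1) is the empty set
        total += O(m(X_i), w[xs[i]]) - O(m(X_next), w[xs[i]])
    return total

# Sanity check: with the uniform (additive) measure m(X) = |X|/n and
# O(a, b) = a*b, the integral equals the arithmetic mean of the weights,
# up to floating-point rounding.
w = {"x1": 0.15, "x2": 0.2, "x3": 0.11, "x4": 0.28, "x5": 0.1, "x6": 0.16}
uniform = lambda X: len(X) / len(w)
value = choquet_like(w, uniform, lambda a, b: a * b)
```

Because the telescoping differences m ( X ( i ) ) − m ( X ( i + 1 ) ) all equal 1 / n for the uniform measure, `value` is 1/6 here, which matches the closed-form mean.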
Definition 14.
Suppose I S = ( U , Δ ) is a multi-covering information system, C ∈ Δ , U = { x 1 , x 2 , … , x n } , O is an overlap function, the fuzzy measure m Δ ( X ) = f Δ ̲ ( X ) / | U | , and there is a real-valued function w : U → [ 0 , 1 ] such that w ( x i ) ∈ [ 0 , 1 ] and ∑ i = 1 n w ( x i ) = 1 . Then, C is called a Choquet-like reducible element in I S if ∫ O w d m Δ = ∫ O w d m Δ − { C } ; otherwise, C is called a non-Choquet-like reducible element in I S . Assume P ⊆ Δ . Then, P is called a Choquet-like reduct of I S if P satisfies the following conditions:
(1) 
∫ O w d m Δ = ∫ O w d m P ;
(2) 
For every C ∈ P , ∫ O w d m P ≠ ∫ O w d m P − { C } .
Method 3 shows the corresponding steps for computing all Choquet-like reducts in a multi-covering information system.
Method 3: The algorithm for Choquet-like reducts of I S .
Input: A multi-covering information system I S = ( U , Δ ) .
Output: All Choquet-like reducts of I S .
  • Step 1: Calculate N C ( x ) for all x ∈ U and C ∈ Δ according to Definition 9;
  • Step 2: Calculate N Δ ( x ) for all x ∈ U according to Definition 2;
  • Step 3: Find all P ⊆ Δ which satisfy the following conditions:
    (1) ∫ O w d m Δ = ∫ O w d m P ;
    (2) For every C ∈ P , ∫ O w d m P ≠ ∫ O w d m P − { C } .
  • Step 4: Obtain all Choquet-like reducts of I S .
For Method 3, the time complexity of Steps 1 and 2 is O ( | U | | Δ | | C | ) , where | C | = max C i ∈ Δ | C i | ; the time complexity of Steps 3 and 4 is O ( 2^{ | Δ | } | U | ) . Hence, the time complexity of Method 3 is O ( | U | | Δ | | C | + 2^{ | Δ | } | U | ) .
Example 10.
Continuing from Example 6, we have P 1 = { C 1 } , P 2 = { C 2 } , P 3 = { C 3 } , P 4 = { C 1 , C 2 } , P 5 = { C 1 , C 3 } , P 6 = { C 2 , C 3 } . By Definition 14, we assume w = { 1 / 120 , 2 / 120 , 3 / 120 , … , 15 / 120 } and O = O m 1 / 2 . Then, we have the following results, as shown in Table 10.
By Table 10, we know that P 4 is a Choquet-like reduct of I S .
Based on Table 10 and the results in Table 6, we find that Method 3 is also effective, since its result is the same as that of Methods 1 and 2 and Wang's method [17]. To demonstrate the advantages and stability of the method, the following example is given.
Example 11.
Continuing from Example 1, we use Method 3 to find the Choquet-like reducts. Suppose P 1 = { C 1 } , P 2 = { C 2 } , P 3 = { C 3 } , P 4 = { C 1 , C 2 } , P 5 = { C 1 , C 3 } , P 6 = { C 2 , C 3 } . By Definition 14, we use different w and O to calculate Choquet-like reducts. All results are shown in Table 11 and Table 12.
Hence, P 4 and P 5 are two Choquet-like reducts for both weight vectors w 1 and w 2 and all the overlap functions O considered.
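The row of Table 11 for O m 1 / 2 can be recomputed with a short sketch, under the same assumptions as before ( f P ̲ ( X ) = | { x ∈ U : N P ( x ) ⊆ X } | , and our reading O m 1 / 2 ( a , b ) = min { a , b } ^{1/2} of Table 1); it reproduces the reported values 0.4000, 0.3317, and 0.3873 up to rounding.

```python
# Recomputation of the O_m^{1/2} row of Table 11 on Example 1, assuming
# f_lower_P(X) = |{x : N_P(x) subset of X}| and O_m^{1/2}(a, b) = min(a, b)**0.5.
U = [1, 2, 3, 4, 5, 6]
N = {  # N_{C_j}(x_i) from Table 2
    "C1": {1: {1, 2, 3}, 2: {2, 3}, 3: {3}, 4: {3, 4}, 5: {3, 4, 5, 6}, 6: {3, 4, 5, 6}},
    "C2": {1: {1, 6}, 2: {2, 3, 4}, 3: {2, 3, 4}, 4: {2, 3, 4}, 5: {2, 3, 4, 5}, 6: {1, 6}},
    "C3": {1: {1}, 2: {2, 3}, 3: {2, 3}, 4: {2, 3, 4, 5}, 5: {2, 3, 4, 5}, 6: {1, 6}},
}
w1 = {1: 0.15, 2: 0.2, 3: 0.11, 4: 0.28, 5: 0.1, 6: 0.16}
O = lambda a, b: min(a, b) ** 0.5  # our reading of O_m^{1/2} in Table 1

def integral(P):
    """Choquet-like integral of w1 w.r.t. m_P(X) = f_lower_P(X)/|U| and O."""
    NP = {x: set.intersection(*(set(N[C][x]) for C in P)) for x in U}
    m = lambda X: sum(1 for x in U if NP[x] <= X) / len(U)
    xs = sorted(w1, key=w1.get)  # ascending weights, as in Theorem 1
    return sum(O(m(set(xs[i:])), w1[xs[i]]) - O(m(set(xs[i + 1:])), w1[xs[i]])
               for i in range(len(xs)))

row = {s: round(integral(P), 4)
       for s, P in [("Delta", ["C1", "C2", "C3"]), ("P1", ["C1"]), ("P2", ["C2"]),
                    ("P3", ["C3"]), ("P4", ["C1", "C2"]), ("P5", ["C1", "C3"]),
                    ("P6", ["C2", "C3"])]}
```

Only P 4 and P 5 yield the same integral as Δ (0.4000), so Step 3 of Method 3 selects exactly these two subfamilies, in agreement with Table 13.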
Based on Examples 9 and 11, all the results for Example 1 are shown in Table 13 by Methods 1–3 and Wang’s method [17].
By Table 13, we find that our methods are effective. Additionally, Method 3 has good stability, since it obtains the same Choquet-like reducts for different weights w 1 and w 2 and different overlap functions O . Finally, we list the advantages of our methods:
(1) 
Wang’s method [17] and Method 3 can only be used with information tables, but Methods 1 and 2 can be used with both information tables and decision information tables.
(2) 
By Examples 9 and 11, we find that Wang’s method sometimes cannot find a reduct, whereas our methods can.
(3) 
For big or complex data, Method 3 takes less time than the other methods, especially Methods 1 and 2, since those must compute multi-neighborhood approximation numbers for all X ⊆ U , which is time-consuming.

7. Conclusions

In this paper, we applied the notions of lower and upper multi-neighborhood approximation numbers to attribute reduction problems in multi-covering information systems. This is a method of quantitative analysis in covering-based rough sets and a new viewpoint on multi-covering information systems. The lower and upper multi-neighborhood approximation numbers can be seen as quantitative counterparts of the lower and upper approximation operators in covering-based rough sets. Furthermore, multi-neighborhood approximation numbers are used to establish Choquet-like integrals, which can be applied in covering granular reduction. Our new methods remedy the inability of existing methods to reduce the covering granular structure in some multi-covering information systems. Additionally, the reduction method based on Choquet-like integrals reduces the covering granular structure by aggregating all covering granular structures, which is a new perspective on the use of nonlinear integrals in multi-covering information systems; it also provides a new way to deal with multi-source information fusion in granular computing.
The following research topics deserve further study. The limitations of the present study are as follows: the main question addressed by our research may be somewhat detached from practical applications. The “multi” concept can be observed everywhere in practical data processing; however, this paper does not focus on practical data, e.g., multi-modal data. Hence, in the future, the research content of this paper will be applied to practical data (such as multi-modal data). Furthermore, logical operators [50] and three-way decision making [51,52] will be connected with the research content of this paper.

Author Contributions

All authors have contributed equally to this paper. The individual responsibilities and contribution of all authors can be described as follows: J.W.: writing—original draft; resources; investigation; writing—review and editing; funding acquisition. S.S.: investigation; writing—review and editing; funding acquisition. X.Z.: methodology; investigation. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the National Natural Science Foundation of China (Grant No. 12201373), China Postdoctoral Science Foundation (Grant No. 2023T160402), and Natural Science Basic Research Program of Shaanxi (Grant No. 2023-JC-QN-0046).

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zadeh, L.A. Some reflections on soft computing, granular computing and their roles in the conception, design and utilization of information/intelligent systems. Soft Comput. 1998, 2, 23–25.
  2. Hao, C.; Fan, M.; Li, J.; Yin, Y.Q.; Wang, D.J. Optimal scale selection in multi-scale contexts based on granular scale rules. Pattern Recognit. Artif. Intell. 2016, 29, 272–280.
  3. Niu, J.; Chen, D.; Li, J.; Wang, H. A dynamic rule-based classification model via granular computing. Inf. Sci. 2022, 584, 325–341.
  4. Qin, J.; Martínez, L.; Pedrycz, W.; Ma, X.; Liang, Y. An overview of granular computing in decision-making: Extensions, applications, and challenges. Inf. Fusion 2023, 98, 101833.
  5. Pawlak, Z. Rough sets. Int. J. Comput. Inf. Sci. 1982, 11, 341–356.
  6. Alcantud, J.C.R. Revealed indifference and models of choice behavior. J. Math. Psychol. 2002, 46, 418–430.
  7. Wang, J.; Zhang, X. A novel multi-criteria decision making method based on rough sets and fuzzy measures. Axioms 2022, 11, 275.
  8. Wei, X.; Pang, B.; Mi, J. Axiomatic characterizations of L-valued rough sets using a single axiom. Inf. Sci. 2021, 580, 283–310.
  9. Zhang, H.; Sun, Q.; Dong, K. Information-theoretic partially labeled heterogeneous feature selection based on neighborhood rough sets. Int. J. Approx. Reason. 2023, 154, 200–217.
  10. Zhu, W.; Wang, F. Reduction and axiomization of covering generalized rough sets. Inf. Sci. 2003, 152, 217–230.
  11. Wang, J.; Zhang, X.; Liu, C. Grained matrix and complementary matrix: Novel methods for computing information descriptions in covering approximation spaces. Inf. Sci. 2022, 591, 68–87.
  12. Zhu, W. Relationship among basic concepts in covering-based rough sets. Inf. Sci. 2009, 179, 2478–2486.
  13. Mohammed, A.; Shokry, N.; Ashraf, N. Covering soft rough sets and its topological properties with application. Soft Comput. 2023, 27, 4451–4461.
  14. Ma, Z.; Mi, J.; Lin, Y.; Li, J. Boundary region-based variable precision covering rough set models. Inf. Sci. 2022, 608, 1524–1540.
  15. Huang, Z.; Li, J. Multi-scale covering rough sets with applications to data classification. Appl. Soft Comput. 2021, 110, 107736.
  16. Restrepo, M.; Cornelis, C.; Gómez, J. Duality, conjugacy and adjointness of approximation operators in covering-based rough sets. Int. J. Approx. Reason. 2014, 55, 469–485.
  17. Wang, C.; Chen, D.; Wu, C.; Hu, Q. Data compression with homomorphism in multi-covering information systems. Int. J. Approx. Reason. 2011, 52, 519–525.
  18. Su, L.; Lin, Y.; Zhao, X. On Axiomatic Characterizations of Positive-Negative Region Covering-Based Rough Sets. In Fuzzy Systems and Data Mining V; IOS Press: Amsterdam, The Netherlands, 2010; pp. 289–295.
  19. Li, F.; Yin, Y. Approaches to knowledge reduction of covering decision systems based on information theory. Inf. Sci. 2009, 179, 1694–1704.
  20. Wu, W. Attribute reduction based on evidence theory in incomplete decision systems. Inf. Sci. 2008, 178, 1355–1371.
  21. Zhang, X.; Li, J.; Li, W. A new mechanism of rule acquisition based on covering rough sets. Appl. Intell. 2022, 52, 12369–12381.
  22. Lang, G.; Li, Q.; Cai, M.; Yang, T. Characteristic matrixes-based knowledge reduction in dynamic covering decision information systems. Knowl.-Based Syst. 2015, 85, 1–26.
  23. Lang, G.; Li, Q.; Cai, M.; Fujita, H.; Zhang, H. Related families-based methods for updating reducts under dynamic object sets. Knowl. Inf. Syst. 2019, 60, 1081–1104.
  24. Zhan, J.; Alcantud, J.C.R. A novel type of soft rough covering and its application to multicriteria group decision making. Artif. Intell. Rev. 2019, 52, 2381–2410.
  25. Chen, J.; Li, J.; Lin, Y.; Lin, G.; Ma, Z. Relations of reduction between covering generalized rough sets and concept lattices. Inf. Sci. 2015, 304, 16–27.
  26. Zhang, X.; Dai, J.; Yu, Y. On the union and intersection operations of rough sets based on various approximation spaces. Inf. Sci. 2015, 292, 214–229.
  27. Li, X.; Yi, H.; Liu, S. Rough sets and matroids from a lattice-theoretic viewpoint. Inf. Sci. 2016, 342, 37–52.
  28. Zhao, F.; Pang, B.; Mi, J. A new approach to generalized neighborhood system-based rough sets via convex structures and convex matroids. Inf. Sci. 2022, 612, 1187–1205.
  29. D’eer, L.; Cornelis, C. A comprehensive study of fuzzy covering-based rough set models: Definitions, properties and interrelationships. Fuzzy Sets Syst. 2018, 336, 1–26.
  30. Yang, B.; Hu, B. On some types of fuzzy covering-based rough sets. Fuzzy Sets Syst. 2017, 312, 36–65.
  31. Wang, J.; Zhang, X.; Dai, J.; Zhan, J. TI-fuzzy neighborhood measures and generalized Choquet integrals for granular structure reduction and decision making. Fuzzy Sets Syst. 2023, 465, 108512.
  32. Kong, Q.; Zhang, X.; Xu, W. Operation properties and algebraic properties of multi-covering rough sets. Granul. Comput. 2019, 4, 377–390.
  33. Xue, Z.; Jing, M.; Li, Y.; Zheng, Y. Variable precision multi-granulation covering rough intuitionistic fuzzy sets. Granul. Comput. 2023, 8, 577–596.
  34. Lin, R.; Li, J.; Chen, D.; Chen, Y.; Huang, J. Attribute reduction based on observational consistency in intuitionistic fuzzy multi-covering decision systems. J. Intell. Fuzzy Syst. 2022, 43, 1599–1619.
  35. Wang, S.; Zhu, W. Matroidal structure of covering-based rough sets through the upper approximation number. Int. J. Granul. Comput. Rough Sets Intell. Syst. 2011, 2, 141.
  36. Wang, S.; Zhu, Q.; Zhu, W.; Min, F. Quantitative analysis for covering-based rough sets through the upper approximation number. Inf. Sci. 2013, 220, 483–491.
  37. Choquet, G. Theory of capacities. Ann. L’Institut Fourier 1953, 5, 131–295.
  38. Waegenaere, A.D.; Wakker, P.P. Nonmonotonic Choquet integrals. J. Math. Econ. 2001, 36, 45–60.
  39. Dimuro, G.P.; Lucca, G.; Bedregal, B.; Mesiar, R.; Sanz, J.A.; Lin, C.T.; Bustince, H. Generalized CF1F2-integrals: From Choquet-like aggregation to ordered directionally monotone functions. Fuzzy Sets Syst. 2020, 378, 44–67.
  40. Zhang, X.; Wang, J.; Zhan, J.; Dai, J. Fuzzy measures and Choquet integrals based on fuzzy covering rough sets. IEEE Trans. Fuzzy Syst. 2022, 30, 2360–2374.
  41. Karczmarek, P.; Kiersztyn, A.; Pedrycz, W. On developing Sugeno fuzzy measure densities in problems of face recognition. Int. J. Mach. Intell. Sens. Signal Process. 2017, 2, 80–96.
  42. Marco-Detchart, C.; Lucca, G.; Lopez-Molina, C.; De Miguel, L.; Dimuro, G.P.; Bustince, H. Neuro-inspired edge feature fusion using Choquet integrals. Inf. Sci. 2021, 581, 740–754.
  43. Batista, T.; Bedregal, B.; Moraes, R. Constructing multi-layer classifier ensembles using the Choquet integral based on overlap and quasi-overlap functions. Neurocomputing 2022, 500, 413–421.
  44. Boczek, M.; Józefiak, T.; Kaluszka, M.; Okolewski, A. On the monotonicity of the discrete Choquet-like operators. Int. J. Approx. Reason. 2023, 163, 109045.
  45. Zhang, D.; Mesiar, R.; Pap, E. Pseudo-integral and generalized Choquet integral. Fuzzy Sets Syst. 2022, 446, 193–221.
  46. Zhu, W. Topological approaches to covering rough sets. Inf. Sci. 2007, 177, 1499–1508.
  47. Chen, D.; Wang, C.; Hu, Q. A new approach to attribute reduction of consistent and inconsistent covering decision systems with covering rough sets. Inf. Sci. 2007, 177, 3500–3518.
  48. Grabisch, M. K-order additive discrete fuzzy measures and their representation. Fuzzy Sets Syst. 1997, 92, 167–189.
  49. Bustince, H.; Fernandez, J.; Mesiar, R.; Montero, J.; Orduna, R. Overlap functions. Nonlinear Anal. 2010, 72, 1488–1499.
  50. Zhang, X.; Sheng, N.; Borzooei, R.A. Partial residuated implications induced by partial triangular norms and partial residuated lattices. Axioms 2023, 12, 63.
  51. Wang, J.; Zhang, X.; Hu, Q. Three-way fuzzy sets and their applications (II). Axioms 2022, 11, 532.
  52. Yao, Y.; Yang, J. Granular rough sets and granular shadowed sets: Three-way approximations in Pawlak approximation spaces. Int. J. Approx. Reason. 2022, 142, 231–247.
Table 1. Some kinds of O and I O .
O
1 O m 1 / 2 ( a , b ) = min { √ a , √ b }
2 O 2 V ( a , b ) = [ 1 + ( 2 a − 1 )^2 ( 2 b − 1 )^2 ] / 2 , a , b ∈ [ 0.5 , 1 ] ; min { a , b } , otherwise.
3 O m M V ( a , b ) = [ 1 + min { 2 a − 1 , 2 b − 1 } max { ( 2 a − 1 )^2 , ( 2 b − 1 )^2 } ] / 2 , a , b ∈ [ 0.5 , 1 ] ; min { a , b } , otherwise.
Table 2. N C j ( x i ) ( i = 1 , 2 , 3 , 4 , 5 , 6 ; j = 1 , 2 , 3 ) .
U x 1 x 2 x 3 x 4 x 5 x 6
N C 1 ( x i ) { x 1 , x 2 , x 3 } { x 2 , x 3 } { x 3 } { x 3 , x 4 } { x 3 , x 4 , x 5 , x 6 } { x 3 , x 4 , x 5 , x 6 }
N C 2 ( x i ) { x 1 , x 6 } { x 2 , x 3 , x 4 } { x 2 , x 3 , x 4 } { x 2 , x 3 , x 4 } { x 2 , x 3 , x 4 , x 5 } { x 1 , x 6 }
N C 3 ( x i ) { x 1 } { x 2 , x 3 } { x 2 , x 3 } { x 2 , x 3 , x 4 , x 5 } { x 2 , x 3 , x 4 , x 5 } { x 1 , x 6 }
Table 3. N Δ ( x i ) ( i = 1 , 2 , 3 , 4 , 5 , 6 ) .
U x 1 x 2 x 3 x 4 x 5 x 6
N Δ ( x i ) { x 1 } { x 2 , x 3 } { x 3 } { x 3 , x 4 } { x 3 , x 4 , x 5 } { x 6 }
Table 4. The number of X ⊆ U such that f P s ̲ ( X ) = f Δ ̲ ( X ) ( s = 1 , 2 , 3 , 4 , 5 , 6 ) .
P s P 1 P 2 P 3 P 4 P 5 P 6
| { X ⊆ U : f P s ̲ ( X ) = f Δ ̲ ( X ) } | 18,986 18,078 21,947 32,768 29,696 24,827
Table 5. The number of X ⊆ U such that f P s ¯ ( X ) = f Δ ¯ ( X ) ( s = 1 , 2 , 3 , 4 , 5 , 6 ) .
P s P 1 P 2 P 3 P 4 P 5 P 6
| { X ⊆ U : f P s ¯ ( X ) = f Δ ¯ ( X ) } | 18,986 18,078 21,947 32,768 29,696 24,827
Table 6. The results utilizing the different methods of Example 6.
Method The Different Types of Reducts Quantitative Analysis
Wang et al. [17] P 4 is a reduct of I S ×
Method 1 in this paper P 4 is a lower multi-neighborhood approximation number reduct of I S
Method 2 in this paper P 4 is an upper approximation number reduct of I S
Table 7. The number of X ⊆ U such that f P s ̲ ( X ) = f Δ ̲ ( X ) ( s = 1 , 2 , 3 , 4 , 5 , 6 ) .
P s P 1 P 2 P 3 P 4 P 5 P 6
| { X ⊆ U : f P s ̲ ( X ) = f Δ ̲ ( X ) } | 21 20 33 64 64 36
Table 8. The number of X ⊆ U such that f P s ¯ ( X ) = f Δ ¯ ( X ) ( s = 1 , 2 , 3 , 4 , 5 , 6 ) .
P s P 1 P 2 P 3 P 4 P 5 P 6
| { X ⊆ U : f P s ¯ ( X ) = f Δ ¯ ( X ) } | 20 21 33 64 64 36
Table 9. The results utilizing the different methods of Example 1.
Method The Different Types of Reducts Reducible Element
Wang et al. [17] Δ ×
Method 1 in this paper P 4 and P 5
Method 2 in this paper P 4 and P 5
Table 10. ∫ O w d m Δ and ∫ O w d m P s ( s = 1 , 2 , 3 , 4 , 5 , 6 ) .
∫ O w d m Δ ∫ O w d m P 1 ∫ O w d m P 2 ∫ O w d m P 3 ∫ O w d m P 4 ∫ O w d m P 5 ∫ O w d m P 6
0 . 2415 0.1826 0.1581 0.1581 0 . 2415 0.2236 0.1581
Table 11. ∫ O w d m Δ and ∫ O w d m P s ( s = 1 , 2 , 3 , 4 , 5 , 6 ) with w 1 = { 0.15 , 0.2 , 0.11 , 0.28 , 0.1 , 0.16 } .
O ∫ O w d m Δ ∫ O w d m P 1 ∫ O w d m P 2 ∫ O w d m P 3 ∫ O w d m P 4 ∫ O w d m P 5 ∫ O w d m P 6
O 2 V 0 . 1363 0.1082 0.1322 0.1313 0 . 1363 0 . 1363 0.1322
O m M V 0 . 1117 0.1044 0.1114 0.1089 0 . 1117 0 . 1117 0.1114
O m 1 2 0 . 4000 0.3317 0.3873 0.3873 0 . 4000 0 . 4000 0.3873
Table 12. ∫ O w d m Δ and ∫ O w d m P s ( s = 1 , 2 , 3 , 4 , 5 , 6 ) with w 2 = { 0.11 , 0.21 , 0.1 , 0.25 , 0.13 , 0.2 } .
O ∫ O w d m Δ ∫ O w d m P 1 ∫ O w d m P 2 ∫ O w d m P 3 ∫ O w d m P 4 ∫ O w d m P 5 ∫ O w d m P 6
O 2 V 0 . 1354 0.1000 0.1058 0.1058 0 . 1354 0 . 1354 0.1058
O m M V 0 . 1047 0.1000 0.1011 0.1011 0 . 1047 0 . 1047 0.1011
O m 1 2 0 . 4082 0.3162 0.3317 0.3317 0 . 4082 0 . 4082 0.3317
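The $O_{wdm}$ values in Tables 10–12 come from the paper's Choquet-like integrals built on multi-neighborhood approximation numbers. As a rough, self-contained illustration of the underlying aggregation idea only, the sketch below implements the classical discrete Choquet integral, not the paper's $O$-based variant; the function name `choquet_integral` and the additive measure built from a weight vector are our illustrative assumptions, not the authors' definitions.

```python
def choquet_integral(values, mu):
    """Discrete Choquet integral of `values` with respect to a fuzzy measure `mu`.

    values: list of non-negative reals f(x_1), ..., f(x_n)
    mu: callable mapping a frozenset of indices to [0, 1], with
        mu(frozenset()) == 0 and mu monotone under set inclusion.
    """
    n = len(values)
    # Sort indices so that f(x_(1)) <= f(x_(2)) <= ... <= f(x_(n)).
    order = sorted(range(n), key=lambda i: values[i])
    total, prev = 0.0, 0.0
    for k, i in enumerate(order):
        # A_(k) = {x_(k), ..., x_(n)}: the elements with the n-k+1 largest values.
        a_k = frozenset(order[k:])
        total += (values[i] - prev) * mu(a_k)
        prev = values[i]
    return total


# With an additive measure the Choquet integral collapses to a weighted sum:
# 0.3*0.2 + 0.3*0.5 + 0.4*0.1 = 0.25 (up to floating-point rounding).
w = [0.3, 0.3, 0.4]
mu_additive = lambda A: sum(w[i] for i in A)
print(choquet_integral([0.2, 0.5, 0.1], mu_additive))
```

For a genuinely non-additive measure (as induced here by approximation numbers), the integral rewards or penalizes coalitions of criteria, which is what makes it a nonlinear aggregation tool.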
Table 13. The results utilizing the different methods of Example 1.

| Method | The Different Types of Reducts | Reducible Element |
|---|---|---|
| Wang et al. [17] | $\Delta$ | × |
| Method 1 in this paper | $\mathcal{P}_4$ and $\mathcal{P}_5$ | |
| Method 2 in this paper | $\mathcal{P}_4$ and $\mathcal{P}_5$ | |
| Method 3 with $w_1$ and $O_{2V}$ in this paper | $\mathcal{P}_4$ and $\mathcal{P}_5$ | |
| Method 3 with $w_1$ and $O_{mMV}$ in this paper | $\mathcal{P}_4$ and $\mathcal{P}_5$ | |
| Method 3 with $w_1$ and $O_{m\frac{1}{2}V}$ in this paper | $\mathcal{P}_4$ and $\mathcal{P}_5$ | |
| Method 3 with $w_2$ and $O_{2V}$ in this paper | $\mathcal{P}_4$ and $\mathcal{P}_5$ | |
| Method 3 with $w_2$ and $O_{mMV}$ in this paper | $\mathcal{P}_4$ and $\mathcal{P}_5$ | |
| Method 3 with $w_2$ and $O_{m\frac{1}{2}V}$ in this paper | $\mathcal{P}_4$ and $\mathcal{P}_5$ | |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Wang, J.; Shao, S.; Zhang, X. Choquet-like Integrals with Multi-Neighborhood Approximation Numbers for Novel Covering Granular Reduction Methods. Mathematics 2023, 11, 4650. https://doi.org/10.3390/math11224650
