Fully Scalable Fuzzy Neural Network for Data Processing
Abstract
1. Introduction
- Decentralization of data analysis: IoT devices can independently process data and make decisions, which increases their autonomy and reduces the latency associated with transmitting data to a central server;
- Optimization of energy consumption: thanks to local data processing, IoT devices can more efficiently manage their energy usage, which is crucial for battery-powered devices;
- Increased data security through local processing: processing data on the device reduces the risk of data interception during transmission, therefore enhancing information security;
- System modularity: with local processing capabilities, IoT systems can be more easily expanded with new devices without the need to modify the central infrastructure;
- Shorter response time to events: local analysis and decision-making allow for faster responses to anomalies or other events in real-time;
- Reduced data transmission costs: fewer data to be sent to central servers means lower costs associated with data transmission and less demand for network bandwidth;
- Lower infrastructure cost: reduced requirements for central data centers and a reduced need for complex cloud-based analytical algorithms;
- Local customization of algorithms: IoT devices can be equipped with algorithms tailored to the specific needs of local users or operating conditions;
- Offline functionality: devices can continue their operations and analyze data, even in areas with poor network access, which is crucial in remote or hard-to-reach locations.
- Data transmission to a central analysis center, which incurs energy costs for transmission, while a steady connection may not always be available;
- Alternatively, local data analysis, which, for battery-powered devices, generates considerable energy consumption over a long period.
2. Short Review of Fuzzy Neural Network
3. Fuzzy Neural Network with Ordered Fuzzy Numbers
- f_A—beginning, rising slope (the up-part of number A);
- g_A—end, falling slope (the down-part of number A).
- Sum: ordered fuzzy number C = A + B is the sum of numbers A and B, when f_C = f_A + f_B and g_C = g_A + g_B;
- Difference: ordered fuzzy number C = A − B is the difference between numbers A and B, when f_C = f_A − f_B and g_C = g_A − g_B;
- Multiplication by scalar: ordered fuzzy number C = λ · A is the result of the multiplication of number A by scalar λ, when f_C = λ · f_A and g_C = λ · g_A;
- Product: ordered fuzzy number C = A · B is the product of numbers A and B, when f_C = f_A · f_B and g_C = g_A · g_B;
- Quotient: ordered fuzzy number C = A / B is the quotient of numbers A by B, when f_C = f_A / f_B and g_C = g_A / g_B, provided that f_B and g_B are nowhere equal to zero.
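For the trapezoidal OFNs used later in the implementation, each number can be stored as a quadruple (x1, x2, x3, x4) sampled from its two branches, and the operations above reduce to element-wise arithmetic. A minimal sketch (the tuple representation is an illustrative simplification of the OFN class listed later):

```python
# Trapezoidal OFN stored as a 4-tuple (x1, x2, x3, x4);
# all arithmetic is element-wise on the coordinates.

def ofn_add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def ofn_sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def ofn_scale(a, s):
    return tuple(x * s for x in a)

def ofn_mul(a, b):
    return tuple(x * y for x, y in zip(a, b))

def ofn_div(a, b):
    # defined only when no coordinate of b is zero
    return tuple(x / y for x, y in zip(a, b))

A = (1.0, 2.0, 3.0, 4.0)
B = (0.5, 1.0, 1.5, 2.0)
print(ofn_add(A, B))    # (1.5, 3.0, 4.5, 6.0)
print(ofn_scale(A, 2))  # (2.0, 4.0, 6.0, 8.0)
```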
- They allow calculations to be performed while avoiding the identified drawbacks of traditional L-R numbers;
- No attempt has yet been made to implement a network using this solution, which is why the proposed approach is novel.
- First layer—a fuzzification process is employed to convert the input data entering the network into OFN notation;
- Last layer—defuzzification is utilized for processing the output data from the network;
- Deep layers—the learning/training algorithms are adapted to operate effectively with OFN arithmetic in the network layers.
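As a toy illustration of the first-layer fuzzification step (the paper does not prescribe a specific rule, so the symmetric spread below is a hypothetical choice), a crisp input can be widened into a trapezoidal OFN quadruple:

```python
# Hypothetical fuzzification rule: a crisp input v becomes a trapezoidal
# OFN (x1, x2, x3, x4) spread symmetrically around v. The spread width
# is an assumption chosen for illustration only.
def fuzzify(v, spread=0.1):
    return (v - spread, v - spread / 2, v + spread / 2, v + spread)

print(fuzzify(1.0))  # (0.9, 0.95, 1.05, 1.1)
```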
- First layer with 2 inputs and 4 neurons;
- Second layer with 4 inputs and 2 neurons;
- Third layer with 2 inputs and 1 output neuron.
- LOM—Largest of Maximum is a method used in fuzzy logic to determine the degree of probability for each element of a fuzzy set by finding the maximum membership value. This approach is commonly applied when assigned values carry greater significance, and is particularly useful in decision-making systems where minimizing errors and achieving high-quality defuzzification are crucial. Essentially, LOM selects the largest value from the maximum membership values of the fuzzy set elements, making it a valuable tool for optimizing the accuracy and reliability of fuzzy logic-based systems;
- MOM—Mean of Maxima is a method used in fuzzy logic to determine the degree of probability for each element of a fuzzy set by calculating the arithmetic mean of the maximum membership values. This approach is commonly applied when different membership values are equally important, unlike the LOM method, where certain values hold more significance. MOM is particularly useful in decision-making systems where less precision is required compared with LOM, but where different membership values must still be considered. Essentially, MOM provides a way to balance the importance of different membership values, resulting in a more versatile and flexible approach to fuzzy logic-based systems;
- FOM—First of Maxima is a method used in fuzzy logic to determine the degree of probability for each element of a fuzzy set by selecting the maximum value that occurs first on the variable axis. This approach is commonly applied when the most important value is the degree of membership that first reaches its maximum value, and when a quick decision is required. FOM can be particularly useful in decision-making systems where there is only one degree of membership that is significantly higher than the others, or where precision is not a major concern. Essentially, FOM provides a way to quickly identify the most important value of a fuzzy set, making it a valuable tool for time-sensitive applications;
- Golden Ratio is a method that uses the mathematical constant φ ≈ 1.618 (the golden ratio) to determine the crisp output value.
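The first three strategies can be sketched directly on a sampled membership function; the Golden Ratio variant below is only a hypothetical reading, since the text states no more than that the constant ≈1.618 is used:

```python
# Defuzzification sketches on a sampled membership function:
# xs are points on the variable axis, mu their membership degrees.

def lom(xs, mu):
    # Largest of Maximum: rightmost point with maximal membership
    m = max(mu)
    return max(x for x, u in zip(xs, mu) if u == m)

def fom(xs, mu):
    # First of Maxima: leftmost point with maximal membership
    m = max(mu)
    return min(x for x, u in zip(xs, mu) if u == m)

def mom(xs, mu):
    # Mean of Maxima: average of all points with maximal membership
    m = max(mu)
    pts = [x for x, u in zip(xs, mu) if u == m]
    return sum(pts) / len(pts)

def golden_ratio(xs, mu):
    # Hypothetical rule: split the support of the set at 1/phi.
    # The exact Golden Ratio formula is not given in the text,
    # so this split point is an assumption for illustration.
    phi = (1 + 5 ** 0.5) / 2
    return xs[0] + (xs[-1] - xs[0]) / phi

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
mu = [0.2, 1.0, 1.0, 0.5, 0.1]
print(fom(xs, mu), mom(xs, mu), lom(xs, mu))  # 1.0 1.5 2.0
```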
4. Monitoring System—Solution Tests
- Cafeteria system—application (event) log;
- System of internal legal acts—application (event) log;
- Logs of Apache Tomcat application servers;
- MariaDB database system logs;
- Operating system status logs.
- Type 1: Increased number of repetitive system errors;
- Type 2: Specific repetitive sequences of events leading to system failure;
- Type 3: Resource-consuming system actions leading to errors occurring in a short time;
- Type 4: Abnormal events related to an attempt to access the system;
- Type 5: Analysis of trends related to specific actions in the system and finding situations deviating from the norms;
- Type 6: Detecting errors related to communication between different systems;
- Type 7: Detection of isolated spikes in system load that may suggest errors or attacks.
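As a hypothetical illustration of how a Type 1 pattern could be extracted from a log before being fed to the network (the paper does not publish its exact detection rules), a sliding window can count repeated error messages:

```python
from collections import Counter

# Hypothetical pre-processing sketch for anomaly Type 1: flag a window of
# log lines whenever a single message repeats more than `threshold` times.
# Window size and threshold are illustrative assumptions; the real system
# feeds such windowed features into the fuzzy neural network.
def repeated_error_windows(lines, window=5, threshold=3):
    flagged = []
    for i in range(len(lines) - window + 1):
        counts = Counter(lines[i:i + window])
        if counts.most_common(1)[0][1] > threshold:
            flagged.append(i)
    return flagged

log = ["OK", "ERR_DB", "ERR_DB", "ERR_DB", "ERR_DB", "OK"]
print(repeated_error_windows(log))  # [0, 1]
```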
- It came from an existing system, thus addressing a practical problem of anomaly detection and prediction;
- It can represent logs from any device, such as a network edge device: router, firewall, access device, or edge sensor solution.
4.1. Research Methodology
4.2. Research Results Achieved
- Input layer: 10,000;
- Deep layer: 64;
- Deep layer: 64;
- Output layer: 1.
- Input layer: 10,000;
- LSTM layer: 64;
- Deep layer: 64;
- Output layer: 1.
- Input layer: 10,000;
- Deep layer: 512;
- Deep layer: 256;
- Deep layer: 64;
- Output layer: 1.
- Fuzzy Neural Network with OFN: 129;
- LSTM network: 129;
- Deep Neural Network: 833.
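These totals are simply the sums of neurons in the non-input layers of the three architectures listed above:

```python
# Neuron totals (input layer excluded) for the three compared architectures,
# matching the counts reported above.
fnn_ofn = sum([64, 64, 1])        # two deep layers + output
lstm    = sum([64, 64, 1])        # LSTM layer + deep layer + output
dnn     = sum([512, 256, 64, 1])  # three deep layers + output
print(fnn_ofn, lstm, dnn)  # 129 129 833
```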
5. Discussion
- Fuzzy Neural Network with OFN: 15 min;
- LSTM network: 50 h;
- Deep Neural Network: 3 h 20 min.
- The necessity of transmitting the data to remote destinations within the area the solution covers;
- The necessity of developing data transmission protocols required for efficiently transferring the data;
- Creating a huge data center for collecting and processing the data—producers do not need to build a cloud-architecture data center for collecting and storing large amounts of data that must be shared with other systems;
- Developing the algorithms for big data analysis—the data can be analyzed on small IoT devices and only the results passed to the management center;
- The necessity of connecting various devices working with different communication protocols, which could be incompatible—there is no need to build gateways or converters for them.
6. Limitations
- First and foremost, the solution has not yet been tested with different neuron activation methods, which will be necessary for its widespread application;
- The second limitation of the solution is the necessity of fuzzifying the data. While there are many examples of applying fuzzy logic in the literature, each use of the proposed fuzzy neural network will require a separate analysis of the data’s nature. Based on the nature of the data, it will be necessary to select a fuzzification method or even develop a new one. Developing new fuzzification methods can be a challenging task. Although various defuzzification methods are well-known in the literature, the fuzzification process still requires a lot of work. Of course, there are certain types of data for which fuzzy logic is ideally suited, particularly data defined by intervals;
- The third limitation of the solution is the necessity of possessing extensive expert knowledge encompassing fuzzification, defuzzification, and neural network construction. A specialist using the proposed solution must combine knowledge of fuzzy logic with neural networks, and the selection of fuzzification and defuzzification methods can significantly impact the achieved results. As the use of the solution increases, it will be valuable to build knowledge on the selection of these fuzzification and defuzzification methods;
- The possibilities and results achieved with different sharpening (defuzzification) functions, considering that there are currently many sharpening methods and new ones are continually being developed. Studies show that they affect the output result, and their application requires thorough research. It is estimated that they may even impact the network training process, including its speed;
- The impact of random weight selection, where, in the randomization process, weights can take various shapes, not only trapezoidal. Limiting the shape of random weights, or normalizing them, can also affect the network training process, especially the speed of learning and the quality of the obtained results.
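One possible shape constraint of the kind discussed above (an assumption for illustration, not the paper's method) is to force each randomly initialized OFN weight into a trapezoidal shape by sorting its four coordinates:

```python
import random

# Hypothetical normalization: a randomly initialized OFN weight is made
# trapezoidal by sorting its four coordinates into non-decreasing order.
def random_trapezoidal_ofn(rng=random):
    return tuple(sorted(rng.random() for _ in range(4)))

w = random_trapezoidal_ofn()
assert w[0] <= w[1] <= w[2] <= w[3]  # monotone coordinates => trapezoidal shape
```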
7. Conclusions
- Energy consumption, because it uses fewer neurons in the network architecture than a Deep Neural Network, which provides the same level of accuracy. This was tested during this work and in some previous tests in which the Iris database was used [22]. As a result, it requires less computational power from the processor, thus allowing for the conservation of the energy needed to perform the calculations;
- Data security, because it allowed for the development of a solution for detecting and predicting anomalies in running software. This enables the administrator to make timely decisions and avoid system issues, thereby eliminating software vulnerabilities. Consequently, this leads to an increase in the security of the processed data;
- Reduced data transmission costs, as the solution can be applied at the network edge and allows for data analysis at the point of origin. This avoids the need to transmit data to a data center, thereby eliminating the associated data transmission costs, including the energy required for such transmission.
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Rojek, I.; Dostatni, E.; Mikołajewski, D.; Pawłowski, L.; Węgrzyn-Wolska, K.M. Modern approach to sustainable production in the context of Industry 4.0. Bull. Pol. Acad. Sci. Tech. Sci. 2022, 70, 143828.
- Lee, S.C.; Lee, E.T. Fuzzy Neural Networks. Math. Biosci. 1975, 23, 151–177.
- Ishibuchi, H.; Kwon, K.; Tanaka, H. A learning algorithm of fuzzy neural networks with triangular fuzzy weights. Fuzzy Sets Syst. 1995, 71, 277–293.
- Buckley, J.J.; Hayashi, Y. Fuzzy neural networks: A survey. Fuzzy Sets Syst. 1994, 66, 1–13.
- Vitor de Campos Souza, P.; Lughofer, E. EFNN-NullUni: An evolving fuzzy neural network based on null-uninorm. Fuzzy Sets Syst. 2022, 449, 1–31.
- Liu, X.; Zhao, T.; Cao, J.; Li, P. Design of an interval type-2 fuzzy neural network sliding mode robust controller for higher stability of magnetic spacecraft attitude control. ISA Trans. 2023, 137, 144–159.
- Zheng, K.; Zhang, Q.; Hu, Y.; Wu, B. Design of fuzzy system-fuzzy neural network-backstepping control for complex robot system. Inf. Sci. 2021, 546, 1230–1255.
- Zhang, Y.; Ma, H.; Xu, J. Neural network-based fuzzy vibration controller for offshore platform with random time delay. Ocean Eng. 2021, 225, 108733.
- Yang, M.; Sheng, Z.; Yin, G.; Wang, H. A recurrent neural network based fuzzy sliding mode control for 4-DOF ROV movements. Ocean Eng. 2022, 256, 111509.
- Zhang, R.; Gao, L. The Brushless DC motor control system Based on neural network fuzzy PID control of power electronics technology. Optik 2022, 271, 169879.
- Zhang, Q.-Q.; Wai, R.-J. Distributed secondary control of islanded micro-grid based on adaptive fuzzy-neural-network-inherited total-sliding-mode control technique. Int. J. Electr. Power Energy Syst. 2022, 137, 107792.
- Wang, P.; Li, X.; Wang, N.; Li, Y.; Shi, K.; Lu, J. Almost periodic synchronization of quaternion-valued fuzzy cellular neural networks with leakage delays. Fuzzy Sets Syst. 2022, 426, 46–65.
- Hou, G.; Xiong, J.; Zhou, G.; Gong, L.; Huang, C.; Wang, S. Coordinated control system modeling of ultra-supercritical unit based on a new fuzzy neural network. Energy 2021, 234, 121231.
- Pang, M.; Zhang, Z.; Wang, X.; Wang, Z.; Lin, C. Fixed/Preassigned-time synchronization of high-dimension-valued fuzzy neural networks with time-varying delays via nonseparation approach. Knowl. Based Syst. 2022, 255, 109774.
- Van, M. Higher-order terminal sliding mode controller for fault accommodation of Lipschitz second-order nonlinear systems using fuzzy neural network. Appl. Soft Comput. 2021, 104, 107186.
- Salari, A.H.; Mirzaeinejad, H.; Mahani, M.F. Tire normal force estimation using artificial neural networks and fuzzy classifiers: Experimental validation. Appl. Soft Comput. 2023, 132, 109835.
- Yadav, P.K.; Bhasker, R.; Upadhyay, S.K. Comparative study of ANFIS fuzzy logic and neural network scheduling based load frequency control for two-area hydro thermal system. Mater. Today Proc. 2022, 56, 3042–3050.
- Nasiri, H.; Ebadzadeh, M.M. MFRFNN: Multi-Functional Recurrent Fuzzy Neural Network for Chaotic Time Series Prediction. Neurocomputing 2022, 507, 292–310.
- Souza, P.V.d.C. Fuzzy neural networks and neuro-fuzzy networks: A review the main techniques and applications used in the literature. Appl. Soft Comput. 2020, 92, 106275.
- Jang, J.-S. ANFIS: Adaptive-network-based fuzzy inference system. IEEE Trans. Syst. Man Cybern. 1993, 23, 665–685.
- Pan, B.; Li, C.; Che, H.; Leung, M.-F.; Yu, K. Low-Rank Tensor Regularized Graph Fuzzy Learning for Multi-View Data Processing. IEEE Trans. Consum. Electron. 2024, 70, 2925–2938.
- Shi, Y.; Mizumoto, M. A new approach of neuro-fuzzy learning algorithm for tuning fuzzy rules. Fuzzy Sets Syst. 2000, 112, 99–116.
- Chung, F.-L.; Lee, T. Fuzzy learning vector quantization. In Proceedings of the 1993 International Joint Conference on Neural Networks (IJCNN-93), Nagoya, Japan, 25–29 October 1993; Volume 3, pp. 2739–2743.
- Russo, M. Genetic fuzzy learning. IEEE Trans. Evol. Comput. 2000, 4, 259–273.
Class Layer

    class Layer:
        def __init__(self):
            self.input = None
            self.output = None

        def forward(self, input):
            pass

        def backward(self, output_gradient, learning_rate):
            pass
Class Dense

    class Dense(Layer):
        def __init__(self, input_size, output_size):
            # each weight/bias cell gets its own randomly initialized OFN
            # (ndarray.fill() would share a single OFN instance across all cells)
            self.weights = np.empty((output_size, input_size), dtype=object)
            for idx in np.ndindex(self.weights.shape):
                self.weights[idx] = OFN()
            self.bias = np.empty((output_size, 1), dtype=object)
            for idx in np.ndindex(self.bias.shape):
                self.bias[idx] = OFN()

        def forward(self, input):
            self.input = input
            return np.dot(self.weights, self.input) + self.bias

        def backward(self, output_gradient, learning_rate):
            weights_gradient = np.dot(output_gradient, self.input.T)
            # the input gradient must be computed before the weights are updated
            input_gradient = np.dot(self.weights.T, output_gradient)
            self.weights -= weights_gradient * learning_rate
            self.bias -= output_gradient * learning_rate
            return input_gradient
Class Activation

    class Activation(Layer):
        def __init__(self, activation, activation_prime):
            self.activation = activation
            self.activation_prime = activation_prime

        def forward(self, input):
            self.input = input
            return self.activation(self.input)

        def backward(self, output_gradient, learning_rate):
            return np.multiply(output_gradient, self.activation_prime(self.input))
Network Definition

    network = [
        Dense(2, 4), Tanh(),
        Dense(4, 2), Tanh(),
        Dense(2, 1), Tanh(),
    ]
Network Training

    # train
    for e in range(epochs):
        error = 0
        for x, y in zip(X, Y):
            # forward pass through every layer
            output = x
            for layer in network:
                output = layer.forward(output)

            # accumulate the error
            error += mse(y, output)

            # backward pass in reverse layer order
            grad = mse_prime(y, output)
            for layer in reversed(network):
                grad = layer.backward(grad, learning_rate)

        error /= len(X)
Class OFN

    class OFN:
        def __init__(self, x1=None, x2=None, x3=None, x4=None):
            self.x1 = random.random() if x1 is None else x1
            self.x2 = random.random() if x2 is None else x2
            self.x3 = random.random() if x3 is None else x3
            self.x4 = random.random() if x4 is None else x4

        def __add__(self, other):
            if isinstance(other, OFN):
                return OFN(self.x1 + other.x1, self.x2 + other.x2,
                           self.x3 + other.x3, self.x4 + other.x4)
            return OFN(self.x1 + float(other), self.x2 + float(other),
                       self.x3 + float(other), self.x4 + float(other))

        def __sub__(self, other):
            if isinstance(other, OFN):
                return OFN(self.x1 - other.x1, self.x2 - other.x2,
                           self.x3 - other.x3, self.x4 - other.x4)
            return OFN(self.x1 - float(other), self.x2 - float(other),
                       self.x3 - float(other), self.x4 - float(other))

        def __mul__(self, other):
            if isinstance(other, OFN):
                return OFN(self.x1 * other.x1, self.x2 * other.x2,
                           self.x3 * other.x3, self.x4 * other.x4)
            return OFN(self.x1 * float(other), self.x2 * float(other),
                       self.x3 * float(other), self.x4 * float(other))

        # __truediv__ replaces the Python 2 __div__ hook so / works in Python 3
        def __truediv__(self, other):
            if isinstance(other, OFN):
                return OFN(self.x1 / other.x1, self.x2 / other.x2,
                           self.x3 / other.x3, self.x4 / other.x4)
            return OFN(self.x1 / float(other), self.x2 / float(other),
                       self.x3 / float(other), self.x4 / float(other))
| Type of Anomaly | Percentage of Anomaly Detection on Test Data | Percentage of Anomaly Prediction on Test Data |
|---|---|---|
| 1 | 95 | 91 |
| 2 | 94 | 89 |
| 3 | 94 | 91 |
| 4 | 91 | 85 |
| 5 | 92 | 84 |
| 6 | 93 | 88 |
| 7 | 92 | 87 |
| Type of Anomaly | Percentage of Anomaly Detection on Test Data | Percentage of Anomaly Prediction on Test Data |
|---|---|---|
| 1 | 96 | 92 |
| 2 | 92 | 86 |
| 3 | 91 | 88 |
| 4 | 90 | 86 |
| 5 | 91 | 83 |
| 6 | 91 | 86 |
| 7 | 94 | 88 |
© 2024 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Apiecionek, Ł. Fully Scalable Fuzzy Neural Network for Data Processing. Sensors 2024, 24, 5169. https://doi.org/10.3390/s24165169