Computationally Efficient Inference via Time-Aware Modular Control Systems
Abstract
1. Introduction
1.1. Preface
1.2. Contributions
- PHIMEC
  - a novel framework for neural control that addresses data-leveraging efficiency;
  - two design variants: recurrent and fully connected;
  - two variants of learning federation;
  - performance showcased in a real-world control application.
- IMPULSTM
  - integrates temporal knowledge into an LSTM design;
  - addresses the non-convexity of the loss landscape of recurrent neural networks to enhance generalization;
  - improves the memory efficiency of the LSTM design;
  - provides a formal analysis of the expressiveness reduced by our changes;
  - proposes a human-in-the-loop way to control how networks handle irregularly sampled temporal data.
- DIMAS
  - proposes a modular-design framework for domain-informed distributed control systems;
  - combines physical awareness with temporal insight for full domain-knowledge integration;
  - addresses wasted compute in real-time multi-agent systems (RTMAS) in an intuitive way.
2. Related Research
2.1. Data-Efficient Neural Control
2.2. Temporal Awareness
2.3. Modular Neural Systems
3. Proposals
3.1. PHIMEC
3.2. IMPULSTM
3.2.1. Methodology
3.2.2. Scaling of Architecture Changes
- w is the width of the layer,
- S is the ratio between the computational cost of a sigmoid activation and that of the chosen nonlinear activation,
- T is the computational cost of a tanh activation,
- d is the computational cost of a dot product operation, and
- H is the computational cost of a Hadamard product.
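To make the bookkeeping concrete, the per-timestep cost of a standard LSTM layer can be tallied from its usual gate structure: four pre-activation dot products, three sigmoid gates, two tanh evaluations (candidate cell state and cell output), and three Hadamard products. The sketch below is an illustrative reconstruction in the terms above, not the paper's exact expression; it assumes tanh is the chosen reference nonlinearity, so one sigmoid costs S·T per element, and treats d and H as per-vector costs.

```python
def lstm_step_cost(w, S, T, d, H):
    """Illustrative per-timestep cost of a standard LSTM layer of width w.

    Assumes the usual gate structure and treats d and H as per-vector
    costs; sigmoid cost is S times the tanh cost T, per the definition
    of S with tanh as the chosen nonlinearity.
    """
    gates = 4 * d             # pre-activations of i, f, o and candidate g
    sigmoids = 3 * w * S * T  # element-wise sigmoids over three gate vectors
    tanhs = 2 * w * T         # candidate cell state and cell-state output
    hadamards = 3 * H         # f*c, i*g, o*tanh(c)
    return gates + sigmoids + tanhs + hadamards
```

Dropping one tanh or one Hadamard product, as a memory- or compute-saving modification might, subtracts the corresponding term directly.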
3.3. DIMAS
- the last measured state of node i at time t;
- the amount of control applied to node i at time t;
- L, a function that depends on both the measured state and the applied control.
3.3.1. Priority Manager (PM)
3.3.2. Urgency Decoder (UD)
3.4. Scalability
3.4.1. Structural
3.4.2. PHIMEC
3.4.3. IMPULSTM
- x(t) is the state of the system at time t,
- u(t) is the input or control variable, and
- f describes how the state evolves over time.
- a term denoting the expressiveness of the standard LSTM,
- a term denoting the expressiveness of the IMPULSTM, and
- terms denoting the temporal handling abilities of the LSTM and the IMPULSTM, respectively, with respect to irregularly sampled data.
- x(t) is the state vector of the system at time t,
- u(t) is the input vector (control input),
- y(t) is the output vector,
- A is the system matrix that defines the system’s dynamics,
- B is the input matrix,
- C is the output matrix, and
- D is the feedthrough (or direct transmission) matrix.
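With the matrices defined above, the discrete-time model x’ = A x + B u, y = C x + D u can be stepped directly. A minimal sketch; the double-integrator values are hypothetical, chosen only to exercise the update:

```python
def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def vadd(a, b):
    return [p + q for p, q in zip(a, b)]

def step(A, B, C, D, x, u):
    """One step of x' = A x + B u, y = C x + D u."""
    x_next = vadd(matvec(A, x), matvec(B, u))
    y = vadd(matvec(C, x), matvec(D, u))
    return x_next, y

# Hypothetical 1D double integrator, sampling interval dt = 0.1
dt = 0.1
A = [[1.0, dt], [0.0, 1.0]]  # position-velocity dynamics
B = [[0.0], [dt]]            # acceleration enters the velocity
C = [[1.0, 0.0]]             # observe position only
D = [[0.0]]                  # no feedthrough
x, y = step(A, B, C, D, [0.0, 1.0], [0.5])
```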
Let x_t be the true state of the system at the current timestep t. The system evolves from the previous timestep according to the system dynamics. If the last available measurement was taken at an earlier time, the neural network typically relies on that measurement and the known input to estimate the state at the next timestep. Temporal irregularity can then be defined as the deviation between the true state and this estimate, which assumes regular evolution of the system.
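For a concrete scalar example (hypothetical dynamics, not from the paper), this deviation can be computed by evolving the last measurement over the actual elapsed time and over the nominal interval, and differencing the two:

```python
import math

def temporal_irregularity(x_meas, a, dt_nominal, dt_actual):
    """Deviation between the true state, evolved over the actual elapsed
    time, and the estimate that assumes one nominal sampling interval.
    Uses the scalar dynamics dx/dt = a*x, so x(t + dt) = x(t) * exp(a*dt)."""
    x_true = x_meas * math.exp(a * dt_actual)
    x_est = x_meas * math.exp(a * dt_nominal)  # regular-evolution assumption
    return x_true - x_est

# A sample arriving late on a 0.1 s schedule leaves a residual error:
delta = temporal_irregularity(1.0, -0.5, 0.1, 0.3)
```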
- the true state of the system at time t,
- the state representation in the latent space of the LSTM model at time t, and
- the state representation of the LSTM model at time t.
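The exact IMPULSTM mechanism is developed in the text; as a point of reference, a common way to make an LSTM cell time-aware (the approach of the T-LSTM baseline that appears in the benchmarks below) is to discount the carried cell memory by the elapsed interval before the usual gate update. A minimal sketch; the decay form and the time constant tau are assumptions:

```python
import math

def decay_cell_state(c, delta_t, tau=1.0):
    """Discount the LSTM cell memory by the elapsed time delta_t before
    the next gate update: larger gaps between samples shrink the carried
    memory toward zero.  g equals 1 at delta_t = 0 and decays
    monotonically as the gap grows."""
    g = 1.0 / math.log(math.e + delta_t / tau)
    return [g * ci for ci in c]
```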
4. Experiments
4.1. Results PHIMEC
4.1.1. Control Problem
- the Lagrange multiplier associated with the lift-force constraint, introduced to make the constrained objective tractable for gradient descent-based training of the controller; it also serves as an additional regularization term, improving the uniqueness of the solution found during training;
- the minimum acceptable lift force, which depends on the characteristics of the aircraft structure and on the other requirements of the use case; and
- the lift and drag forces, expressed as functions of the flap angles and of environmental parameters such as the axial force.
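Folding the lift constraint into the training objective via the multiplier yields a single differentiable loss; the hinge form below is a plausible sketch, not necessarily the paper's exact formulation, and the names are illustrative:

```python
def control_loss(drag, lift, lift_min, lam):
    """Drag plus a multiplier-weighted penalty on the lift shortfall.
    The hinge keeps the penalty inactive once lift >= lift_min, so
    gradient descent trades drag only against a violated constraint."""
    return drag + lam * max(0.0, lift_min - lift)
```

With a sufficiently large multiplier, minimizers of this loss satisfy the lift constraint while the penalty term simultaneously regularizes the solution, as described above.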
4.1.2. Empirical Analysis
4.2. Results IMPULSTM
4.2.1. Performance Benchmarks
4.2.2. Empirical Analysis
4.3. Results DIMAS
4.3.1. Formation Control Task
- the Euclidean distance between robots i and j, and
- the prescribed inter-robot distance.
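The formation objective implied by these quantities — drive every constrained pair's distance to the prescribed value — can be scored as a sum of squared deviations. A minimal sketch; the function name and the choice of squared error are assumptions:

```python
import math

def formation_error(positions, pairs, d_star):
    """Sum of squared deviations of inter-robot Euclidean distances
    from the prescribed distance d_star, over the constrained pairs."""
    err = 0.0
    for i, j in pairs:
        (xi, yi), (xj, yj) = positions[i], positions[j]
        d_ij = math.hypot(xi - xj, yi - yj)
        err += (d_ij - d_star) ** 2
    return err

# A unit square with unit side-length constraints has zero formation error:
square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
err = formation_error(square, edges, 1.0)
```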
4.3.2. Graph Neural Networks
- Aggregate:
- Update:
- Readout:
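The three stages above follow the generic message-passing template: aggregate neighbour features, update each node, and read out a graph-level value. The learned aggregate/update/readout functions are elided in the text, so the sketch below substitutes plain averaging and summation for them; all names are illustrative:

```python
def gnn_layer(features, adjacency):
    """One message-passing layer over scalar node features.

    Aggregate: mean of each node's neighbours' features.
    Update: average of the node's own feature and the aggregate
    (a stand-in for a learned update function)."""
    out = []
    for i, nbrs in enumerate(adjacency):
        agg = sum(features[j] for j in nbrs) / len(nbrs) if nbrs else 0.0
        out.append(0.5 * (features[i] + agg))
    return out

def readout(features):
    """Readout: sum-pool the node features into one graph-level value."""
    return sum(features)

# Two mutually connected nodes exchange and blend their features:
h = gnn_layer([1.0, 3.0], [[1], [0]])
g = readout(h)
```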
4.3.3. IMPULSTM and PHIMEC
4.3.4. Results
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Abbreviations
Abbreviation | Expansion |
---|---|
DIMAS | Domains Integration for Multi-Agent Systems |
PHIMEC | Physics-Informed Meta Control |
GNN | Graph Neural Network |
LSTM | Long Short-Term Memory |
GSP | Graph Signal Processing |
UD | Urgency Decoder |
PM | Priority Manager |
References
Noise Fraction | PHIMEC | DDPG | SAC | TD3 |
---|---|---|---|---|
0.01 | 20.5 | 19.8 | 19.7 | 19.9 |
0.05 | 20.2 | 19.3 | 19.2 | 19.2 |
0.10 | 19.2 | 18.4 | 18.4 | 18.5 |
0.15 | 18.6 | 17.8 | 19.2 | 18.1 |
Noise Fraction | R-PHIMEC | RDPG | R-SAC | RTD3 |
---|---|---|---|---|
0.01 | 21.3 | 20.6 | 20.3 | 20.1 |
0.05 | 20.8 | 19.5 | 19.7 | 19.6 |
0.10 | 20.2 | 18.8 | 19.2 | 19.0 |
0.15 | 18.6 | 18.0 | 18.9 | 18.7 |
Task | Traits | Reference |
---|---|---|
UCI Electricity Load Diagrams Dataset | High spectral flatness | F1 |
UCI PEM-SF Traffic Dataset | Sharp peaks | F2 |
UCI Air Quality Dataset | Generic benchmark | F3 |
Electricity Transformer Temperature | Spectral flatness and long-term dependencies | F4 |
CIFAR-10 | Convolutional network | T1 |
MNIST with ReLU | Unfamiliar activation function | T2 |
LSTM for two-tank cascade system identification | Spurious Valleys | T3 |
Random quadratic functions | Simple task | T4 |
Task | T-LSTM | T-RNN | LSTM |
---|---|---|---|
F1 | 25 | 27 | 38 |
F2 | 24 | 23 | 31 |
F3 | 25 | 26 | 33 |
F4 | 31 | 30 | 42 |
Task | T-LSTM | T-RNN | LSTM |
---|---|---|---|
T1 | 5 | 2 | −2 |
T2 | 3 | 2 | −1 |
T3 | 3 | 7 | 3 |
T4 | 4 | 8 | 4 |
Layer | Input | Output |
---|---|---|
IMPULSTM | 128 | 96 |
PHIMEC 1 | 256 | 128 |
PHIMEC 2 | 256 | 2 |
Number | 0 | 1 | 2 | 3 | 4 | 5 |
---|---|---|---|---|---|---|
4 | 100 | 91 | 83 | 67 | N/a | N/a |
5 | 97 | 92 | 79 | 71 | 65 | N/a |
6 | 93 | 84 | 71 | 67 | 53 | 47 |
7 | 87 | 85 | 68 | 61 | 49 | 44 |
8 | 73 | 77 | 60 | 54 | 38 | 40 |
9 | 66 | 60 | 52 | 47 | 37 | 35 |
Number | 0 | 1 | 2 | 3 | 4 | 5 |
---|---|---|---|---|---|---|
4 | 100.0 | 94.0 | 88.0 | 74.0 | N/a | N/a |
5 | 96.0 | 94.0 | 83.0 | 77.0 | 72.0 | N/a |
6 | 91.0 | 84.0 | 74.0 | 71.0 | 58.0 | 53.0 |
7 | 84.0 | 84.0 | 69.0 | 64.0 | 53.0 | 49.0 |
8 | 69.0 | 75.0 | 60.0 | 56.0 | 41.0 | 44.0 |
9 | 62.0 | 58.0 | 51.0 | 48.0 | 39.0 | 38.0 |
Number | 0 | 1 | 2 | 3 | 4 | 5 |
---|---|---|---|---|---|---|
4 | 100 | 100 | 92 | 74 | N/a | N/a |
5 | 100 | 100 | 87 | 78 | 72 | N/a |
6 | 99 | 93 | 78 | 74 | 59 | 52 |
7 | 96 | 94 | 75 | 68 | 54 | 49 |
8 | 81 | 85 | 66 | 60 | 42 | 44 |
9 | 73 | 66 | 58 | 52 | 41 | 39 |
Number | 0 | 1 | 2 | 3 | 4 | 5 |
---|---|---|---|---|---|---|
4 | 100.0 | 103.0 | 98.0 | 81.0 | N/a | N/a |
5 | 99.0 | 102.0 | 91.0 | 84.0 | 80.0 | N/a |
6 | 97.0 | 93.0 | 81.0 | 79.0 | 65.0 | 59.0 |
7 | 92.0 | 93.0 | 76.0 | 71.0 | 58.0 | 54.0 |
8 | 77.0 | 83.0 | 66.0 | 62.0 | 45.0 | 48.0 |
9 | 68.0 | 63.0 | 57.0 | 53.0 | 43.0 | 42.0 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Shchyrba, D.; Zarzycki, H. Computationally Efficient Inference via Time-Aware Modular Control Systems. Electronics 2024, 13, 4416. https://doi.org/10.3390/electronics13224416