Review

A Survey of Multi-Agent Cross Domain Cooperative Perception

1 College of Electronic and Information Engineering, Tongji University, Shanghai 201804, China
2 Shanghai Research Institute for Intelligent Autonomous Systems, Shanghai 201804, China
* Author to whom correspondence should be addressed.
Electronics 2022, 11(7), 1091; https://doi.org/10.3390/electronics11071091
Submission received: 17 February 2022 / Revised: 22 March 2022 / Accepted: 29 March 2022 / Published: 30 March 2022
(This article belongs to the Collection Advance Technologies of Navigation for Intelligent Vehicles)

Abstract

Intelligent unmanned systems for ground, sea, aviation, and aerospace applications are important research directions for the new generation of artificial intelligence in China. Intelligent unmanned systems are also important carriers of the interactive mapping between physical space and cyberspace in the digitization of human society. Based on the current domestic and overseas development status of unmanned systems for ground, sea, aviation, and aerospace applications, this paper reviewed the theoretical problems and research trends of multi-agent cross-domain cooperative perception. The scenarios of multi-agent cooperative perception tasks in different areas were investigated in depth, the scientific problems of cooperative perception were analyzed, and the research directions of multi-agent cooperative perception theory for addressing the challenges of complex environments, interactive communication, and cross-domain tasks were expounded.

1. Introduction

With the development of artificial intelligence, big data, the internet of things, unmanned systems, and other interdisciplinary science and technology, the whole of human society is evolving towards digital spaces through a series of cyber-physical systems (CPS), such as the digital city [1], digital economy [2], digital governance [3], and digital life [4]. As a fundamental theory and method for the multi-dimensional mapping representation of physical space and information space, perception mechanisms and methods have become a hot research topic in recent years [5]. In cognitive science, perception is considered the medium and means of information transfer between human consciousness and the objective world, and it can also be extended to the information expression of a living subject about its inner state and external objects [6]. Unmanned systems include the unmanned aerial vehicle (UAV), unmanned ground vehicle (UGV), unmanned surface vehicle (USV), autonomous underwater vehicle (AUV), field robot, and other unmanned platforms and equipment. Various types of unmanned systems for ground, sea, aviation, and aerospace applications are oriented to different physical domain-sensing tasks, thereby integrating different multimode sensing chips and sensor devices and eventually forming distinctive configuration features and sensing advantages [7,8,9]. With the advent of the digital intelligence era, unmanned systems are gradually developing into anthropomorphic living subjects with a certain degree of autonomy. Autonomous unmanned systems are becoming a combination of artificial intelligence, robotics, real-time control, and decision-making systems that can widely replace humans in a variety of environments, independently completing tasks such as perception [10].
Future unmanned systems for ground, sea, aviation, and aerospace applications, together with large-scale cross-domain collaborative applications, will greatly extend the range of human perception and expand human behavioral capabilities. However, the development and evolution of autonomous perception in unmanned systems is a long and slow process, and there are numerous examples of the lack of autonomous perception in unmanned systems. Self-driving cars are still far from fully driverless operation, given a spate of safety incidents caused by the vehicles' insufficient perceptual abilities [11]. A comprehensive survey of UAVs shows that the perception technology of a single drone leads to many issues in UAV path planning when trading off cost-efficiency, time-efficiency, energy-efficiency, robustness, and collision-avoidance requirements; in addition, cooperative perception through multi-UAV network connectivity is a promising approach for various mission-critical operations performed by UAVs [12]. Many challenges faced by USVs in the marine environment are also associated with autonomous perception, such as haze or fog, reflections of the surrounding environment in the water, large highlighted water areas caused by sunlight, dynamic water surfaces, and varying backgrounds [13]. The state of the art in robotics reveals similar trends: typical robots, such as mobile healthcare robots [14] and manufacturing robots [15], have witnessed a growing interest in multi-agent cooperation and network connections. It can be expected that multi-agent cooperative perception has vast application scenarios and is more effective than single-agent perception. To break through the key technologies and frontier theory of multi-agent cooperative perception, this paper analyzed the key scientific issues in the field of cross-domain multi-agent cooperative perception and summarized its research trends by combining the current development status of unmanned systems for ground, sea, aviation, and aerospace applications in China and abroad.
The overall content architecture of this paper is shown in Figure 1. To better study the problem of cross-domain collaboration, we first studied the perception methods of the different unmanned systems of land, sea, and air; this is the main work of Section 2. As research on land, sea, and air unmanned systems often comes from different subdivision disciplines, it is a challenge to extract knowledge from the copious literature in different fields and sum up the common perception technologies of different unmanned systems. We sorted out and analyzed an extensive body of literature to deal with this challenge. Four kinds of perceptual means are described in detail: lightwave perception, microwave perception, acoustic perception, and dedicated perception. Based on the advantages and disadvantages of each means, the importance of multi-agent collaborative perception is discussed. The literature review also indicates that current research on collaborative perception is not sufficient. Examining the perception limitations of a single agent is an effective way to derive the challenges of multi-agent cooperative perception.
In Section 3, we analyzed and summarized a large number of relevant scientific papers focusing on the difficulties of multi-agent cooperative perception. For the challenges of multi-agent cooperative perception tasks, such as environmental complexity, interactive communication, and task diversity, we also tried to provide solutions based on theoretical knowledge and literature summaries and to predict future development trends in combination with application scenarios. The thorough analysis of and countermeasures for the different challenges correspond to Section 3.1, Section 3.2 and Section 3.3, respectively.
After the investigation and analysis of the limitations of perception methods and the key challenges of multi-agent cooperative perception tasks, we summarized the survey of multi-agent cross-domain cooperative perception in Section 4. The main contributions of this paper are as follows: (1) In view of the current lack of research on multi-agent cross-domain collaborative perception for land, sea, and air unmanned systems, the investigation and analysis were carried out from both the internal factors of the agent systems and the external factors of environments and tasks. (2) We systematically summarized the internal and external shortcomings and challenges and put forward research suggestions from different perspectives. To some extent, this paper provides research ideas for follow-up research on cross-domain collaborative perception.

2. Overview of Perception Technologies for Unmanned Systems

As shown in Figure 2, unmanned systems represented by unmanned aerial vehicles (UAVs) [16], unmanned ground vehicles (UGVs) [17], unmanned surface vehicles (USVs) [18], etc., have different sensor devices and sensing means due to their different application fields, but they also share common technologies, mainly based on sensing means such as light waves, microwaves, and acoustic waves, to achieve environmental sensing, navigation, and positioning.
Lightwave Perception: Lightwave sensors, including RGB cameras, infrared cameras, lidar, X-ray detectors, etc., are used for autonomous navigation, tracking, and obstacle avoidance by different unmanned systems. Compared with other perception methods, lightwave perception has higher detection accuracy and longer detection distance but also higher visibility requirements. Lightwave sensors can be divided into active sensors and passive sensors [19]. Active sensors such as lidar send out lightwave signals and sense changes in the outside world by receiving the reflected signals. The strengths and weaknesses of the lidar technologies emerging in recent years were analyzed by Hsu et al. [20]. Passive lightwave sensors do not emit light waves themselves but passively receive external lightwave signals to sense things. Lightwave sensors such as optical cameras are the most widely used sensors today and, combined with artificial intelligence technology, support a series of tasks such as feature detection, target tracking, environment reconstruction and segmentation, and human or vehicle recognition [21]. A recent survey on applying artificial intelligence to passive lightwave sensors suggests that it will further inspire new tools for material analysis, diagnosis, and healthcare [22]. The application of artificial intelligence to different unmanned systems of land, sea, air, and space benefits from the current development of edge computing: edge computing power has been greatly improved, which promotes the application of intelligent algorithms for machine vision, lidar point cloud processing, and so on.
Microwave Perception: This sensing method determines distance by measuring the time difference between the transmitted microwave and its returning echo. Millimeter-wave radar is a typical application of microwave perception. The detection performance of microwaves is not as strong as that of lightwave sensors, but their ability to penetrate materials makes them superior to lightwave sensors at night and in bad weather such as fog and snow. The incorporation of microwave tomographic imaging radar into UAS can be applied to search and rescue operations during crisis events, as well as research in cultural heritage and agriculture [23]. Millimeter-wave radars are being used extensively for commercial applications, especially automotive radar applications at 77 GHz and 24 GHz [24,25].
Acoustic Perception: Acoustic perception also mainly uses the reflective ranging method. It has the characteristics of high detection accuracy within a short distance, strong penetration ability, and a relatively simple structure. However, acoustic perception is affected by temperature and the Doppler effect, and because sound travels much more slowly than electromagnetic waves, a delay is produced when the object's speed is high. In addition, as the detection distance increases, its directivity weakens sharply. To address this problem, Park et al. proposed a new ultrasonic sensor design approach that uses the frequency difference between two ultrasonic waves to generate a highly directional low-frequency wave with a small aperture, improving the spatial resolution of the ultrasonic sensor [26]. Practical conditions such as the characteristics of the transmission medium and the environmental constraints on unmanned systems pose substantial challenges for acoustic sensor applications [27,28].
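As a rough illustration of the reflective (time-of-flight) ranging principle shared by microwave and acoustic sensing, the following minimal Python sketch, using assumed example values rather than figures from the cited works, compares the echo delays of the two wave types and the range drift that a fast-moving object causes during the slow acoustic round trip.

```python
# Minimal sketch of reflective (time-of-flight) ranging, assuming idealized
# single-echo conditions; all numbers are illustrative assumptions.

C_LIGHT = 3.0e8    # propagation speed of microwaves (m/s)
C_SOUND = 343.0    # propagation speed of sound in air at ~20 degC (m/s)

def echo_range(round_trip_time_s: float, wave_speed_mps: float) -> float:
    """Distance to a reflector from the transmit/receive time difference."""
    return wave_speed_mps * round_trip_time_s / 2.0

# A target 10 m away: the microwave echo returns in ~67 ns,
# while the ultrasonic echo needs ~58 ms.
target_m = 10.0
t_radar = 2.0 * target_m / C_LIGHT
t_sonic = 2.0 * target_m / C_SOUND
print(f"radar echo delay:      {t_radar * 1e9:.1f} ns")
print(f"ultrasonic echo delay: {t_sonic * 1e3:.1f} ms")

# During the long acoustic round trip a fast object keeps moving, so the
# reported range lags the true one by roughly v * t_echo.
v_target = 20.0  # m/s, hypothetical object speed
print(f"range drift during acoustic echo: {v_target * t_sonic:.2f} m")
print(f"recovered radar range: {echo_range(t_radar, C_LIGHT):.2f} m")
```

This simple calculation also makes the directivity/delay trade-off discussed above more tangible: the slower the wave, the more the scene can change during a single measurement.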
In addition to environmental perception, navigation, and positioning technologies for unmanned systems, the use of unmanned system platforms or groups to carry out perception tasks at different scales in different domains is also an important research direction in the field of unmanned system perception.
Dedicated Perception: As complex systems coupled with multiple physical fields, unmanned systems often integrate additional sensors to realize different subsystems and components with cross-level sensing capabilities. Because the functional properties of various types of unmanned systems and their components differ, the types of dedicated sensors required are varied. In the case of unmanned vehicles, for example, dozens of types of sensors, such as pressure sensors, temperature sensors, Hall sensors, and oxygen sensors, are built in to measure quantities such as temperature, pressure, speed, torque, and exhaust gas [29]. In recent years, with the continuous development of technology, new types of special sensors and application methods have emerged; some examples are shown in Figure 3. Electronic skin tactile sensing based on an array of scalable tactile sensors deployed on gloves can be used to simulate the process of gripping objects and judging their weight and material [30]. New sensors based on piezoelectric and pyroelectric effects [31] can be applied to environmental monitoring by measuring temperature and pressure. Olfactory sensors consisting of insect antennae combined with mechanical devices can be used to track down the source of gas leaks or fires, thus aiding explosive identification, disaster prevention, and disaster relief [32]. An intracortical brain-computer interface enables paralyzed people to achieve typing speeds comparable to those of able-bodied people by decoding neural activity [33]. Furthermore, many electromyography (EMG) sensors have been widely studied for human-robot interaction [34].
Multi-Agent Perception: Li et al. showed in Nature that distributed perceptual sensors combined with artificial intelligence algorithms can achieve ambient ubiquitous intelligence. Their long-term practice in smart hospital scenarios showed that ambient ubiquitous intelligent perception can understand the complex interactions between the physical environment and health-critical human behavior and substantially improve the efficiency of hospital convalescence [36]. Peter et al. from ETH, Switzerland, described the important role of multi-robot collaborative perception in responding to the COVID-19 epidemic but also pointed out that the ability to perceive complex hospital environments is still an important bottleneck for the large-scale use of service robots [37,38]. Yoon et al. proposed that collaborative perception and information sharing have become an important underlying theory for future large-scale applications of smart transportation and unmanned vehicles; compared with single-vehicle intelligence, decentralized multi-vehicle networked cooperative sensing can help solve the long-tail problem of unknown environments [39,40,41]. In addition, as Figure 4 shows, the collaborative perception of unmanned systems across the domains of land, sea, and air has been intensively researched and explored in recent years in ecological protection [42], smart agriculture [43], terrain detection [44], and underwater exploration [45].
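To make the idea of decentralized cooperative sensing more concrete, the following minimal sketch (with made-up poses and detections, not taken from the cited works) shows how an ego vehicle could transform an object detection shared by a neighbouring vehicle into its own coordinate frame before merging it with its local detections.

```python
# Minimal sketch of V2V cooperative perception: a neighbouring vehicle shares
# an object detection expressed in its own body frame, and the ego vehicle
# rotates/translates it into the ego frame before merging it with local
# detections. Poses and detections below are hypothetical example values.
import math

def to_ego_frame(detection_xy, sender_pose, ego_pose):
    """Transform a 2-D point from the sender's frame into the ego frame.

    Each pose is (x, y, yaw) in a common world frame (e.g. from GNSS/odometry).
    """
    sx, sy, syaw = sender_pose
    ex, ey, eyaw = ego_pose
    px, py = detection_xy
    # Sender frame -> world frame.
    wx = sx + px * math.cos(syaw) - py * math.sin(syaw)
    wy = sy + px * math.sin(syaw) + py * math.cos(syaw)
    # World frame -> ego frame (rotate by -ego_yaw after translating).
    dx, dy = wx - ex, wy - ey
    return (dx * math.cos(-eyaw) - dy * math.sin(-eyaw),
            dx * math.sin(-eyaw) + dy * math.cos(-eyaw))

# The sender sees a pedestrian 5 m ahead of it; the ego vehicle is 20 m behind
# the sender in the same lane, so the pedestrian should appear ~25 m ahead.
sender_pose = (20.0, 0.0, 0.0)
ego_pose = (0.0, 0.0, 0.0)
print(to_ego_frame((5.0, 0.0), sender_pose, ego_pose))  # -> (25.0, 0.0)
```

A real cooperative perception message would also carry time stamps, uncertainties, and object classes, but the core step of re-expressing another agent's observations in a shared or local frame is the same.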

3. Analysis of Difficulties and Trend of Cross-Domain Multi-Agent Cooperative Perception

3.1. Cooperative Perception Confronts Challenges of Environmental Complexity

The physical world itself is composed of macro-, meso-, micro-, nano-, and other multi-scale materials, presenting constantly moving and changing characteristics of force, light, sound, heat, electricity, and magnetism, and the actual application environment of unmanned systems is characterized by multiple scales, high dynamics, and uncertainty. Unmanned systems are often required to work in extreme environments, such as nuclear radiation and chemical leaks [46], electromagnetic interference around high-voltage power grids [47], weather hazards [48], light disturbances [49], and complex terrain with multiple obstacles [50]. These environmental factors pose considerable difficulties and challenges for intelligent robots.
To begin with, as communication between an intelligent robot and the system to which it belongs relies primarily on mobile wireless networks, the impact of electromagnetic interference on the communication of unmanned systems is an issue that cannot be ignored. For example, in their investigation of the electromagnetic effects of UHV substations on drones, Li et al. mentioned that electromagnetic pulses from substation equipment can have detrimental effects on the electronic components of the drones [51]. The communication and control systems of drones are likely to be disturbed by strong electromagnetic fields during substation inspection tasks [52]. Some literature shows that weak magnetic fields can also produce interference effects: various sinusoidal and impulse electromagnetic interference can significantly influence the bio-sensors used for the control of wearable robots [53]. New communication systems with higher communication quality and environmental adaptability have therefore become a research hotspot for expanding robot applications [22]. Secondly, light disturbances and complex meteorological changes may also increase the instability of unmanned systems. In agricultural applications, Olson et al. suggested that light disturbances caused by cloud movements may lead to data degradation [54]. Kucharczyk et al. pointed out that it is difficult for current UAVs to work properly in sudden disasters such as floods and hurricanes, and RGB cameras, the most common UAV sensors, may fail in light-obscuring environments such as rain, haze, and smoke [55]. Complex environments influenced by light also have a large impact on unmanned vehicles; for example, Kim showed that a tunnel environment is irregular and has significantly lower illumination, affected by tunnel lighting and light reflected from moving vehicles [56]. In addition, complex terrain and ecological environments may have negative impacts on the movement speed and data collection efficiency of intelligent robots. For example, mining equipment and remotely operated vehicles sometimes struggle to function properly when operating in the deep sea due to concerns about fragile ecosystems [57]. Johnston's research on UAS for marine science and conservation mentioned that drones can easily collect high-resolution data at small scales over land, but it is difficult to sample effectively at large scales where distinguishing features are missing, such as over the ocean [58]. Jeong et al. addressed the problem that traditional laser ranging methods can be limited by complex terrain such as mountains and rivers [59]. To enable the smooth operation of intelligent robots in uninhabitable environments, Freitas et al. proposed an active control strategy for reconfigurable mobile robots on irregular terrain [60]. Gao et al. proposed an intelligent system to support aggressive quadrotor flight in complex environments, primarily by calculating the topologically equivalent free space of the user's teaching trajectory and combining spatiotemporal optimization, online sensing, and local re-planning [61]. Moreover, many scholars tend to adopt multi-agent cooperative perception strategies in response to these challenges. Kapoutsis et al. proposed a new method that uses multi-AUV robot swarms to explore unknown areas and construct detailed maps of the environment under environmental and communication constraints [62].
A large number of research papers in the field of unmanned vehicles show that smart vehicle–road cooperative perception will play an important role in reducing congestion [63,64,65]. Unmanned aerial vehicles have also recently been used in a wide variety of cooperative perception applications. Zhou et al. presented a decentralized and asynchronous systematic solution for multi-robot autonomous navigation in unknown, obstacle-rich scenes [66]. Andrade et al. proposed a real-time path-planning solution using multiple cooperative UAVs for SAR missions [67]. Yu et al. investigated the cooperative forest fire monitoring problem of multiple fixed-wing unmanned aerial vehicles in the presence of actuator faults during a fire monitoring mission. Cooperative robot tasks have been widely studied in intelligent manufacturing and other fields, as they can adapt to varying and dynamic environmental conditions, and collaborative work between robots and humans is becoming a hot topic in autonomous and collaborative robotics [68]. Compared with studies of UAVs and UGVs, research on unmanned surface vehicles (USVs) is relatively recent; however, multi-USV collaborative applications have received widespread attention for civil and military applications [69,70].
The relevant academic literature indicates that the complexity of the environment is reflected in interference factors, spatial and temporal scales, scene changes, environmental climate, and other aspects, and there is an urgent need to build theoretical methods through in-depth research. In view of these environmental challenges, it is difficult for a single intelligent robot to cope with sensing complex environments, and multi-agent cooperative perception should become an important development direction in the future:
(1)
For the task of multi-scale complex environment perception, further research is needed to establish a scenario-driven multi-granularity collaborative perception framework for unmanned systems for ground, sea, aviation, and aerospace applications to realize multi-agent full-domain perception and information sharing and interaction.
(2)
For the uncertainty of dynamic changes in the environment, research is needed on a multi-mode perception collaborative enhancement mechanism for unmanned systems for ground, sea, aviation, and aerospace applications, in order to form a multi-granularity perception fusion method for complex multi-agent scenarios (a minimal fusion sketch is given after this list).
(3)
For the task of cooperative perception across land, sea, and air, there is an urgent need to build a unified representation perception model with cross-level spatiotemporal characteristics to develop a multi-agent cooperative perception theory and method system driven by multi-dimensional cross-domain perception big data.
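To make the notion of multi-mode perception fusion in item (2) concrete, the following minimal sketch fuses range estimates of the same target from two hypothetical modalities (for example, lidar and radar) by inverse-variance weighting. It is a generic textbook-style baseline under the assumption of independent Gaussian noise, not the fusion framework of any cited work; the sensor names and noise levels are illustrative.

```python
# Minimal sketch of multi-modality measurement fusion by inverse-variance
# weighting; sensor names and noise levels are assumed for illustration.
from dataclasses import dataclass

@dataclass
class Measurement:
    value: float     # e.g. range to a target, in metres
    variance: float  # noise variance of this modality's estimate

def fuse(measurements: list[Measurement]) -> Measurement:
    """Fuse independent estimates of the same quantity.

    Each modality is weighted by the inverse of its variance, so the more
    reliable sensor dominates; the fused variance is smaller than the best
    individual one.
    """
    inv_vars = [1.0 / m.variance for m in measurements]
    fused_var = 1.0 / sum(inv_vars)
    fused_val = fused_var * sum(w * m.value for w, m in zip(inv_vars, measurements))
    return Measurement(fused_val, fused_var)

# Hypothetical readings of the same obstacle from two agents/modalities.
lidar = Measurement(value=25.3, variance=0.04)   # accurate in clear weather
radar = Measurement(value=24.8, variance=0.25)   # robust in fog, but noisier
print(fuse([lidar, radar]))  # -> value ~25.23, variance ~0.034
```

In a multi-granularity setting, the same weighting idea can be applied hierarchically, fusing per-agent estimates first and then fusing across agents or domains.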

3.2. Cooperative Perception Confronts Challenges of Multi-Agent Interaction

The study of innovative weakly interactive multi-agent cooperative perception methods is urgent for cooperative perception tasks under restricted communication and incomplete information conditions. As shown in Table 1, current unmanned systems have different communication approaches, such as Bluetooth, ZigBee, NB-IoT, 4G/5G, and Wi-Fi, and these approaches differ greatly in terms of transmission speed, distance, bandwidth, security, and robustness.
Multi-agent cooperative perception tasks under the conditions of restricted communication and incomplete information widely exist due to multiple factors such as the heterogeneous performance of unmanned systems, working conditions, and complex environments. For example, typical cross-domain robot workspaces such as subterranean areas, narrow spaces, and rugged terrain make it impossible for UAVs to reach and communicate with UGVs or field robots [71,72,73]. The darkness, unusual pressures and temperatures, and complicated submarine environments pose a threat to coordinated AUV reconnaissance and operations [74]. In addition, mixed traffic scenarios of unmanned vehicles and traditional vehicles, such as congestion, traffic accidents, vehicle cut-in, and vehicle cut-out, increase the difficulty of vehicle–road and vehicle–vehicle coordination, which is also a challenge for multi-level collaborative perception [75,76]. For the problem of restricted communication due to the variability of heterogeneous cross-domain multi-agent communication methods, Dorigo et al. proposed the need to establish universal interaction rules as soon as possible, based on an analysis of the development history and trends of unmanned system clusters [77]. Todescato et al. proposed a scalable partition-based control algorithm, inspired by the generalized gradient descent strategy, to solve the consensus problem under restricted cross-domain communication [78]. Regarding communication problems such as intermittency and time delay, Su et al. proposed an output regulation algorithm based on adaptive observers to overcome communication interaction perturbations [79]. Lin et al. proposed continuous-time and discrete-time distributed optimization algorithms with non-uniform unbounded convex constraint sets and non-uniform step sizes to solve the asynchronous communication problem [80]. Li et al. established event-triggered bounded consensus algorithms for stochastic multi-agent systems to solve the communication delay problem [81]. However, most of these algorithms are theoretically simplified abstract models of the communication constraint problem and are difficult to apply in practical scenarios. The related literature suggests that, in conjunction with the development of communication technologies, the use of unmanned system communication relaying strategies to achieve cooperative perception under weak communication conditions will be more relevant to practical applications in the future [82,83]. In addition, multi-agent cooperative perception is not entirely determined by the objective conditions of communication. In an adversarial environment, an originally stable cooperative communication mechanism may be broken by the gaming parties, and the cooperative perception task needs to be redistributed among the agents to cope with factors such as sudden communication interruptions [84]. Current collaborative task allocation mechanisms can be achieved by simple communication mechanisms between agents, such as the large-scale clustered collaborative robots published in Nature, which use light-sensing devices to achieve clustered movements of more than 100,000 agents [85]. However, for cooperative perception in complex environments, there is still a need to improve the stability of the agents' aperiodic sampled-data systems [86].
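As a toy illustration of how agents can still move towards a shared estimate when links drop intermittently, the sketch below runs discrete-time consensus over a randomly failing communication graph. It is a simplified abstraction in the spirit of the consensus literature cited above, not a reproduction of any specific algorithm; the topology, drop probability, and initial readings are assumptions.

```python
# Toy sketch: consensus over an unreliable network. Each agent repeatedly
# averages with the neighbours it actually hears from in that round; link
# failures (probability p_drop) model restricted communication. Values and
# topology are illustrative assumptions.
import random

def consensus_step(values, neighbours, p_drop=0.3, step=0.2):
    new_values = []
    for i, x_i in enumerate(values):
        update = 0.0
        for j in neighbours[i]:
            if random.random() > p_drop:            # message from j arrives
                update += step * (values[j] - x_i)  # move towards neighbour
        new_values.append(x_i + update)
    return new_values

random.seed(0)
# Ring of 5 agents with heterogeneous initial sensor readings.
neighbours = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
values = [10.0, 2.0, 7.5, 4.0, 6.0]

for _ in range(50):
    values = consensus_step(values, neighbours)
# Agents converge towards a common value near the initial average (5.9),
# despite roughly 30% of messages being lost in every round.
print([round(v, 2) for v in values])
```

Communication relaying, event triggering, and the other strategies cited above can be read as ways of making this basic information exchange survive harsher, more asymmetric conditions.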
Further study needs to be conducted on the mechanism of multi-agent interaction under weak communication conditions. Several research directions based on the above literature analysis are suggested as follows:
(1)
A distributed multi-agent information complementation model should be constructed. The multi-agent information complementation model can be set up by using edge computing instead of cloud computing to lighten data transmission in the cooperative communication of large-scale agent groups and to expand the shared information [87]. In addition, 5G, 6G, and other highly dynamic, large-bandwidth communication technologies lay a technical foundation for the construction of multi-agent information complementation models. However, the model also needs to pay attention to the security and reliability of information sharing. Information security is a broad field that has received much attention in computer networks and wireless sensor networks [88], and these information security studies provide a good reference for the construction of the collaborative perception model. Recently, our team applied blockchain technology to collaborative perception to ensure the security and credibility of perceptual information [89].
(2)
Task planning methods based on autonomous collaborative positioning and navigation need to be studied in combination with multi-agent perception. This means that different types of agents can provide positioning information to other agents through their perception and localization of surrounding environmental targets. For example, we proposed a cooperative perception localization method without GPS positioning at the International Conference on Computer, Control and Robotics, in which data are shared among the distributed sensing network nodes of intelligent lamp poles and a UGV. As the positions of the lamp poles are fixed, the cooperative perception of the poles and the UGV can be used for object localization: the pole-mounted cameras track the pollution source's position based on computer vision, and the UGV's path to the pollution source is then planned in combination with the known pole locations, achieving collaborative fine localization of pollution sources [90] (a minimal triangulation sketch is given after this list).
(3)
Task-driven multi-agent role assignment and contingency response mechanisms need to be established in the field of multi-agent collaboration. Since unmanned systems in different domains have different sensing devices and different perception perspectives, multi-agent cooperative task role division can be used to realize complementary collaborative perception. A number of recent papers on multi-agent collaboration have begun to study this method [91].
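The lamp-pole example in item (2) can be illustrated with a minimal triangulation sketch: two fixed, calibrated roadside cameras each report a bearing to the same target, and intersecting the two bearing rays gives a GPS-free position estimate. The geometry and values below are hypothetical and greatly simplified relative to the deployed system described in [90], which also handles tracking, noise, and path planning.

```python
# Minimal sketch of GPS-free target localization from two fixed roadside
# cameras, each reporting a bearing (angle) to the same target. Pole
# positions and bearings are hypothetical; a real system would also handle
# measurement noise, data association, and time synchronization.
import math

def locate(p1, theta1, p2, theta2):
    """Intersect two bearing rays p_i + t_i * (cos theta_i, sin theta_i)."""
    d1 = (math.cos(theta1), math.sin(theta1))
    d2 = (math.cos(theta2), math.sin(theta2))
    # Solve p1 + t1*d1 = p2 + t2*d2 for t1 with Cramer's rule (2x2 system).
    det = d1[0] * (-d2[1]) - d1[1] * (-d2[0])
    if abs(det) < 1e-9:
        raise ValueError("bearings are parallel; target not observable")
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (rx * (-d2[1]) - ry * (-d2[0])) / det
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# Two lamp poles 30 m apart along a street (known, surveyed positions).
pole_a, pole_b = (0.0, 0.0), (30.0, 0.0)
target = locate(pole_a, math.radians(60), pole_b, math.radians(120))
print(tuple(round(c, 2) for c in target))  # -> (15.0, 25.98)
```

Once the target position is available in the poles' surveyed frame, the UGV only needs its own pose relative to the poles to plan a path towards it, which is the division of roles described above.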

3.3. Cooperative Perception Confronts Challenges of Mission Diversity

Referring to the long time-series nonlinearity of discrete cooperative perception tasks, perceptual information memory and inference methods for distributed perception agent networks need to be studied. Perception tasks such as analyzing smart city development patterns [92], large infrastructure health monitoring [93], complex traffic flow prediction [94], and tracking and tracing the outbreak of pandemics [95] all necessitate the mining and analysis of long time-series, large-sample, and widely dispersed perception source data. Batty proposed a theory of urban complexity modeling consisting of high-frequency real-time big data and low-frequency city evolution data [96]; the theory explains the complexity of urban perception missions arising from the diversity, scale, and speed of accumulation of data. Unmanned systems, acting as sensing nodes or edge computing units, will become an important enabling technology for ubiquitous sensing and big data collection in the future. Because a single unmanned system is restricted by limited memory and computational power, it is of great research value for multiple agents to collaboratively set up distributed sensor networks [97,98]. In recent years, some scholars have combined 5G/6G and other high-throughput, low-latency advanced communication technologies to build multi-agent networks and service frameworks to address the key technologies for building multi-agent distributed sensing networks. For example, Han et al. combined 5G and big data technologies to discuss in detail a mobile cloud sensing computing framework and component construction methods to promote the development of ubiquitous intelligent sensing of the physical world [99]. Sliwa et al. explored a collaborative crowdsourcing sensing approach based on 6G technology and hybrid cloud-edge machine learning, which can improve data utilization by 223% on average while reducing network resource usage by up to 89% [100]. Shrestha et al. discussed the development of 6G-enabled UAV traffic management ecosystems, especially in the scenario of intensive air traffic, which can ensure the safety and efficiency of urban air transportation [101]. Gu et al. designed a path-tracking control algorithm for tracked mobile robots under a 6G and edge cloud framework, which can enhance tracking accuracy without undermining real-time performance [102]. Lv et al. analyzed the interconnection of vehicles under 6G networks and discussed the measurement and modeling of 6G-oriented wireless channels [103]. In addition, some scholars have devoted themselves to the problem of network topology optimization in complex systems. For example, Brown et al. presented a local computational method for the convex optimization of network structures in multi-agent systems, proposed a conjugate residual estimation algorithm based on the analysis of local problems and their correlation factors, and provided a theoretical basis for applying the local computational paradigm to convex optimization problems in multi-agent systems [104]. Li et al. proposed a new multi-objective multi-agent complex network optimization algorithm by drawing on the ideas of genetic algorithms and validated it on seventeen unconstrained and seven further multi-objective optimization problems [105].
Future multi-agent complex networks are evolving towards multi-scale, dynamic, and multidimensional systems where unmanned systems combined with intelligent learning algorithms play an important role in network node mobility, diffusion, and security [98,106]. At the same time, with the expansion of network nodes and layers, there are still great challenges for multi-agent collaborative decision control and massive data processing.
Several important research directions are drawn from the literature review and analysis:
(1)
Optimization algorithms for complex sensing networks based on multi-agent cooperative perception should be developed in depth. Take Shanghai's urban governance as an example: Shanghai has 25 million people, 1.3 billion m² of building area, and more than 6 million vehicles. It is hard to imagine how to construct a large-scale multi-agent cooperative perception network to realize global perception of the whole city, and the exponential growth of city data and the cost of computing power have become increasingly prominent. Our team is conducting preliminary research on this problem, such as how to achieve efficient data aggregation by optimizing the clustering structure [107]. There is still much work to be done on optimization algorithms for complex sensing networks.
(2)
Distributed federated learning and cloud-edge collaborative intelligent sensing methods need to be set up to address the challenges of mission diversity. Federated learning is a relatively new machine learning method that is well studied and widely applied for distributed data learning [108]. Since the applications of land, sea, and air unmanned systems in smart cities are scattered, and considering the autonomy and intelligence of multiple agents, federated learning and cloud-edge collaborative computing will be the trend for a wide variety of tasks (a minimal federated-averaging sketch is given after this list).
(3)
A full-coverage, full-factor, full-cycle spatiotemporally coupled information sensing model is promising for the future. Recently, digital twin and metaverse technologies have become new research hotspots, and the concepts of the digital factory [109], digital city [110], and digital earth [111] are emerging one after another. The mapping of physical space to information space has become a trend of social development. To exploit the efficiency of digital twins over long time series and large spatial spans, a full-coverage, full-factor, full-cycle spatiotemporally coupled information sensing model is a precondition: without perception, there is no source of data in physical space, and without physical space data, there is no digital twin.
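A minimal sketch of the federated-averaging idea behind item (2) is given below: each agent fits a local model on its own data, and only the model parameters, weighted by local sample counts, are exchanged, so raw sensor data never leave the agent. This is a generic illustration of the federated learning principle with made-up data and a one-parameter model, not the specific method of any cited work.

```python
# Minimal sketch of federated averaging (FedAvg) for a 1-D linear model
# y = w * x. Each agent trains locally; only the parameter w and the local
# sample count are shared. Data and client split are illustrative.
def local_update(w, data, lr=0.01, epochs=20):
    """A few gradient steps on the local mean squared-error loss."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def fed_avg(global_w, client_datasets, rounds=10):
    for _ in range(rounds):
        updates = [(local_update(global_w, d), len(d)) for d in client_datasets]
        total = sum(n for _, n in updates)
        global_w = sum(w * n for w, n in updates) / total  # sample-weighted mean
    return global_w

# Three agents observe the same underlying relation y ~ 3x on disjoint data.
clients = [
    [(1.0, 3.1), (2.0, 6.2)],
    [(0.5, 1.4), (1.5, 4.6), (2.5, 7.4)],
    [(3.0, 9.1)],
]
print(round(fed_avg(0.0, clients), 3))  # converges near the shared slope ~3
```

Cloud-edge collaboration fits naturally on top of this pattern: edge agents run the local updates, while a cloud or roadside aggregator only performs the lightweight parameter averaging.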

4. Conclusions

With the development of a new generation of artificial intelligence technology, intelligent unmanned systems will gradually develop into life-like subjects with a certain degree of autonomy, which will greatly expand the perception capabilities of humans. In addition to the integration of intelligent sensor devices to improve the perception capability of a single agent, research into the collaborative perception of multiple agents in ground, sea, aviation, and aerospace domains is of great significance, with broad application prospects in fields such as smart cities, smart military, smart construction, and smart agriculture.
Through the research and analysis of the current situation of unmanned system perception technology, this paper summarizes the difficulties of cross-domain multi-agent research: (1) the difficulty of multi-granularity perception of multi-scale complex scenes under multi-physical-field coupling conditions; (2) the difficulty of collaborative perception information interaction when communication is limited and information is incomplete; (3) the difficulty of executing collaborative perception tasks due to the diversity of ground, sea, aviation, and aerospace scenes.
In the face of the above-mentioned scientific problems of cooperative perception, it is urgent to build a unified model for the perception of cross-level spatiotemporal characteristics in order to develop a theoretical and methodological system for the cooperative perception of multiple agents driven by multi-domain perception big data. Further research is needed to study the mechanism of multi-agent interaction under weak communication conditions, to build a distributed multi-agent collaborative information complementation model, to study task-planning methods based on autonomous collaborative positioning and navigation, and to develop a task-driven multi-agent role allocation and contingency response mechanism. Referring to the long time-series nonlinearity of discrete cooperative perception tasks, perceptual information memory and inference methods for distributed perception agent networks need to be studied, and a spatiotemporally coupled information perception model with full coverage, full elements, and full cycles needs to be established.

Author Contributions

Conceptualization and methodology, Z.Z. and Q.D.; writing—original draft preparation, Z.Z. and Q.D.; writing—review and editing, G.L. and Z.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the China Postdoctoral Science Foundation (Grant No. BX20190243), in part by the National Natural Science Foundation of China (Grant No. 52002286), in part by the National Key R&D Project Foundation (Grant No. 2021YFE0193900), in part by the Shandong Provincial Natural Science Foundation (Grant No. ZR2020KF022), and in part by the Shandong Province Innovation Capability Enhancement Project (Grant No. 2021TSGC1049).

Institutional Review Board Statement

The study did not require ethical approval, as it did not involve humans or animals.

Informed Consent Statement

The study did not require informed consent, as it did not involve humans or animals.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Rathore, M.M.; Paul, A.; Hong, W.; Seo, H.; Awan, I.; Saeed, S. Exploiting IoT and big data analytics: Defining Smart Digital City using real-time urban data. Sustain. Cities Soc. 2018, 40, 600–610. [Google Scholar] [CrossRef]
  2. Rano, P. Digital economy: Features and development trends. ACADEMICIA Int. Multidiscip. Res. J. 2020, 10, 197–202. [Google Scholar] [CrossRef]
  3. Dubman, R. The Digital Governance of Data-Driven Smart Cities: Sustainable Urban Development, Big Data Management, and the Cognitive Internet of Things. Geopolit. Hist. Int. Relat. 2019, 11, 34–40. [Google Scholar]
  4. Olifirenko, J. Digital Life on Instagram: New Social Communication of Photography. New Media Soc. 2019, 21, 2087–2089. [Google Scholar] [CrossRef]
  5. Dou, H.; Deng, Q.; Mao, J. Object detection based on hierarchical visual perception mechanism. Proc. SPIE 2020, 11429, 114290P. [Google Scholar] [CrossRef]
  6. Roman, M. Perception-Loop-Verticality. Concerning the theory and Practice of Cognitive Science Literature. World Lit. Stud. 2014, 6, 154–156. [Google Scholar]
  7. González-Jorge, H.; Martínez-Sánchez, J.; Bueno, M. Unmanned aerial systems for civil applications: A review. Drones 2017, 1, 2. [Google Scholar] [CrossRef]
  8. Green, D.R.; Gregory, B.J. From Land to Sea: Monitoring the Underwater Environment with Drone Technology. In Unmanned Aerial Remote Sensing; CRC Press: Boca Raton, FL, USA, 2020; pp. 271–279. [Google Scholar]
  9. Lewis, T.; Bhaganagar, K. A Comprehensive Review of Plume Source Localization Efforts Using Unmanned Vehicles for Environmental Sensing. Sci. Total Environ. 2020, 762, 144029. [Google Scholar] [CrossRef]
  10. Chen, J.; Sun, J.; Wang, G. From Unmanned Systems to Autonomous Intelligent Systems. Engineering 2021, 10, 7. [Google Scholar] [CrossRef]
  11. Chen, M.; Chen, Q.; Wu, Y. Research on the Responsibility of Automatic Driving Vehicle Accident. In Proceedings of the 2020 4th International Seminar on Education, Management and Social Sciences (ISEMSS 2020), Online, 18 July 2020. [Google Scholar]
  12. Aggarwal, S.; Kumar, N. Path planning techniques for unmanned aerial vehicles: A review, solutions, and challenges. Comput. Commun. 2020, 149, 270–299. [Google Scholar] [CrossRef]
  13. Liu, J.; Li, H.; Luo, J.; Xie, S.; Sun, Y. Efficient obstacle detection based on prior estimation network and spatially constrained mixture model for unmanned surface vehicles. J. Field Robot. 2021, 38, 212–228. [Google Scholar] [CrossRef]
  14. Wan, S.; Gu, Z.; Ni, Q. Cognitive computing and wireless communications on the edge for healthcare service robots. Comput. Commun. 2020, 149, 99–106. [Google Scholar] [CrossRef]
  15. Kemény, Z.; Váncza, J.; Wang, L.; Wang, X.V. Human–robot collaboration in manufacturing: A multi-agent view. In Advanced Human-Robot Collaboration in Manufacturing; Springer: Cham, Switzerland, 2021; pp. 3–41. [Google Scholar]
  16. Szrek, J.; Zimroz, R.; Wodecki, J.; Michalak, A.; Góralczyk, M.; Worsa-Kozak, M. Application of the Infrared Thermography and Unmanned Ground Vehicle for Rescue Action Support in Underground Mine—The AMICOS Project. Remote Sens. 2021, 13, 69. [Google Scholar] [CrossRef]
  17. Meng, L.; Peng, Z.; Zhou, J.; Zhang, J.; Lu, Z.; Baumann, A.; Du, Y. Real-Time Detection of Ground Objects Based on Unmanned Aerial Vehicle Remote Sensing with Deep Learning: Application in Excavator Detection for Pipeline Safety. Remote Sens. 2020, 12, 182. [Google Scholar] [CrossRef] [Green Version]
  18. Liu, H.; Nie, J.; Liu, Y.; Wu, Y.; Wang, H.; Qu, F.; Liu, W.; Li, Y. A Multi-modality Sensor System for Unmanned Surface Vehicle. Neural Process. Lett. 2020, 52, 977–992. [Google Scholar] [CrossRef]
  19. Jia, J.; Sun, H.; Jiang, C.; Karila, K.; Karjalainen, M.; Ahokas, E.; Khoramshahi, E.; Hu, P.; Chen, C.; Xue, T.; et al. Review on Active and Passive Remote Sensing Techniques for Road Extraction. Remote Sens. 2021, 13, 4235. [Google Scholar] [CrossRef]
  20. Hsu, C.P.; Li, B.; Solano-Rivas, B.; Gohil, A.R.; Chan, P.H.; Moore, A.D.; Donzella, V. A Review and Perspective on Optical Phased Array for Automotive Lidar. IEEE J. Sel. Top. Quantum Electron. 2021, 27, 1–16. [Google Scholar] [CrossRef]
  21. Ding, Y.; Hua, L.; Li, S. Research on computer vision enhancement in intelligent robot based on machine learning and deep learning. Neural Comput. Appl. 2022, 34, 2623–2635. [Google Scholar] [CrossRef]
  22. Zhao, W.; Kamezaki, M.; Yamaguchi, K.; Konno, M.; Onuki, A.; Sugano, S. A Wheeled Robot Chain Control System for Underground Facilities Inspection using Visible Light Communication and Solar Panel Receivers. IEEE/ASME Trans. Mechatron. 2022, 27, 180–189. [Google Scholar] [CrossRef]
  23. Ludeno, G.; Catapano, I.; Renga, A.; Vetrella, A.R.; Fasano, G.; Soldovieri, F. Assessment of a micro-UAV system for microwave tomography radar imaging. Remote Sens. Environ. 2018, 212, 90–102. [Google Scholar] [CrossRef]
  24. Taha, I.; Mirhassani, M. A 24-GHz DCO With High-Amplitude Stabilization and Enhanced Startup Time for Automotive Radar. IEEE Trans. Very Large Scale Integr. (VLSI) Syst. 2019, 27, 2260–2271. [Google Scholar] [CrossRef]
  25. Khan, O.; Meyer, J.; Baur, K.; Arafat, S.; Waldschmidt, C. Aperture coupled stacked patch thin film antenna for automotive radar at 77 GHz. Int. J. Microw. Wirel. Technol. 2019, 11, 1061–1068. [Google Scholar] [CrossRef] [Green Version]
  26. Park, J.; Je, Y.; Lee, H.; Lee, H.; Moon, W. Design of an ultrasonic sensor for measuring distance and detecting obstacles. Ultrasonics 2010, 50, 340–346. [Google Scholar] [CrossRef]
  27. Han, G.; Shen, S.; Song, H.; Yang, T.; Zang, W.B. A stratification-based data collection scheme in underwater acoustic sensor networks. IEEE Trans. Veh. Technol. 2018, 67, 10671–10682. [Google Scholar] [CrossRef]
  28. Guan, Q.; Ji, F.; Liu, Y.; Yu, H.; Chen, W.Q. Distance-vector-based opportunistic routing for underwater acoustic sensor networks. IEEE Internet Things J. 2019, 6, 3831–3839. [Google Scholar] [CrossRef]
  29. Mohankumar, P.; Ajayan, J.; Yasodharan, R.; Devendrana, P.; Sambasivama, R. A review of micromachined sensors for automotive applications. Measurement 2019, 140, 305–322. [Google Scholar] [CrossRef]
  30. Sundaram, S.; Kellnhofer, P.; Li, Y.; Zhu, J.; Torralba, A.; Matusik, W. Learning the signatures of the human grasp using a scalable tactile glove. Nature 2019, 569, 698–702. [Google Scholar] [CrossRef]
  31. Yang, M.M.; Luo, Z.D.; Mi, Z.; Zhao, J.; Pei, S.; Alexe, M. Piezoelectric and pyroelectric effects induced by interface polar symmetry. Nature 2020, 584, 377–381. [Google Scholar] [CrossRef]
  32. Anderson, M.J.; Sullivan, J.G.; Horiuchi, T.K.; Fuller, S.B.; Daniel, T.L. A bio-hybrid odor-guided autonomous palm-sized air vehicle. Bioinspir. Biomim. 2020, 16, 026002. [Google Scholar] [CrossRef]
  33. Willett, F.R.; Avansino, D.T.; Hochberg, L.R.; Henderson, J.M.; Shenoy, K.V. High-performance brain-to-text communication via handwriting. Nature 2021, 593, 249–254. [Google Scholar] [CrossRef]
  34. Furukawa, J.; Noda, T.; Teramae, T.; Morimoto, J. Human movement modeling to detect biosignal sensor failures for myoelectric assistive robot control. IEEE Trans. Robot. 2017, 33, 846–857. [Google Scholar] [CrossRef]
  35. Torin Technology Home Page. Available online: http://torintek.com/prosthesis/ (accessed on 23 November 2021).
  36. Haque, A.; Milstein, A.; Li, F.F. Illuminating the dark spaces of healthcare with ambient intelligence. Nature 2020, 585, 193–202. [Google Scholar] [CrossRef]
  37. Jovanovic, K.; Schwier, A.; Matheson, E.; Xiloyannis, M.; Stramigioli, S. Digital Innovation Hubs in Health-Care Robotics Fighting COVID-19: Novel Support for Patients and Health-Care Workers Across Europe. IEEE Robot. Autom. Mag. 2021, 28, 40–47. [Google Scholar] [CrossRef]
  38. Tamantini, C.; Luzio, F.; Cordella, F.; Pascarella, G.; Zollo, L. A Robotic Health-Care Assistant for COVID-19 Emergency: A Proposed Solution for Logistics and Disinfection in a Hospital Environment. IEEE Robot. Autom. Mag. 2021, 28, 71–81. [Google Scholar] [CrossRef]
  39. Yoon, D.; Ayalew, B.; Ali, G. Performance of Decentralized Cooperative Perception in V2V Connected Traffic. IEEE Trans. Intell. Transp. Syst. 2021, 1–14. Available online: https://www.nature.com/articles/s41586-021-03506-2 (accessed on 22 March 2022). [CrossRef]
  40. Thandavarayan, G.; Sepulcre, M.; Gozalvez, J. Generation of Cooperative Perception Messages for Connected and Automated Vehicles. IEEE Trans. Veh. Technol. 2020, 69, 16336–16341. [Google Scholar] [CrossRef]
  41. Hui, Y.; Su, Z.; Luan, T.H. Unmanned Era: A Service Response Framework in Smart City. IEEE Trans. Intell. Transp. Syst. 2021, 1–15. [Google Scholar] [CrossRef]
  42. Wang, D.; Song, Q.; Liao, X.; Ye, H.; Shao, Q.; Fan, J.; Cong, N.; Xin, X.; Yue, H.; Zhang, H. Integrating satellite and unmanned aircraft system (UAS) imagery to model livestock population dynamics in the Longbao Wetland National Nature Reserve, China. Sci. Total Environ. 2020, 756, 140327. [Google Scholar] [CrossRef]
  43. Singh, V.; Rana, A.; Bishop, M.P.; Filippi, A.M.; Bagavathiannan, M. Unmanned aircraft systems for precision weed detection and management: Prospects and challenges. Adv. Agron. 2020, 159, 93–134. [Google Scholar]
  44. Jensen, A.A.; Pinto, J.O.; Bailey, S.; Sobash, R.A.; Steiner, M. Assimilation of a coordinated fleet of uncrewed aircraft system observations in complex terrain: EnKF system design and preliminary assessment. Mon. Weather Rev. 2021, 149, 1459–1480. Available online: https://journals.ametsoc.org/view/journals/mwre/149/5/MWR-D-20-0359.1.xml (accessed on 22 March 2022). [CrossRef]
  45. Chen, Y.L.; Ma, X.W.; Bai, G.Q.; Sha, Y.; Liu, J. Multi-autonomous underwater vehicle formation control and cluster search using a fusion control strategy at complex underwater environment. Ocean. Eng. 2020, 216, 108048. [Google Scholar] [CrossRef]
  46. Qian, K.; Song, A.; Bao, J.; Zhang, H. Small Teleoperated Robot for Nuclear Radiation and Chemical Leak Detection. Int. J. Adv. Robot. Syst. 2012, 9, 70. [Google Scholar] [CrossRef] [Green Version]
  47. Gaynutdinov, R.R.; Chermoshentsev, S.F. Electromagnetic Interference Emission from Communication Lines of Onboard Equipment of an Unmanned Aerial Vehicle. J. Commun. Technol. Electron. 2020, 65, 221–227. [Google Scholar] [CrossRef]
  48. Roseman, C.A.; Argrow, B.M. Weather Hazard Risk Quantification for sUAS Safety Risk Management. J. Atmos. Ocean. Technol. 2020, 37, 1251–1268. [Google Scholar] [CrossRef]
  49. Li, X.; Levin, N.; Xie, J.; Li, D. Monitoring hourly night-time light by an unmanned aerial vehicle and its implications to satellite remote sensing. Remote Sens. Environ. 2020, 247, 111942. [Google Scholar] [CrossRef]
  50. Stodola, P.; Drozd, J.; Mazal, J.; Hodický, J.; Procházka, D. Cooperative Unmanned Aerial System Reconnaissance in a Complex Urban Environment and Uneven Terrain. Sensors 2019, 19, 3754. [Google Scholar] [CrossRef] [Green Version]
  51. Li, Y.; Ding, Q.; Li, K.; Valtchev, S.; Li, S.; Yin, L. A Survey of Electromagnetic Influence on UAVs from EHV Power Converter Stations and Possible Countermeasures. Electronics 2021, 10, 701. [Google Scholar] [CrossRef]
  52. Liao, W.; Nagai, K.; Wang, J. An evaluation method of electromagnetic interference on bio-sensor used for wearable robot control. IEEE Trans. Electromagn. Compat. 2019, 62, 36–42. [Google Scholar] [CrossRef]
  53. Nuriev, M.G.; Gizatullin, R.M.; Gizatullin, Z.M. Physical Modeling of Electromagnetic Interference in Unmanned Aerial Vehicle under Action of the Electric Transport Contact Network. Russ. Aeronaut. 2018, 61, 293–298. [Google Scholar] [CrossRef]
  54. Olson, D.; Anderson, J. Review on unmanned aerial vehicles, remote sensors, imagery processing, and their applications in agriculture. Agron. J. 2021, 113, 971–992. [Google Scholar] [CrossRef]
  55. Kucharczyk, M.; Hugenholtz, C.H. Remote sensing of natural hazard-related disasters with small drones: Global trends, biases, and research opportunities. Remote Sens. Environ. 2021, 264, 112577. [Google Scholar] [CrossRef]
  56. Kim, J.B. Vehicle Detection Using Deep Learning Technique in Tunnel Road Environments. Symmetry 2020, 12, 2012. [Google Scholar] [CrossRef]
  57. Voosen, P. Bus-size robot set to vacuum up valuable metals from the deep sea. Science, 14 March 2019. [Google Scholar] [CrossRef]
  58. Johnston, D.W. Unoccupied Aircraft Systems in Marine Science and Conservation. Annu. Rev. Mar. Sci. 2019, 11, 439–463. [Google Scholar] [CrossRef] [Green Version]
  59. Jeong, S.; Kim, D.; Kim, S.; Ham, J.W.; Oh, K.Y. Real-time environmental cognition and sag estimation of transmission lines using UAV equipped with 3-D Lidar system. IEEE Trans. Power Deliv. 2020, 36, 2658–2667. [Google Scholar] [CrossRef]
  60. Freitas, G.; Gleizer, G.; Lizarralde, F.; Liu, H.; Reis, N. Kinematic reconfigurability control for an environmental mobile robot operating in the Amazon rain forest. J. Field Robot. 2010, 27, 197–216. [Google Scholar] [CrossRef]
  61. Gao, F.; Wang, L.; Zhou, B.; Zhou, X.; Shen, S. Teach-Repeat-Replan: A Complete and Robust System for Aggressive Flight in Complex Environments. IEEE Trans. Robot. 2020, 36, 1529–1545. [Google Scholar] [CrossRef]
  62. Kapoutsis, A.C.; Chatzichristofis, S.A.; Doitsidis, L.; De Sousa, J.B.; Pinto, J.; Braga, J. Real-time adaptive multi-robot exploration with application to underwater map construction. Auton. Robot. 2016, 40, 987–1015. [Google Scholar] [CrossRef]
  63. Cunha, B.; Brito, C.; Araújo, G.; Sousa, R.; Soares, A.; Silva, F.A. Smart Traffic Control in Vehicle Ad-Hoc Networks: A Systematic Literature Review. Int. J. Wirel. Inf. Netw. 2021, 28, 362–384. [Google Scholar] [CrossRef]
  64. Du, M.; Yang, S.; Chen, Q. Impacts of vehicle-to-infrastructure communication on traffic flows with mixed connected vehicles and human-driven vehicles. Int. J. Mod. Phys. B 2021, 35, 2150091. [Google Scholar] [CrossRef]
  65. Zhu, Q. Research on Road Traffic Situation Awareness System Based on Image Big Data. IEEE Intell. Syst. 2019, 35, 18–26. [Google Scholar] [CrossRef]
  66. Zhou, X.; Zhu, J.; Zhou, H.; Xu, C.; Gao, F. Ego-swarm: A fully autonomous and decentralized quadrotor swarm system in cluttered environments. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; pp. 4101–4107. [Google Scholar]
  67. Andrade, F.; Hovenburg, A.; Lima, L.; Rodin, C.D.; Haddad, D.B. Autonomous Unmanned Aerial Vehicles in Search and Rescue Missions Using Real-Time Cooperative Model Predictive Control. Sensors 2019, 19, 4067. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  68. Bonci, A.; Cen Cheng, P.D.; Indri, M.; Nabissi, G.; Sibona, F. Human-robot perception in industrial environments: A survey. Sensors 2021, 21, 1571. [Google Scholar] [CrossRef] [PubMed]
  69. Yin, L.; Zhang, R.; Gu, H.; Li, P. Research on Cooperative Perception of MUSVs in Complex Ocean Conditions. Sensors 2021, 21, 1657. [Google Scholar] [CrossRef]
  70. Villa, J.; Aaltonen, J.; Virta, S.; Koskinen, K.T. A Co-Operative Autonomous Offshore System for Target Detection Using Multi-Sensor Technology. Remote Sens. 2020, 12, 4106. [Google Scholar] [CrossRef]
  71. Tranzatto, M.; Mascarich, F.; Bernreiter, L.; Godinho, C.; Camurri, M.; Khattak, S.; Dang, T.; Reijgwart, V.; Loeje, J.; Wisth, D.; et al. CERBERUS: Autonomous Legged and Aerial Robotic Exploration in the Tunnel and Urban Circuits of the DARPA Subterranean Challenge. arXiv 2021, arXiv:2201.07067. [Google Scholar]
  72. Ohradzansky, M.T.; Rush, E.R.; Riley, D.G.; Mills, A.B.; Ahmad, S.; Mcguire, S. Multi-Agent Autonomy: Advancements and Challenges in Subterranean Exploration. arXiv 2021, arXiv:2110.04390. [Google Scholar]
  73. Agha, A.; Otsu, K.; Morrell, B.; Agha, A.; Otsu, K.; Morrell, B.; Fan, D.D.; Thakker, R.; Santamaria-Navarro, A.; Kim, S.; et al. Nebula: Quest for robotic autonomy in challenging environments; team costar at the darpa subterranean challenge. arXiv 2021, arXiv:2103.11470. [Google Scholar]
  74. Liu, X. Key Technologies of Reinforcement of Submarine Optical Fiber Communication Engineering based on Low Power Sensor Network. J. Coast. Res. 2020, 104, 188–191. [Google Scholar] [CrossRef]
  75. Liu, B.; Cai, P.; Lan, H.; Wang, P. Short-Term Traffic Planning and Forecasting System Based on Vehicle-Road Coordination. In Lecture Notes in Electrical Engineering; Springer: Singapore, 2019; pp. 896–903. [Google Scholar]
  76. Wang, Y.; Yin, K. Study of overtaking method of intelligent vehicle under vehicle road coordination. J. Phys. Conf. Ser. IOP Publ. 2021, 1983, 012095. [Google Scholar] [CrossRef]
  77. Dorigo, M.; Theraulaz, G.; Trianni, V. Reflections on the future of swarm robotics. Sci. Robot. 2020, 5, eabe4385. [Google Scholar] [CrossRef] [PubMed]
  78. Todescato, M.; Bof, N.; Cavraro, G.; Carli, R.; Schenato, L. Partition-based multi-agent optimization in the presence of lossy and asynchronous communication. Automatica 2019, 111, 108648. [Google Scholar] [CrossRef]
  79. Su, H.; Chen, J.; Chen, X.; He, H. Adaptive Observer-Based Output Regulation of Multiagent Systems with Communication Constraints. IEEE Trans. Cybern. 2020, 11, 5259–5268. [Google Scholar] [CrossRef]
  80. Lin, P.; Ren, W.; Yang, C.; Gui, W. Distributed Continuous-Time and Discrete-Time Optimization with Nonuniform Unbounded Convex Constraint Sets and Nonuniform Stepsizes. IEEE Trans. Autom. Control 2019, 64, 5148–5155. [Google Scholar] [CrossRef] [Green Version]
  81. Li, L.; Li, Z.; Xia, Y.; Yan, J. Event-triggered bounded consensus for stochastic multi-agent systems with communication delay. Int. J. Control 2021, 57, 1–24. [Google Scholar] [CrossRef]
  82. Zhai, D.; Li, H.; Tang, X.; Zhang, R.; Ding, Z.; Yu, F.R. Height Optimization and Resource Allocation for NOMA Enhanced UAV-Aided Relay Networks. IEEE Trans. Commun. 2020, 69, 962–975. [Google Scholar] [CrossRef]
  83. Xi, X.; Cao, X.; Yang, P.; Chen, J.; Quek, T.Q.; Wu, D.O. Network Resource Allocation for eMBB Payload and URLLC Control Information Communication Multiplexing in a Multi-UAV Relay Network. IEEE Trans. Commun. 2020, 69, 1802–1817. [Google Scholar] [CrossRef]
  84. Liang, L.; Deng, F.; Lu, M.; Chen, J. Analysis of Role Switch for Cooperative Target Defense Differential Game. IEEE Trans. Autom. Control 2020, 66, 902–909. [Google Scholar] [CrossRef]
  85. Li, S.; Batra, R.; Brown, D.; Chang, H.; Ranganathan, N.; Hoberman, C.; Rus, D.; Lipson, H. Particle robotics based on statistical mechanics of loosely coupled components. Nature 2019, 567, 361–365. [Google Scholar] [CrossRef]
  86. Sun, J.; Chen, G.; Chen, J. Stability Analysis of Aperiodic Sampled-Data Systems: A Switched Polytopic System Method. IEEE Trans. Circuits Syst. II Express Briefs 2020, 67, 1054–1058. [Google Scholar] [CrossRef]
  87. Zhou, Y.; Pan, C.; Yeoh, P.L.; Wang, K.; Elkashlan, M.; Vucetic, B.; Li, Y. Secure Communications for UAV-Enabled Mobile Edge Computing Systems. IEEE Trans. Commun. 2020, 68, 376–388. [Google Scholar] [CrossRef] [Green Version]
  88. Batista, F.K.; Martín del Rey, A.; Queiruga-Dios, A. A New Individual-Based Model to Simulate Malware Propagation in Wireless Sensor Networks. Mathematics 2020, 8, 410. [Google Scholar] [CrossRef] [Green Version]
  89. Li, G.; He, B.; Wang, Z.P.; Cheng, X.; Chen, J. Blockchain-enhanced spatiotemporal data aggregation for UAV-assisted wireless sensor networks. IEEE Trans. Ind. Inform. 2021, 17, 1. [Google Scholar] [CrossRef]
  90. Zhongpan, Z.; Hanlin, Y.; Qiwei, D.; Viswanath, G.B.; Fenggui, C.; Zhipeng, W. Edge Intelligent Perception Agents for Smart City Field Application. In Proceedings of the 2022 International Conference on Computer, Control and Robotics (ICCCR), Shanghai, China, 6–8 January 2022. [Google Scholar]
  91. Liu, D.; Jiang, Q.; Zhu, H.; Huang, B. Distributing UAVs as Wireless Repeaters in Disaster Relief via Group Role Assignment. Int. J. Coop. Inf. Syst. 2020, 29, 2040002. [Google Scholar] [CrossRef]
  92. Vilajosana, I.; Llosa, J.; Martinez, B.; Domingo-Prieto, M.; Angles, A.J.; Vilajosana, X. Bootstrapping smart cities through a self-sustainable model based on big data flows. IEEE Commun. Mag. 2013, 51, 128–134. [Google Scholar] [CrossRef]
  93. Salehi, H.; Burgueño, R.; Chakrabartty, S.; Lajnef, N.; Alavi, A.H. A comprehensive review of self-powered sensors in civil infrastructure: State-of-the-art and future research trends. Eng. Struct. 2021, 234, 111963. [Google Scholar] [CrossRef]
  94. Zhang, Y.; Yang, Y.; Zhou, W.; Wang, H.; Ouyang, X. Multi-city traffic flow forecasting via multi-task learning. Appl. Intell. 2021, 51, 6895–6913. [Google Scholar] [CrossRef]
  95. Hossain, M.S.; Muhammad, G.; Guizani, N. Explainable AI and Mass Surveillance System-based Healthcare Framework to Combat COVID-19 like Pandemics. IEEE Netw. 2020, 34, 126–132. [Google Scholar] [CrossRef]
  96. Batty, M. Defining smart cities: High and low frequency cities, big data and urban theory. In The Routledge Companion to Smart Cities; Routledge: London, UK, 2020; pp. 51–60. [Google Scholar]
  97. Cao, Y.; Yu, W.; Ren, W.; Chen, G. An overview of recent progress in the study of distributed multi-agent coordination. IEEE Trans. Ind. Inform. 2012, 9, 427–438. [Google Scholar] [CrossRef] [Green Version]
  98. Herrera, M.; Pérez-Hernández, M.; Kumar Parlikad, A.; Izquierdo, J. Multi-agent systems and complex networks: Review and applications in systems engineering. Processes 2020, 8, 312. [Google Scholar] [CrossRef] [Green Version]
  99. Han, Q.; Liang, S.; Zhang, H. Mobile cloud sensing, big data, and 5G networks make an intelligent and smart world. IEEE Netw. 2015, 29, 40–45. [Google Scholar] [CrossRef]
  100. Sliwa, B.; Adam, R.; Wietfeld, C. Client-Based Intelligence for Resource Efficient Vehicular Big Data Transfer in Future 6G Networks. IEEE Trans. Veh. Technol. 2021, 70, 5332–5346. [Google Scholar] [CrossRef]
  101. Shrestha, R.; Bajracharya, R.; Kim, S. 6G Enabled Unmanned Aerial Vehicle Traffic Management: A Perspective. IEEE Access 2021, 9, 91119–91136. [Google Scholar] [CrossRef]
  102. Gu, Q.; Bai, G.; Meng, Y.; Wang, G.; Zhang, J.; Zhou, L. Efficient Path Tracking Control for Autonomous Driving of Tracked Emergency Rescue Robot under 6G Network. Wirel. Commun. Mob. Comput. 2021, 2021, 5593033. [Google Scholar] [CrossRef]
  103. Lv, Z.; Qiao, L.; You, I. 6G-enabled network in box for internet of connected vehicles. IEEE Trans. Intell. Transp. Syst. 2020, 22, 5275–5282. [Google Scholar] [CrossRef]
  104. Brown, R.; Rossi, F.; Solovey, K.; Tsao, M.W.; Wolf, M.T.; Pavone, M. On Local Computation for Network-Structured Convex Optimization in Multi-Agent Systems. IEEE Trans. Control Netw. Syst. 2021, 8, 542–554. [Google Scholar] [CrossRef]
  105. Li, X.; Zhang, H. A multi-agent complex network algorithm for multi-objective optimization. Appl. Intell. 2020, 50, 2690–2717. [Google Scholar] [CrossRef]
  106. Nguyen, T.T.; Nguyen, N.D.; Nahavandi, S. Deep Reinforcement Learning for Multiagent Systems: A Review of Challenges, Solutions, and Applications. IEEE Trans. Cybern. 2020, 50, 3826–3839. [Google Scholar] [CrossRef] [Green Version]
  107. Li, G.; He, B.; Wang, Z. A Swarm Optimization-Enhanced Data Aggregation Tree Based on a Nonuniform Clustering Structure for Long and Linear Wireless Sensor Networks. Wirel. Pers. Commun. 2020, 112, 2285–2295. [Google Scholar] [CrossRef]
  108. Konečný, J.; McMahan, H.B.; Yu, F.X. Federated Learning: Strategies for Improving Communication Efficiency. arXiv 2016, arXiv:1610.05492. [Google Scholar]
  109. Yildiz, E.; Møller, C.; Bilberg, A. Virtual Factory: Digital Twin Based Integrated Factory Simulations. Procedia CIRP 2020, 93, 216–221. [Google Scholar] [CrossRef]
  110. Fan, C.; Zhang, C.; Yahja, A.; Mostafavi, A. Disaster City Digital Twin: A vision for integrating artificial and human intelligence for disaster management. Int. J. Inf. Manag. 2019, 56, 102049. [Google Scholar]
  111. Voosen, P. Europe builds ‘digital twin’ of Earth to hone climate forecasts. Science 2020, 370, 6512. [Google Scholar] [CrossRef] [PubMed]
Figure 1. The overall content architecture of the article.
Figure 2. Classification of perception technology in unmanned systems.
Figure 3. Dedicated sensors for various applications. (a) Electronic skin for tactile sensing, (b) olfactory sensor for inspection robot, (c) brain-computer interface for vehicle control, (d) EMG sensor for prosthetic limb control [35].
Figure 4. Booming applications of multi-agent perception.
Table 1. Classification of communication technologies used in unmanned systems.

| Category               | Bluetooth 3.0 | ZigBee   | NB-IoT      | 4G/5G                    | Wi-Fi           |
|------------------------|---------------|----------|-------------|--------------------------|-----------------|
| Transmission speed     | 24 Mbps       | 250 kbps | 100 kbps    | 300 Mbps / 30 Gbps       | 600 Mbps        |
| Communication distance | 100 m         | 100 m    | 10 km       | 3 km / 300 m             | 200 m           |
| Frequency              | 2.4 GHz       | 2.4 GHz  | 800–900 MHz | 700–2500 MHz / 28–39 GHz | 2.4 GHz / 5 GHz |
| Security               | High          | Medium   | High        | High                     | Low             |
| Power                  | Low           | Low      | Low         | High                     | High            |
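For readers weighing these links against a concrete mission profile, the figures in Table 1 can be encoded directly as data and filtered by the requirements of a cooperative perception task. The following Python sketch is purely illustrative and is not drawn from the surveyed works: the `Link` structure, the `candidate_links` helper, and the thresholds in the example are hypothetical, and the numeric values simply mirror the table above.

```python
# Illustrative sketch only: encodes the rough figures from Table 1 so that
# candidate links can be filtered by range and data-rate requirements.
# The values mirror the table; the selection logic and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Link:
    name: str
    speed_mbps: float   # peak transmission speed
    range_m: float      # nominal communication distance
    power: str          # qualitative power draw from Table 1

LINKS = [
    Link("Bluetooth 3.0", 24.0, 100, "low"),
    Link("ZigBee", 0.25, 100, "low"),
    Link("NB-IoT", 0.1, 10_000, "low"),
    Link("4G/5G", 300.0, 3_000, "high"),   # 5G mmWave reaches ~30 Gbps over ~300 m
    Link("Wi-Fi", 600.0, 200, "high"),
]

def candidate_links(min_mbps: float, min_range_m: float, low_power_only: bool = False):
    """Return the links from Table 1 that meet the stated requirements."""
    return [
        link for link in LINKS
        if link.speed_mbps >= min_mbps
        and link.range_m >= min_range_m
        and (not low_power_only or link.power == "low")
    ]

if __name__ == "__main__":
    # Example: a UGV video uplink needing >= 10 Mbps at >= 500 m -> only 4G/5G qualifies.
    print([link.name for link in candidate_links(10, 500)])
```

In practice the choice also depends on latency, interference, and infrastructure availability, which the table does not capture; the sketch only illustrates how the tabulated trade-offs constrain link selection for cross-domain cooperation.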