Sensing Requirements and Vision-Aided Navigation Algorithms for Vertical Landing in Good and Low Visibility UAM Scenarios
Abstract
1. Introduction
1.1. Context and Related Works
- Vertical Landing (VL) with a defined obstacle-free volume (see Figures D-13 and D-14 of [3]), which will be required to maintain safe distances to obstacles in the airspace above vertiports placed in urban environments.
1.2. Paper Contribution
- The definition of the visual sensor requirements needed to safely support these operations and increase the autonomy of the navigation architecture. Considering the case of low visibility conditions, this corresponds to extending the sensing requirements of Enhanced Visual Operations (EVO) to the new UAM scenarios.
- The implementation of a vision-aided, multi-sensor-based navigation architecture which integrates the measurements of an IMU, a standalone GNSS receiver, and a monocular camera. A multi-mode data fusion pipeline based on an EKF is designed, which takes the distance from the landing area into account and adopts ad hoc strategies to self-estimate navigation performance degradation and improve integrity, protecting the navigation solution and consequently the overall landing procedure from visual sensing anomalies.
- Performance assessment of the proposed architecture is conducted within a highly realistic simulation environment in which sensor measurements are realistically reproduced, analysing day/night scenarios in both nominal and low visibility conditions. Given the stringent safety requirements of UAM operations, the scope of this analysis is to understand how the developed logic and processing pipeline behave in degraded conditions, and what the applicability limits of the developed concepts are.
2. Visual Sensor Requirements in UAM Landing Scenarios
2.1. Assumptions
- Touch-down and Lift-off Area (TLOF), i.e., the load-bearing surface on which the aircraft lands and/or takes off. Its minimum length and width should be at least equal to the distance between the two outermost edges of the vehicle according to the FAA [5], or to the diameter of the smallest circle enclosing the VTOL aircraft projection on a horizontal plane, as stated by the EASA [3] for elevated TLOFs. The TLOF can be designed to be rectangular or circular and, according to the FAA, each design choice offers different advantages: a rectangular TLOF provides better guidance for the pilot, while a circular TLOF may be more visible in an urban environment. In this work, the TLOF is assumed to have a diameter of 12.2 m (40 ft) [7], consistently with the dimensions of the main VTOL prototypes expected to be certified for UAM procedures [23].
- Final Approach and Take-off Area (FATO), which is centred around every TLOF. This area represents the surface where VTOL aircraft complete the final phase of the approach to a hover or a landing. Its minimum horizontal dimension is assumed to be 1.5 times the diameter of the smallest circle enclosing the VTOL aircraft projection [3]. A more conservative definition of the FATO dimensions is provided by the FAA [5], which assumes twice the distance between the vehicle’s two outermost edges (a numerical sketch of these sizing rules is given after this list).
- Safety Area (SA), which is defined on a heliport surrounding the FATO to reduce the risk of accidents for aircraft inadvertently diverging from the FATO.
- Rotorcraft should not attempt approach and departure operations with a tailwind, which can cause the aircraft to enter Vortex Ring State (VRS) conditions.
- Rotorcraft should not attempt approach or departure operations with a crosswind greater than 15 knots.
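To make the TLOF and FATO sizing rules above concrete, the following minimal Python sketch (an illustration, not code from the paper) computes the minimum dimensions from a generic controlling aircraft dimension d; the 12.2 m value reproduces the TLOF assumed in this work.

```python
# Minimal sketch of the vertiport sizing rules summarized above.
# d_aircraft_m is the controlling dimension (smallest enclosing-circle diameter
# of the VTOL projection, or the distance between the two outermost edges).

def vertiport_min_dimensions(d_aircraft_m):
    """Minimum TLOF and FATO dimensions (metres) under the EASA and FAA rules."""
    return {
        "TLOF_min": d_aircraft_m,              # at least the controlling dimension
        "FATO_min_EASA": 1.5 * d_aircraft_m,   # EASA: 1.5 x enclosing-circle diameter
        "FATO_min_FAA": 2.0 * d_aircraft_m,    # FAA (more conservative): 2 x outermost-edge distance
    }

print(vertiport_min_dimensions(12.2))
# -> TLOF 12.2 m, FATO 18.3 m (EASA) / 24.4 m (FAA)
```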
2.2. Requirements
- Camera mounting configuration and FOV. For the considered type of approach trajectory, the camera must be mounted with an off-nadir pointing configuration. Moreover, different constraints can be identified on the field of regard to be monitored in the directions transversal and parallel to the projection of the velocity on the local horizontal plane, which determine the requirements for the camera horizontal and vertical FOV, respectively. Among the various impacting factors, the dimensions of the landing pad and the wind direction and intensity play a crucial role. The minimum camera Horizontal FOV (HFOV) is defined to maintain visual contact with the TLOF area during approaches in crosswind conditions producing a maximum heading deviation of 25°, and to exclude the risk of visual contact loss due to wind gusts causing heading oscillations of up to 15° of amplitude (as in [25]). Clearly, this implies that the landing procedure should be aborted if these conditions cause larger deviations from the nominal path. Consequently, using a worst-case approach, the resulting minimum HFOV is 80° (i.e., ±40°). The minimum camera Vertical FOV (VFOV) is estimated so as to maintain the landing pad in view during the visual flight phase of both approach trajectories, accounting for possible pitch oscillations (max. ±5°, as in [25]). The landing pad must be visible in the vision sensor imagery when the VTOL is in the final vertical descent above the vertiport and at every other point of the trajectories. Considering both trajectories, the last holding circle of the VTCA is the point with the lowest ratio between relative height and horizontal distance from the landing pad. The minimum required VFOV is therefore obtained by comparing the VFOV needed at this point to keep the landing pad in view with the value needed, for the same camera mounting configuration, during a vertical descent. Both values can be estimated by applying the pinhole camera model under perspective projection geometry (a numerical sketch of these geometric checks is given after this list).
- Sensor resolution. Assuming landing pad detection as the main objective of the selected camera to support the approach phase, the required sensor resolution strongly depends on the ground infrastructure installed on the pad. Considering the previously reported TLOF dimensions (12.2 m × 12.2 m), an Instantaneous Field of View (IFOV) of 0.05° allows covering the area of interest with more than 1400 pixels at the TPVF defined along the approach trajectories. As better highlighted by the numerical simulation results shown in Section 5, this value is sufficient to accurately extract a fiducial marker placed on the TLOF and visible along the whole approach trajectory, as well as to ensure an acceptable pose estimation accuracy.
- Refresh rate. The image frames shall be refreshed at least at 15 Hz, considering the nominal frame rates adopted in the vision-aided navigation literature (20 Hz is tested in [14,16,17]) and the minimum 15 Hz value required in nominal helicopter synthetic/enhanced/combined vision operations to runways [27]. Assuming a maximum VTOL velocity of 25 m/s (90 km/h), as reported in the Volocopter VoloCity specification [23], the platform would travel about 1.7 m between two consecutive frame acquisitions. On the other hand, a higher refresh rate might be required in applications characterized by higher platform speeds; for example, the preliminary proof of concept of the eXternal Vision System (XVS), designed to support future supersonic operations by providing real-time imagery in each flight phase, assumes a camera frame rate of 60 Hz [28].
- System latency. A value of 100 ms is the maximum considered acceptable when synthetic images are presented to the pilot in rotorcraft landing operations [29]. This latency can be assumed as the threshold also in the present application, with the 100 ms budget including the latencies of the image processing and of any sensor fusion phase. With this latency, in the worst case of the maximum flight speed assumed above, the VTOL platform would fly 2.5 m between the frame acquisition and the end of its processing. Higher latencies might introduce the risk of pilot- or autopilot-induced oscillations.
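The Python sketch below (an illustration, not code from the paper) gathers the simple geometric and kinematic checks behind the requirements listed above. The 20° off-nadir mounting angle, the 350 m example range and the holding-circle geometry used in the example calls are assumptions introduced here for illustration.

```python
import numpy as np

def min_hfov_deg(max_heading_dev_deg=25.0, gust_amplitude_deg=15.0):
    """Worst-case horizontal FOV: heading deviation plus gust oscillation, on both sides."""
    return 2.0 * (max_heading_dev_deg + gust_amplitude_deg)              # -> 80 deg

def min_vfov_deg(rel_height_m, horiz_dist_m, off_nadir_deg=20.0, pitch_margin_deg=5.0):
    """Vertical FOV needed to keep the pad line of sight within the frame (pinhole geometry)."""
    boresight_depression = 90.0 - off_nadir_deg                          # deg below the local horizon
    los_depression = np.degrees(np.arctan2(rel_height_m, horiz_dist_m))  # pad line-of-sight depression
    return 2.0 * (abs(boresight_depression - los_depression) + pitch_margin_deg)

def pixels_on_tlof(range_m, tlof_size_m=12.2, ifov_deg=0.05):
    """Approximate pixel count covering the TLOF area at a given range (small-angle approximation)."""
    angular_extent_deg = np.degrees(tlof_size_m / range_m)
    px_per_side = angular_extent_deg / ifov_deg
    return px_per_side ** 2

v_max = 25.0                                   # m/s, VoloCity-class maximum speed
print(min_hfov_deg())                          # 80 deg
print(min_vfov_deg(100.0, 0.0))                # vertical descent above the pad: 50 deg
print(min_vfov_deg(100.0, 100.0))              # assumed holding-circle geometry: 60 deg
print(pixels_on_tlof(350.0))                   # ~1600 px, consistent with the >1400 px figure
print(v_max / 15.0, v_max * 0.100)             # ~1.7 m between frames, 2.5 m flown during 100 ms latency
```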
3. Vision-Aided Navigation Architecture
- The prediction step of the filter allows coping with the lower update rate of the vision-based and GNSS-based measurements with respect to the inertial sensor data rate, thus ensuring valid initial guesses for the vision-based iterative pose estimation algorithm.
- The correction step of the filter in the approach phase relies on visual estimates and GNSS measurements for position data (Figure 4). Only the estimates provided by the standalone GNSS receiver are used in the first phase of the approach, when the large distance from the landing pad prevents accurate marker detection and would thus lead to coarse visual-based pose measurements. However, in this part of the approach path the landing pad is already searched for and tracked by the visual algorithm, so that the pose estimation process can be initialized. Once the previously identified distance threshold from the landing pad (i.e., the TPVF) is reached, visual-based pose measurements are fed to the filter correction step. Specifically, in this second part of the approach, a multi-sensor correction step is implemented following the cascaded single-epoch integration model [30], combining both GNSS and visual sensor estimates. This scheme allows cross-checking the integrity of GNSS measurements, which in urban scenarios might be affected at low altitudes by failures due to signal multipath or non-line-of-sight (NLOS) reception [31]. The implemented logic accepts only GNSS position measurements that fall within the 3-sigma bounds built around the corresponding filter position prediction using the uncertainties of both the prediction and the measurement.
- Navigation performance is monitored at each time step through the control of the EKF estimated position uncertainty. To improve integrity, a failure detection logic verifies whether the visual-based pose estimates are acceptable by comparing the pose estimation residuals with a threshold. Thanks to this process, if the pose estimated from a specific image frame is deemed unreliable, it is not fed to the EKF correction step and thus does not contribute to the reduction of the EKF position error covariance. Similarly, a missed detection of the landing pad in an image frame (e.g., due to unfavourable visibility conditions or obstacles) results in no visual position estimate. When the estimated navigation uncertainty, as expressed by the covariance matrix of the filter, reaches a threshold deemed incompatible with a safe landing procedure, a contingency event is generated with the consequent activation of a MAP. As concerns the definition of the error thresholds for MAP activation, different approaches are possible, all linked to the entries of the covariance matrix. As an example, within an ILS-like perspective, constant lateral and glide-slope deviation thresholds in degrees can be considered, and positioning uncertainties can be converted into angular deviations by taking the distance to the landing pad into account. For the sake of concreteness, in this work the threshold is set on the three-dimensional positioning uncertainty, computed as the square root of the sum of the diagonal entries of the covariance matrix relevant to aircraft positioning. The threshold is assumed to have a linear dependence on the distance to the vertiport, which is consistent with the idea of a constant threshold on the angular errors. This choice is also in line with the typical performance of visual sensors, which provide improved position accuracy at reducing range (a minimal sketch of the GNSS gating and MAP activation logic is given after this list). Since the logic adopted for MAP activation is based on the positioning uncertainty, the system behaviour is strongly affected by the characteristics of the inertial sensors. In fact, enhanced resilience with respect to visual challenges is provided by architectures integrating higher-performance inertial sensors, which allow a slower divergence of positioning errors; this is modelled by the smaller process noise matrix adopted in the navigation filter. It is worth noting that this is a navigation-induced MAP activation logic: in closed-loop autonomous landing operations, a MAP may also be activated by excessive control errors. These aspects are beyond the scope of this paper, as the focus here is placed on perception and estimation. In general, a sufficient battery charge must be available on electric VTOL aircraft to successfully cancel the landing procedure and divert to an alternate vertiport [32]. The possibility to perform a MAP is assumed to depend on the VTOL aerodynamic performance and on the scenario surrounding the vertiport, which influence the minimum distance from the landing pad at which the landing procedure can be safely cancelled while observing the ROC minima. Such a distance corresponds to the Landing Decision Point defined by the EASA [3] and is assumed equal to 100 m in this work.
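The following minimal Python sketch illustrates the two monitoring rules described above: 3-sigma gating of GNSS position fixes against the EKF prediction, and a MAP trigger on the three-dimensional positioning uncertainty with a threshold growing linearly with range to the pad. The state layout (position in the first three entries) and the threshold coefficients are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

def accept_gnss_fix(z_gnss, x_pred, P_pred, R_gnss):
    """Accept the GNSS position only if each component lies within the 3-sigma
    bound built from the prediction and measurement uncertainties."""
    innovation = z_gnss - x_pred[:3]                       # position residual (m)
    sigma = np.sqrt(np.diag(P_pred)[:3] + np.diag(R_gnss)) # combined 1-sigma per axis
    return bool(np.all(np.abs(innovation) <= 3.0 * sigma))

def map_required(P_est, range_to_pad_m, k_ang=0.01, floor_m=0.5):
    """Trigger a Missed Approach Procedure when the 3D positioning uncertainty
    exceeds a range-dependent threshold (constant angular error -> linear in range).
    k_ang and floor_m are placeholder coefficients."""
    sigma_3d = np.sqrt(np.trace(P_est[:3, :3]))            # sqrt of summed position variances
    threshold = floor_m + k_ang * range_to_pad_m
    return sigma_3d > threshold

# Hypothetical 6-state covariance: sigma_3d ~ 4.1 m vs a 3.5 m threshold at 300 m -> MAP
P = np.diag([4.0, 4.0, 9.0, 0.1, 0.1, 0.1])
print(map_required(P, range_to_pad_m=300.0))               # True
```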
Visual-Based Pose Estimation
- AprilTag (AT) fiducial markers for daylight operations. A first AT marker occupying the TLOF area is placed on the landing pad to enable the pose estimation process at large distances, while six smaller ATs are adopted to keep enough markers detectable in proximity to the landing pad. The smaller ATs are inserted inside the first one, thus not affecting the total landing pad dimension. The detection and identification of the AT markers are carried out exploiting the procedure presented in [33], implemented in MATLAB by the readAprilTag function.
- A pattern of lights for night scenarios complying with the FAA guidance on heliport lighting [4] and the EASA specifications for vertiport design [3], including only the green flush lights placed in the FATO and TLOF areas so as to reduce the occupied area. The introduction of five white lights inside the TLOF area allows keeping reliable visual pose estimates in the last metres of the landing trajectory (similarly to the function of the six smaller ATs). Each light is identified by minimizing the sum of the distances between the 2D coordinates of the green and white lights detected in the region of interest and the reprojections of the pattern’s real coordinates onto the image plane. The resulting Global Nearest Neighbour identification problem is solved by the Jonker-Volgenant algorithm [34], implemented in MATLAB by the assignkbest function (a sketch of this assignment step is given after this list).
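The sketch below illustrates the light-pattern identification step: each detected light blob is assigned to one reprojected pattern point by solving a global nearest-neighbour (linear assignment) problem. The paper solves it with the Jonker-Volgenant algorithm via MATLAB’s assignkbest; here scipy’s linear_sum_assignment is used as a stand-in, and the detections and reprojections are hypothetical pixel coordinates.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def identify_lights(detected_px, reprojected_px):
    """Return index pairs (detection i -> pattern point j) minimising the total
    2D reprojection distance over all assignments."""
    # Cost matrix: Euclidean distance between every detection/reprojection pair
    diff = detected_px[:, None, :] - reprojected_px[None, :, :]
    cost = np.linalg.norm(diff, axis=2)
    det_idx, pat_idx = linear_sum_assignment(cost)
    return list(zip(det_idx, pat_idx))

# Hypothetical example: three detected lights matched against three reprojected pattern points
detections = np.array([[410.0, 322.0], [655.0, 318.0], [530.0, 450.0]])
reprojections = np.array([[652.0, 320.0], [412.0, 325.0], [528.0, 447.0]])
print(identify_lights(detections, reprojections))   # [(0, 1), (1, 0), (2, 2)]
```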
4. Simulation Environment
- The 3D scenario where the UAV flies, which can be customized through the interaction with UE by changing weather and illumination parameters (such as the Sun altitude and azimuth, the fog density, and the cloud density and speed). Furthermore, other UAVs flying in the scenario can be introduced, e.g., enabling the simulation of Sense and Avoid functionalities. The simulated UAVs also cast shadows in the scenario.
- The trajectory of the UAVs introduced in the scenario. It is worth noting that the simulated approach paths are the ideal ones reported in Section 5. Hence, the navigation estimates are not used to correct potential deviations from the ideal path through feedback control. At the moment, the effect of residual control errors is emulated through sinusoidal orientation and translation deviations applied to the ideal trajectory (see the sketch after this list), while the effective introduction of feedback control in the trajectory simulations will be tackled in future applications.
- The parameters of the sensors installed on the rotorcraft, with the possibility to select nominal camera, fisheye camera, and LIDAR.
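The following sketch shows how residual control errors can be emulated as described above: sinusoidal translation and attitude offsets superimposed on the ideal approach trajectory. The amplitudes and frequency follow the "perturbed" scenarios of Section 5 (0.2 m, 5°, 1 Hz); the time base, trajectory shape and the application of the same sinusoid to all three attitude angles are placeholders, not the paper’s exact implementation.

```python
import numpy as np

def perturb_trajectory(t, pos_ideal, att_ideal,
                       amp_pos_m=0.2, amp_att_deg=5.0, freq_hz=1.0):
    """Add sinusoidal horizontal-position and attitude deviations to an ideal path.
    pos_ideal, att_ideal: (N, 3) arrays sampled at times t (s)."""
    phase = 2.0 * np.pi * freq_hz * t
    pos = pos_ideal.copy()
    pos[:, 0] += amp_pos_m * np.sin(phase)                     # horizontal (x) displacement
    pos[:, 1] += amp_pos_m * np.cos(phase)                     # horizontal (y) displacement
    att = att_ideal + amp_att_deg * np.sin(phase)[:, None]     # roll/pitch/yaw offsets (deg)
    return pos, att

# Placeholder ideal trajectory: straight vertical descent sampled on a 15 Hz camera timeline
t = np.arange(0.0, 10.0, 1.0 / 15.0)
pos_ideal = np.column_stack([np.zeros_like(t), np.zeros_like(t), 100.0 - 2.0 * t])
att_ideal = np.zeros((t.size, 3))
pos_pert, att_pert = perturb_trajectory(t, pos_ideal, att_ideal)
```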
5. Results
5.1. Visual-Based Pose Estimation Performance
5.2. EKF Performance
- Daylight: VTCA and 3-stepped trajectories assuming small attitude oscillations of the rotorcraft (maximum 0.1° deviation from the ideal orientation) to account for its limited attitude control capabilities. These scenarios can be used to assess the nominal EKF accuracy (a sketch of how the per-range error statistics reported in the tables below can be computed is given after this list).
- Perturbed: VTCA and 3-stepped trajectories assuming larger disturbances applied to the ideal paths (i.e., 0.2 m horizontal displacement and 5° attitude oscillations at 1 Hz) to prove the robustness of the vision-aided architecture to significant oscillations caused by atmospheric disturbances.
- Night: VTCA and 3-stepped trajectories in a night scenario to prove the architecture’s effectiveness when the pose estimation process relies on the detection of a pattern of lights instead of the AT markers used in daylight scenarios.
- Low visibility: VTCA trajectory in uniform fog to evaluate the architecture’s robustness in non-ideal visibility conditions. Since the AT detections are reported as a function of the distance from the landing pad, the same considerations also apply to the 3-stepped trajectory.
- MAP activation: VTCA trajectory in case of local fog banks along the trajectory to test the effective autonomous activation of a MAP.
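As a complement to the scenario list above, the short sketch below shows one way the per-range error statistics reported in the tables below (mean μ and standard deviation σ per axis, for each distance-to-pad bin) can be computed. The error and range arrays are placeholders; this is an illustration of the binning, not the paper’s post-processing code.

```python
import numpy as np

def binned_error_stats(range_to_pad_m, err_xyz_m, edges=(550, 350, 200, 100, 20, 0)):
    """Return (mu, sigma) per axis for each [upper, lower) distance-to-pad bin."""
    stats = {}
    for upper, lower in zip(edges[:-1], edges[1:]):
        mask = (range_to_pad_m <= upper) & (range_to_pad_m > lower)
        if np.any(mask):
            stats[f"{upper}-{lower} m"] = (err_xyz_m[mask].mean(axis=0),   # mu_x, mu_y, mu_z
                                           err_xyz_m[mask].std(axis=0))    # sigma_x, sigma_y, sigma_z
    return stats
```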
5.2.1. Daylight
5.2.2. Perturbed
5.2.3. Night
5.2.4. Low Visibility
5.2.5. MAP Activation
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- National Aeronautics and Space Administration. UAM Vision Concept of Operations (ConOps) UAM Maturity Level (UML) 4. 2021. Available online: https://ntrs.nasa.gov/api/citations/20205011091/downloads/UAM%20Vision%20Concept%20of%20Operations%20UML-4%20v1.0.pdf (accessed on 10 January 2022).
- European Union Aviation Safety Agency. Second Publication of Proposed Means of Compliance with the Special Condition VTOL, Doc. No. MOC-2 SC-VTOL. 2021. Available online: https://www.easa.europa.eu/document-library/product-certification-consultations/special-condition-vtol (accessed on 17 January 2022).
- European Union Aviation Safety Agency. Vertiports—Prototype Technical Specifications for the Design of VFR Vertiports for Operation with Manned VTOL-Capable Aircraft Certified in the Enhanced Category (PTS-VPT-DSN). 2022. Available online: https://www.easa.europa.eu/downloads/136259/en (accessed on 27 March 2022).
- Federal Aviation Administration. AC 150/5390-2C—Heliport Design. 2012. Available online: https://www.faa.gov/documentLibrary/media/Advisory_Circular/150_5390_2c.pdf (accessed on 7 January 2022).
- Federal Aviation Administration. Draft EB 105, Vertiport Design, June XX. 2022. Available online: https://www.faa.gov/airports/engineering/engineering_briefs/drafts/media/eb-105-vertiport-design-industry-draft.pdf (accessed on 15 March 2022).
- Yilmaz, E.; Warren, M.; German, B.J. Energy and Landing Accuracy Considerations for Urban Air Mobility Vertiport Approach Surfaces. In Proceedings of the AIAA Aviation 2019 Forum, Dallas, TX, USA, 17–21 June 2019.
- Webber, D.; Zahn, D. FAA and the National Campaign (Powerpoint). Available online: https://nari.arc.nasa.gov/aam-portal/recordings/ (accessed on 20 February 2022).
- Pradeep, P.; Wei, P. Energy-Efficient Arrival with RTA Constraint for Multirotor EVTOL in Urban Air Mobility. J. Aerosp. Inf. Syst. 2019, 16, 263–277.
- Kleinbekman, I.C.; Mitici, M.; Wei, P. Rolling-Horizon Electric Vertical Takeoff and Landing Arrival Scheduling for on-Demand Urban Air Mobility. J. Aerosp. Inf. Syst. 2020, 17, 150–159.
- Shao, Q.; Shao, M.; Lu, Y. Terminal Area Control Rules and EVTOL Adaptive Scheduling Model for Multi-Vertiport System in Urban Air Mobility. Transp. Res. Part C Emerg. Technol. 2021, 132, 103385.
- Song, K.; Yeo, H.; Moon, J.H. Approach Control Concepts and Optimal Vertiport Airspace Design for Urban Air Mobility (UAM) Operation. Int. J. Aeronaut. Space Sci. 2021, 22, 982–994.
- Song, K.; Yeo, H. Development of Optimal Scheduling Strategy and Approach Control Model of Multicopter VTOL Aircraft for Urban Air Mobility (UAM) Operation. Transp. Res. Part C Emerg. Technol. 2021, 128, 103181.
- Patruno, C.; Nitti, M.; Petitti, A.; Stella, E.; D’Orazio, T. A Vision-Based Approach for Unmanned Aerial Vehicle Landing. J. Intell. Robot. Syst. 2019, 95, 645–664.
- Lin, S.; Jin, L.; Chen, Z. Real-Time Monocular Vision System for UAV Autonomous Landing in Outdoor Low-Illumination Environments. Sensors 2021, 21, 6226.
- Wubben, J.; Fabra, F.; Calafate, C.T.; Krzeszowski, T.; Marquez-Barja, J.M.; Cano, J.C.; Manzoni, P. Accurate Landing of Unmanned Aerial Vehicles Using Ground Pattern Recognition. Electronics 2019, 8, 1532.
- Chen, X.; Phang, S.K.; Chen, B.M. System Integration of a Vision-Guided UAV for Autonomous Tracking on Moving Platform in Low Illumination Condition. In Proceedings of the Institute of Navigation Pacific Positioning, Navigation and Timing Meeting, Honolulu, HI, USA, 1–4 May 2017; pp. 1082–1092.
- Chen, X.; Lin, F.; Hamid, M.R.A.; Teo, S.H.; Phang, S.K. Real-Time Landing Spot Detection and Pose Estimation on Thermal Images Using Convolutional Neural Networks. In Proceedings of the 2018 IEEE 14th International Conference on Control and Automation (ICCA), Anchorage, AK, USA, 12–15 June 2018; pp. 998–1003.
- Baldini, F.; Anandkumar, A.; Murray, R.M. Learning Pose Estimation for UAV Autonomous Navigation and Landing Using Visual-Inertial Sensor Data. In Proceedings of the 2020 American Control Conference (ACC), Denver, CO, USA, 1–3 July 2020; pp. 2961–2966.
- Ye, S.; Wan, Z.; Zeng, L.; Li, C.; Zhang, Y. A Vision-Based Navigation Method for EVTOL Final Approach in Urban Air Mobility (UAM). In Proceedings of the 2020 4th CAA International Conference on Vehicular Control and Intelligence (CVCI), Hangzhou, China, 18–20 December 2020; pp. 645–649.
- Kawamura, E.; Kannan, K.; Lombaerts, T.; Ippolito, C.A. Vision-Based Precision Approach and Landing for Advanced Air Mobility. In Proceedings of the AIAA SCITECH 2022 Forum, Reston, VA, USA, 3 January 2022.
- EUROCONTROL. Helicopter Point in Space Operations in Controlled and Uncontrolled Airspace Generic Safety Case, Edition Number 1.4. 2019. Available online: https://www.eurocontrol.int/publication/helicopter-point-space-operations-controlled-and-uncontrolled-airspace (accessed on 15 February 2022).
- Vascik, P.D.; Hansman, R.J. Development of Vertiport Capacity Envelopes and Analysis of Their Sensitivity to Topological and Operational Factors. In Proceedings of the AIAA Scitech 2019 Forum, San Diego, CA, USA, 7–11 January 2019.
- EVTOL Aircraft Directory. Available online: https://evtol.news/aircraft (accessed on 31 March 2022).
- Bauranov, A.; Rakas, J. Designing Airspace for Urban Air Mobility: A Review of Concepts and Approaches. Prog. Aerosp. Sci. 2021, 125, 100726.
- Galway, D.; Etele, J.; Fusina, G. Modeling of Urban Wind Field Effects on Unmanned Rotorcraft Flight. J. Aircr. 2011, 48, 1613–1620.
- Zelinski, S. Operational Analysis of Vertiport Surface Topology. In Proceedings of the AIAA/IEEE Digital Avionics Systems Conference, San Antonio, TX, USA, 11–15 October 2020; pp. 1–10.
- Federal Aviation Administration. AC 20-167A—Airworthiness Approval of Enhanced Vision System, Synthetic Vision System, Combined Vision System, and Enhanced Flight Vision System Equipment. 2016. Available online: https://www.faa.gov/documentLibrary/media/Advisory_Circular/AC_20-167A.pdf (accessed on 7 January 2022).
- Shelton, K.J.; Williams, S.P.; Kramer, L.J.; Arthur, J.J.; Prinzel, L., II; Bailey, R.E. External Vision Systems (XVS) Proof-of-Concept Flight Test Evaluation. In Proceedings of the SPIE Defense and Security Symposium, Baltimore, MD, USA, 19 June 2014; Volume 9087.
- Link, N.K.; Kruk, R.; McKay, D.; Jennings, S.A.; Craig, G. Hybrid Enhanced and Synthetic Vision System Architecture for Rotorcraft Operations. In Proceedings of the SPIE AeroSense, Orlando, FL, USA, 16 July 2002; Volume 4713.
- Groves, P. Principles of GNSS, Inertial, and Multisensor Integrated Navigation Systems, 2nd ed.; Artech: Morristown, NJ, USA, 2013.
- Bijjahalli, S.; Sabatini, R.; Gardi, A. GNSS Performance Modelling and Augmentation for Urban Air Mobility. Sensors 2019, 19, 4209.
- Maia, F.D.; Lourenço da Saúde, J.M. The State of the Art and Operational Scenarios for Urban Air Mobility with Unmanned Aircraft. Aeronaut. J. 2021, 125, 1034–1063.
- Olson, E. AprilTag: A Robust and Flexible Visual Fiducial System. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; pp. 3400–3407.
- Blackman, S.S.; Popoli, R. Design and Analysis of Modern Tracking Systems; Artech House: London, UK, 1999.
- Gavin, H.P. The Levenberg-Marquardt Algorithm for Nonlinear Least Squares Curve-Fitting Problems; Duke University: Durham, NC, USA, 2020.
- Szeliski, R. Computer Vision—Algorithms and Applications; Springer: London, UK, 2011.
| Vision Sensor Parameter | Required Value |
|---|---|
| Field of View | 80° × 60° |
| Instantaneous Field of View | 0.05° |
| Refresh rate | 15 Hz |
| Maximum system latency | 100 ms |
| Simulated Sensor | Specification | Selected Value |
|---|---|---|
| Camera | Focal length (pixels) | (1109, 1109) |
| | Principal point coordinates (pixels) | (808, 640) |
| | Image size (pixels) | (1280, 1616) |
| | Sample frequency | 15 Hz |
| | Off-nadir angle | 20 deg |
| GNSS receiver | Position standard deviation | 2.5 m horizontal, 5 m vertical |
| | Sample frequency | 1 Hz |
| IMU | Sample frequency | 200 Hz |
| | Gyroscope Angular Random Walk (ARW) | 0.05 deg/sqrt(h) |
| | Gyroscope Bias Instability (BI) | 0.6 deg/h |
| | Accelerometer Velocity Random Walk (VRW) | 0.6 m/s/sqrt(h) |
| | Accelerometer Bias Instability (ABI) | 0.50 mg |
| Trajectory | Parameter | 550–350 m | 350–200 m | 200–100 m | 100–20 m | 20–0 m |
|---|---|---|---|---|---|---|
| VTCA | μx (m) | 0.052 | 0.495 | 0.096 | 0.042 | 0.001 |
| | σx (m) | 2.499 | 0.564 | 0.214 | 0.060 | 0.009 |
| | μy (m) | −0.032 | 0.257 | 0.134 | 0.049 | 0.007 |
| | σy (m) | 2.291 | 0.376 | 0.035 | 0.021 | 0.011 |
| | μz (m) | 1.147 | 2.291 | 0.367 | 0.110 | 0.026 |
| | σz (m) | 5.286 | 1.551 | 0.427 | 0.073 | 0.022 |
| 3-stepped | μx (m) | 0.178 | 0.076 | −0.113 | −0.037 | −0.003 |
| | σx (m) | 2.355 | 0.287 | 0.069 | 0.023 | 0.023 |
| | μy (m) | −0.671 | 0.239 | 0.146 | 0.060 | 0.009 |
| | σy (m) | 2.220 | 0.218 | 0.074 | 0.025 | 0.030 |
| | μz (m) | −0.095 | 0.507 | 0.283 | 0.069 | 0.036 |
| | σz (m) | 5.030 | 0.506 | 0.374 | 0.036 | 0.043 |
| Trajectory | Parameter | 550–350 m | 350–200 m | 200–100 m | 100–20 m | 20–0 m |
|---|---|---|---|---|---|---|
| VTCA | μx (m) | −0.375 | 0.584 | 0.160 | 0.024 | 0.010 |
| | σx (m) | 2.193 | 0.507 | 0.101 | 0.052 | 0.041 |
| | μy (m) | 0.067 | 0.235 | 0.116 | 0.044 | 0.024 |
| | σy (m) | 2.195 | 0.354 | 0.081 | 0.043 | 0.044 |
| | μz (m) | −0.352 | 2.371 | 0.449 | 0.101 | 0.025 |
| | σz (m) | 4.707 | 1.022 | 0.310 | 0.082 | 0.059 |
| 3-stepped | μx (m) | −0.386 | −0.033 | −0.137 | −0.050 | 0.001 |
| | σx (m) | 2.411 | 0.300 | 0.107 | 0.034 | 0.064 |
| | μy (m) | −0.110 | 0.148 | 0.115 | 0.046 | 0.019 |
| | σy (m) | 2.452 | 0.255 | 0.150 | 0.044 | 0.082 |
| | μz (m) | 0.816 | 0.346 | 0.345 | 0.081 | 0.065 |
| | σz (m) | 5.246 | 0.622 | 0.229 | 0.050 | 0.097 |
| Trajectory | Parameter | 550–350 m | 350–200 m | 200–100 m | 100–20 m | 20–0 m |
|---|---|---|---|---|---|---|
| VTCA | μx (m) | 0.235 | −0.067 | −0.042 | −0.015 | −0.002 |
| | σx (m) | 2.413 | 0.448 | 0.034 | 0.019 | 0.013 |
| | μy (m) | −0.402 | 0.126 | 0.067 | 0.025 | 0.009 |
| | σy (m) | 2.451 | 0.358 | 0.026 | 0.014 | 0.012 |
| | μz (m) | 0.618 | 0.160 | 0.027 | 0.019 | 0.015 |
| | σz (m) | 5.074 | 1.073 | 0.048 | 0.023 | 0.019 |
| 3-stepped | μx (m) | 0.192 | −0.083 | −0.063 | −0.020 | −0.005 |
| | σx (m) | 2.719 | 0.261 | 0.169 | 0.028 | 0.018 |
| | μy (m) | −0.436 | 0.118 | 0.070 | 0.029 | 0.012 |
| | σy (m) | 2.658 | 0.240 | 0.074 | 0.022 | 0.015 |
| | μz (m) | 0.668 | 0.032 | 0.206 | 0.109 | 0.023 |
| | σz (m) | 4.599 | 0.522 | 0.229 | 0.052 | 0.029 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).