

Multi-Sensor Integration and Fusion

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Remote Sensors".

Deadline for manuscript submissions: closed (31 March 2017) | Viewed by 114404

Special Issue Editors


Guest Editor: Prof. Dr. Cheng Wang
Department of Computer Science, School of Informatics, Xiamen University, Xiamen 361005, China
Interests: 3D vision; LiDAR; mobile mapping; geospatial big data analysis

Guest Editor: Dr. Julian Smit
School of Architecture, Planning and Geomatics, University of Cape Town, Private Bag, Rondebosch 7701, South Africa
Interests: applications of GIS, remote sensing, and photogrammetric technologies in environmental and land management

Guest Editor: Prof. Dr. Ayman F. Habib
Lyles School of Civil Engineering, Purdue University, West Lafayette, IN 47907, USA
Interests: photogrammetry; laser scanning; mobile mapping systems; system calibration; computer vision; unmanned aerial mapping systems; multisensor/multiplatform data integration

Guest Editor: Dr. Michael Ying Yang
Department of Earth Observation Science, Faculty of Geo-Information Science and Earth Observation (ITC), University of Twente, PO Box 217, 7500 AE Enschede, The Netherlands
Interests: computer vision and photogrammetry with specialization in deep learning; graphical models; scene understanding; multi-sensor fusion; object segmentation; pose estimation; activity recognition

Special Issue Information

Dear Colleagues,

The synergistic use of multiple sensors by machines and systems has become essential for achieving more complete, accurate, and efficient geospatial applications. For example, the fusion of optical cameras with LiDAR in airborne/terrestrial mapping systems and the integration of GNSS/MEMS-INS/magnetometer/camera sensors in low-cost smartphones both open a new chapter in related fields. These unprecedented developments in the sensor arena require the definition of new models, algorithms, techniques, and tools for multi-sensor data exploitation and system integration, as well as the assessment and validation of existing methods.

The main purpose of this Special Issue is to provide a collection of both review and original research articles related to multi-sensor integration and fusion on platforms ranging from satellites, airplanes, UAVs, and terrestrial vehicles to handheld devices. This Special Issue examines practical issues including, but not limited to, the following:

  • Multi-sensor system design and on-board processing;
  • Multi-platform sensing with land-, air-, and space-borne platforms;
  • Data quality control/assurance and enhancement for multi-sensor multi-platform solutions;
  • Ubiquitous sensing solutions with non-conventional low-cost sensors in mobile devices;
  • Sensor integration and fusion for positioning and navigation (indoor and outdoor).

In particular, studies based on optical cameras, LiDAR, SAR, and navigation sensors are welcome.

Prof. Dr. Cheng Wang
Dr. Julian Smit
Prof. Dr. Ayman F. Habib
Dr. Michael Ying Yang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • multi-sensor
  • multi-platform sensing
  • data fusion
  • data quality control/assurance/enhancement
  • cross-sensor calibration
  • ubiquitous sensing
  • on-board processing
  • positioning and navigation

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (18 papers)


Research


Article
Improving the Accuracy of Direct Geo-referencing of Smartphone-Based Mobile Mapping Systems Using Relative Orientation and Scene Geometric Constraints
by Naif M. Alsubaie, Ahmed A. Youssef and Naser El-Sheimy
Sensors 2017, 17(10), 2237; https://doi.org/10.3390/s17102237 - 30 Sep 2017
Cited by 23 | Viewed by 5597
Abstract
This paper introduces a new method that facilitates the use of smartphones as handheld low-cost mobile mapping systems (MMS). Smartphones are becoming more sophisticated and are quickly closing the gap between computers and portable tablet devices. The current generation of smartphones is equipped with low-cost GPS receivers, high-resolution digital cameras, and micro-electro-mechanical systems (MEMS)-based navigation sensors (e.g., accelerometers, gyroscopes, magnetic compasses, and barometers). These sensors are in fact the essential components of an MMS. However, smartphone navigation sensors suffer from the poor accuracy of Global Navigation Satellite System (GNSS) positioning, accumulated drift, and high signal noise. These issues affect the accuracy of the initial Exterior Orientation Parameters (EOPs) that are input into the bundle adjustment algorithm, which then produces inaccurate 3D mapping solutions. This paper proposes new methodologies for increasing the accuracy of direct geo-referencing of smartphones using relative orientation and smartphone motion sensor measurements, as well as integrating geometric scene constraints into free network bundle adjustment. The new methodologies fuse the relative orientations of the captured images and their corresponding motion sensor measurements to improve the initial EOPs. Then, the geometric features (e.g., horizontal and vertical lines) visible in each image are extracted and used as constraints in the bundle adjustment procedure, which corrects the relative position and orientation of the 3D mapping solution. Full article
(This article belongs to the Special Issue Multi-Sensor Integration and Fusion)
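As a rough illustration of the EOP-initialization idea described above (not the authors' implementation), the sketch below blends an IMU-derived orientation with the orientation predicted from an image-based relative rotation; all rotation values and the weight `w` are made-up stand-ins.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

# Hypothetical inputs: noisy IMU-derived orientations for two exposures and
# the camera-derived relative rotation between them (e.g., from the
# essential matrix). All values are illustrative.
R_imu_1 = Rotation.from_euler("xyz", [0.02, -0.01, 1.55])
R_imu_2 = Rotation.from_euler("xyz", [0.03, 0.00, 1.72])
R_rel_cam = Rotation.from_euler("xyz", [0.0, 0.0, 0.15])

# Predict exposure 2 by propagating exposure 1 with the image-based relative
# rotation, then blend with the IMU estimate of exposure 2 on SO(3).
R_pred_2 = R_imu_1 * R_rel_cam
slerp = Slerp([0.0, 1.0], Rotation.concatenate([R_imu_2, R_pred_2]))
w = 0.8  # assumed weight toward the image-based prediction
R_init_2 = slerp(w)  # improved initial EOP rotation for the bundle adjustment
print(R_init_2.as_euler("xyz"))
```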

Article
Noncontact Sleep Study by Multi-Modal Sensor Fusion
by Ku-young Chung, Kwangsub Song, Kangsoo Shin, Jinho Sohn, Seok Hyun Cho and Joon-Hyuk Chang
Sensors 2017, 17(7), 1685; https://doi.org/10.3390/s17071685 - 21 Jul 2017
Cited by 31 | Viewed by 6969
Abstract
Polysomnography (PSG) is considered the gold standard for determining sleep stages, but due to the obtrusiveness of its sensor attachments, sleep stage classification algorithms using noninvasive sensors have been developed over the years. However, previous studies have not yet proven reliable. In addition, most of the products are designed for healthy customers rather than for patients with sleep disorders. We present a novel approach to classifying sleep stages via low-cost, noncontact multi-modal sensor fusion, which extracts sleep-related vital signals from radar signals and a sound-based context-awareness technique. This work is uniquely designed based on the PSG data of sleep disorder patients, which were acquired and certified by professionals at Hanyang University Hospital. The proposed algorithm further incorporates medical/statistical knowledge to determine personal-adjusted thresholds and devise post-processing. The efficiency of the proposed algorithm is highlighted by contrasting the sleep stage classification performance of single-sensor and sensor-fusion algorithms. To validate the possibility of commercializing this work, the classification results of this algorithm were compared with those of a commercial sleep monitoring device, the ResMed S+. The proposed algorithm was evaluated on random patients following PSG examination, and the results show a promising novel approach for determining sleep stages in a low-cost and unobtrusive manner. Full article
(This article belongs to the Special Issue Multi-Sensor Integration and Fusion)
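The personal-adjusted threshold idea can be illustrated with a small sketch: per-subject cut-offs are derived from the subject's own feature distributions, and the two modalities are fused by simple rules. The features, percentiles, and stage rules below are invented for illustration and are not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical per-epoch features: respiration-rate variability from radar
# and an acoustic activity score; purely illustrative stand-ins.
resp_var = rng.gamma(2.0, 0.5, size=600)
sound_act = rng.gamma(1.5, 0.4, size=600)

# Personal-adjusted thresholds: cut-offs come from the subject's own
# distribution rather than fixed population values.
t_resp = np.percentile(resp_var, 40)
t_sound = np.percentile(sound_act, 60)

# Fuse the two modalities: both quiet -> deep sleep; one quiet -> light
# sleep; otherwise a wake/REM-like epoch.
deep = (resp_var < t_resp) & (sound_act < t_sound)
light = ((resp_var < t_resp) | (sound_act < t_sound)) & ~deep
stage = np.where(deep, "deep", np.where(light, "light", "wake/REM"))
print(dict(zip(*np.unique(stage, return_counts=True))))
```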

Article
Active Multimodal Sensor System for Target Recognition and Tracking
by Yufu Qu, Guirong Zhang, Zhaofan Zou, Ziyue Liu and Jiansen Mao
Sensors 2017, 17(7), 1518; https://doi.org/10.3390/s17071518 - 28 Jun 2017
Cited by 13 | Viewed by 5860
Abstract
High-accuracy target recognition and tracking systems using a single sensor or a passive multisensor set are susceptible to external interference and exhibit environmental dependencies. These difficulties stem mainly from limitations of the available imaging frequency bands and a general lack of coherent diversity in the available target-related data. This paper proposes an active multimodal sensor system for target recognition and tracking, consisting of a visible, an infrared, and a hyperspectral sensor. The system makes full use of its multisensor information collection abilities; furthermore, it can actively control the different sensors to collect additional data, according to the needs of the real-time target recognition and tracking processes. This level of integration between hardware collection control and data processing is experimentally shown to effectively improve the accuracy and robustness of the target recognition and tracking system. Full article
(This article belongs to the Special Issue Multi-Sensor Integration and Fusion)
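A minimal sketch of the active-control aspect, under assumed thresholds: the passive visible/IR pair runs continuously, and an additional hyperspectral capture is requested only when tracking confidence drops. The function and threshold are hypothetical, not the paper's control logic.

```python
# Active sensor scheduling sketch: request the (slower, costlier)
# hyperspectral sensor only when the passive pair is ambiguous.
def select_sensors(confidence: float) -> list[str]:
    sensors = ["visible", "infrared"]   # always-on passive pair
    if confidence < 0.6:                # ambiguity -> active acquisition
        sensors.append("hyperspectral")
    return sensors

for conf in (0.9, 0.45):
    print(f"confidence {conf:.2f}: use {select_sensors(conf)}")
```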

Article
Image Mosaicking Approach for a Double-Camera System in the GaoFen2 Optical Remote Sensing Satellite Based on the Big Virtual Camera
by Yufeng Cheng, Shuying Jin, Mi Wang, Ying Zhu and Zhipeng Dong
Sensors 2017, 17(6), 1441; https://doi.org/10.3390/s17061441 - 20 Jun 2017
Cited by 22 | Viewed by 5853
Abstract
The linear array push-broom imaging mode is widely used for high-resolution optical satellites (HROS). Using double cameras attached to a high-rigidity support along with push-broom imaging is one method to enlarge the field of view while ensuring high resolution. High-accuracy image mosaicking is the key factor in the geometrical quality of the complete stitched satellite imagery. This paper proposes a high-accuracy image mosaicking approach based on the big virtual camera (BVC) for the double-camera system on the GaoFen2 optical remote sensing satellite (GF2). A big virtual camera can be built according to the rigorous imaging model of a single camera; then, each single image strip obtained by each TDI-CCD detector can be re-projected to the virtual detector of the big virtual camera coordinate system using forward-projection and backward-projection to obtain the corresponding single virtual image. After on-orbit calibration and relative orientation, the complete final virtual image can be obtained by stitching the single virtual images together based on their coordinate information on the big virtual detector image plane. The paper uses the concept of the big virtual camera to obtain a stitched image and the corresponding high-accuracy rational function model (RFM) for concurrent post-processing. Experiments verified that the proposed method can achieve seamless mosaicking while maintaining geometric accuracy. Full article
(This article belongs to the Special Issue Multi-Sensor Integration and Fusion)
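The re-projection idea can be sketched in one dimension: each real detector column has a calibrated look angle, the virtual detector has an ideal angle grid, and pixels are resampled by back-projecting virtual angles into the real camera's angle grid. The angle model below is invented, not GF2 calibration data.

```python
import numpy as np

# Real detector: slightly distorted look angles; virtual detector: ideal,
# evenly spaced angles. One image line is resampled between the two.
real_angles = np.linspace(-0.05, 0.05, 4096) \
    + 1e-4 * np.sin(np.arange(4096) / 300.0)        # distorted (illustrative)
virt_angles = np.linspace(-0.05, 0.05, 4096)        # ideal virtual grid
real_line = np.random.default_rng(1).random(4096)   # stand-in image line

# Back-projection: find where each virtual pixel's angle falls in the real
# camera, then interpolate the intensity at that fractional position.
src_pos = np.interp(virt_angles, real_angles, np.arange(4096))
virt_line = np.interp(src_pos, np.arange(4096), real_line)
print(virt_line.shape)
```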

Article
A Multiple Sensors Platform Method for Power Line Inspection Based on a Large Unmanned Helicopter
by Xiaowei Xie, Zhengjun Liu, Caijun Xu and Yongzhen Zhang
Sensors 2017, 17(6), 1222; https://doi.org/10.3390/s17061222 - 26 May 2017
Cited by 43 | Viewed by 5507
Abstract
Many theoretical and experimental studies have been carried out to improve the efficiency and reduce the labor of power line inspection, but problems related to stability, efficiency, and comprehensiveness still exist. This paper presents a multiple-sensor platform method for overhead power line inspection based on the use of a large unmanned helicopter. Compared with existing methods, multiple sensors can realize synchronized inspection of all power line components and surrounding objects within one sortie. Flight safety of the unmanned helicopter, scheduling of the sensors, and exact tracking of power line components are very important aspects when using the proposed multiple-sensor platform; therefore, this paper introduces in detail the method for planning the flight path of the unmanned helicopter and the tasks of the sensors before inspecting power lines, and the method used for tracking power lines and insulators automatically during the inspection process. To validate the method, experiments on a transmission line at Qingyuan in Guangdong Province were carried out; the results show that the proposed method is effective for power line inspection. Full article
(This article belongs to the Special Issue Multi-Sensor Integration and Fusion)

Article
Optimization-Based Sensor Fusion of GNSS and IMU Using a Moving Horizon Approach
by Fabian Girrbach, Jeroen D. Hol, Giovanni Bellusci and Moritz Diehl
Sensors 2017, 17(5), 1159; https://doi.org/10.3390/s17051159 - 19 May 2017
Cited by 11 | Viewed by 7675
Abstract
The rise of autonomous systems operating close to humans imposes new challenges in terms of robustness and precision on the estimation and control algorithms. Approaches based on nonlinear optimization, such as moving horizon estimation, have been shown to improve the accuracy of the estimated solution compared to traditional filter techniques. This paper introduces an optimization-based framework for multi-sensor fusion following a moving horizon scheme. The framework is applied to the frequently occurring estimation problem of motion tracking by fusing measurements of a global navigation satellite system receiver and an inertial measurement unit. The resulting algorithm is used to estimate position, velocity, and orientation of a maneuvering airplane and is evaluated against an accurate reference trajectory. A detailed study of the influence of the horizon length on the quality of the solution is presented and evaluated against filter-like and batch solutions of the problem. The versatile configuration possibilities of the framework are finally used to analyze the estimated solutions at different evaluation times, exposing a nearly linear behavior of the sensor fusion problem. Full article
(This article belongs to the Special Issue Multi-Sensor Integration and Fusion)
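A minimal 1-D moving-horizon sketch of the GNSS/IMU fusion scheme (illustrative noise levels and weights; not the paper's framework): position/velocity states over a sliding window are estimated by nonlinear least squares with IMU-driven dynamics factors and GNSS position factors.

```python
import numpy as np
from scipy.optimize import least_squares

dt, N = 0.1, 20
rng = np.random.default_rng(2)
t = np.arange(200) * dt
pos_true = 5.0 * np.sin(0.5 * t)
vel_true = 2.5 * np.cos(0.5 * t)
acc_meas = np.gradient(vel_true, dt) + rng.normal(0, 0.2, t.size)   # IMU-like
gnss_meas = pos_true + rng.normal(0, 0.5, t.size)                   # GNSS-like

def residuals(x, acc, gnss):
    p, v = x[:N], x[N:]
    r_dyn_p = (p[1:] - (p[:-1] + v[:-1] * dt)) / 0.01   # process-model factors
    r_dyn_v = (v[1:] - (v[:-1] + acc[:-1] * dt)) / 0.05
    r_meas = (p - gnss) / 0.5                           # GNSS position factors
    return np.concatenate([r_dyn_p, r_dyn_v, r_meas])

estimates = []
for k in range(N, t.size):                              # slide the horizon
    sl = slice(k - N, k)
    x0 = np.concatenate([gnss_meas[sl], np.zeros(N)])
    sol = least_squares(residuals, x0, args=(acc_meas[sl], gnss_meas[sl]))
    estimates.append(sol.x[N - 1])                      # newest position estimate
rmse = np.sqrt(np.mean((np.array(estimates) - pos_true[N - 1:-1]) ** 2))
print(f"RMSE: {rmse:.3f} m")
```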

Article
Airborne Infrared and Visible Image Fusion Combined with Region Segmentation
by Yujia Zuo, Jinghong Liu, Guanbing Bai, Xuan Wang and Mingchao Sun
Sensors 2017, 17(5), 1127; https://doi.org/10.3390/s17051127 - 15 May 2017
Cited by 31 | Viewed by 5774
Abstract
This paper proposes an infrared (IR) and visible image fusion method that introduces region segmentation into the dual-tree complex wavelet transform (DTCWT) domain. The method can effectively improve both the target indication and scene spectrum features of fusion images, as well as the target identification and tracking reliability of the fusion system, on an airborne photoelectric platform. It involves segmenting the IR image into regions by significance and identifying the target region and the background region; the low-frequency components in the DTCWT domain are then fused according to the region segmentation result. For the high-frequency components, region weights are assigned according to the richness of region detail, fusion is conducted based on both the weights and adaptive phases, and a shrinkage function is introduced to suppress noise. Finally, the fused low-frequency and high-frequency components are reconstructed to obtain the fusion image. The experimental results show that the proposed method can fully extract complementary information from the source images to obtain a fusion image with good target indication and rich scene detail, and that it gives fusion results superior to existing popular fusion methods under either subjective or objective evaluation. With good stability and high fusion accuracy, this method can meet the fusion requirements of IR-visible image fusion systems. Full article
(This article belongs to the Special Issue Multi-Sensor Integration and Fusion)
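The region-based fusion rule can be sketched with an ordinary discrete wavelet transform (PyWavelets) standing in for the DTCWT: low-frequency coefficients follow the segmentation mask, high-frequency coefficients follow a maximum-magnitude rule. Images, mask, and wavelet choice are illustrative, and the paper's weighted/phase-adaptive rule is simplified.

```python
import numpy as np
import pywt

rng = np.random.default_rng(3)
ir = rng.random((128, 128)); vis = rng.random((128, 128))     # stand-in images
mask = np.zeros((128, 128), bool); mask[40:90, 40:90] = True  # "target" region

cA_i, (cH_i, cV_i, cD_i) = pywt.dwt2(ir, "db2")
cA_v, (cH_v, cV_v, cD_v) = pywt.dwt2(vis, "db2")
m = pywt.dwt2(mask.astype(float), "db2")[0] > 0.5             # mask at coarse scale

# Low frequency: IR inside the target region, visible elsewhere.
cA_f = np.where(m, cA_i, cA_v)

def fuse_hi(a, b):
    # High frequency: keep the coefficient with larger magnitude (more detail).
    return np.where(np.abs(a) >= np.abs(b), a, b)

fused = pywt.idwt2(
    (cA_f, (fuse_hi(cH_i, cH_v), fuse_hi(cV_i, cV_v), fuse_hi(cD_i, cD_v))), "db2")
print(fused.shape)
```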

Article
Super-Resolution Reconstruction of High-Resolution Satellite ZY-3 TLC Images
by Lin Li, Wei Wang, Heng Luo and Shen Ying
Sensors 2017, 17(5), 1062; https://doi.org/10.3390/s17051062 - 7 May 2017
Cited by 21 | Viewed by 6793
Abstract
Super-resolution (SR) image reconstruction is a technique used to recover a high-resolution image using the cumulative information provided by several low-resolution images. With the help of SR techniques, satellite remotely sensed images can be combined to achieve a higher-resolution image, which is especially useful for a two- or three-line camera satellite, e.g., the ZY-3 high-resolution Three Line Camera (TLC) satellite. In this paper, we introduce the application of the SR reconstruction method, including motion estimation and the robust super-resolution technique, to ZY-3 TLC images. The results show that SR reconstruction can significantly improve both the resolution and image quality of ZY-3 TLC images. Full article
(This article belongs to the Special Issue Multi-Sensor Integration and Fusion)
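A minimal 1-D iterative back-projection sketch of multi-frame super-resolution, assuming motion estimation has already recovered the inter-frame shifts (the paper's robust SR method is more elaborate):

```python
import numpy as np

rng = np.random.default_rng(4)
hr_true = np.convolve(rng.random(256), np.ones(5) / 5, mode="same")  # smooth truth
shifts, scale = [0, 1, 2, 3], 4

def degrade(hr, s):
    # Imaging model sketch: shift (circularly) then downsample by `scale`.
    return np.roll(hr, -s)[::scale]

frames = [degrade(hr_true, s) + rng.normal(0, 0.01, 256 // scale) for s in shifts]

hr = np.repeat(frames[0], scale)             # crude initial HR estimate
for _ in range(50):
    for s, lr in zip(shifts, frames):
        err = lr - degrade(hr, s)            # residual in the LR domain
        up = np.zeros_like(hr); up[::scale] = err
        hr += 0.5 * np.roll(up, s)           # back-project the residual
print(f"RMSE: {np.sqrt(np.mean((hr - hr_true) ** 2)):.4f}")
```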

Article
Performance Analysis of Global Navigation Satellite System Signal Acquisition Aided by Different Grade Inertial Navigation System under Highly Dynamic Conditions
by Chunxi Zhang, Xianmu Li, Shuang Gao, Tie Lin and Lu Wang
Sensors 2017, 17(5), 980; https://doi.org/10.3390/s17050980 - 28 Apr 2017
Cited by 8 | Viewed by 4821
Abstract
Under highly dynamic conditions, Global Navigation Satellite System (GNSS) signals produce large Doppler frequency shifts, which hinder the fast acquisition of signals. Inertial Navigation System (INS)-aided acquisition can improve the acquisition performance, whereas the accuracy of the Doppler shift and code phase estimation is mainly determined by the INS precision. The relation between the INS accuracy and the Doppler shift estimation error has been derived, while the relation between the INS accuracy and the code phase estimation error has not. In this paper, in order to theoretically analyze the effects of INS errors on the performance of Doppler shift and code phase estimation, the connections between them are re-derived, and the curves of the corresponding relations are given for the first time. Then, in order to better verify the INS-aided acquisition, a highly dynamic scenario is designed. Furthermore, by using the deduced mathematical relations, the effects of different grades of INS on GNSS (including Global Positioning System (GPS) and BeiDou Navigation Satellite System (BDS)) signal acquisition are analyzed. Experimental results demonstrate that INS-aided acquisition can reduce the search range of the local frequency and code phase and achieve fast acquisition. According to the experimental results, a suitable INS can be chosen for deeply coupled integration. Full article
(This article belongs to the Special Issue Multi-Sensor Integration and Fusion)
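The core relation behind INS aiding is that a line-of-sight velocity error maps to a Doppler estimation error through f_d = (v_los / c) · f_carrier, which directly bounds the frequency search range. A small numeric sketch (illustrative error magnitudes, GPS L1 carrier):

```python
# Doppler estimation error caused by an INS line-of-sight velocity error.
C = 299_792_458.0          # speed of light, m/s
F_L1 = 1_575.42e6          # GPS L1 carrier frequency, Hz

def doppler_error(dv_los: float) -> float:
    """Doppler error (Hz) from a LOS velocity error (m/s)."""
    return dv_los / C * F_L1

for dv in (0.1, 1.0, 10.0):  # roughly navigation- to consumer-grade INS errors
    print(f"dv = {dv:5.1f} m/s -> Doppler error ~ {doppler_error(dv):7.1f} Hz")
```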

Article
An Improved Multi-Sensor Fusion Navigation Algorithm Based on the Factor Graph
by Qinghua Zeng, Weina Chen, Jianye Liu and Huizhe Wang
Sensors 2017, 17(3), 641; https://doi.org/10.3390/s17030641 - 21 Mar 2017
Cited by 45 | Viewed by 7599
Abstract
An integrated navigation system coupled with additional sensors can be used in Micro Unmanned Aerial Vehicle (MUAV) applications because the multi-sensor information is redundant and complementary, which can markedly improve the system accuracy. How to deal efficiently with the information gathered from different sensors is an important problem. The fact that different sensors provide measurements asynchronously may complicate their processing; in addition, the output signals of some sensors have a nonlinear character. In order to incorporate these measurements and calculate a navigation solution in real time, a multi-sensor fusion algorithm based on the factor graph is proposed. The global optimal solution is factorized according to the chain structure of the factor graph, which allows for a more general form of the conditional probability density. This converts the fusion problem into connecting factors defined by these measurements to the graph, without considering the relationship between the sensor update frequency and the fusion period. An experimental MUAV system has been built and several experiments have been performed to prove the effectiveness of the proposed method. Full article
(This article belongs to the Special Issue Multi-Sensor Integration and Fusion)
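A toy 1-D factor-graph sketch of the idea (not the paper's algorithm): states are connected by odometry-like factors at every step, while absolute measurements are attached only at the epochs where a sensor actually delivered data, so sensor update rates and the fusion period are decoupled.

```python
import numpy as np
from scipy.optimize import least_squares

odom = [1.0, 1.1, 0.9, 1.0]          # between-state factors (every step)
fixes = {0: 0.0, 2: 2.05, 4: 4.1}    # absolute factors (arrive asynchronously)

def residuals(x):
    # Each factor contributes one weighted residual to the graph.
    r = [(x[i + 1] - x[i] - u) / 0.1 for i, u in enumerate(odom)]  # odometry
    r += [(x[k] - z) / 0.05 for k, z in fixes.items()]             # measurements
    return np.array(r)

sol = least_squares(residuals, np.zeros(5))   # solve the whole chain at once
print(np.round(sol.x, 3))
```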

Article
LiDAR-IMU Time Delay Calibration Based on Iterative Closest Point and Iterated Sigma Point Kalman Filter
by Wanli Liu
Sensors 2017, 17(3), 539; https://doi.org/10.3390/s17030539 - 8 Mar 2017
Cited by 23 | Viewed by 8514 | Correction
Abstract
The time delay calibration between Light Detection and Ranging (LiDAR) and Inertial Measurement Units (IMUs) is an essential prerequisite for their integrated applications. However, the correspondences between LiDAR and IMU measurements are usually unknown and thus cannot be computed directly for the time delay calibration. In order to solve the problem of LiDAR-IMU time delay calibration, this paper presents a fusion method based on the iterative closest point (ICP) algorithm and the iterated sigma point Kalman filter (ISPKF), which combines the advantages of both. The ICP algorithm can precisely determine the unknown transformation between LiDAR and IMU, and the ISPKF algorithm can optimally estimate the time delay calibration parameters. First, the coordinate transformation from the LiDAR frame to the IMU frame is realized. Second, the measurement model and the time delay error model of the LiDAR and IMU are established. Third, the methodology of the ICP and ISPKF procedure is presented for LiDAR-IMU time delay calibration. Experimental results are presented that validate the proposed method and demonstrate that the time delay error can be accurately calibrated. Full article
(This article belongs to the Special Issue Multi-Sensor Integration and Fusion)
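A sketch of the underlying observability: the rotation-rate sequence recovered from scan-to-scan ICP should match the gyro rate shifted by the unknown delay. Below, the ICP output is simulated as a delayed noisy copy and a coarse delay is found by cross-correlation; in the paper, an ISPKF refines the estimate jointly with the LiDAR-IMU transformation. All signals are synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)
dt = 0.01
t = np.arange(2000) * dt
gyro = np.sin(2 * np.pi * 0.7 * t) + 0.05 * rng.standard_normal(t.size)
true_delay = 23                                   # samples (0.23 s), illustrative
icp_rate = np.roll(gyro, true_delay) + 0.05 * rng.standard_normal(t.size)

# Coarse delay estimate via cross-correlation over candidate lags.
lags = np.arange(-50, 51)
corr = [np.dot(np.roll(gyro, L), icp_rate) for L in lags]
print("estimated delay:", lags[int(np.argmax(corr))] * dt, "s")
```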

Article
A Unified Model for BDS Wide Area and Local Area Augmentation Positioning Based on Raw Observations
by Rui Tu, Rui Zhang, Cuixian Lu, Pengfei Zhang, Jinhai Liu and Xiaochun Lu
Sensors 2017, 17(3), 507; https://doi.org/10.3390/s17030507 - 3 Mar 2017
Cited by 5 | Viewed by 5293
Abstract
In this study, a unified model for BeiDou Navigation Satellite System (BDS) wide area and local area augmentation positioning based on raw observations is proposed. Applying this model, both Real-Time Kinematic (RTK) and Precise Point Positioning (PPP) services can be realized by applying different corrections at the user end. The algorithm was assessed and validated with BDS data collected at four regional stations from Day of Year (DOY) 080 to 083 of 2016. When the users are located within the local reference network, a fast and high-precision RTK service can be achieved using the regional observation corrections, revealing a convergence time of several seconds and a precision of about 2–3 cm. For users outside the regional reference network, the globally broadcast State Space Representation (SSR) corrections can be utilized to realize a global PPP service, which shows a convergence time of about 25 min for achieving an accuracy of 10 cm. This unified model not only integrates Network RTK (NRTK) and PPP into a seamless positioning service, but also recovers the ionospheric Vertical Total Electron Content (VTEC) and Differential Code Bias (DCB) values that are useful for ionosphere monitoring and modeling. Full article
(This article belongs to the Special Issue Multi-Sensor Integration and Fusion)
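The user-end switching implied by the unified model can be caricatured as follows; coordinates, the network radius, the crude degrees-to-kilometers conversion, and the quoted service figures are illustrative placeholders, not the paper's implementation.

```python
from math import hypot

# Select regional observation corrections (RTK-like) inside the reference
# network, otherwise fall back to broadcast SSR corrections (PPP-like).
NETWORK_CENTER, NETWORK_RADIUS_KM = (34.2, 108.9), 50.0

def choose_service(lat: float, lon: float) -> str:
    d_km = hypot(lat - NETWORK_CENTER[0], lon - NETWORK_CENTER[1]) * 111.0
    if d_km < NETWORK_RADIUS_KM:
        return "regional corrections -> RTK (cm-level, seconds)"
    return "broadcast SSR -> PPP (10 cm, ~25 min convergence)"

print(choose_service(34.3, 109.0))
print(choose_service(40.0, 116.0))
```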

Article
Operational Modal Analysis of Bridge Structures with Data from GNSS/Accelerometer Measurements
by Chunbao Xiong, Huali Lu and Jinsong Zhu
Sensors 2017, 17(3), 436; https://doi.org/10.3390/s17030436 - 23 Feb 2017
Cited by 59 | Viewed by 9172
Abstract
Real-time dynamic displacement and acceleration responses of the main span section of the Tianjin Fumin Bridge in China under ambient excitation were tested using a Global Navigation Satellite System (GNSS) dynamic deformation monitoring system and an acceleration sensor vibration test system. Considering the close relationship between GNSS multipath errors and the measurement environment, in combination with the noise reduction characteristics of different filtering algorithms, the researchers proposed an AFEC mixed filtering algorithm, a combination of autocorrelation function-based empirical mode decomposition (EMD) and Chebyshev mixed filtering, to extract the real vibration displacement of the bridge structure after system error correction and filtering de-noising of the signals collected by the GNSS. The proposed AFEC mixed filtering algorithm achieved high accuracy (1 mm) for the real displacement in the elevation direction. Next, the traditional random decrement technique (used mainly for stationary random processes) was expanded to non-stationary random processes. Combining the expanded random decrement technique (RDT) with the autoregressive moving average (ARMA) model, the modal frequencies of the bridge structural system were extracted using an expanded ARMA_RDT modal identification method and compared with the power spectrum analysis results of the acceleration signal and finite element analysis results. The identification results demonstrated that the proposed algorithm is applicable to the dynamic displacement monitoring data of real bridge structures under ambient excitation and could accurately identify the first five orders of the inherent frequencies of the structural system. The identification error of the inherent frequency was smaller than 6%, indicating the high identification accuracy of the proposed algorithm. Furthermore, the GNSS dynamic deformation monitoring method can be used to monitor the dynamic displacement and identify the modal parameters of bridge structures, and can monitor the working state of bridges effectively and accurately. The research results can serve as references for evaluating the bearing capacity, safety performance, and durability of bridge structures during operation. Full article
(This article belongs to the Special Issue Multi-Sensor Integration and Fusion)
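The random decrement step can be illustrated on synthetic data: averaging segments that start at level up-crossings cancels the random excitation and leaves a free-decay signature whose spectral peak is the modal frequency. All system parameters below are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(6)
fs, f_n, zeta = 100.0, 2.0, 0.02
wn = 2 * np.pi * f_n
wd = wn * np.sqrt(1 - zeta ** 2)
t_h = np.arange(0, 20, 1 / fs)
h = np.exp(-zeta * wn * t_h) * np.sin(wd * t_h)          # impulse response
x = np.convolve(rng.standard_normal(120_000), h)[:120_000] / fs  # random response

seg = int(5 * fs)                                        # 5 s decrement segments
level = x.std()
starts = np.flatnonzero((x[:-1] < level) & (x[1:] >= level))  # up-crossings
starts = starts[starts < x.size - seg]
rd = np.mean([x[s:s + seg] for s in starts], axis=0)     # random decrement signature

spec = np.abs(np.fft.rfft(rd))
freqs = np.fft.rfftfreq(seg, 1 / fs)
print(f"identified frequency: {freqs[spec.argmax()]:.2f} Hz (true {f_n} Hz)")
```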

Article
Performance Enhancement of a USV INS/CNS/DVL Integration Navigation System Based on an Adaptive Information Sharing Factor Federated Filter
by Qiuying Wang, Xufei Cui, Yibing Li and Fang Ye
Sensors 2017, 17(2), 239; https://doi.org/10.3390/s17020239 - 3 Feb 2017
Cited by 61 | Viewed by 5994
Abstract
To improve the autonomous navigation capability of Unmanned Surface Vehicles (USVs), multi-sensor integrated navigation based on an Inertial Navigation System (INS), a Celestial Navigation System (CNS), and a Doppler Velocity Log (DVL) is proposed. The CNS position and the DVL velocity are introduced as reference information to correct the divergence error of the INS. The autonomy of the integrated system based on INS/CNS/DVL is much better than that of integration based on INS/GNSS alone. However, the accuracy of the DVL velocity and the CNS position are degraded by DVL measurement noise and bad weather, respectively; hence, the INS divergence error cannot always be estimated and corrected from the reference information. To resolve this problem, the Adaptive Information Sharing Factor Federated Filter (AISFF) is introduced to fuse the data. The information sharing factors of the federated filter are adaptively adjusted to maintain multiple component solutions usable as back-ups, which improves the reliability of the overall system. The effectiveness of this approach is demonstrated by simulation and experiment; the results show that when the DVL velocity accuracy is degraded and the CNS cannot work under bad weather conditions, the INS/CNS/DVL integrated system can operate stably based on the AISFF method. Full article
(This article belongs to the Special Issue Multi-Sensor Integration and Fusion)
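The information-sharing mechanism can be sketched in a few lines: the fused covariance is redistributed to the local filters with factors beta_i that sum to one, and adaptive weighting assigns a degraded sensor a smaller share. The health scores below are illustrative stand-ins (e.g., they might come from innovation statistics).

```python
import numpy as np

# Federated-filter reset step with adaptive information sharing factors.
P_fused = np.diag([0.5, 0.5])            # fused covariance (toy values)
health = np.array([1.0, 0.8, 0.1])       # e.g., INS/CNS, INS/DVL, degraded sensor

beta = health / health.sum()             # adaptive sharing factors, sum to 1
for i, b in enumerate(beta):
    P_i = P_fused / b                    # local reset: weaker share -> larger P
    print(f"filter {i}: beta = {b:.2f}, P diag = {np.diag(P_i)}")
```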

Article
Hyperspectral Imagery Super-Resolution by Adaptive POCS and Blur Metric
by Shaoxing Hu, Shuyu Zhang, Aiwu Zhang and Shatuo Chai
Sensors 2017, 17(1), 82; https://doi.org/10.3390/s17010082 - 3 Jan 2017
Cited by 7 | Viewed by 4644
Abstract
The spatial resolution of a hyperspectral image is often coarse owing to the limitations of the imaging hardware. A novel super-resolution reconstruction algorithm for hyperspectral imagery (HSI) via adaptive projection onto convex sets and an image blur metric (APOCS-BM) is proposed in this paper to solve this problem. Firstly, a no-reference image blur metric assessment method based on the Gabor wavelet transform is utilized to obtain the blur metric of the low-resolution (LR) image. Then, the bound used in the APOCS is automatically calculated from the LR image blur metric. Finally, the high-resolution (HR) image is reconstructed by the APOCS method. With the contribution of the APOCS and the image blur metric, the fixed-bound problem in POCS is solved, and the image blur information is utilized during the reconstruction of the HR image, which effectively enhances the spatial-spectral information and improves the reconstruction accuracy. The experimental results for the PaviaU, PaviaC and Jinyin Tan datasets indicate that the proposed method not only enhances the spatial resolution, but also preserves the HSI spectral information well. Full article
(This article belongs to the Special Issue Multi-Sensor Integration and Fusion)
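A 1-D sketch of the POCS data-consistency projection with a blur-metric-driven bound: the HR estimate is corrected through the adjoint of the imaging model only where the residual exceeds the bound delta, and delta is tied to a stand-in constant replacing the Gabor-based metric. The PSF and all values are synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)
hr_true = np.convolve(rng.random(256), np.ones(7) / 7, mode="same")
blur = np.ones(5) / 5                          # known PSF (illustrative)

def observe(hr):                               # blur, then downsample by 2
    return np.convolve(hr, blur, mode="same")[::2]

def backproject(r):                            # adjoint of `observe`
    up = np.zeros(256); up[::2] = r
    return np.convolve(up, blur, mode="same")  # symmetric PSF, so no flip

lr = observe(hr_true) + rng.normal(0, 0.005, 128)
blur_metric = 0.3                              # stand-in for the Gabor-based metric
delta = 0.01 * (1 + blur_metric)               # adaptive bound from the blur metric

hr = np.repeat(lr, 2)                          # initial HR estimate
for _ in range(200):
    r = lr - observe(hr)
    corr = np.where(np.abs(r) > delta, r - np.sign(r) * delta, 0.0)
    hr += backproject(corr)                    # project only out-of-bound residuals
print(f"max LR residual: {np.abs(lr - observe(hr)).max():.4f} (bound {delta:.4f})")
```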

Article
Data Field Modeling and Spectral-Spatial Feature Fusion for Hyperspectral Data Classification
by Da Liu and Jianxun Li
Sensors 2016, 16(12), 2146; https://doi.org/10.3390/s16122146 - 16 Dec 2016
Cited by 9 | Viewed by 4173
Abstract
Classification is a significant subject in hyperspectral remote sensing image processing. This study proposes a spectral-spatial feature fusion algorithm for the classification of hyperspectral images (HSI). Unlike existing spectral-spatial classification methods, the influences and interactions of the surroundings on each measured pixel were taken into consideration. Data field theory was employed as the mathematical realization of the field theory concept in physics, and both the spectral and spatial domains of the HSI were considered as data fields, so that the inherent dependency of interacting pixels was modeled. Using data field modeling, spatial and spectral features were transformed into a unified radiation form and further fused into a new feature by a linear model. In contrast to current spectral-spatial classification methods, which usually simply stack spectral and spatial features together, the proposed method builds an inner connection between the spectral and spatial features and explores the hidden information that contributes to classification; therefore, new information is included for classification. The final classification result was obtained using a random forest (RF) classifier. The proposed method was tested on the University of Pavia and Indian Pines datasets, two well-known standard hyperspectral datasets. The experimental results demonstrate that the proposed method achieves higher classification accuracies than traditional approaches. Full article
(This article belongs to the Special Issue Multi-Sensor Integration and Fusion)
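The data-field construction can be sketched for one pixel: each neighbor radiates a potential phi = m · exp(-(d/sigma)^2) toward the measured pixel, and the aggregated potential is linearly fused with the pixel's own spectral value. Patch values, sigma, and the fusion weight are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
patch = rng.random((5, 5))                 # band values around the center pixel
yy, xx = np.mgrid[-2:3, -2:3]
d = np.hypot(yy, xx)                       # spatial distance to the center
sigma = 1.5
phi = patch * np.exp(-(d / sigma) ** 2)    # data-field potentials of neighbors
spatial_feature = phi.sum()                # aggregated spatial "radiation"

spectral_feature = patch[2, 2]             # the pixel's own band value
alpha = 0.6                                # linear fusion weight (assumed)
fused = alpha * spectral_feature + (1 - alpha) * spatial_feature
print(f"fused feature: {fused:.3f}")
```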

Other


Correction
LiDAR-IMU Time Delay Calibration Based on Iterative Closest Point and Iterated Sigma Point Kalman Filter. Sensors 2017, 17, 539
by Wanli Liu
Sensors 2017, 17(12), 2821; https://doi.org/10.3390/s17122821 - 5 Dec 2017
Cited by 2 | Viewed by 3064
Abstract
The IMU consists of three gyros and three accelerometers [...]
Full article
(This article belongs to the Special Issue Multi-Sensor Integration and Fusion)
Technical Note
Utilization of a Terrestrial Laser Scanner for the Calibration of Mobile Mapping Systems
by Seunghwan Hong, Ilsuk Park, Jisang Lee, Kwangyong Lim, Yoonjo Choi and Hong-Gyoo Sohn
Sensors 2017, 17(3), 474; https://doi.org/10.3390/s17030474 - 27 Feb 2017
Cited by 27 | Viewed by 8606
Abstract
This paper proposes a practical calibration solution for estimating the boresight and lever-arm parameters of the sensors mounted on a Mobile Mapping System (MMS). On our MMS, devised for conducting the calibration experiment, three network video cameras, one mobile laser scanner, and one Global Navigation Satellite System (GNSS)/Inertial Navigation System (INS) were mounted. The geometric relationships among the three sensors were solved by the proposed calibration, considering the GNSS/INS as one unit sensor. Our solution uses the point cloud generated by a 3-dimensional (3D) terrestrial laser scanner rather than conventionally obtained 3D ground control features. With the terrestrial laser scanner, accurate and precise reference data could be produced, and the plane features corresponding to the sparse mobile laser scanning data could be determined with high precision. Furthermore, corresponding point features could be extracted from the dense terrestrial laser scanning data and the images captured by the video cameras. The boresight and lever-arm parameters were calculated based on a least squares approach, and precisions of 0.1 degrees for the boresight and 10 mm for the lever-arm were achieved. Full article
(This article belongs to the Special Issue Multi-Sensor Integration and Fusion)
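The least-squares core of the boresight/lever-arm estimation can be sketched as a rigid-body fit between corresponding points in the sensor frame and the TLS reference frame (a Kabsch/Procrustes solution; the paper's full procedure also uses plane features). The true rotation and lever-arm below are invented to generate test data.

```python
import numpy as np
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(9)
ref = rng.random((100, 3)) * 10                     # TLS reference points
R_true = Rotation.from_euler("xyz", [0.5, -0.3, 1.2], degrees=True).as_matrix()
t_true = np.array([0.15, -0.05, 0.30])              # "true" lever-arm (illustrative)
mob = (ref - t_true) @ R_true + rng.normal(0, 0.005, ref.shape)  # sensor-frame pts

# Kabsch: center both sets, SVD of the cross-covariance, recover R then t.
c_ref, c_mob = ref.mean(0), mob.mean(0)
H = (mob - c_mob).T @ (ref - c_ref)
U, _, Vt = np.linalg.svd(H)
D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
R_est = Vt.T @ D @ U.T                              # maps sensor frame -> reference
t_est = c_ref - R_est @ c_mob
print("boresight (deg):",
      np.round(Rotation.from_matrix(R_est).as_euler("xyz", degrees=True), 3))
print("lever-arm  (m):", np.round(t_est, 3))
```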
