Concept Paper

Research and Conceptual Design of Sensor Fusion for Object Detection in Dense Smoke Environments

School of Mechanical Automotive Engineering, Kyungil University, Gyeongsan 38428, Korea
Appl. Sci. 2022, 12(22), 11325; https://doi.org/10.3390/app122211325
Submission received: 22 October 2022 / Revised: 5 November 2022 / Accepted: 6 November 2022 / Published: 8 November 2022

Abstract

In this paper, we propose a conceptual framework for a sensor fusion system that can detect objects in a dense smoke environment with a visibility of less than 1 m. Based on the review of several articles, we determined that by using a single thermal IR camera, a single Frequency-Modulated Continuous-Wave (FMCW) radar, and multiple ultrasonic sensors simultaneously, the system can overcome the challenges of detecting objects in dense smoke. The four detailed methods proposed are as follows: First, a 3D ultrasonic sensor system that detects the 3D position of an object at a short distance and is not affected by temperature change/gradient; Second, detecting and classifying objects such as walls, stairs, and other obstacles using a thermal IR camera; Third, a 2D radial distance measurement method for a distant object using an FMCW radar; Fourth, sensor fusion for 3D position visualization of multiple objects using a thermal IR camera, 3D ultrasonic sensor system, and FMCW radar. Finally, a conceptual design is presented based on the proposed methodologies, and their theoretical usefulness is discussed. The framework is intended to motivate future research on the development of a sensor fusion system for object detection in dense smoke environments.

1. Introduction

According to several reports [1,2,3,4,5], about 60% of fire accidents occur in indoor environments, and more than 70% of firefighter deaths and injuries are caused by smoke inhalation, burns, and being trapped in indoor fires. In particular, the extremely poor visibility in dense smoke, typically less than 1 m, triggers panic in firefighters and restricts even their normal behavior [6,7,8,9]. These facts motivated research on a sensor-fusion-based visual aid system that can provide useful visualization information to firefighters, as shown in Figure 1.
Various robotic and visual assistance platforms have been developed and studied to enable firefighters to perform their missions successfully in these situations [10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28]. Most of these studies on firefighting robots and firefighter visual aids have used visual and IR night-vision cameras as perception sensors. Because of their short operating wavelengths, these sensors are not suitable for dense smoke environments, and such cameras also cannot cope with the high temperatures of dense smoke. Therefore, in those studies, a temperature sensor was used to compensate for these shortcomings.
Dense smoke contains various combustion gases such as CO2, CO, H2S, SO2, NH3, HCN, COCl2, and HCl. Because these gases combine with water vapor, the absorption, reflection, and scattering of radiant energy in dense smoke differ from those in normal environments and are also very different from the characteristics of white smoke. Dense smoke additionally creates a high temperature gradient in the indoor air space. Under these conditions, a sensor with a longer wavelength is advantageous for environmental perception. Therefore, thermal IR cameras, FMCW radars, and ultrasonic sensors are more suitable for use in dense smoke environments than visual and/or IR night-vision cameras because of their longer wavelengths, as shown in Figure 2 [29].
For these reasons, several studies have attempted to detect objects in dense smoke environments by combining thermal IR cameras, FMCW radars, or ultrasonic sensors, individually or in combination with other general environmental sensors, as shown in Table 1. However, these studies are insufficient for firefighting support due to the limitations and characteristics given in Table 1 and Figure 3 [30].
Additionally, although there are several studies on sensor fusion for firefighting robots [31,32,33,34,35], these studies are related not to object detection in dense smoke environments but to flame detection or navigation in a general fire environment. In other words, studies on sensor fusion for object detection in dense smoke environments are very rare.
As an alternative to existing studies, we propose the novel concept of a sensor fusion system that can provide 3D position information on objects with a visibility of less than 1 m. This system uses a single thermal IR camera, a single FMCW radar, and multiple ultrasonic sensors simultaneously to maximize the strengths of each sensor, as shown in Figure 2, as well as compensate for its weaknesses. Based on this sensor fusion, this paper contributes to enabling new perspectives and methodologies that can detect and visualize indistinguishable objects in dense smoke environments as follows: (1) 3D global position estimation for multiple objects through two types of 2D information and one type of 3D information, (2) application of image transformation theory to general data information transformation, (3) 3D position estimation of an object using 1D ultrasonic sensor information, (4) extension of the directional angle and detection area of the ultrasonic sensor system, (5) rejection of measurement error from ultrasonic sensors caused by temperature change in dense smoke, (6) object detection and classification through image processing from a thermal IR camera. As a result, the system presented in this paper can provide visualized information about objects in a dense smoke environment to firefighters or firefighting robots, as shown in Figure 1.
The remainder of this paper is organized as follows. Section 2 presents the theoretical methodologies of the proposed sensor fusion for object detection using ultrasonic sensors, an FMCW radar, and a thermal IR camera. Section 3 describes the conceptual H/W and S/W design used to implement the proposed methodologies. Finally, Section 4 draws conclusions and gives an outlook on future work.

2. Theoretical Methodology for Object Detection Using Ultrasonic Sensors, FMCW Radar, and Thermal IR Camera

2.1. Principle of 3D Position Measurement Using Ultrasonic Sensors: 3D Ultrasonic Sensor Module

Figure 4 shows the configuration of ultrasonic sensors, called the “3D ultrasonic sensor module” in this paper, for measuring the position of an object in three-dimensional space. Let d1, d2, d3, and d4 be the measured distances to an object from the sensors located at (a, 0, 0), (0, 0, 0), (0, b, 0), and (0, −b, 0) in the coordinate system (x, y, z), respectively. The distance d* measured by the ultrasonic sensor with index * is given by
$d_* = \dfrac{c_t \, \Delta t_*}{2}, \quad c_t = 331.3 + 0.606\,T~\mathrm{[m/s]}$ (1)
where ct is the speed of sound, Δt* is the round-trip ultrasonic propagation time, called Time-of-Flight (TOF), between sensor * and the object, and T is the ambient temperature. Using Equation (1) and geometric information, the three-dimensional position (px, py, pz) of the object can be derived as follows
$p_x = \dfrac{d_2^2 - d_1^2}{2a} + \dfrac{a}{2}, \quad p_y = \dfrac{d_2^2 - d_3^2}{2b} + \dfrac{b}{2}, \quad p_z = \sqrt{d_2^2 - p_x^2 - p_y^2}$ (2)
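To make the computation concrete, the following Python sketch implements Equations (1) and (2); the function name, default geometry, and assumed temperature are illustrative and not part of the proposed system.

```python
import numpy as np

def position_from_tof(dt, a=0.1, b=0.1, T=20.0):
    """3D position from the round-trip TOFs of sensors 1-4 (Equations (1)-(2)).

    dt   : [dt1, dt2, dt3, dt4] round-trip times in seconds (dt4 is only needed
           for the temperature-independent form in Section 2.1.1)
    a, b : sensor offsets in metres (Figure 4); T : assumed ambient temperature in deg C
    """
    c_t = 331.3 + 0.606 * T                      # speed of sound, Equation (1)
    d1, d2, d3, d4 = c_t * np.asarray(dt, float) / 2.0
    px = (d2**2 - d1**2) / (2 * a) + a / 2       # Equation (2)
    py = (d2**2 - d3**2) / (2 * b) + b / 2
    pz = np.sqrt(max(d2**2 - px**2 - py**2, 0.0))
    return px, py, pz
```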

2.1.1. Eliminating Measurement Errors Due to Temperature Changes

Note that ct is a function of T in Equation (1) and that, as described in the introduction, the temperature in dense smoke is generally very high and non-uniform, with steep gradients. Therefore, the three-dimensional position of an object derived from Equation (2) cannot be measured accurately in a dense smoke environment.
To avoid inaccuracies due to temperature effects on the ultrasonic sensors, we define the TOF-ratio coefficients κ [36] as
$\kappa_1 = \dfrac{\Delta t_1}{\Delta t_2} = \dfrac{d_1}{d_2}, \quad \kappa_3 = \dfrac{\Delta t_3}{\Delta t_2} = \dfrac{d_3}{d_2}, \quad \kappa_4 = \dfrac{\Delta t_4}{\Delta t_2} = \dfrac{d_4}{d_2}$ (3)
The measured κ1, κ3, and κ4 are related to the unknown position (px, py, pz) of the object as shown below
$(\kappa_1^2 - 1)(p_x^2 + p_y^2 + p_z^2) = -2a\,p_x + a^2$
$(\kappa_3^2 - 1)(p_x^2 + p_y^2 + p_z^2) = -2b\,p_y + b^2$
$(\kappa_4^2 - 1)(p_x^2 + p_y^2 + p_z^2) = 2b\,p_y + b^2$ (4)
Thus, the temperature-independent (px, py, pz) can be derived as
$p_x = \dfrac{2(\beta + 1)a^2 - 4\alpha\beta b^2}{4a(\beta + 1)}$
$p_y = \dfrac{b(1 - \beta)}{2(\beta + 1)}$
$p_z = \sqrt{\dfrac{-2a\,p_x + a^2 - (\kappa_1^2 - 1)(p_x^2 + p_y^2)}{\kappa_1^2 - 1}}, \quad \text{if } \kappa_1 \neq 1$
$p_z = \sqrt{\dfrac{-2b\,p_y + b^2 - (\kappa_3^2 - 1)(p_x^2 + p_y^2)}{\kappa_3^2 - 1}}, \quad \text{if } \kappa_3 \neq 1$
$p_z = \sqrt{\dfrac{2b\,p_y + b^2 - (\kappa_4^2 - 1)(p_x^2 + p_y^2)}{\kappa_4^2 - 1}}, \quad \text{if } \kappa_4 \neq 1$ (5)
where α and β are defined as
$\alpha = \dfrac{\kappa_1^2 - 1}{\kappa_3^2 - 1}, \quad \beta = \dfrac{\kappa_3^2 - 1}{\kappa_4^2 - 1}$ (6)
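A minimal sketch of the temperature-independent computation in Equations (3)–(6) is given below, assuming κ1, κ3, κ4 ≠ 1 and using the κ1 branch for pz; the function name and defaults are illustrative.

```python
import numpy as np

def position_from_tof_ratios(dt, a=0.1, b=0.1):
    """Temperature-independent 3D position from TOF ratios (Equations (3)-(6))."""
    dt1, dt2, dt3, dt4 = dt
    k1, k3, k4 = dt1 / dt2, dt3 / dt2, dt4 / dt2           # Equation (3)
    alpha = (k1**2 - 1) / (k3**2 - 1)                      # Equation (6)
    beta = (k3**2 - 1) / (k4**2 - 1)
    px = (2 * (beta + 1) * a**2 - 4 * alpha * beta * b**2) / (4 * a * (beta + 1))  # Equation (5)
    py = b * (1 - beta) / (2 * (beta + 1))
    pz_sq = (-2 * a * px + a**2 - (k1**2 - 1) * (px**2 + py**2)) / (k1**2 - 1)     # kappa_1 branch
    return px, py, np.sqrt(max(pz_sq, 0.0))
```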

2.1.2. Extension of Directional Angle of View and Detection Area: 3D Ultrasonic Sensor System

In the 3D ultrasonic sensor module of Figure 4, the angle of view and the detection area exist in only one direction, as shown in Figure 5a,b due to the characteristics of the ultrasonic sensor. This disadvantage can be improved upon by adding additional sensor modules on top, bottom, left, and right, as shown in Figure 5c,d, called the “3D ultrasonic sensor system” in this paper.
In Figure 5, each detection area from #1 to #5 has its own local coordinate frame, from {1} to {5}, respectively. In these frames, the z-axes represent directional angles of view at the corresponding detection areas. Each sensor module measures the relative position of an object based on its own local coordinate frame. For example, the object in detection area #2 is at (px2, py2, pz2) of frame {2} and the object in detection area #3 is at (px3, py3, pz3) of frame {3}.
Therefore, all local positions have to be converted to global positions (pxs, pys, pzs) in the global coordinate frame {S} of Figure 5. This transformation utilizes a homogeneous transformation matrix. Let i = 1, 2, ⋯, 5; then the homogeneous transformation matrix $T_s^i$ from frame {S} to frame {i} is derived as
$T_s^1 = I_{4 \times 4}, \quad T_s^2 = \begin{bmatrix} \cos\theta & 0 & \sin\theta & \frac{l}{2}(1+\cos\theta) \\ 0 & 1 & 0 & 0 \\ -\sin\theta & 0 & \cos\theta & \frac{l}{2}\sin\theta \\ 0 & 0 & 0 & 1 \end{bmatrix}, \quad T_s^3 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta & \frac{l}{2}(1+\cos\theta) \\ 0 & \sin\theta & \cos\theta & \frac{l}{2}\sin\theta \\ 0 & 0 & 0 & 1 \end{bmatrix}$
$T_s^4 = \begin{bmatrix} \cos\theta & 0 & -\sin\theta & -\frac{l}{2}(1+\cos\theta) \\ 0 & 1 & 0 & 0 \\ \sin\theta & 0 & \cos\theta & \frac{l}{2}\sin\theta \\ 0 & 0 & 0 & 1 \end{bmatrix}, \quad T_s^5 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta & \sin\theta & -\frac{l}{2}(1+\cos\theta) \\ 0 & -\sin\theta & \cos\theta & \frac{l}{2}\sin\theta \\ 0 & 0 & 0 & 1 \end{bmatrix}$ (7)
Using Equation (7), a relative position Pi with respect to frame {i} can be transformed into a global position Ps with respect to frame {S} as
$P_s = T_s^i\, P_i$ (8)
where $P_s = [\,p_{xs} \;\; p_{ys} \;\; p_{zs} \;\; 1\,]^T$ and $P_i = [\,p_{xi} \;\; p_{yi} \;\; p_{zi} \;\; 1\,]^T$.
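The sketch below applies Equations (7) and (8) to map a local detection into the global frame {S}; the helper name and the sign conventions of the side-module transform are illustrative assumptions that depend on the exact layout in Figure 5, and the module size l is assumed.

```python
import numpy as np

def T_side_module(theta, l):
    """Homogeneous transform of a side module rotated about the y-axis (cf. Equation (7))."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[  c, 0.0,   s, (l / 2) * (1 + c)],
                     [0.0, 1.0, 0.0, 0.0],
                     [ -s, 0.0,   c, (l / 2) * s],
                     [0.0, 0.0, 0.0, 1.0]])

# Map a detection (px2, py2, pz2) measured in frame {2} into frame {S}, Equation (8)
theta, l = np.deg2rad(45.0), 0.2        # theta = 45 deg per Section 3; l = 0.2 m is an assumed module size
P_2 = np.array([0.1, 0.0, 1.5, 1.0])    # homogeneous local position [px2, py2, pz2, 1]
P_s = T_side_module(theta, l) @ P_2
```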

2.2. Thermal IR Image Processing: Object Detection

Most indoor fire sites are composed of typical structures such as walls, stairs, and floors, as well as atypical objects such as various pieces of furniture and broken or burnt objects. Therefore, thermal IR images taken at the sites show a mixture of all objects indoors.
Appropriate image processing needs to be applied to these images (called query images) to distinguish between objects that may be obstacles and those that are not. Figure 6 shows an image processing overview for this distinction. This processing is based on the object detection method using a basic Canny algorithm and Hough transformation.
After obtaining the query image from image sequences of the thermal IR camera, calibration and rectification are applied to the image for pre-processing of edge detection. The pre-processed image is utilized to detect edges using the Canny algorithm. We chose the Canny algorithm for the following reasons [37,38]:
The maximum number of edges can be detected.
It gives very good results for detecting horizontal and vertical edges.
Circular and corner edges can be detected.
Good localization.
Only one response to a single edge.
Edge images are used to establish obstacle boundaries and planes for walls, stairs, and floors. By using Hough transformation to extract line features from these edge images, we classify vertical and horizontal lines as features of walls, stairs, and floors. After this, for object detection, we collect groups of objects such as walls, stairs, and floors using the image correlation technique of template matching.
Image correlation using template matching is detailed in ref. [39]. The correlation is quantified by a correlation coefficient, which is obtained by sliding the template over the query image and computing the sum of products at each location. The correlation coefficient reaches its maximum where the template and the corresponding region of the query image are identical; this maximum correlation represents the best possible match. Conversely, the correlation is minimal where the template and query regions are most dissimilar.
After this template matching, everything except walls, stairs, and floors is considered an object. From these objects, small objects are discarded, and large objects are identified as obstacles because they can interfere with the heading path of robots or firefighters. Finally, a display shows the thermal IR images with 2D locations (u, v) and the size box of an object (w, h) overlay.
Note that for better computational cost, we use a relatively simple object detection algorithm (template matching algorithm) rather than a complex one. To use a template matching algorithm, we assume that we have obtained prior (or reference) data by pre-scanning the designated fire hazard area (or the other normal conditions inside buildings) with various types of walls, stairs, and floors. In this case, the template matching algorithm can work more effectively because the performance of the algorithm depends on how much data can be obtained. Nevertheless, if prior data do not fit appropriately in an environment, a traditional method such as SIFT [40] or a deep learning model such as YOLO [41] can be used instead of a template matching algorithm because these algorithms can provide better performance. However, detailed discussions about these algorithms themselves are outside the scope of this paper.
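As an illustration of the pipeline in Figure 6, the following OpenCV sketch chains Canny edge detection, Hough line extraction, and template matching; the thresholds, parameter values, and function name are illustrative assumptions rather than settings of the proposed system.

```python
import cv2
import numpy as np

def detect_structures(query_img, structure_templates, match_thresh=0.7):
    """Sketch of Figure 6: Canny edges, Hough lines, and template matching.

    query_img           : rectified 8-bit thermal IR image
    structure_templates : pre-scanned templates of walls, stairs, and floors (Section 2.2)
    """
    edges = cv2.Canny(query_img, 50, 150)                      # Canny edge detection
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=30, maxLineGap=10)   # vertical/horizontal line features
    structure_boxes = []
    for templ in structure_templates:                          # template matching
        res = cv2.matchTemplate(query_img, templ, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(res)
        if max_val > match_thresh:                             # best-possible-match criterion
            h, w = templ.shape[:2]
            structure_boxes.append((max_loc[0], max_loc[1], w, h))
    # Regions not matched to known structures would be treated as candidate obstacles (u, v, w, h)
    return lines, structure_boxes
```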

2.3. Principle of 2D Radial Position Measurement Using FMCW Radar

We use an FMCW radar that outputs a radial distance R and an angle θ. The following subsections present how these two quantities are measured according to the radar principle.

2.3.1. Range Measurement for an Object Target

Figure 7a shows the basic operation of the FMCW radar system. The transceiver of the radar system transmits a frequency-modulated signal (fTX) to an object, and then the receiver of the radar system obtains the reflected signal (fRX) from the object. The transmitted and received chirp signal can be expressed as
$f_{TX} = f_0 + \dfrac{B}{t_{chirp}}\,t$ (9)
$f_{RX} = f_0 + \dfrac{B}{t_{chirp}}\,(t - \Delta t), \quad \Delta t \le t \le t_{chirp}$ (10)
where f0, tchirp, and B are the starting sweep frequency, the frequency sweep time, and the frequency sweep bandwidth of the transmitted signal, respectively. The time delay (Δt) caused by the round trip of the transmitted signal to the object and back is
$\Delta t = \dfrac{2R}{c_l}$ (11)
where cl is the speed of light in vacuum (3 × 10⁸ m/s).
After this, the intermediate frequency (Δf) can be generated by mixing the transmitted and received signals, and Δf can be used to estimate the range of the object. The distance R of the object is then derived as
$R = \dfrac{c_l\, t_{chirp}\, \Delta f}{2B}$ (12)
If there are n objects in front of the FMCW radar system, Equation (12) can be expressed as
$R_i = \dfrac{c_l\, t_{chirp}\, \Delta f_i}{2B}, \quad i = 1, 2, 3, \ldots, n$ (13)
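Equations (12) and (13) reduce to a single scaling of the measured beat frequency. The sketch below uses the bandwidth and sweep time listed for the radar in Table 2 as defaults; the function name is illustrative.

```python
C_L = 3.0e8  # speed of light in vacuum [m/s]

def fmcw_range(delta_f_hz, bandwidth_hz=200e6, t_chirp_s=300e-6):
    """Radial distance from the intermediate (beat) frequency, Equations (12)-(13)."""
    return C_L * t_chirp_s * delta_f_hz / (2.0 * bandwidth_hz)

# Multiple objects simply map their individual beat frequencies to ranges, Equation (13)
ranges = [fmcw_range(df) for df in (2e3, 5e3, 10e3)]
```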

2.3.2. Angle Measurement

As shown in Figure 7b, the angle θ, called the angle of arrival (AoA) of the reflected signal in the horizontal plane, can be estimated using at least two RX antennas in the FMCW radar system. The differential distance (ΔR) from the target object to the two RX antennas causes a phase shift (Δφ) given by
$\Delta\varphi = \dfrac{2\pi\,\Delta R}{\lambda}$ (14)
where λ is the wavelength of the FMCW radar. The phase shift can also be measured from the FFT peak of two received signals. Assuming the system is constructed with a basic planar wavefront geometry, and the distance between the RX antennas is L, the differential distance (ΔR) is related to the angle θ as
$\Delta R = L \sin\theta$ (15)
Thus, the angle θ can be estimated from the measured Δφ, using
$\theta = \sin^{-1}\!\left(\dfrac{\lambda\,\Delta\varphi}{2\pi L}\right)$ (16)
If there are n objects in front of the FMCW radar system, Equation (16) can be expressed as
$\theta_i = \sin^{-1}\!\left(\dfrac{\lambda\,\Delta\varphi_i}{2\pi L}\right), \quad i = 1, 2, 3, \ldots, n$ (17)
Note that, owing to the nonlinear relationship between Δφ and sin θ, the angles that can be estimated with the two antennas are limited by the maximum viewing angle of
$\theta_{\max} = \sin^{-1}\!\left(\dfrac{\lambda}{2L}\right)$ (18)
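The following sketch evaluates Equations (16) and (18). The 24 GHz carrier from Table 2 gives a wavelength of about 12.5 mm; the half-wavelength antenna spacing is an assumption for illustration, not a value given in this paper.

```python
import numpy as np

WAVELENGTH_M = 3.0e8 / 24e9          # ~12.5 mm at the 24 GHz carrier listed in Table 2

def aoa_from_phase(delta_phi_rad, antenna_spacing_m=WAVELENGTH_M / 2):
    """Angle of arrival from the measured phase shift, Equation (16)."""
    theta = np.arcsin(WAVELENGTH_M * delta_phi_rad / (2.0 * np.pi * antenna_spacing_m))
    theta_max = np.arcsin(min(WAVELENGTH_M / (2.0 * antenna_spacing_m), 1.0))  # Equation (18)
    return theta, theta_max
```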

2.4. Data Fusion

2.4.1. Overview

The proposed system consists of a 3D ultrasonic sensor system, a thermal IR camera system, and an FMCW radar system. This means that each piece of information from each sensor system is measured in its own coordinate system. Therefore, this measured information must be transformed into a single global coordinate or common plane for sensor fusion. Since the information measured in each of the three types of sensor systems can be regarded as image information taken from different perspectives of the same object, such a transformation can be accomplished with the calibration methods covered by image transformation theory. Therefore, for this transformation, we estimate the homography matrix shown in Figure 8.
Adopting the calibration method proposed in ref. [42,43], we simply estimate the homography describing the transformation between the image plane and the radar plane or ultrasound plane, as shown in Figure 8. By using the calibration results and data mapping to the common plane, the proposed sensor system can visualize the information of objects, such as size and position, measured by the FMCW radar and 3D ultrasonic sensor system in an image sequence.

2.4.2. Calibration between Thermal IR Camera and 3D Ultrasonic Sensor System or FMCW Radar System

As shown in Figure 8, (xs, ys, zs), (xr, yr, zr), and (xc, yc, zc) are 3D ultrasonic sensor system coordinates, FMCW radar coordinates, and thermal IR camera coordinates. These coordinates are denoted as the coordinate frames {S}, {R}, and {C}, respectively, and (u, v) is the image plane coordinate system. Additionally, (pxs, pys, pzs) and (rx, ry, rz) are the positions of near and far objects as measured by the 3D ultrasonic sensor system and the FMCW radar system, respectively. These positions should correspond to (us, vs) and (ur, vr) detected by image processing of the thermal IR camera image for short- and long-range objects, respectively.
In order to map (pxs, pys, pzs) to (us, vs), we use the transformation equation between {S} and (u, v) as
$\omega_s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = H \begin{bmatrix} x_s \\ y_s \\ z_s \end{bmatrix}$ (19)
where H = [hij], i, j = 1, 2, 3, is a 3 × 3 homography matrix and ωs is an unknown constant. By estimating H, we determine the transformation between the ultrasonic plane (Пs) and the image plane (Пi). Although ωs is unknown, we can assume that the objects are equidistant in the two planes, so ωs = zs. This means that H is an affine homography matrix with h31 = h32 = 0 and h33 = 1. Then, we obtain two algebraic equations:
$u = \dfrac{h_{11}}{z_s}x_s + \dfrac{h_{12}}{z_s}y_s + h_{13}$
$v = \dfrac{h_{21}}{z_s}x_s + \dfrac{h_{22}}{z_s}y_s + h_{23}$ (20)
Note that zs is not a constant, but we can treat it as constant within each image frame, especially for static objects. Thus, from Equation (20), we obtain the following homogeneous linear least-squares problem:
$A_u H_s = 0, \quad A_v H_s = 0$
$A_u = \begin{bmatrix} x_s & y_s & 1 & 0 & 0 & 0 & 0 & 0 & -u \end{bmatrix}$
$A_v = \begin{bmatrix} 0 & 0 & 0 & x_s & y_s & 1 & 0 & 0 & -v \end{bmatrix}$
$H_s = \begin{bmatrix} \frac{h_{11}}{z_s} & \frac{h_{12}}{z_s} & h_{13} & \frac{h_{21}}{z_s} & \frac{h_{22}}{z_s} & h_{23} & 0 & 0 & 1 \end{bmatrix}^T$ (21)
Then, using Singular Value Decomposition (SVD), we estimate the homography matrix H in Equation (19).
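A minimal sketch of this estimation step is shown below. It stacks the constraints of Equation (21) for several point correspondences, drops the columns that are identically zero in the affine case, and takes the SVD null vector; the function name and the reduced seven-column formulation are our own illustrative choices.

```python
import numpy as np

def estimate_affine_homography(points_src, points_uv):
    """Affine homography of Equation (19) (or (22)) from point correspondences via SVD.

    points_src : Nx2 source-plane coordinates, e.g. (xs, ys) for the ultrasonic case
                 or (yr, zr) for the radar case
    points_uv  : Nx2 corresponding image coordinates (u, v)
    """
    rows = []
    for (x, y), (u, v) in zip(points_src, points_uv):
        rows.append([x, y, 1, 0, 0, 0, -u])   # A_u constraint of Equation (21), zero columns dropped
        rows.append([0, 0, 0, x, y, 1, -v])   # A_v constraint
    A = np.asarray(rows, dtype=float)
    _, _, vt = np.linalg.svd(A)
    h = vt[-1] / vt[-1][-1]                   # null vector, scaled so the last entry is 1
    return np.array([[h[0], h[1], h[2]],
                     [h[3], h[4], h[5]],
                     [0.0,  0.0,  1.0]])
```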
To match (rx, ry, rz) to (ur, vr), we use the same approach described in Equations (19) to (21). The corresponding radius R and angle θ are converted to Cartesian coordinates using yr = Rsinθ and zr = Rcosθ. Additionally, since all FMCW radar data comes from somewhere in the radar plane (Пr), we can take xr = 0. Hence, the transformation between {R} and (u, v) becomes
$\omega_r \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = M \begin{bmatrix} y_r \\ z_r \\ 1 \end{bmatrix}$ (22)
where M = [mij], i, j = 1, 2, 3, is a 3 × 3 homography matrix and ωr is an unknown constant. By estimating M, we determine the transformation between the radar plane (Пr) and the image plane (Пi). Assuming an affine homography, we set m31 = m32 = 0 and m33 = 1. Thus, ωr = 1, and we obtain the following two algebraic equations:
$u = m_{11} y_r + m_{12} z_r + m_{13}$
$v = m_{21} y_r + m_{22} z_r + m_{23}$ (23)
Thus, from Equation (23), we obtain the homogeneous linear least-squares problem
$B_u M_r = 0, \quad B_v M_r = 0$
$B_u = \begin{bmatrix} y_r & z_r & 1 & 0 & 0 & 0 & 0 & 0 & -u \end{bmatrix}$
$B_v = \begin{bmatrix} 0 & 0 & 0 & y_r & z_r & 1 & 0 & 0 & -v \end{bmatrix}$
$M_r = \begin{bmatrix} m_{11} & m_{12} & m_{13} & m_{21} & m_{22} & m_{23} & 0 & 0 & 1 \end{bmatrix}^T$ (24)
Then, we estimate the homography matrix M in Equation (22) using Singular Value Decomposition (SVD).

3. Conceptual Design Based on the Theoretical Methodology

Figure 9 shows the overall block diagram of the sensor fusion method described in Section 2. Two types of homography matrices are estimated by calibrating data obtained from the thermal IR camera, the FMCW radar system, and the 3D ultrasonic sensor system. We use the estimated matrices to map the data sets of (u, v) corresponding to (xs, ys, zs) and (xr, yr, zr) onto a common plane. The object information is then visualized and displayed along with the updated (u, v) sets of the objects and their distances.
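The mapping step in Figure 9 could be sketched as follows, assuming the homography matrices H and M have been estimated as in Section 2.4.2; the function name and the overlay tuple layout are illustrative.

```python
import numpy as np

def project_to_image(H, M, ultra_points, radar_points):
    """Map ultrasonic and radar detections onto the common image plane (Figure 9).

    H : affine homography of Equation (19), ultrasonic plane -> image plane
    M : affine homography of Equation (22), radar plane -> image plane
    ultra_points : list of (xs, ys, zs) from the 3D ultrasonic sensor system
    radar_points : list of (R, theta) from the FMCW radar
    """
    overlays = []
    for xs, ys, zs in ultra_points:                      # near objects
        u, v, w = H @ np.array([xs, ys, zs])             # Equation (19), w = omega_s = zs
        overlays.append(("near", u / w, v / w, float(np.sqrt(xs**2 + ys**2 + zs**2))))
    for R, theta in radar_points:                        # far objects
        yr, zr = R * np.sin(theta), R * np.cos(theta)    # polar -> Cartesian (Section 2.4.2)
        u, v, w = M @ np.array([yr, zr, 1.0])            # Equation (22), w = omega_r = 1
        overlays.append(("far", u / w, v / w, R))
    return overlays  # (label, u, v, distance) tuples to overlay on the thermal IR image
```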
To implement the methodology described in Section 2 and Figure 9, a hardware sensor system was conceptually designed, as shown in Figure 10. The conceptual system consists of five 3D ultrasonic modules (the green points in Figure 10), a single thermal IR camera (the red point in Figure 10), and a single FMCW radar system (the blue point in Figure 10). These three types of sensors correspond to the sensor blocks in Figure 9. Based on Equation (5), the distance between the ultrasonic sensors in each 3D ultrasonic module is 20 cm; this distance corresponds to the values of ‘a’ and ‘b’ in Figure 4. In addition, the angle θ between modules, shown in Figure 5, was selected as 45 degrees according to Equation (7). Specifications of the selected ultrasonic sensors, thermal IR camera, and FMCW radar system in Figure 9 and Figure 10 are given in Table 2. The presented sensor fusion system is currently under construction.

4. Future Research Directions and Conclusions

In this paper, we have presented, based on a review of several research papers, a conceptual sensor fusion framework for detecting objects even in dense smoke environments using a thermal IR camera, ultrasonic sensors, and an FMCW radar.
The proposed sensor fusion can provide new methodologies such as (1) 3D global position estimation for multiple objects through two types of 2D information and one type of 3D information, (2) application of image transformation theory to general data information transformation, (3) 3D position estimation of an object using 1D ultrasonic sensor information, (4) extension of the directional angle and detection area of the ultrasonic sensor system, (5) rejection of measurement error in ultrasonic sensors caused by temperature change in dense smoke, (6) object detection and classification through image processing of a thermal IR camera. The proposed system using these methodologies is expected to be able to detect objects even in environments with less than 1 m visibility, such as in dense smoke environments.
There are also some limitations to the proposed methodologies. Because thermal IR cameras cannot provide detailed images of the environment in the way RGB cameras can, the environmental information extracted from the images can be ambiguous. Above all, because the sensors will normally be used in the very high-temperature environment of dense smoke, object detection performance can be degraded by image saturation in the thermal IR camera or by error propagation in the ultrasonic sensors and the FMCW radar; such unwanted effects are typically caused by extremely high temperatures. The proposed sensor fusion system is therefore most dependable when sensors with the highest possible temperature ratings are used, but this is limited by implementation cost and technical constraints. Nevertheless, the proposed sensor fusion should be more useful than previous approaches because the selected sensors are the most appropriate for dense smoke environments, and the proposed methodologies maximize the strengths of each sensor while compensating for its weaknesses.
In future work, we will complete the hardware and software shown in Figure 9 and Figure 10 and the experimental setup needed to validate the theoretical methodology proposed in this paper. We will then evaluate the performance under various experimental conditions and identify further limitations of the proposed system. The obtained results will also need to be compared with various existing methods. In addition, the computational cost will be studied, because reducing computational cost while maintaining or improving algorithm performance is essential.
In the theoretical analysis presented in this paper, the detection error of the ultrasonic sensors and the radar is assumed to be zero. In reality, however, the detection error depends on the specifications of the sensors chosen for a given application. Therefore, after completing the hardware setup, we need to study how much detection error from the ultrasonic and radar sensors is acceptable.
Most of all, the operating temperature range of typical electronic devices, including the sensors used in this paper, is about −40 to 100 degrees Celsius, whereas the temperature of dense smoke can exceed 500 degrees Celsius. Consequently, the sensors cannot operate normally at such high temperatures, and all fire-disaster applications, such as firefighting robots and firefighter aids, require cooling systems and/or heat-resistance or insulation countermeasures. Cooling and heat-resistance/insulation systems are separate research topics in the field of firefighting platforms; the scope of this paper is not a detailed study of such systems but of how to use sensors for object detection in dense smoke environments. Therefore, we assume that the H/W platform to be developed in the future will be equipped with adequate protection against extremely high temperatures, and under this assumption we propose the sensor fusion concept using ultrasonic sensors, an FMCW radar, and a thermal IR camera.

Funding

This work was supported by the National Research Foundation of Korea (NRF), with a grant funded by the Korean government (MSIT) (No. 2021R1F1A1063895).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. National Fire Agency. 2022 National Fire Agency Statistical Yearbook in Korea; National Fire Agency: Seoul, Korea, 2022.
  2. National Fire Agency. 2021 Fire Statistical Yearbook in Korea; National Fire Agency: Seoul, Korea, 2022.
  3. Fahy, R.F.; Petrillo, J.T.; Molis, J.L. Firefighter Fatalities in the US—2019; NFPA Research; NFPA: Quincy, MA, USA, July 2020. [Google Scholar]
  4. Campbell, R.; Evarts, B. United States Firefighter Injuries in 2019; NFPA Research; NFPA: Quincy, MA, USA, November 2020. [Google Scholar]
  5. Ahrens, M.; Evarts, B. Fire Loss in the United States during 2019; NFPA Research; NFPA: Quincy, MA, USA, September 2020. [Google Scholar]
  6. Ronchi, E.; Gwynne, S.; Purser, D.; Colonna, P. Representation of the impact of smoke on agent walking speeds in evacuation models. Fire Technol. 2012, 49, 411–431. [Google Scholar] [CrossRef]
  7. Bryan, J.L. Behavioral response to fire and smoke. In SFPE Handbook of Fire Protection Engineering; SFPE: Washington, DC, USA, 2002; Volume 2. [Google Scholar]
  8. Wright, M.; Cook, G.; Webber, G. The effects of smoke on people’s walking Speeds using overhead lighting and Wayguidance provision. In Proceedings of the 2nd International Symposium on Human Behavior in Fire; MIT: Boston, MA, USA, 2001; pp. 275–284. [Google Scholar]
  9. Jin, T. Visibility through Fire Smoke Report (No. 42); Fire Research Institute of Japan: Tokyo, Japan, 1976. [Google Scholar]
  10. Chang, P.-H.; Park, K.-B.; Cho, G.-R.; Kim, J.-K.; Lee, W.-J. A Vision enhancement technique for remote control of fire fighting robots. In Proceedings of the 2007 KIFSE Fall Conference, Seoul, Korea, 15–16 July 2007; pp. 219–224. [Google Scholar]
  11. Li, S.; Feng, C.; Liang, X.; Qin, H.; Li, H.; Shi, L. A Guided Vehicle under Fire Conditions Based on a Modified Ultrasonic Obstacle Avoidance Technology. Sensors 2018, 18, 4366. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  12. Kim, J.H.; Keller, B.; Lattimer, B.Y. Sensor fusion based seek-and-find fire algorithm for intelligent firefighting robot. In Proceedings of the IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), Wollongong, Australia, 9–12 July 2013; pp. 1482–1486. [Google Scholar]
  13. McNeil, J.G.; Starr, J.; Lattimer, B.Y. Autonomous Fire Suppression Using Multispectral Sensors. In Proceedings of the 2013 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), Wollongong, Australia, 9–12 July 2013. [Google Scholar]
  14. Starr, J.W.; Lattimer, B.Y. Application of Thermal Infrared Stereo Vision in Fire Environments. In Proceedings of the 2013 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), Wollongong, Australia, 9–12 July 2013. [Google Scholar]
  15. Starr, J.W.; Lattimer, B.Y. A comparison of IR stereo vision and LIDAR for use in fire environments. In Proceedings of the 2012 IEEE Sensors, Taipei, Taiwan, 28–31 October 2012; pp. 1–4. [Google Scholar]
  16. Khoon, T.N.; Sebastian, P.; Saman, A.B.S. Autonomous Fire Fighting Mobile Platform. Procedia Eng. 2012, 41, 1145–1153. [Google Scholar] [CrossRef]
  17. Liljeback, P.; Stavdahl, O.; Beitnes, A. SnakeFighter-development of a water hydraulic firefighting snake robot. In Proceedings of the 2006 9th International Conference on Control, Automation, Robotics and Vision, Singapore, 5–8 December 2006; pp. 1–6. [Google Scholar]
  18. Hong, J.H.; Min, B.-C.; Taylor, J.M.; Raskin, V.; Matson, E.T. NL-based communication with firefighting robots. In Proceedings of the 2012 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Seoul, Korea, 14–17 October 2012; pp. 1461–1466. [Google Scholar]
  19. Tan, C.F.; Alkahari, M.; Rahman, A. Development of Ground Vehicle for Fire Fighting Purpose. In Proceedings of the Hari Penyelidikan, Malacca, Malaysia, 20 July 2011; pp. 78–81. [Google Scholar]
  20. Miyazawa, K. Fire robots developed by the Tokyo Fire Department. Adv. Robot. 2002, 16, 553–556. [Google Scholar] [CrossRef]
  21. Chang, P.H.; Kang, Y.H.; Cho, G.R.; Kim, J.H.; Jin, M.; Lee, J.; Jeong, J.H.; Han, D.K.; Jung, J.H.; Lee, W.-J.; et al. Control architecture design for a fire searching robot using task oriented design methodology. In Proceedings of the 2006 International Joint Conference on SICE-ICASE, Pusan, Korea, 18–21 October 2006; pp. 3126–3131. [Google Scholar]
  22. Kim, Y.D.; Kim, Y.G.; Lee, S.H.; Kang, J.H.; An, J. Portable fire evacuation guide robot system. In Proceedings of the 2009 IEEE/RSJ International Conference on IROS, St. Louis, MO, USA, 11–15 October 2009; pp. 2789–2794. [Google Scholar]
  23. Longo, D.; Muscato, G. CLAWAR WP3 Applications: Natural/Outdoor and Underwater Robots. In Climbing and Walking Robots: Proceedings of the 7th International Conference CLAWAR; Springer: Berlin/Heidelberg, Germany, 2005; pp. 1159–1170. [Google Scholar]
  24. Penders, J.; Alboul, L.; Witkowski, U.; Naghsh, A.; Saez-Pons, J.; Herbrechtsmeier, S.; El-Habbal, M. A robot swarm assisting a human fire-fighter. Adv. Robot. 2011, 25, 93–117. [Google Scholar] [CrossRef] [Green Version]
  25. Bertram, C.; Evans, M.H.; Javaid, M.; Stafford, T.; Prescott, T. Sensory augmentation with distal touch: The tactile helmet project. In Proceedings of the Biomimetic and Biohybrid Systems. Second international conference, Living Machines, London, UK, 29 July–2 August 2013; pp. 24–35. [Google Scholar]
  26. Rutherford, P. Auditory Navigation and the Escape from Smoke Filled Buildings. In Proceedings of the CAAD Futures 1997, Munchen, Germany, 4–6 August 1997; pp. 299–304. [Google Scholar]
  27. Kim, J.-H.; Starr, J.W.; Lattimer, B.Y. Firefighting Robot Stereo Infrared Vision and Radar Sensor Fusion for Imaging through Fire Smoke. Fire Technol. 2015, 51, 823–845. [Google Scholar] [CrossRef]
  28. Cho, M.-Y.; Shin, D.-I.; Jun, S. A Sensor module overcoming thick smoke through investigation of fire characteristics. J. Korea Robot. Soc. 2018, 13, 237–247. [Google Scholar] [CrossRef]
  29. Starr, J.W.; Lattimer, B.Y. Evaluation of Navigation Sensors in Fire Smoke Environments. Fire Technol. 2014, 50, 1459–1481. [Google Scholar] [CrossRef]
  30. Hsu, C.-P.; Li, B.; Solano-Rivas, B.; Gohil, A.R.; Chan, P.H.; Moore, A.D.; Donzella, V. A Review and Perspective on Optical Phased Array for Automotive LiDAR. IEEE J. Sel. Top. Quantum Electron. 2021, 27, 1–16. [Google Scholar] [CrossRef]
  31. Zhang, S.; Yao, J.; Wang, R.; Liu, Z.; Ma, C.; Wang, Y.; Zhao, Y. Design of intelligent fire-fighting robot based on multi-sensor fusion and experimental study on fire scene patrol. Robot. Auton. Syst. 2022, 154, 104122. [Google Scholar] [CrossRef]
  32. Bankapur, K.; Mathur, H.; Singh, H.; Harikrishnan, R.; Gupta, A. A Flame Sensor-Based Firefighting Assistance Robot with Simulation Based Multi-Robot Implementation. In Proceedings of the 2022 International Conference on Advances in Computing, Communication and Applied Informatics (ACCAI), Chennai, India, 28–29 January 2022; pp. 1–8. [Google Scholar]
  33. Reitbauer, E.; Schmied, C.; Didari, H. Subterranean positioning for a semi-autonomous robot supporting emergency task forces. In Proceedings of the 2022 International Conference on Localization and GNSS, Tampere, Finland, 7–9 June 2022; pp. 1–7. [Google Scholar]
  34. Chen, S.; Liu, X.; Zhang, T.; Yi, M. Research and design of intelligent fire-fighting robot based on machine vision detection technology. In Proceedings of the International Conference on Intelligent Systems, Communications, and Computer Networks (ISCCN 2022), Chengdu, China, 17–19 June 2022; pp. 117–125. [Google Scholar]
  35. Yu, W.B.; Xiong, Z.J.; Dong, Z.Q.; Wang, S.Y.; Li, J.Y.; Liu, G.P.; Liu, A.X. Zero-Error Coding via Classical and Quantum Channels in Sensor Networks. Sensors 2019, 19, 5071. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  36. Shen, M.; Wang, Y.; Jiang, Y.; Ji, H.; Wang, B.; Huang, Z. A New Positioning Method Based on Multiple Ultrasonic Sensors for Autonomous Mobile Robot. Sensors 2019, 20, 17. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  37. Canny, J. A Computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 1986, PAMI-8, 679–698. [Google Scholar] [CrossRef]
  38. Ushma, A.; Mohamed Shanavas, A.R. Object detection in image processing using edge detection techniques. IOSR J. Eng. 2014, 4, 10–13. [Google Scholar]
  39. Gonzalez, R.C.; Woods, R.E. Digital Image Processing, 4th ed.; Pearson Prentice Hall, International: Upper Saddle River, NJ, USA, 2018. [Google Scholar]
  40. Lowe, G. Sift-the scale invariant feature transform. Int. J. 2004, 2, 91–110. [Google Scholar]
  41. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar]
  42. Sugimoto, S.; Tateda, H.; Takahashi, H.; Okutomi, M. Obstacle detection using millimeter-wave radar and its visualization on image sequence. In Proceedings of the 17th International Conference on Pattern Recognition, Cambridge, UK, 23–26 August 2004; pp. 342–345. [Google Scholar]
  43. Kim, D.Y.; Jeon, M. Data fusion of radar and image measurements for multi-object tracking via Kalman filtering. Inf. Sci. 2014, 278, 641–652. [Google Scholar] [CrossRef]
Figure 1. Motivation: provide visual information to firefighters: (a) normal environment (everything can be seen), (b) object detection through sensor fusion to visualize objects in a dense smoke environment (nothing can be seen).
Figure 2. The relationship between visibility and wavelength of different perceptual sensors.
Figure 3. Characteristics of sensors suitable for use in dense smoke: thermal IR cameras, ultrasonic sensors, and radars.
Figure 4. 3D measurement configuration using ultrasonic sensors: 3D ultrasonic sensor module.
Figure 5. The angle of view and detection area of the ultrasonic sensor system: (a,b) side and front views, respectively, of a single 3D ultrasonic sensor module; (c,d) side and top cross-sectional views, respectively, of multiple 3D ultrasonic sensor modules.
Figure 6. Overview of thermal IR image processing (u, v, w, h: positions along the x and y axes, width, and height of an object, respectively).
Figure 7. (a) Basic operation of the FMCW radar system; (b) two-RX-antenna configuration for estimating the AoA.
Figure 8. Sensor geometry and transformation.
Figure 9. Overall block diagram of the proposed sensor fusion.
Figure 10. Concept of hardware implementation.
Table 1. Research on detecting objects in dense smoke and its limitations.

| Studies | Used Sensors | Limitations | Advantages |
| --- | --- | --- | --- |
| [11] | Ultrasonic sensor | Impossible to measure 3D position; measurement error increases with temperature variation/gradient | Very simple H/W and S/W implementation |
| [12,13] | Single thermal IR camera + Lidar + UV sensor + normal camera | Impossible to measure distance using only a single camera (compensated by other sensors); other sensors inappropriate for use in dense smoke | Powerful visualization in white smoke and normal environments |
| [14,15,21] | Stereo thermal IR camera | Ghost effect due to stereo misalignments; long full-resolution image processing time; object detection failure due to high-temperature thermal saturation | Simple sensor fusion algorithm possible because a single type of sensor is used |
| [27] | Stereo thermal IR camera + Radar | Same as above; the longer the distance, the larger the measurement error; continued short-range object detection failure | Long-distance accuracy increased by radar; relatively simple sensor fusion algorithm possible |
| [28] | RGB camera + Radar + Stereo thermal IR camera | The same problems as in [14,15,21,27]; RGB camera usable only in white, low-temperature smoke | Powerful visualization in white smoke and normal environments |
Table 2. The specifications of the chosen sensors.

| Sensor | Category | Specification | # of Sensors Used |
| --- | --- | --- | --- |
| Ultrasonic sensor (Hargisonic, HG-M40) | Frequency (kHz) | 40 | 20 |
| | Input pulse | TTL or Pulse | |
| | Output signal (V) | 5 | |
| | Detection range (m) | 0.3~3 | |
| FMCW radar (Infineon, BGT24MTR12) | Field of view (degree) | 19 × 76 | 1 |
| | Carrier frequency (GHz) | 24 | |
| | Bandwidth (MHz) | 200 | |
| | Sweep time (μs) | 300 | |
| | Max. IF (kHz) | 10 | |
| | Detection range (m) | 3~10 | |
| Thermal IR camera (FLIR A35) | Focal plane (pixel) | 320 × 256 | 1 |
| | Pixel space (μm) | 25 | |
