Advances in Intelligent Single/Multiple Sensing Systems and Applications

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (30 October 2019) | Viewed by 73310

Special Issue Editors


Guest Editor
Associate Professor, Department of Electronic and Computer Engineering, National Taiwan University of Science and Technology, Taipei 106, Taiwan
Interests: vision-based automation; pattern recognition; color image processing; imaging systems

Guest Editor
Department of Computer Science and Information Engineering, National Taiwan University of Science and Technology, Taipei 106, Taiwan
Interests: multimedia; big data; deep learning; computer vision; pattern recognition; data science; machine learning; mobile multimedia applications

Guest Editor
Department of Electronics Engineering, National Chiao Tung University, Hsinchu 30010, Taiwan
Interests: multimedia; artificial intelligence; computer vision; machine learning; social media; financial technology

Guest Editor
Graduate Institute of Automation Technology, National Taipei University of Technology, Taipei 10608, Taiwan
Interests: Internet of Things; artificial intelligence/computational intelligence; cloud and edge computing; smart grid technology

Special Issue Information

Dear Colleagues,

Over the past decade, intelligent sensing systems based on single or multiple sensors have progressed rapidly. One of the most typical examples is Microsoft Kinect (or, more recently, the similar Intel RealSense), which integrates an RGB camera, a multi-array microphone, and a depth sensor to precisely capture three-dimensional body motions and hand gestures, and even to recognize faces and voices. The sensor enables users to control a game console with gesture-based or spoken commands rather than a handheld game controller. Kinect has also been applied in other areas, such as smart homes and home automation. In a smart home, taking electrical energy management with home automation as an example, Kinect can be combined with an ARM®-based embedded system serving as a central home controller to implement remote, automated, and optimized control of electrical appliances based on past electrical energy usage trends via multi-sensor data fusion. Another typical example of an intelligent multi-sensor system is the advanced driver assistance system “Delphi RACam”, which integrates an RGB camera and a 76-GHz radar to accurately detect pedestrians and track lanes to prevent collisions.

The term “intelligent sensing system” indicates that the system not only senses ordinary inputs (light, heat, sound, etc.) but can also analyze those inputs and take appropriate action. Such intelligent sensing opens up vast technological potential for the future. In addition to multi-sensor systems, intelligent single-sensor systems have also achieved tremendous progress. For example, building on recent achievements in artificial intelligence (AI) and signal processing, the “Omron smart camera” demonstrates excellent industrial inspection ability using a single RGB camera, and “BMW night vision” demonstrates excellent nighttime road obstacle recognition using a thermal imaging camera.

Each sensor has different strengths and weaknesses under different circumstances. For intelligent sensing systems, it is worth investigating whether signal processing techniques can be employed to minimize the weaknesses and enhance the strengths of a single sensor, or even to fuse multiple sensors for greater sensing ability. Taking Kinect as an example: against a plain background with sufficient luminance, the RGB image sensed by the camera is likely clear enough for gesture recognition. However, if the user is in a dark room, the image is likely too dim for gesture recognition; in such circumstances, fusing information from the depth sensor is a solution. In addition, image enhancement techniques such as high dynamic range (HDR) imaging, tone mapping, and equalization can stretch contrast and preserve details in the highlights and shadows of an image to enhance recognition ability. Such image enhancement and denoising techniques are also included in our Special Issue because they are key components of an intelligent vision sensing system.
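
To make the enhancement step concrete, here is a minimal Python sketch (our own illustration, not code from any system mentioned above) of two classic contrast-stretching operations: global histogram equalization and a simple gamma tone-mapping curve. The synthetic test image is a hypothetical stand-in for a dim capture.

```python
import numpy as np

def equalize_histogram(gray):
    """Global histogram equalization for an 8-bit grayscale image:
    map intensities through the normalized cumulative histogram (CDF),
    which stretches contrast in under- or over-exposed regions."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min() + 1e-8)
    lut = np.round(cdf * 255).astype(np.uint8)   # intensity lookup table
    return lut[gray]

def gamma_tone_map(hdr, gamma=2.2):
    """Very simple global tone mapping: normalize a radiance map to [0, 1]
    and compress it with a power-law (gamma) curve to lift shadow detail."""
    norm = (hdr - hdr.min()) / (hdr.max() - hdr.min() + 1e-8)
    return (norm ** (1.0 / gamma) * 255).astype(np.uint8)

# Hypothetical dim capture: most pixels crowded near black.
dark = (np.random.rand(64, 64) ** 3 * 255).astype(np.uint8)
enhanced = equalize_histogram(dark)
```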

This Special Issue investigates novel methodologies and applications related to intelligent systems with single (or multiple) sensor(s). Both reviews and original research articles are welcome. Topics of interest for this Special Issue include (but are not limited to):

  • Intelligent single (or multiple) sensing systems
  • Applications involved with Kinect/RealSense interaction
  • Novel RGB-D sensing systems
  • Intelligent sensing systems
  • Smart/intelligent vision sensors
  • IoT-oriented intelligent multi-sensing in smart homes
  • Intelligent sensing applications in smart cities/smart campuses/smart factories
  • Image enhancement, e.g., high dynamic range (HDR) and tone mapping
  • Advanced signal and image processing
  • Applications in intelligent thermal/radar/LiDAR/depth sensing
  • Multi-sensor (or multi-modal) data fusion
  • AI and machine learning in intelligent sensing
  • Novel methodologies for analyzing sensor data
  • New application scenarios for intelligent sensing
  • Multi-sensory data applications

Assoc. Prof. Yung-Yao Chen
Assoc. Prof. Kai-Lung Hua
Prof. Wen-Huang Cheng
Assist. Prof. Yu-Hsiu Lin
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • intelligent sensing systems
  • multi-sensory data
  • multi-modal data fusion
  • smart city
  • advanced signal and image processing
  • IoT-oriented sensing
  • applications with single (or multiple) sensor(s)
  • RGB/thermal/radar/LiDAR/depth sensing
  • artificial intelligence applications
  • sensor data analysis

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (16 papers)

Research

29 pages, 68991 KiB  
Article
Photo Composition with Real-Time Rating
by Yi-Feng Li, Chuan-Kai Yang and Yi-Zhen Chang
Sensors 2020, 20(3), 582; https://doi.org/10.3390/s20030582 - 21 Jan 2020
Cited by 4 | Viewed by 5918
Abstract
Taking photos has become part of our daily life. With the power and convenience of a smartphone, capturing what we see may never have been easier. However, taking good photos is not always easy or intuitive for everyone. Because numerous studies have shown that photo composition plays a very important role in making a good photo, in this study we develop a photo-taking app that gives real-time suggestions through a scoring mechanism to guide a user toward a good photo. Owing to real-time performance concerns, only eight commonly used composition rules are adopted in our system, and several detailed evaluations have been conducted to prove its effectiveness.
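
The abstract does not list the eight rules, but the rule of thirds is among the most common composition rules. As a hedged illustration of how a single rule might be scored in real time (the function name and scoring formula below are our own, not the paper's), this sketch rates how close a subject point lies to the nearest thirds intersection:

```python
import numpy as np

def rule_of_thirds_score(subject_xy, image_wh):
    """Score in [0, 1] for how close a subject point lies to the nearest
    rule-of-thirds power point; 1.0 means exactly on an intersection.

    subject_xy: (x, y) of the main subject (e.g., a detected face center).
    image_wh:   (width, height) of the frame.
    """
    w, h = image_wh
    power_points = [(w * i / 3.0, h * j / 3.0) for i in (1, 2) for j in (1, 2)]
    x, y = subject_xy
    d_min = min(np.hypot(x - px, y - py) for px, py in power_points)
    d_max = np.hypot(w / 3.0, h / 3.0)  # farthest any point is from its nearest power point
    return max(0.0, 1.0 - d_min / d_max)

print(rule_of_thirds_score((640, 360), (1920, 1080)))  # on a power point -> 1.0
```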

20 pages, 1920 KiB  
Article
SVM-Enabled Intelligent Genetic Algorithmic Model for Realizing Efficient Universal Feature Selection in Breast Cyst Image Acquired via Ultrasound Sensing Systems
by Chuan-Yu Chang, Kathiravan Srinivasan, Mao-Cheng Chen and Shao-Jer Chen
Sensors 2020, 20(2), 432; https://doi.org/10.3390/s20020432 - 12 Jan 2020
Cited by 6 | Viewed by 3000
Abstract
In recent years, several cost-effective intelligent sensing systems, such as ultrasound imaging systems, have become available for visualizing internal body structures. Such systems have been deployed by medical doctors around the globe for the efficient detection of several diseases and disorders in the human body. Even though the ultrasound sensing system is a useful tool for obtaining imagery of various body parts, there is always a possibility of inconsistencies in these images due to variation in the system parameter settings. To overcome such issues, this research devises an SVM-enabled intelligent genetic algorithmic model for choosing universal features under four distinct parameter settings. Subsequently, the distinguishing characteristics of these features are assessed using the Sorensen-Dice coefficient, the t-test, and Pearson's R measure. The results of the SVM-enabled intelligent genetic algorithmic model show that this approach aids in the effectual selection of universal features for breast cyst images. In addition, the approach accomplishes superior accuracy in classifying the ultrasound images across the four distinct parameter settings.
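
As a generic sketch of the idea named in the title, the following Python code is our own simplified illustration, not the paper's model: a genetic algorithm whose fitness is cross-validated SVM accuracy on the selected feature subset. Population size, operators, and parameters are arbitrary assumptions, and the dataset is synthetic.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    """Fitness of a binary feature mask = mean cross-validated SVM accuracy
    on the selected feature subset (empty subsets score zero)."""
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(SVC(kernel="rbf"), X[:, mask.astype(bool)], y, cv=3).mean()

def ga_select(X, y, pop_size=16, generations=10, p_mut=0.05):
    """Minimal genetic algorithm: truncation selection, one-point crossover,
    bit-flip mutation; each individual is a feature-inclusion bitmask."""
    n = X.shape[1]
    pop = rng.integers(0, 2, size=(pop_size, n))
    for _ in range(generations):
        scores = np.array([fitness(ind, X, y) for ind in pop])
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = int(rng.integers(1, n))              # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n) < p_mut               # bit-flip mutation
            child[flip] = 1 - child[flip]
            children.append(child)
        pop = np.vstack([parents] + children)
    return max(pop, key=lambda ind: fitness(ind, X, y)).astype(bool)

X, y = make_classification(n_samples=120, n_features=12, n_informative=4, random_state=0)
print("selected features:", np.flatnonzero(ga_select(X, y)))
```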

15 pages, 3158 KiB  
Article
Sensor Classification Using Convolutional Neural Network by Encoding Multivariate Time Series as Two-Dimensional Colored Images
by Chao-Lung Yang, Zhi-Xuan Chen and Chen-Yi Yang
Sensors 2020, 20(1), 168; https://doi.org/10.3390/s20010168 - 27 Dec 2019
Cited by 119 | Viewed by 10787
Abstract
This paper proposes a framework for sensor classification using multivariate time series sensor data as inputs. The framework encodes multivariate time series data into two-dimensional colored images and concatenates the images into one larger image for classification through a Convolutional Neural Network (ConvNet). This study applied three transformation methods to encode time series into images: Gramian Angular Summation Field (GASF), Gramian Angular Difference Field (GADF), and Markov Transition Field (MTF). Two open multivariate datasets were used to evaluate the impact of different transformation methods, image concatenation sequences, and ConvNet architecture complexities on classification accuracy. The results show that the choice of transformation method and the concatenation sequence do not affect the prediction outcome significantly. Surprisingly, a simple ConvNet structure is sufficient for classification, as it performed as well as the more complex VGGNet. The results were also compared with other classification methods, and the proposed framework outperformed them in terms of classification accuracy.
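
The GASF encoding mentioned above has a compact closed form: rescale the series to [-1, 1], treat each value as cos(phi), and set GASF[i, j] = cos(phi_i + phi_j) = x_i x_j - sqrt(1 - x_i^2) sqrt(1 - x_j^2). A minimal sketch (our own, following the standard GASF definition rather than the paper's exact pipeline):

```python
import numpy as np

def gramian_angular_summation_field(x):
    """Encode a 1-D time series as a GASF image: rescale to [-1, 1],
    interpret each value as cos(phi), and build cos(phi_i + phi_j)."""
    x = np.asarray(x, dtype=np.float64)
    x = 2 * (x - x.min()) / (x.max() - x.min() + 1e-12) - 1   # rescale to [-1, 1]
    s = np.sqrt(np.clip(1 - x ** 2, 0, None))                 # sin(phi)
    return np.outer(x, x) - np.outer(s, s)                    # cos(phi_i + phi_j)

img = gramian_angular_summation_field(np.sin(np.linspace(0, 4 * np.pi, 64)))
print(img.shape)  # (64, 64) -- one channel of the encoded image
```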

15 pages, 1110 KiB  
Article
An Eye-Tracking System based on Inner Corner-Pupil Center Vector and Deep Neural Network
by Mu-Chun Su, Tat-Meng U, Yi-Zeng Hsieh, Zhe-Fu Yeh, Shu-Fang Lee and Shih-Syun Lin
Sensors 2020, 20(1), 25; https://doi.org/10.3390/s20010025 - 19 Dec 2019
Cited by 9 | Viewed by 3808
Abstract
The human eye is a vital sensory organ that provides us with visual information about the world around us. It can also convey information such as our emotional state to people with whom we interact. Eye tracking has recently become a hot research topic, and a growing number of eye-tracking devices have been applied in fields such as psychology, medicine, education, and virtual reality. However, most commercially available eye trackers are prohibitively expensive and require the user's head to remain completely stationary in order to accurately estimate the gaze direction. To address these drawbacks, this paper proposes an inner corner-pupil center vector (ICPCV) eye-tracking system based on a deep neural network, which requires neither a stationary head nor expensive hardware. The performance of the proposed system is compared with those of other currently available eye-tracking estimation algorithms, and the results show that it outperforms them.
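
As a hedged sketch of the feature idea in the title, the code below computes an inner corner-pupil center vector from two hypothetical eye landmarks and trains a small neural network regressor on synthetic calibration pairs; the paper's actual network architecture and data differ.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def icpcv_feature(inner_corner_xy, pupil_center_xy):
    """Inner corner-pupil center vector: displacement from the eye's inner
    corner to the pupil center, a head-pose-tolerant gaze cue."""
    return np.asarray(pupil_center_xy) - np.asarray(inner_corner_xy)

# Hypothetical calibration data: ICPCV features -> known screen gaze points.
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 2))                  # stand-in ICPCV vectors
targets = features @ np.array([[300.0, 0.0], [0.0, 200.0]]) \
          + rng.normal(scale=5.0, size=(200, 2))      # synthetic screen coords

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(features, targets)                          # learn ICPCV -> gaze mapping
print(model.predict([icpcv_feature((100, 120), (104, 118))]))
```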

18 pages, 8177 KiB  
Article
OCT-Based Periodontal Inspection Framework
by Yu-Chi Lai, Chia-Hsing Chiu, Zhong-Qi Cai, Jin-Yang Lin, Chih-Yuan Yao, Dong-Yuan Lyu, Shyh-Yuan Lee, Kuo-Wei Chen and I-Yu Chen
Sensors 2019, 19(24), 5496; https://doi.org/10.3390/s19245496 - 12 Dec 2019
Cited by 8 | Viewed by 3040
Abstract
Periodontal diagnosis requires discovering the relations among the teeth, gingiva (i.e., gums), and alveolar bones, but the alveolar bones lie beneath the gingiva and are not visible for inspection. Traditional probe examination causes pain, and X-ray-based examination is not suited to frequent inspection. This work develops an automatic, non-invasive periodontal inspection framework based on gum-penetrative Optical Coherence Tomography (OCT), which can be applied frequently without high radiation. We sum the interference responses over all penetration depths for each shooting direction to form the shooting amplitude projection. Because the interference strength decays exponentially with tissue penetration depth, this projection mainly reveals the responses of the topmost gingiva or teeth. Since gingiva and teeth have different air-tissue responses, the gumline, which reveals itself as an obvious boundary between teeth and gingiva, serves as the baseline for periodontal inspection. Our system can also automatically identify regions of gingiva, teeth, and alveolar bone from slices of the cross-sectional volume. Although deep networks can potentially segment noisy maps successfully, reducing the number of manually labeled maps needed for training is critical for our framework. To enhance the effectiveness and efficiency of training and classification, we adapt Snake segmentation to consider neighboring slices in order to locate regions possibly containing gingiva-teeth and gingiva-alveolar boundaries. Additionally, we adopt a truncated direct logarithm based on the Snake-segmented region for intensity quantization, emphasizing these boundaries for easier identification. The alveolar-gingiva boundary point directly under the gumline then yields the desired alveolar sample, and we can measure the distance between the gumline and the alveolar line for visualization and direct periodontal inspection. Finally, we experimentally verify our choices of intensity quantization and boundary identification against several other algorithms while successfully applying the framework to locate the gumline and alveolar line in in vivo data.
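
The amplitude projection described above reduces to a sum over the depth axis of the OCT volume. A minimal sketch, under the assumption that the volume is stored as a (directions x directions x depth) array:

```python
import numpy as np

def amplitude_projection(volume):
    """Collapse an OCT volume into a 2-D projection by summing interference
    responses over all penetration depths. Because interference strength
    decays roughly exponentially with depth, the sum is dominated by the
    topmost tissue, so boundaries such as the gumline appear as clear edges."""
    return volume.sum(axis=-1)

oct_volume = np.random.rand(128, 128, 512)   # stand-in A-scan responses
projection = amplitude_projection(oct_volume)
print(projection.shape)                      # (128, 128)
```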

18 pages, 3765 KiB  
Article
Optimized CapsNet for Traffic Jam Speed Prediction Using Mobile Sensor Data under Urban Swarming Transportation
by Hendrik Tampubolon, Chao-Lung Yang, Arnold Samuel Chan, Hendri Sutrisno and Kai-Lung Hua
Sensors 2019, 19(23), 5277; https://doi.org/10.3390/s19235277 - 29 Nov 2019
Cited by 15 | Viewed by 3538
Abstract
Urban swarming transportation (UST) is a type of road transportation in which multiple types of vehicles, such as cars, buses, trucks, motorcycles, and bicycles, as well as pedestrians, are allowed and mixed together on the roads. Predicting traffic jam speed under UST is very different from, and more difficult than, the single-road-network traffic prediction commonly studied in intelligent traffic system (ITS) research. In this research, road-network-wide (RNW) traffic prediction, which predicts the traffic jam speeds of multiple roads at once by utilizing citizens' mobile GPS sensor records, is proposed to better predict traffic jams under UST. To conduct RNW traffic prediction, specific data preprocessing is needed to convert the traffic data into an image representing the spatial-temporal relationships across the RNW. In addition, a revised capsule network (CapsNet), named OCapsNet, is proposed, which utilizes nonlinearity functions in the first two convolution layers and modified dynamic routing to optimize the performance of CapsNet. Experiments were conducted using real-world urban road traffic data from Jakarta to evaluate the performance. The results show that OCapsNet outperforms a Convolutional Neural Network (CNN) and the original CapsNet in both accuracy and precision.

17 pages, 13902 KiB  
Article
Robust Detection of Abandoned Object for Smart Video Surveillance in Illumination Changes
by Hyeseung Park, Seungchul Park and Youngbok Joo
Sensors 2019, 19(23), 5114; https://doi.org/10.3390/s19235114 - 22 Nov 2019
Cited by 16 | Viewed by 5668
Abstract
Most existing abandoned object detection algorithms use foreground information generated from background models. Detection using the background subtraction technique performs well under normal circumstances. However, it has a significant problem: foreground information is gradually absorbed into the background and disappears as time passes, making it very vulnerable to sudden illumination changes that increase the false alarm rate. This paper presents an algorithm for detecting abandoned objects using a dual background model, which is robust to illumination changes as well as other complex circumstances such as occlusion, long-term abandonment, and owner re-attendance. The proposed algorithm adapts quickly to various illumination changes. It can also precisely track target objects to determine whether they are abandoned, regardless of the presence of foreground information or the effect of illumination changes, thanks to the largest-contour-based presence authentication mechanism proposed in this paper. For performance evaluation, we trialed the algorithm on the PETS2006 and ABODA datasets as well as our own dataset, especially to demonstrate its robustness under various illumination changes.
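
The dual background idea can be illustrated generically with two background subtractors that adapt at different rates: a slowly adapting model keeps flagging a dropped object after a quickly adapting model has absorbed it into the background. The OpenCV sketch below is our own simplified illustration, not the paper's algorithm (which adds, among other things, the largest-contour-based presence authentication step); the video filename and learning rates are hypothetical.

```python
import cv2

# Two subtractors with different adaptation speeds: the short-term model
# absorbs a dropped bag quickly, while the long-term model keeps flagging it.
long_term = cv2.createBackgroundSubtractorMOG2(history=2000, detectShadows=False)
short_term = cv2.createBackgroundSubtractorMOG2(history=100, detectShadows=False)

def static_foreground(frame):
    """Pixels that are foreground in the long-term model but background in
    the short-term model are candidate abandoned (static) objects."""
    fg_long = long_term.apply(frame, learningRate=0.0005)   # adapts slowly
    fg_short = short_term.apply(frame, learningRate=0.02)   # adapts quickly
    return cv2.bitwise_and(fg_long, cv2.bitwise_not(fg_short))

cap = cv2.VideoCapture("surveillance.mp4")   # hypothetical input clip
ok, frame = cap.read()
while ok:
    candidates = static_foreground(frame)    # mask of static-object pixels
    ok, frame = cap.read()
```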

23 pages, 8822 KiB  
Article
An Adaptive Exposure Fusion Method Using Fuzzy Logic and Multivariate Normal Conditional Random Fields
by Yu-Hsiu Lin, Kai-Lung Hua, Hsin-Han Lu, Wei-Lun Sun and Yung-Yao Chen
Sensors 2019, 19(21), 4743; https://doi.org/10.3390/s19214743 - 31 Oct 2019
Cited by 3 | Viewed by 2944
Abstract
High dynamic range (HDR) imaging has wide applications in intelligent vision sensing, including enhanced electronic imaging, smart surveillance, self-driving cars, intelligent medical diagnosis, etc. Exposure fusion is an essential HDR technique that fuses differently exposed shots of the same scene into one HDR-like image. However, determining the appropriate fusion weights is difficult because each differently exposed image contains only a subset of the scene's details. When blending, the problem of local color inconsistency is even more challenging; it often requires manual tuning to avoid image artifacts. To address this problem, we present an adaptive coarse-to-fine searching approach to find the optimal fusion weights. In the coarse-tuning stage, fuzzy logic is used to efficiently decide the initial weights. In the fine-tuning stage, a multivariate normal conditional random field (MNCRF) model adjusts the fuzzy-based initial weights, which allows us to consider both intra- and inter-image information in the data. Moreover, a multiscale enhanced fusion scheme is proposed to blend the input images while maintaining the details at each scale level. The proposed fuzzy-based MNCRF fusion method provides a smoother blending result and a more natural look, while the details in both highlighted and dark regions are preserved simultaneously. The experimental results demonstrate that our work outperforms state-of-the-art methods not only on several objective quality measures but also in a user-study analysis.
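
For context, classic exposure fusion weights each pixel by quality measures such as well-exposedness and blends the normalized weights across the stack; the paper's contribution lies in refining such weights with fuzzy logic and the MNCRF model. The naive baseline below is our own sketch, not the paper's method, with a hypothetical synthetic exposure stack.

```python
import numpy as np

def well_exposedness(img, sigma=0.2):
    """Per-pixel quality weight favoring mid-range intensities; img in [0, 1]."""
    return np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))

def naive_exposure_fusion(exposures):
    """Blend an exposure stack with per-pixel weights normalized across
    the stack; refined methods adjust these weights to avoid artifacts."""
    w = np.stack([well_exposedness(e) for e in exposures])
    w /= w.sum(axis=0) + 1e-12
    return (w * np.stack(exposures)).sum(axis=0)

# Hypothetical stack of three grayscale exposures of the same scene.
stack = [np.clip(np.random.rand(64, 64) * s, 0.0, 1.0) for s in (0.4, 1.0, 1.6)]
print(naive_exposure_fusion(stack).shape)   # (64, 64)
```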

14 pages, 75099 KiB  
Article
Interactive OCT-Based Tooth Scan and Reconstruction
by Yu-Chi Lai, Jin-Yang Lin, Chih-Yuan Yao, Dong-Yuan Lyu, Shyh-Yuan Lee, Kuo-Wei Chen and I-Yu Chen
Sensors 2019, 19(19), 4234; https://doi.org/10.3390/s19194234 - 29 Sep 2019
Cited by 5 | Viewed by 4072
Abstract
Digital dental reconstruction can be a more efficient and effective mechanism for artificial crown construction and periodic inspection. However, optical methods cannot reconstruct the portions under the gums, and X-ray-based methods involve high radiation, which limits how frequently they can be applied. Optical coherence tomography (OCT) can harmlessly penetrate gums using low-coherence infrared rays, and thus this work designs an OCT-based framework for dental reconstruction using optical rectification, fast Fourier transform, volumetric boundary detection, and Poisson surface reconstruction to overcome noisy imaging. Additionally, to operate inside a patient's mouth, the injector must have a small caliber, with a correspondingly short penetration depth and effective operation range; reconstruction therefore requires multiple scans from various directions along with proper alignment. However, flat regions, such as the mesial side of the front teeth, may not have enough features for alignment. As a result, we design a scanning order for different types of teeth, starting from an area with abundant features for easier alignment, while using gyros to track the scanning postures for better initial orientations. Because it is important to provide immediate feedback for each scan, we accelerate the entire signal processing, boundary detection, and point-cloud alignment pipeline using Graphics Processing Units (GPUs) while streamlining the data transfer and GPU computations. Finally, our framework successfully reconstructs three isolated teeth and one side of a living tooth with precision comparable to the state-of-the-art method. Moreover, a user study verifies the effectiveness of our interactive feedback for efficient and fast clinical scanning.

26 pages, 8746 KiB  
Article
A Lightweight Leddar Optical Fusion Scanning System (FSS) for Canopy Foliage Monitoring
by Zhouxin Xi, Christopher Hopkinson, Stewart B. Rood, Celeste Barnes, Fang Xu, David Pearce and Emily Jones
Sensors 2019, 19(18), 3943; https://doi.org/10.3390/s19183943 - 12 Sep 2019
Cited by 4 | Viewed by 4059
Abstract
A growing need for sampling environmental spaces in high detail is driving the rapid development of non-destructive three-dimensional (3D) sensing technologies. LiDAR sensors, capable of precise 3D measurement at various scales from indoor to landscape, still lack affordable and portable products for broad-scale and multi-temporal monitoring. This study aims to configure a compact and low-cost 3D fusion scanning system (FSS) with a multi-segment Leddar (light-emitting diode detection and ranging, LeddarTech), a monocular camera, and rotational robotics to recover hemispherical, colored point clouds. This includes an entire framework of calibration and fusion algorithms utilizing Leddar depth measurements and image parallax information. The FSS was applied to scan a cottonwood (Populus spp.) stand repeatedly during autumnal leaf drop. Results show that the calibration error based on bundle adjustment is between 1 and 3 pixels. The FSS scans exhibit a canopy volume profile similar to that of the benchmark terrestrial laser scans, with an r² between 0.5 and 0.7 at varying stages of leaf cover. The 3D point distribution information from the FSS also provides a valuable correction factor for leaf area index (LAI) estimation. The consistency of the corrected LAI measurements demonstrates the practical value of deploying the FSS for canopy foliage monitoring.

19 pages, 6053 KiB  
Article
Real-Time Automatic Calculation of Euro Coins and Banknotes in a Cash Drawer
by Manuel Cereijido, Fernando Nuño, Alberto M. Pernía, Miguel J. Prieto and Pedro J. Villegas
Sensors 2019, 19(11), 2623; https://doi.org/10.3390/s19112623 - 9 Jun 2019
Cited by 1 | Viewed by 5301
Abstract
This paper presents a very useful complement to classical cash registers: a real-time auto-counting solution for the money inside a cash drawer. The system reports not only the total amount of money but also how many coins and banknotes there are of each value. The embedded solution is intended to be low-cost, allowing better control over the money and helping both owners and workers in commercial establishments. Using this system, new utilities could be implemented, including automatic final balancing, instant error handling during operations, and alerts for the lack of certain types of banknotes or coins inside the drawer or an excess of some in a certain compartment. The coin-counting solution is based on weight: small individual scales made from load cells have been integrated into each coin compartment. For the banknotes, an innovative alternative based on the electrical properties of capacitors is presented. Additionally, considering the relevance of interoperability in today's systems, a Bluetooth module has been integrated, allowing the data to be accessed remotely from any smartphone, tablet, or computer within the module's range. An Android application to control and interact with the system has also been designed.
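
Counting coins by weight reduces to dividing each compartment's load-cell reading by the reference mass of that denomination. A minimal sketch under the assumption of one denomination per compartment (the readings are hypothetical; the reference masses follow the published euro coin specifications):

```python
# Reference masses (grams) of euro coins, per the official specifications.
COIN_MASS_G = {0.01: 2.30, 0.02: 3.06, 0.05: 3.92, 0.10: 4.10,
               0.20: 5.74, 0.50: 7.80, 1.00: 7.50, 2.00: 8.50}

def count_coins(compartment_weights_g):
    """Estimate coin counts per denomination from per-compartment load-cell
    readings: each compartment holds a single denomination, so the count is
    the measured weight divided by one coin's reference mass, rounded."""
    return {value: round(weight / COIN_MASS_G[value])
            for value, weight in compartment_weights_g.items()}

readings = {0.50: 78.1, 1.00: 112.4, 2.00: 42.6}   # hypothetical sensor values
print(count_coins(readings))                        # {0.5: 10, 1.0: 15, 2.0: 5}
```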

15 pages, 4684 KiB  
Article
Autonomous Searching for a Diffusive Source Based on Minimizing the Combination of Entropy and Potential Energy
by Cheng Song, Yuyao He and Xiaokang Lei
Sensors 2019, 19(11), 2465; https://doi.org/10.3390/s19112465 - 29 May 2019
Cited by 9 | Viewed by 2783
Abstract
The infotaxis scheme is a search strategy for a diffusive source in which the sensor platform is driven to reduce uncertainty about the source by climbing the information gradient. The scheme has been successfully applied in many source-searching tasks and has demonstrated fast and stable searching capabilities. However, infotaxis focuses on gathering information to reduce the uncertainty to zero rather than chasing the most probable estimated source once a reliable estimate is obtained. This leads the sensor to spend more time exploring the space and yields a longer search path. In this paper, framed in the context of the exploration-exploitation balance, a novel search scheme is proposed based on minimizing a free energy that combines entropy and potential energy. The entropy term implements exploration, gathering more information. The potential energy term, which leverages the distance to the estimated source, implements exploitation, reinforcing the chasing behavior as the uncertainty recedes. The result is a faster, more effective search strategy in which the sensor chooses its actions by minimizing the free energy rather than only the entropy, as in traditional infotaxis. Simulations of the source-search task based on a computational plume verify the efficiency of the proposed strategy, achieving a shorter mean search time.
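
A minimal sketch of the action rule just described: choose the move minimizing F = H + lambda * U, with an entropy term for exploration and a distance-to-estimate term for exploitation. The expected-entropy computation here is a crude stand-in (a real implementation averages over the possible sensor readings at each candidate cell), and all names and parameters are our own assumptions.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a discrete belief (zero-probability cells ignored)."""
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def next_move(candidates, posterior, grid_xy, lam=0.5):
    """Pick the candidate cell minimizing free energy F = H + lam * U.

    H approximates the posterior entropy expected after visiting a cell
    (here: entropy with that cell ruled out and the belief renormalized);
    U is the distance to the current maximum-a-posteriori source estimate.
    """
    source_est = grid_xy[np.argmax(posterior)]
    best, best_f = None, np.inf
    for c in candidates:
        p = posterior.copy()
        p[c] = 0.0                                   # pretend a null reading
        p /= p.sum()
        f = entropy(p) + lam * np.linalg.norm(grid_xy[c] - source_est)
        if f < best_f:
            best, best_f = c, f
    return best

# Toy 1-D arena: five cells in a row, belief peaked at cell 3.
grid = np.array([[0.0, 0.0], [1, 0], [2, 0], [3, 0], [4, 0]])
belief = np.array([0.05, 0.10, 0.20, 0.50, 0.15])
print(next_move([2, 4], belief, grid))
```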

18 pages, 8035 KiB  
Article
Capillary Sensor with Disposable Optrode for Diesel Fuel Quality Testing
by Michal Borecki, Przemyslaw Prus and Michael L. Korwin-Pawlowski
Sensors 2019, 19(9), 1980; https://doi.org/10.3390/s19091980 - 27 Apr 2019
Cited by 12 | Viewed by 4424
Abstract
Diesel fuel quality can be considered from many different points of view: fuel producers, fuel consumers, and ecologists each have their own criteria. In this paper, a consumer-oriented sensor of diesel fuel quality type and fuel condition is presented. The fuel quality types include the premium, standard, and full biodiesel classes. The fuel conditions include the fit-for-use and degraded classes. These fuel classes are connected with the characteristics of engine operation. The presented sensor uses signal processing of an optoelectronic device that monitors fuel samples locally heated to the first stage of boiling. Compared with previous work on diesel fuel quality sensing with disposable optrodes of more complex construction, the sensor now consists only of a capillary probe and advanced signal processing. The signal processing automatically converts the data series into a data pattern, estimates the measurement uncertainty, eliminates outlier data, and determines the fuel quality with an intelligent artificial neural network classifier. The sensor allows the quality classification of unknown diesel fuel samples in less than a few minutes, at the measurement cost of a single disposable capillary probe and two plugs.

18 pages, 9033 KiB  
Article
Intelligent Positioning for a Commercial Mobile Platform in Seamless Indoor/Outdoor Scenes based on Multi-sensor Fusion
by Dongsheng Wang, Yongjie Lu, Lei Zhang and Guoping Jiang
Sensors 2019, 19(7), 1696; https://doi.org/10.3390/s19071696 - 9 Apr 2019
Cited by 14 | Viewed by 3617
Abstract
Many traffic settings, such as tunnels, subway stations, and underground parking, require accurate and continuous positioning. The navigation and timing services offered by the Global Navigation Satellite System (GNSS) are the most popular outdoor positioning method, but GNSS signals are vulnerable to interference, leading to degraded performance or even unavailability. The combination of a magnetometer and an Inertial Measurement Unit (IMU) is one of the commonly used indoor positioning methods. In the proposed mobile platform for positioning across seamless indoor and outdoor scenes, magnetometer and IMU data are used to update the position when the GNSS signals are weak. Because the magnetometer is susceptible to environmental interference, an intelligent method for calculating the heading angle from the magnetometer is proposed, which can dynamically calculate and correct the heading angle of the mobile platform in a working environment. The results show that the proposed method achieves better performance in the presence of interference: compared with the uncorrected heading angle, the corrected accuracy improved by 60%, and the effect was more obvious the stronger the interference. The error between the overall positioning trajectory and the true trajectory was within 2 m.
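
For background, a standard way to compute a magnetometer heading that tolerates platform tilt is to estimate roll and pitch from the accelerometer and rotate the magnetic vector back into the horizontal plane; the paper's correction method goes further by dynamically handling environmental interference. A textbook-style sketch (our own; sign and axis conventions vary by device mounting):

```python
import math

def tilt_compensated_heading(ax, ay, az, mx, my, mz):
    """Heading (degrees) from a magnetometer, compensated for platform tilt
    using accelerometer roll/pitch. Assumes a common NED-style axis layout;
    adjust signs for the actual sensor mounting."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    # Rotate the magnetic vector back into the horizontal plane.
    xh = (mx * math.cos(pitch)
          + my * math.sin(roll) * math.sin(pitch)
          + mz * math.cos(roll) * math.sin(pitch))
    yh = my * math.cos(roll) - mz * math.sin(roll)
    return math.degrees(math.atan2(-yh, xh)) % 360.0

# Level platform facing magnetic north (field along +x, gravity on +z):
print(tilt_compensated_heading(0, 0, 9.81, 30.0, 0.0, 40.0))  # ~0 degrees
```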

22 pages, 1372 KiB  
Article
Continuous Driver’s Gaze Zone Estimation Using RGB-D Camera
by Yafei Wang, Guoliang Yuan, Zetian Mi, Jinjia Peng, Xueyan Ding, Zheng Liang and Xianping Fu
Sensors 2019, 19(6), 1287; https://doi.org/10.3390/s19061287 - 14 Mar 2019
Cited by 29 | Viewed by 5028
Abstract
The driver's gaze zone is an indicator of the driver's attention and plays an important role in driver activity monitoring. Due to poor initialization of the point-cloud transformation, gaze zone systems using RGB-D cameras and the ICP (Iterative Closest Point) algorithm do not work well under long-term head motion. In this work, a solution for continuous driver gaze zone estimation in real-world driving situations is proposed, combining multi-zone ICP-based head pose tracking and appearance-based gaze estimation. To initialize and update the coarse transformation of ICP, a particle filter with auxiliary sampling is employed for head state tracking, which accelerates the iterative convergence of ICP. Multiple templates for different gaze zones are applied to balance the template revision of ICP under large head movements. For the RGB information, an appearance-based gaze estimation method with two-stage neighbor selection is utilized, which treats gaze prediction as a combination of neighbor query (in head pose and eye-image feature space) and linear regression (between eye-image feature space and gaze angle space). The experimental results show that the proposed method outperforms the baseline methods on gaze estimation and can provide stable head pose tracking for driver behavior analysis in real-world driving scenarios.
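
The two-stage idea (neighbor query, then linear regression) can be sketched generically: find the k stored samples nearest to the query eye feature, fit a local linear map from features to gaze angles on those neighbors only, and apply it to the query. The code below is our own simplified, single-feature-space illustration with synthetic data (the paper also queries in head-pose space):

```python
import numpy as np

def local_gaze_regression(query_feat, feats, gazes, k=10):
    """Two-stage gaze estimation: (1) query the k nearest neighbors of the
    eye-image feature, (2) fit a linear map from features to gaze angles
    on those neighbors only and apply it to the query.

    feats: (N, d) stored eye-image features; gazes: (N, 2) yaw/pitch angles.
    """
    d = np.linalg.norm(feats - query_feat, axis=1)
    idx = np.argsort(d)[:k]                              # stage 1: neighbor query
    A = np.hstack([feats[idx], np.ones((k, 1))])         # add bias column
    W, *_ = np.linalg.lstsq(A, gazes[idx], rcond=None)   # stage 2: local fit
    return np.append(query_feat, 1.0) @ W

rng = np.random.default_rng(0)
feats = rng.normal(size=(500, 8))                        # stand-in eye features
gazes = feats[:, :2] * 15.0                              # synthetic yaw/pitch labels
print(local_gaze_regression(feats[0], feats, gazes))
```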

17 pages, 3989 KiB  
Article
Real-Time Monitoring of Jet Trajectory during Jetting Based on Near-Field Computer Vision
by Jinsong Zhu, Wei Li, Da Lin and Ge Zhao
Sensors 2019, 19(3), 690; https://doi.org/10.3390/s19030690 - 8 Feb 2019
Cited by 8 | Viewed by 4425
Abstract
A novel method of near-field computer vision (NFCV) was developed to monitor the jet trajectory during the jetting process and precisely predict the falling point of the jet. By means of a high-resolution webcam, the NFCV sensor device collects near-field images of the jet trajectory. Preprocessing of the collected images includes squint (oblique) image correction, noise elimination, and jet trajectory extraction. Features of the jet trajectory in the processed image are then extracted: the start-point slope (SPS), end-point slope (EPS), and overall trajectory slope (OTS), based on the proposed mean-position method. A multiple-regression jet trajectory range prediction model was established from these trajectory characteristics, and its reliability was verified. The results show that the accuracy of the prediction model is not less than 94% and the processing time is below 0.88 s, which satisfies the requirements of real-time online jet trajectory monitoring.
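
A multiple-regression range model of the kind described can be fit by ordinary least squares on the three slope features. The sketch below is our own illustration with hypothetical training values, not the paper's data:

```python
import numpy as np

# Hypothetical training data: slope features per frame -> measured jet range.
# Columns: start-point slope (SPS), end-point slope (EPS), overall slope (OTS).
X = np.array([[0.42, 0.18, 0.30],
              [0.55, 0.22, 0.39],
              [0.61, 0.27, 0.45],
              [0.70, 0.31, 0.52],
              [0.80, 0.36, 0.60]])
ranges_m = np.array([12.1, 14.0, 15.2, 16.8, 18.5])

A = np.hstack([X, np.ones((len(X), 1))])            # add intercept column
coef, *_ = np.linalg.lstsq(A, ranges_m, rcond=None)

def predict_range(sps, eps, ots):
    """Multiple linear regression: range = b0*SPS + b1*EPS + b2*OTS + c."""
    return np.array([sps, eps, ots, 1.0]) @ coef

print(predict_range(0.65, 0.29, 0.48))
```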
