
Computing and Artificial Intelligence for Visual Data Analysis

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (31 March 2021) | Viewed by 73076

Special Issue Editors


Prof. Dr. Jeonghwan Gwak
Guest Editor
Department of Software, Korea National University of Transportation, Chungju 27469, Korea
Interests: artificial intelligence; machine learning; computer vision; autonomous driving; visual surveillance; biomedical engineering; smart healthcare

Prof. Dr. Kin-Choong Yow
Guest Editor
Faculty of Engineering and Applied Science, University of Regina, 3737 Wascana Parkway, Regina, SK S4S 0A2, Canada
Interests: artificial general intelligence; machine learning; computer vision; natural language processing; pattern recognition; smart environments

Prof. Dr. Moongu Jeon
Guest Editor
Gwangju Institute of Science and Technology, Gwangju 61005, Korea
Interests: artificial intelligence; machine learning; computer vision; visual surveillance; autonomous driving

Special Issue Information

Dear Colleagues,

We are proud to announce this Special Issue on “Computing and Artificial Intelligence for Visual Data Analysis”.

Owing to recent advances in their capability and in the speed of processing large amounts of data, computing and artificial intelligence have been receiving increasing attention and playing important roles in many application fields, such as surveillance, intelligent transportation systems, virtual/augmented reality, robotics and autonomous systems, smart healthcare, and so forth. Furthermore, advancements in the era of big data have given rise to an extensive variety of impressive applications and have raised demanding research and development issues.

This Special Issue aims to disseminate recent advances in the theory and design of computing and artificial intelligence and their application to areas such as visual surveillance, autonomous driving, and biomedical data analysis. We cordially invite authors to submit papers presenting original contributions in the field of visual intelligence, including state-of-the-art reviews, original research, and real-world applications.

Please contact us if you have any questions regarding your submission.

Prof. Dr. Jeonghwan Gwak
Prof. Dr. Kin-Choong Yow
Prof. Dr. Moongu Jeon
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Computing methods and algorithms
  • Artificial intelligence
  • Machine learning
  • Computer vision
  • Visual intelligence
  • Autonomous driving
  • Intelligent transportation systems
  • Information fusion
  • Intelligent systems

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.


Published Papers (16 papers)


Research


14 pages, 11920 KiB  
Article
EAR-Net: Efficient Atrous Residual Network for Semantic Segmentation of Street Scenes Based on Deep Learning
by Seokyong Shin, Sanghun Lee and Hyunho Han
Appl. Sci. 2021, 11(19), 9119; https://doi.org/10.3390/app11199119 - 30 Sep 2021
Cited by 9 | Viewed by 2660
Abstract
Segmentation of street scenes is a key technology in the field of autonomous vehicles. However, conventional segmentation methods achieve low accuracy because of the complexity of street landscapes. Therefore, we propose an efficient atrous residual network (EAR-Net) to improve accuracy while maintaining computation costs. First, we performed feature extraction and restoration, utilizing depthwise separable convolution (DSConv) and interpolation. Compared with conventional methods, DSConv and interpolation significantly reduce computation costs while minimizing performance degradation. Second, we utilized residual learning and atrous spatial pyramid pooling (ASPP) to achieve high accuracy. Residual learning increases the ability to extract context information by preventing feature and gradient losses. In addition, ASPP extracts additional context information while maintaining the resolution of the feature map. Finally, to alleviate the class imbalance between the image background and objects and to improve learning efficiency, we utilized focal loss. We evaluated EAR-Net on the Cityscapes dataset, which is commonly used for street scene segmentation studies. Experimental results showed that EAR-Net achieved better segmentation results than the conventional methods at similar computation costs. We also conducted an ablation study to analyze the contributions of the ASPP and DSConv in the EAR-Net. Full article
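The parameter saving from the depthwise separable convolution mentioned in the abstract can be illustrated with a quick count; this is an illustrative sketch (the layer sizes are hypothetical, not taken from EAR-Net):

```python
def conv_params(k, c_in, c_out):
    """Parameters of a standard k x k convolution (bias omitted)."""
    return k * k * c_in * c_out

def dsconv_params(k, c_in, c_out):
    """Depthwise separable: a k x k depthwise filter per input channel,
    followed by a 1x1 pointwise convolution across channels."""
    depthwise = k * k * c_in
    pointwise = c_in * c_out
    return depthwise + pointwise

# Example: a 3x3 layer mapping 256 -> 256 channels.
standard = conv_params(3, 256, 256)     # 589,824 parameters
separable = dsconv_params(3, 256, 256)  # 67,840 parameters
reduction = standard / separable        # roughly 8.7x fewer parameters
```

The same ratio applies to multiply–accumulate operations per output pixel, which is where the computation-cost saving comes from.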
(This article belongs to the Special Issue Computing and Artificial Intelligence for Visual Data Analysis)

21 pages, 8752 KiB  
Article
Automated Surface Defect Inspection Based on Autoencoders and Fully Convolutional Neural Networks
by Cheng-Wei Lei, Li Zhang, Tsung-Ming Tai, Chen-Chieh Tsai, Wen-Jyi Hwang and Yun-Jie Jhang
Appl. Sci. 2021, 11(17), 7838; https://doi.org/10.3390/app11177838 - 25 Aug 2021
Cited by 3 | Viewed by 2671
Abstract
This study aims to develop a novel automated computer vision algorithm for quality inspection of surfaces with complex patterns. The proposed algorithm is based on both an autoencoder (AE) and a fully convolutional neural network (FCN). The AE is adopted for the self-generation of templates from test targets for defect detection. Because the templates are produced from the test targets, the position alignment issues for the matching operations between templates and test targets can be alleviated. The FCN is employed for the segmentation of a template into a number of coherent regions. Because the AE has the limitation that its capacities for the regeneration of each coherent region in the template may differ, the segmentation of the template by the FCN allows the inspection of each region to be carried out independently. In this way, more accurate detection results can be achieved. Experimental results reveal that the proposed algorithm has the advantages of simplicity for training data collection, high accuracy for defect detection, and high flexibility for online inspection. The proposed algorithm is therefore an effective alternative for automated inspection in smart factories, where demand for reliable, high-quality production is growing. Full article
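The template-matching step described above reduces to a per-pixel comparison between the test target and its self-generated template; a minimal sketch in which plain nested lists stand in for images, and the template values and threshold are hypothetical stand-ins (the AE itself and the FCN's region-wise segmentation are omitted):

```python
def defect_mask(target, template, threshold):
    """Flag pixels whose absolute difference from the template exceeds threshold."""
    return [[abs(t - p) > threshold for t, p in zip(row_t, row_p)]
            for row_t, row_p in zip(target, template)]

template = [[10, 10, 10],
            [10, 10, 10]]   # AE-regenerated template (stand-in values)
target   = [[10, 90, 10],   # one bright defect pixel
            [10, 10, 12]]   # small deviation below threshold
mask = defect_mask(target, template, threshold=20)
# mask -> [[False, True, False], [False, False, False]]
```

Because the template is regenerated from the target itself, the two grids are already aligned, which is the alignment advantage the abstract points out.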

13 pages, 822 KiB  
Article
Clustering Moving Object Trajectories: Integration in CROSS-CPP Analytic Toolbox
by Alberto Blazquez-Herranz, Juan-Ignacio Caballero-Garzon, Albert Zilverberg, Christian Wolff, Alejandro Rodríguez-Gonzalez and Ernestina Menasalvas
Appl. Sci. 2021, 11(8), 3693; https://doi.org/10.3390/app11083693 - 20 Apr 2021
Viewed by 2315
Abstract
Mobile devices equipped with sensors are generating large amounts of geospatial data that, properly analyzed, can be used in future applications. In particular, being able to establish similar trajectories is crucial to analyzing events at common points along the trajectories. CROSS-CPP is a European project whose main aim is to provide tools to store data in a data market and a toolbox to analyze the data. As part of these analytic tools, a set of functionalities has been developed to cluster trajectories. Based on previous work on clustering algorithms, we present in this paper an adaptation of the QuickBundles algorithm to trajectory clustering. Experiments using different distance measures show that QuickBundles outperforms spectral clustering, with the WGS84 geodesic distance providing the best results. Full article
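Clustering trajectories requires a point-to-point distance on geographic coordinates. The paper uses the WGS84 geodesic distance, which accounts for the Earth's ellipsoidal shape; as a simpler illustration, the spherical haversine approximation can be sketched as follows (not the authors' implementation):

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000  # mean Earth radius; WGS84 geodesics use an ellipsoid instead

def haversine(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points in degrees."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

# One degree of latitude is roughly 111 km.
d = haversine(48.0, 11.0, 49.0, 11.0)
```

Summing such distances over corresponding points of two trajectories gives a simple trajectory-to-trajectory distance of the kind a QuickBundles-style algorithm consumes.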

11 pages, 3883 KiB  
Article
Boosted Prediction of Antihypertensive Peptides Using Deep Learning
by Anum Rauf, Aqsa Kiran, Malik Tahir Hassan, Sajid Mahmood, Ghulam Mustafa and Moongu Jeon
Appl. Sci. 2021, 11(5), 2316; https://doi.org/10.3390/app11052316 - 5 Mar 2021
Cited by 10 | Viewed by 2749
Abstract
Heart attack and other heart-related diseases are among the main causes of fatalities in the world. These diseases and some other severe problems like kidney failure and paralysis are mainly caused by hypertension. Since bioactive peptides extracted from naturally existing food substances possess antihypertensive activity, these antihypertensive peptides (AHTPs) can function as prospective replacements for existing pharmacological drugs with no or fewer side effects. Such naturally existing peptides can be identified using in-silico approaches, which have been proven to save huge amounts of time and money in the identification of effective peptides. The proposed methodology is a deep learning-based in-silico approach for the identification of AHTPs. An ensemble method is proposed that combines convolutional neural network (CNN) and support vector machine (SVM) classifiers. Amino acid composition (AAC) and g-gap dipeptide composition (DPC) techniques are used for feature extraction. The proposed methodology has been evaluated on two standard antihypertensive peptide sequence datasets. The model yields 95% accuracy on the benchmark dataset and 88.9% accuracy on the independent dataset. Comparative analysis demonstrates that the proposed method outperforms existing state-of-the-art methods on both the benchmark and independent datasets. Full article
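The amino acid composition (AAC) features mentioned above are simply the relative frequencies of the 20 standard residues in a peptide sequence; a minimal sketch (the g-gap dipeptide composition extends the same idea to pairs of residues separated by g positions):

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues

def aac_features(peptide):
    """Amino acid composition: relative frequency of each of the 20 residues,
    giving a fixed-length 20-dimensional feature vector for any peptide."""
    n = len(peptide)
    return [peptide.count(aa) / n for aa in AMINO_ACIDS]

features = aac_features("GAAG")  # toy peptide: half A, half G
```

A fixed-length vector like this is what lets variable-length peptides be fed to CNN and SVM classifiers.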

16 pages, 2308 KiB  
Article
EDC-Net: Edge Detection Capsule Network for 3D Point Clouds
by Dena Bazazian and M. Eulàlia Parés
Appl. Sci. 2021, 11(4), 1833; https://doi.org/10.3390/app11041833 - 19 Feb 2021
Cited by 19 | Viewed by 4022
Abstract
Edge features in point clouds are prominent due to their capability of describing the abstract shape of a set of points. Point clouds obtained by 3D scanner devices are often immense in size. Edges are essential features in large-scale point clouds since they are capable of describing the shapes in down-sampled point clouds while maintaining the principal information. In this paper, we tackle the challenges of edge detection tasks in 3D point clouds. To this end, we propose a novel technique to detect edges of point clouds based on a capsule network architecture. In this approach, we define the edge detection task of point clouds as a semantic segmentation problem. We built a classifier through the capsules to predict edge and non-edge points in 3D point clouds. We applied a weakly supervised learning approach in order to improve the performance of our proposed method and to enable testing the technique on a wider range of shapes. We provide several quantitative and qualitative experimental results to demonstrate the robustness of our proposed EDC-Net for edge detection in 3D point clouds. We performed a statistical analysis over the ABC and ShapeNet datasets. Our numerical results demonstrate the robust and efficient performance of EDC-Net. Full article

21 pages, 1512 KiB  
Article
Anomalous Event Recognition in Videos Based on Joint Learning of Motion and Appearance with Multiple Ranking Measures
by Shikha Dubey, Abhijeet Boragule, Jeonghwan Gwak and Moongu Jeon
Appl. Sci. 2021, 11(3), 1344; https://doi.org/10.3390/app11031344 - 2 Feb 2021
Cited by 33 | Viewed by 3955
Abstract
Given the scarcity of annotated datasets, learning the context-dependency of anomalous events and mitigating false alarms represent challenges in the task of anomalous activity detection. We propose a framework, Deep-network with Multiple Ranking Measures (DMRMs), which addresses context-dependency using a joint learning technique for motion and appearance features. In DMRMs, spatiotemporal features are extracted from a video using a 3D residual network (ResNet), and deep motion features are extracted by integrating motion flow map information with the 3D ResNet. Afterward, the extracted features are fused for joint learning. This fused representation is then passed through a deep neural network for deep multiple instance learning (DMIL) to learn the context-dependency in a weakly supervised manner using the proposed multiple ranking measures (MRMs). These MRMs consider multiple measures of false alarms, and the network is trained with both normal and anomalous events, thus lowering the false alarm rate. Meanwhile, in the inference phase, the network predicts each frame’s abnormality score along with the localization of moving objects using motion flow maps. A higher abnormality score indicates the presence of an anomalous event. Experimental results on two recent and challenging datasets demonstrate that our proposed framework improves the area under the curve (AUC) score by 6.5% compared to the state-of-the-art method on the UCF-Crime dataset and achieves an AUC of 68.5% on the ShanghaiTech dataset. Full article
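Deep multiple instance learning with a ranking objective is commonly implemented as a hinge loss between the top-scoring segments of an anomalous video and a normal video; a minimal sketch of that standard formulation (the paper's multiple ranking measures add further terms not reproduced here):

```python
def mil_ranking_loss(anomalous_scores, normal_scores, margin=1.0):
    """Hinge ranking loss on the top-scoring segment of each bag: the highest
    score in an anomalous video should exceed the highest score in a normal
    video by at least `margin`. Only video-level labels are needed, which is
    what makes the training weakly supervised."""
    return max(0.0, margin - max(anomalous_scores) + max(normal_scores))

# Per-segment abnormality scores for one anomalous and one normal video.
loss = mil_ranking_loss([0.2, 0.9, 0.4], [0.1, 0.3, 0.2])  # ≈ 0.4
```

When the anomalous bag's peak score already exceeds the normal bag's by the margin, the loss is zero and no gradient is produced.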

14 pages, 7489 KiB  
Article
Detecting Deformation on Pantograph Contact Strip of Railway Vehicle Based on Image Processing and Deep Learning
by Kyung-Min Na, Kiwon Lee, Seung-Kwon Shin and Hyungchul Kim
Appl. Sci. 2020, 10(23), 8509; https://doi.org/10.3390/app10238509 - 28 Nov 2020
Cited by 15 | Viewed by 4352
Abstract
An electric railway vehicle is supplied with electricity by an OCL (Overhead Contact Line) through the contact strip of its pantograph. This transmitted electricity is then used to power the electrical equipment of the railway vehicle. This contact strip wears out due to contact with the OCL. In particular, deformations due to chipping and material loss occur because of friction with the fittings on the OCL. These deformations on the contact strip affect its power transmission quality because of contact loss with the OCL. However, it is difficult to monitor the contact strip during operation and judge its condition in order to implement accident prevention measures. Thus, in this study, we developed a contact strip monitoring method based on image processing for inspection. The proposed method measures the deformation in the contact strip based on an algorithm that determines the wear on the deformed contact strip using deep learning and image processing. The image of the contact strip is acquired by installing a camera and laser to capture the pantograph as it passes the setup. The proposed algorithm is able to determine the wear size by extracting the edges of the laser line using deep learning and estimating the fitted line of the deformations based on the least squares method. Full article
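The least squares line fitting used to estimate the deformation profile can be sketched with the closed-form normal equations for y = a + b·x (a generic illustration, not the authors' code; the (x, y) points stand in for edge pixels extracted from the laser line):

```python
def fit_line(points):
    """Least-squares fit of y = a + b*x through a list of (x, y) points."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
    a = (sy - b * sx) / n                          # intercept
    return a, b

# Edge pixels lying exactly on y = 1 + 2x.
a, b = fit_line([(0, 1.0), (1, 3.0), (2, 5.0)])
```

Deviations of the detected edge pixels from this fitted line then give a measure of the wear depth.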

25 pages, 6683 KiB  
Article
General Moving Object Localization from a Single Flying Camera
by Kin-Choong Yow and Insu Kim
Appl. Sci. 2020, 10(19), 6945; https://doi.org/10.3390/app10196945 - 4 Oct 2020
Cited by 8 | Viewed by 2957
Abstract
Object localization is an important task in the visual surveillance of scenes, and it has important applications in locating personnel and/or equipment in large open spaces such as a farm or a mine. Traditionally, object localization can be performed using the technique of stereo vision: using two fixed cameras for a moving object, or using a single moving camera for a stationary object. This research addresses the problem of determining the location of a moving object using only a single moving camera, and it does not make use of any prior information on the type of object nor the size of the object. Our technique makes use of a single camera mounted on a quadrotor drone, which flies in a specific pattern relative to the object in order to remove the depth ambiguity associated with their relative motion. In our previous work, we showed that with three images, we can recover the location of an object moving parallel to the direction of motion of the camera. In this research, we find that with four images, we can recover the location of an object moving linearly in an arbitrary direction. We evaluated our algorithm on over 70 image sequences of objects moving in various directions, and the results showed a much smaller depth error rate (less than 8.0% typically) than other state-of-the-art algorithms. Full article
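The underlying geometric relation is the classic two-view one: with a known baseline between two camera positions, depth follows from image disparity as Z = f·B/d. This only holds for a stationary point; the paper's contribution, recovering a moving object's location from three or four images, builds on top of it and is not reproduced here. A minimal sketch:

```python
def depth_from_motion(focal_px, baseline_m, disparity_px):
    """Depth of a static point seen from two camera positions separated by a
    known baseline, using the standard stereo relation Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

# Hypothetical numbers: 800 px focal length, 0.5 m flight segment, 20 px shift.
z = depth_from_motion(focal_px=800, baseline_m=0.5, disparity_px=20)  # 20.0 m
```

For a moving object, the observed disparity mixes camera motion with object motion, which is exactly the depth ambiguity the drone's flight pattern is designed to remove.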

19 pages, 5093 KiB  
Article
Pothole Classification Model Using Edge Detection in Road Image
by Ji-Won Baek and Kyungyong Chung
Appl. Sci. 2020, 10(19), 6662; https://doi.org/10.3390/app10196662 - 23 Sep 2020
Cited by 44 | Viewed by 7360
Abstract
Since images related to road damage include objects such as potholes, cracks, shadows, and lanes, it is difficult to detect a specific object. In this paper, we propose a pothole classification model using edge detection in road images. The proposed method converts RGB (red, green, and blue) image data, including potholes and other objects, to gray-scale to reduce the amount of computation. It detects all objects except potholes using an object detection algorithm. Each detected object is removed, and a pixel value of 255 is assigned to treat it as background. In addition, to extract the characteristics of a pothole, the contour of the pothole is extracted through edge detection. Finally, potholes are detected and classified using the YOLO (you only look once) algorithm. The performance evaluation assesses the distortion and restoration rates of the image, the validity of the model, and the accuracy of the classification. The results show that the mean square error (MSE) of the distortion and restoration rates of the proposed method lies between 0.2 and 0.44, the peak signal-to-noise ratio (PSNR) is 50 dB or higher, and the structural similarity index map (SSIM) is between 0.71 and 0.82. In addition, the pothole classification achieves an area under the curve (AUC) of 0.9. Full article
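The preprocessing described above (gray-scale conversion, then painting detected non-pothole objects with the background value 255) can be sketched on a tiny pixel grid; the luminance weights are the common ITU-R BT.601 ones, an assumption since the abstract does not state the conversion formula:

```python
def to_grayscale(rgb_image):
    """Luminance (ITU-R BT.601) conversion of an RGB pixel grid."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for r, g, b in row]
            for row in rgb_image]

def mask_objects(gray, boxes):
    """Set detected non-pothole regions (x0, y0, x1, y1 boxes) to background 255."""
    out = [row[:] for row in gray]
    for x0, y0, x1, y1 in boxes:
        for y in range(y0, y1):
            for x in range(x0, x1):
                out[y][x] = 255
    return out

img = [[(100, 100, 100), (0, 0, 0)],
       [(50, 60, 70), (200, 200, 200)]]
gray = to_grayscale(img)                     # [[100, 0], [58, 200]]
masked = mask_objects(gray, [(1, 0, 2, 1)])  # top-right pixel -> 255
```

Everything that survives masking is then passed to edge detection and the YOLO classifier.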

15 pages, 1676 KiB  
Article
DeNERT-KG: Named Entity and Relation Extraction Model Using DQN, Knowledge Graph, and BERT
by SungMin Yang, SoYeop Yoo and OkRan Jeong
Appl. Sci. 2020, 10(18), 6429; https://doi.org/10.3390/app10186429 - 15 Sep 2020
Cited by 23 | Viewed by 6720
Abstract
Along with studies on artificial intelligence technology, research is also being carried out actively in the field of natural language processing to understand and process people’s language, in other words, natural language. For computers to learn on their own, the skill of understanding natural language is very important. There are a wide variety of tasks in the field of natural language processing, but we focus on the named entity recognition and relation extraction tasks, which are considered the most important for understanding sentences. We propose DeNERT-KG, a model that can extract subjects, objects, and relationships, to grasp the meaning inherent in a sentence. Based on the BERT language model and a Deep Q-Network, the named entity recognition (NER) model for extracting subjects and objects is established, and a knowledge graph is applied for relation extraction. Using the DeNERT-KG model, it is possible to extract the subject, type of subject, object, type of object, and relationship from a sentence, and we verify this model through experiments. Full article

15 pages, 7903 KiB  
Article
Image-Based Gimbal Control in a Drone for Centering Photovoltaic Modules in a Thermal Image
by Hyun-Cheol Park, Sang-Woong Lee and Heon Jeong
Appl. Sci. 2020, 10(13), 4646; https://doi.org/10.3390/app10134646 - 5 Jul 2020
Cited by 5 | Viewed by 3868
Abstract
Recently, there has been much research applying drones with thermal cameras to detect deterioration in photovoltaic (PV) modules. A thermal camera can measure temperatures on the surface of PV modules and find deteriorated areas. However, a thermal camera generally has a lower resolution than a visible camera because of cost constraints. Due to the different resolutions of the visible and thermal cameras, there are often invalid frames from the thermal camera. In this paper, we describe a gimbal controller with a real-time image processing algorithm that controls the angle of the camera to position the region of interest (ROI) at the center of the target PV modules to solve this problem. We derived the horizontal angle and vertical position of the ROI in visible images using image processing algorithms such as the Hough transform. These values are converted into a PID control signal for controlling the gimbal. This process makes the thermal camera capture the effective area of the target PV modules. Finally, experimental results showed that the photovoltaic module’s control area was properly located at the center of the thermal image. Full article
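Converting the ROI offset into a gimbal command is a standard discrete PID loop; a minimal sketch (the gains and the pixel-offset error signal are hypothetical, not taken from the paper):

```python
class PID:
    """Discrete PID controller producing a gimbal-angle correction from the
    pixel offset between the ROI centre and the image centre."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error, dt):
        self.integral += error * dt                     # accumulated error
        derivative = (error - self.prev_error) / dt     # rate of change
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=0.5, ki=0.1, kd=0.05)
u = pid.step(error=40.0, dt=0.1)  # ROI centre 40 px off-centre
```

Running `step` once per visible-camera frame drives the offset toward zero, keeping the target PV modules centred in the lower-resolution thermal image.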

16 pages, 21441 KiB  
Article
Automatic Detection System of Deteriorated PV Modules Using Drone with Thermal Camera
by Chris Henry, Sahadev Poudel, Sang-Woong Lee and Heon Jeong
Appl. Sci. 2020, 10(11), 3802; https://doi.org/10.3390/app10113802 - 29 May 2020
Cited by 99 | Viewed by 12127
Abstract
In the last few decades, photovoltaic (PV) power station installations have surged across the globe. The output efficiency of these stations deteriorates with the passage of time due to multiple factors such as hotspots, shaded cell or module, short-circuited bypass diodes, etc. Traditionally, technicians inspect each solar panel in a PV power station using infrared thermography to ensure consistent output efficiency. With the advancement of drone technology, researchers have proposed to use drones equipped with thermal cameras for PV power station monitoring. However, most of these drone-based approaches require technicians to manually control the drone which in itself is a cumbersome task in the case of large PV power stations. To tackle this issue, this study presents an autonomous drone-based solution. The drone is mounted with both RGB (Red, Green, Blue) and thermal cameras. The proposed system can automatically detect and estimate the exact location of faulty PV modules among hundreds or thousands of PV modules in the power station. In addition, we propose an automatic drone flight path planning algorithm which eliminates the requirement of manual drone control. The system also utilizes an image processing algorithm to process RGB and thermal images for fault detection. The system was evaluated on a 1-MW solar power plant located in Suncheon, South Korea. The experimental results demonstrate the effectiveness of our solution. Full article
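Automatic flight path planning over a rectangular array of PV modules is typically a boustrophedon ("lawn-mower") sweep; a minimal sketch over grid indices (the paper's planner works on actual power-station coordinates and is not reproduced here):

```python
def serpentine_path(rows, cols):
    """Boustrophedon sweep over a grid of PV module positions, reversing
    direction on alternate rows so the drone covers every module without
    retracing its path."""
    path = []
    for r in range(rows):
        cs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        path.extend((r, c) for c in cs)
    return path

waypoints = serpentine_path(2, 3)
# [(0, 0), (0, 1), (0, 2), (1, 2), (1, 1), (1, 0)]
```

Mapping each grid index to a GPS waypoint then yields a flight plan that needs no manual drone control.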

17 pages, 4036 KiB  
Article
Impacts of Weather on Short-Term Metro Passenger Flow Forecasting Using a Deep LSTM Neural Network
by Lijuan Liu, Rung-Ching Chen and Shunzhi Zhu
Appl. Sci. 2020, 10(8), 2962; https://doi.org/10.3390/app10082962 - 25 Apr 2020
Cited by 24 | Viewed by 3707
Abstract
Metro systems play a key role in meeting urban transport demands in large cities. The close relationship between historical weather conditions and the corresponding passenger flow has been widely analyzed by researchers. However, few studies have explored how to use historical weather data to make passenger flow forecasting more accurate. To this end, an hourly metro passenger flow forecasting model using a deep long short-term memory neural network (LSTM_NN) was developed. The optimized traditional input variables, including different temporal data and historical passenger flow data, were combined with weather variables for data modeling. A comprehensive analysis of the weather impacts on short-term metro passenger flow forecasting is discussed in this paper. The experimental results confirm that weather variables have a significant effect on passenger flow forecasting. Interestingly, the previous hour's temperature and wind speed are the two most important weather variables for obtaining more accurate forecasts on rainy days at Taipei Main Station, a primary interchange station in Taipei. Compared to four widely used algorithms, the deep LSTM_NN is an extremely powerful method, capable of making more accurate forecasts when suitable weather variables are included. Full article
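Combining historical passenger flow with weather variables amounts to building lagged feature vectors for supervised training; a minimal sketch of that data preparation step (the variable set and lag depth are illustrative, not the paper's configuration):

```python
def make_supervised(flows, weather, lags):
    """Turn an hourly passenger-flow series plus aligned weather readings into
    (features, target) pairs: the previous `lags` flow values and the previous
    hour's weather predict the current hour's flow."""
    samples = []
    for t in range(lags, len(flows)):
        x = flows[t - lags:t] + list(weather[t - 1])
        samples.append((x, flows[t]))
    return samples

flows = [100, 120, 90, 140, 160]                    # hourly passenger counts
weather = [(25.0, 3.1), (26.0, 2.8), (24.5, 4.0),   # (temperature, wind speed)
           (23.0, 5.2), (22.5, 1.0)]
data = make_supervised(flows, weather, lags=2)
```

Each feature vector is then one timestep of the sequence fed to the LSTM; the same construction works for any of the four baseline algorithms compared in the paper.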
(This article belongs to the Special Issue Computing and Artificial Intelligence for Visual Data Analysis)

18 pages, 11937 KiB  
Article
Body-Part-Aware and Multitask-Aware Single-Image-Based Action Recognition
by Bhishan Bhandari, Geonu Lee and Jungchan Cho
Appl. Sci. 2020, 10(4), 1531; https://doi.org/10.3390/app10041531 - 24 Feb 2020
Cited by 11 | Viewed by 4081
Abstract
Action recognition is an application that, ideally, requires real-time results. We focus on single-image-based action recognition instead of video-based recognition because of its higher speed and lower computational cost. However, a single image contains limited information, which makes single-image-based action recognition a difficult problem. To obtain an accurate representation of action classes, we propose three feature-stream-based shallow sub-networks (image-based, attention-image-based, and part-image-based feature networks) on top of a deep pose estimation network in a multitasking manner. Moreover, we design a multitask-aware loss function so that the proposed method can be adaptively trained with heterogeneous datasets that include only human pose annotations or only action labels (instead of both), which makes it easier to apply the proposed approach to new behavioral-analysis data in intelligent systems. In extensive experiments, we show that these streams carry complementary information and, hence, that the fused representation is robust in distinguishing diverse fine-grained action classes. Although the human pose information was trained using heterogeneous datasets in a multitasking manner, the method achieved 91.91% mean average precision on the Stanford 40 Actions Dataset. Moreover, we demonstrate that the proposed method can be flexibly applied to the multi-label action recognition problem on the V-COCO Dataset.
(This article belongs to the Special Issue Computing and Artificial Intelligence for Visual Data Analysis)
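The multitask-aware training described above can be sketched as a loss whose action and pose terms are included only when the corresponding annotations exist for a sample, so heterogeneous datasets can be mixed in one training loop. The loss forms (cross-entropy for actions, L2 for keypoints) and the weights below are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def multitask_loss(action_logits, pose_pred, action_label=None,
                   pose_target=None, w_action=1.0, w_pose=1.0):
    """Illustrative multitask-aware loss: a sample contributes only
    the terms whose annotations it carries (action label, pose
    keypoints, or both). Missing annotations are passed as None."""
    loss = 0.0
    if action_label is not None:
        # Cross-entropy over softmax of the action logits
        z = action_logits - action_logits.max()          # numerical stability
        log_probs = z - np.log(np.exp(z).sum())
        loss += -w_action * log_probs[action_label]
    if pose_target is not None:
        # L2 regression on keypoint coordinates
        loss += w_pose * np.mean((pose_pred - pose_target) ** 2)
    return loss

# Action-only sample (e.g. from an action dataset) vs. pose-only sample
logits = np.array([1.0, 2.0, 0.5])
pose = np.zeros((17, 2))                                 # 17 keypoints, (x, y)
l_action = multitask_loss(logits, pose, action_label=1)
l_pose = multitask_loss(logits, pose, pose_target=np.ones((17, 2)))
```

The masking-by-`None` pattern is what lets batches drawn from pose-only and action-only datasets share one backward pass.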

20 pages, 5118 KiB  
Article
Multi-View Interactive Visual Exploration of Individual Association for Public Transportation Passengers
by Di Lv, Yong Zhang, Jiongbin Lin, Peiyuan Wan and Yongli Hu
Appl. Sci. 2020, 10(2), 628; https://doi.org/10.3390/app10020628 - 15 Jan 2020
Viewed by 2327
Abstract
More and more people in megacities are choosing to travel by public transportation because of its convenience and punctuality. It is widely acknowledged that there may be potential associations between passengers: they may travel together for work or shopping, or exhibit abnormal behaviors such as stealing or begging. Thus, analyzing associations between passengers is important for transport management departments, as it helps them make operational plans, provide better services to passengers, and ensure public transport safety. To quickly explore the association between passengers, we propose a multi-view interactive exploration method that provides five interactive views: a passenger 3D travel trajectory view, a passenger travel time pixel matrix view, a passenger origin-destination chord view, a passenger travel vehicle bubble chart view, and a passenger 2D travel trajectory view. It can explore associated passengers from multiple aspects, such as travel trajectory, travel area, travel time, and the vehicles used for travel. Using Beijing public transportation data, the experimental results verify that our method can effectively explore the association between passengers and infer their relationships.
(This article belongs to the Special Issue Computing and Artificial Intelligence for Visual Data Analysis)

Review


18 pages, 15955 KiB  
Review
A Review of Modelling and Simulation Methods for Flashover Prediction in Confined Space Fires
by Daniel Cortés, David Gil, Jorge Azorín, Florian Vandecasteele and Steven Verstockt
Appl. Sci. 2020, 10(16), 5609; https://doi.org/10.3390/app10165609 - 13 Aug 2020
Cited by 12 | Viewed by 5475
Abstract
Confined space fires are common emergencies in our society. Enclosure size, ventilation, and the type and quantity of fuel involved are factors that determine how a fire evolves in these situations. In some cases, favourable conditions may give rise to a flashover phenomenon. The difficulty fire services face in handling this complicated emergency can have fatal consequences for their staff. Therefore, there is a huge demand for new methods and technologies to tackle this life-threatening emergency. Modelling and simulation techniques have been adopted for research because of the difficulty of assembling a database of real cases of this phenomenon. In this paper, the literature on the modelling and simulation of enclosure fires with respect to the flashover phenomenon is reviewed. Furthermore, the related literature on comparing images from thermal cameras with computed images is reviewed. Finally, the suitability of artificial intelligence (AI) techniques for flashover prediction in enclosed spaces is also surveyed.
(This article belongs to the Special Issue Computing and Artificial Intelligence for Visual Data Analysis)
