Deep Neural Networks and Their Applications

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: closed (31 March 2020) | Viewed by 119302

Special Issue Editor


Guest Editor
Division of Electronics Engineering and Intelligent Robot Research Center, Chonbuk National University, Jeonju 567-54896, Republic of Korea
Interests: neural networks; deep learning; memristors; neuromorphics; intelligent robotics

Special Issue Information

Dear Colleagues,

By virtue of the success of recent deep neural network technologies, Artificial Intelligence has received great attention from almost all fields of academia and industry. Though the current success of Artificial Intelligence arose with the software version of neural networks, it is gradually extending to hardware implementations and human–computer interfaces. This Special Issue aims to provide a platform for researchers working on both the software and hardware sides of Artificial Intelligence to share cutting-edge developments in the field. The scope of this Special Issue covers deep learning, neuromorphics, and brain–computer interfaces.

We solicit original research papers as well as review articles, on topics including but not limited to the following keywords:

  • Artificial Intelligence
  • Brain–computer interface (BCI)
  • Brain signal processing for BCI
  • Deep learning (AI) algorithm
  • Deep learning (AI) architecture
  • Deep learning applications
  • Intelligent bioinformatics
  • Intelligent robots
  • Intelligent systems
  • Machine learning
  • Memristors
  • Neural networks
  • Neural rehabilitation engineering
  • Neuromorphics
  • Parallel processing
  • Web intelligence applications and search

Prof. Hyongsuk Kim
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for the submission of manuscripts are available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.


Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (19 papers)


Research

19 pages, 5400 KiB  
Article
A Robust Structured Tracker Using Local Deep Features
by Mohammadreza Javanmardi, Amir Hossein Farzaneh and Xiaojun Qi
Electronics 2020, 9(5), 846; https://doi.org/10.3390/electronics9050846 - 20 May 2020
Cited by 2 | Viewed by 2251
Abstract
Deep features extracted from convolutional neural networks have recently been utilized in visual tracking to obtain a generic and semantic representation of target candidates. In this paper, we propose a robust structured tracker using local deep features (STLDF). This tracker exploits the deep features of local patches inside target candidates and sparsely represents them by a set of templates in the particle filter framework. The proposed STLDF utilizes a new optimization model, which employs a group-sparsity regularization term to adopt local and spatial information of the target candidates and attain the spatial layout structure among them. To solve the optimization model, we propose an efficient and fast numerical algorithm that consists of two subproblems with closed-form solutions. Evaluations in terms of success and precision on benchmarks of challenging image sequences (e.g., OTB50 and OTB100) demonstrate the superior performance of the STLDF against several state-of-the-art trackers. Full article
(This article belongs to the Special Issue Deep Neural Networks and Their Applications)
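A group-sparsity regularization term like the one in this abstract is typically handled with a proximal step that shrinks whole groups of patch coefficients jointly (block soft-thresholding), so entire local patches drop out of the representation together. A minimal sketch of that operator, assuming an ℓ2,1-style penalty; the function name and solver details are illustrative, not taken from the paper:

```python
import math

def group_soft_threshold(x, groups, lam):
    """Proximal operator of a group-sparsity (L2,1) penalty.

    Each group g of coefficient indices is shrunk toward zero jointly:
    groups whose L2 norm falls below lam are zeroed out entirely.
    """
    out = list(x)
    for g in groups:
        norm = math.sqrt(sum(x[i] ** 2 for i in g))
        scale = max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
        for i in g:
            out[i] = x[i] * scale
    return out
```

With `lam = 1.0`, a strong group like `[3, 4]` (norm 5) is merely shrunk, while a weak group like `[0.1, 0.1]` (norm ≈ 0.14) is zeroed, which is exactly the behavior that lets whole patches be discarded at once.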

12 pages, 1970 KiB  
Article
Attention-LSTM-Attention Model for Speech Emotion Recognition and Analysis of IEMOCAP Database
by Yeonguk Yu and Yoon-Joong Kim
Electronics 2020, 9(5), 713; https://doi.org/10.3390/electronics9050713 - 26 Apr 2020
Cited by 71 | Viewed by 7663
Abstract
We propose a speech-emotion recognition (SER) model with an “attention-long Long Short-Term Memory (LSTM)-attention” component to combine IS09, a commonly used feature for SER, and mel spectrogram, and we analyze the reliability problem of the interactive emotional dyadic motion capture (IEMOCAP) database. The attention mechanism of the model focuses on emotion-related elements of the IS09 and mel spectrogram feature and the emotion-related duration from the time of the feature. Thus, the model extracts emotion information from a given speech signal. The proposed model for the baseline study achieved a weighted accuracy (WA) of 68% for the improvised dataset of IEMOCAP. However, the WA of the proposed model of the main study and modified models could not achieve more than 68% in the improvised dataset. This is because of the reliability limit of the IEMOCAP dataset. A more reliable dataset is required for a more accurate evaluation of the model’s performance. Therefore, in this study, we reconstructed a more reliable dataset based on the labeling results provided by IEMOCAP. The experimental results of the model for the more reliable dataset confirmed a WA of 73%. Full article
(This article belongs to the Special Issue Deep Neural Networks and Their Applications)
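The attention mechanism described here can be illustrated with a plain softmax weighting over per-timestep LSTM outputs: scores indicate how emotion-relevant each timestep is, and the pooled vector is the score-weighted sum of hidden states. A hedged sketch assuming scalar relevance scores are already available (the paper's scoring network is not specified here):

```python
import math

def attention_pool(hidden_states, scores):
    """Softmax-weighted pooling of a sequence of hidden-state vectors.

    hidden_states: list of T vectors (lists of floats).
    scores: T scalar relevance scores, one per timestep.
    Returns the attention-weighted sum of the hidden states.
    """
    m = max(scores)                             # shift for numerical stability
    w = [math.exp(s - m) for s in scores]
    z = sum(w)
    w = [x / z for x in w]                      # softmax weights
    dim = len(hidden_states[0])
    return [sum(w[t] * hidden_states[t][d] for t in range(len(w)))
            for d in range(dim)]
```

Equal scores reduce to mean pooling, while a strongly dominant score makes the output track that single timestep, which is how attention emphasizes emotion-related durations.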

18 pages, 1790 KiB  
Article
SMO-DNN: Spider Monkey Optimization and Deep Neural Network Hybrid Classifier Model for Intrusion Detection
by Neelu Khare, Preethi Devan, Chiranji Lal Chowdhary, Sweta Bhattacharya, Geeta Singh, Saurabh Singh and Byungun Yoon
Electronics 2020, 9(4), 692; https://doi.org/10.3390/electronics9040692 - 24 Apr 2020
Cited by 123 | Viewed by 9765
Abstract
The enormous growth in internet usage has led to the development of different malicious software posing serious threats to computer security. The various computational activities carried out over the network have a high chance of being tampered with and manipulated, which necessitates efficient intrusion detection systems. Network attacks are also dynamic in nature, which increases the importance of developing appropriate models for classification and prediction. Machine learning (ML) and deep learning algorithms have been prevalent choices in the analysis of intrusion detection system (IDS) datasets. The issues pertaining to the quality and quantity of data and the handling of high-dimensional data are managed by the use of nature-inspired algorithms. The present study uses the NSL-KDD and KDD Cup 99 datasets collected from the Kaggle repository. The datasets were cleansed using the min-max normalization technique and passed through the 1-N encoding method to achieve homogeneity. A spider monkey optimization (SMO) algorithm was used for dimensionality reduction, and the reduced dataset was fed into a deep neural network (DNN). The SMO-based DNN model generated classification results with 99.4% and 92% accuracy, 99.5% and 92.7% precision, 99.5% and 92.8% recall, and 99.6% and 92.7% F1-score, utilizing minimal training time. The model was further compared with principal component analysis (PCA)-based DNN and classical DNN models, and the results justified the advantage of the proposed model over the other approaches. Full article
(This article belongs to the Special Issue Deep Neural Networks and Their Applications)
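The pre-processing pipeline named in the abstract, min-max normalization of numeric columns plus 1-N (one-hot) encoding of categorical fields such as protocol type, can be sketched as follows. Function names are illustrative, not from the paper:

```python
def min_max_normalize(col):
    """Rescale a numeric column to [0, 1]; constant columns map to 0."""
    lo, hi = min(col), max(col)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in col]

def one_hot(values):
    """1-N encode a categorical column: one indicator column per category."""
    cats = sorted(set(values))
    index = {c: i for i, c in enumerate(cats)}
    return [[1 if index[v] == i else 0 for i in range(len(cats))]
            for v in values]
```

Applied to an IDS dataset, numeric features (byte counts, durations) go through `min_max_normalize`, while symbolic features (`"tcp"`, `"udp"`, ...) go through `one_hot`, yielding a homogeneous all-numeric matrix for the DNN.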

14 pages, 3989 KiB  
Article
A 3D Shape Recognition Method Using Hybrid Deep Learning Network CNN–SVM
by Long Hoang, Suk-Hwan Lee and Ki-Ryong Kwon
Electronics 2020, 9(4), 649; https://doi.org/10.3390/electronics9040649 - 15 Apr 2020
Cited by 23 | Viewed by 4744
Abstract
3D shape recognition has become necessary due to the popularity of 3D data resources. This paper introduces a new method, the hybrid deep learning network convolution neural network–support vector machine (CNN–SVM), for 3D recognition. The vertices of the 3D mesh are interpolated and converted into point clouds; these point clouds are rotated for 3D data augmentation. We obtain and store the 2D projection of this augmented 3D data in a 32 × 32 × 12 matrix, the input data of the CNN–SVM. An eight-layer CNN is used for feature extraction, and an SVM is then applied to classify the extracted features. Two large 3D model datasets, ModelNet40 and ModelNet10, are used for model validation. Based on our numerical experimental results, CNN–SVM is more accurate and efficient than other methods. The proposed method is 13.48% more accurate than the PointNet method on ModelNet10 and 8.5% more precise than 3D ShapeNets on ModelNet40. The proposed method works both with 3D models in augmented/virtual reality systems and with 3D point clouds, the output of the LIDAR sensors in autonomous cars. Full article
(This article belongs to the Special Issue Deep Neural Networks and Their Applications)
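The rotate-then-project step described above, turning a (rotated) point cloud into a 2D occupancy grid, might look like this in outline. The paper's 32 × 32 × 12 input presumably stacks 12 such projections from different rotations; this sketch shows a single projection, and all names are illustrative:

```python
import math

def rotate_z(points, angle):
    """Rotate (x, y, z) points about the z-axis for data augmentation."""
    c, s = math.cos(angle), math.sin(angle)
    return [(c * x - s * y, s * x + c * y, z) for x, y, z in points]

def project_to_grid(points, size=32):
    """Orthographic projection onto the x-y plane as a size x size occupancy grid."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    lo_x, hi_x = min(xs), max(xs)
    lo_y, hi_y = min(ys), max(ys)
    grid = [[0] * size for _ in range(size)]
    for x, y, _ in points:
        i = min(size - 1, int((x - lo_x) / (hi_x - lo_x + 1e-9) * size))
        j = min(size - 1, int((y - lo_y) / (hi_y - lo_y + 1e-9) * size))
        grid[j][i] = 1
    return grid
```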

21 pages, 5500 KiB  
Article
Deep Multi-Modal Metric Learning with Multi-Scale Correlation for Image-Text Retrieval
by Yan Hua, Yingyun Yang and Jianhe Du
Electronics 2020, 9(3), 466; https://doi.org/10.3390/electronics9030466 - 10 Mar 2020
Cited by 3 | Viewed by 3671
Abstract
Multi-modal retrieval is challenging due to the heterogeneous gap and the complex semantic relationship between different modal data. Typical approaches map different modalities into a common subspace with a one-to-one correspondence or similarity/dissimilarity relationship of inter-modal data, in which the distances of heterogeneous data can be compared directly; thus, inter-modal retrieval can be achieved by nearest-neighbor search. However, most of them ignore intra-modal relations and the complicated semantics between multi-modal data. In this paper, we propose a deep multi-modal metric learning method with multi-scale semantic correlation to deal with retrieval tasks between image and text modalities. A deep model with two branches is designed to nonlinearly map raw heterogeneous data into comparable representations. In contrast to binary similarity, we formulate the semantic relationship with multi-scale similarity to learn fine-grained multi-modal distances. Inter-modal and intra-modal correlations constructed on multi-scale semantic similarity are incorporated to train the deep model in an end-to-end way. Experiments validate the effectiveness of our proposed method on multi-modal retrieval tasks, and our method outperforms state-of-the-art methods on the NUS-WIDE, MIR Flickr, and Wikipedia datasets. Full article
(This article belongs to the Special Issue Deep Neural Networks and Their Applications)
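The contrast the abstract draws between binary similarity and multi-scale similarity can be illustrated with a graded label-overlap measure: instead of "same/different", two multi-labeled samples get a similarity proportional to how many semantic labels they share. The Jaccard form below is an assumption for illustration, not necessarily the paper's exact definition:

```python
def multiscale_similarity(labels_a, labels_b):
    """Graded semantic similarity from label-set overlap (Jaccard index).

    Returns 1.0 for identical label sets, 0.0 for disjoint ones, and
    intermediate values for partial overlap, unlike a binary match.
    """
    a, b = set(labels_a), set(labels_b)
    return len(a & b) / len(a | b) if a | b else 0.0
```

An image tagged {sky, beach} and a text tagged {beach, people} would get similarity 1/3 rather than being forced into a hard "dissimilar" bucket, which is the fine-grained distance information the metric learning exploits.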

15 pages, 1039 KiB  
Article
Structure Fusion Based on Graph Convolutional Networks for Node Classification in Citation Networks
by Guangfeng Lin, Jing Wang, Kaiyang Liao, Fan Zhao and Wanjun Chen
Electronics 2020, 9(3), 432; https://doi.org/10.3390/electronics9030432 - 4 Mar 2020
Cited by 8 | Viewed by 3651
Abstract
Suffering from multi-view data diversity and complexity, most existing graph convolutional networks focus on architecture construction or salient graph structure preservation for node classification in citation networks, and usually ignore capturing the complete graph structure of nodes for enhancing classification performance. To mine a more complete distribution structure from the multi-graph structures of multi-view data, considering both their specificity and their commonality, we propose structure fusion based on graph convolutional networks (SF-GCN) for improving the performance of node classification in a semi-supervised way. SF-GCN can not only exploit the special characteristic of each view datum by spectral embedding preserving multi-graph structures, but also explore the common style of multi-view data by the distance metric between multi-graph structures. Assuming a linear relationship between multi-graph structures, we construct the optimization function of the structure fusion model by balancing the specificity loss and the commonality loss. By solving this function, we simultaneously obtain the fusion spectral embedding from the multi-view data and the fusion structure, which serves as the adjacency matrix input to graph convolutional networks for semi-supervised node classification. Furthermore, we generalize structure fusion to structure diffusion propagation and present structure propagation fusion based on graph convolutional networks (SPF-GCN) for utilizing these structure interactions. Experiments demonstrate that SPF-GCN outperforms state-of-the-art methods on three challenging citation network datasets: Cora, Citeseer, and Pubmed. Full article
(This article belongs to the Special Issue Deep Neural Networks and Their Applications)
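The fusion of multiple view-specific graph structures into one adjacency matrix for GCN input can be sketched as a weighted sum of the per-view adjacency matrices followed by the standard GCN renormalization D^{-1/2}(A + I)D^{-1/2}. The fixed fusion weights here are an assumption for illustration; the paper instead derives the fusion by balancing specificity and commonality losses:

```python
import math

def fuse_and_normalize(adjs, weights):
    """Weighted fusion of view-specific adjacency matrices, then the
    usual GCN renormalization with self-loops: D^{-1/2}(A + I)D^{-1/2}."""
    n = len(adjs[0])
    A = [[sum(w * adj[i][j] for w, adj in zip(weights, adjs))
          for j in range(n)] for i in range(n)]
    for i in range(n):
        A[i][i] += 1.0                      # add self-loops (the +I term)
    deg = [sum(row) for row in A]           # degrees of the fused graph
    return [[A[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)]
            for i in range(n)]
```

The returned matrix is what a GCN layer would multiply node features by, so fusing structures upstream changes every propagation step downstream.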

15 pages, 2417 KiB  
Article
Accurate and Consistent Image-to-Image Conditional Adversarial Network
by Naeem Ul Islam, Sungmin Lee and Jaebyung Park
Electronics 2020, 9(3), 395; https://doi.org/10.3390/electronics9030395 - 27 Feb 2020
Cited by 8 | Viewed by 4067
Abstract
Image-to-image translation based on deep learning has attracted interest in the robotics and vision community because of its potential impact on terrain analysis and image representation, interpretation, modification, and enhancement. Currently, the most successful approach for generating a translated image is a conditional generative adversarial network (cGAN) for training an autoencoder with skip connections. Despite its impressive performance, it has low accuracy and a lack of consistency; further, its training is imbalanced. This paper proposes a balanced training strategy for image-to-image translation, resulting in an accurate and consistent network. The proposed approach uses two generators and a single discriminator. The generators translate images from one domain to another. The discriminator takes the input of three different configurations and guides both the generators to generate realistic images in their corresponding domains while ensuring high accuracy and consistency. Experiments are conducted on different datasets. In particular, the proposed approach outperforms the cGAN in realistic image translation in terms of accuracy and consistency in training. Full article
(This article belongs to the Special Issue Deep Neural Networks and Their Applications)

15 pages, 5869 KiB  
Article
SEEK: A Framework of Superpixel Learning with CNN Features for Unsupervised Segmentation
by Talha Ilyas, Abbas Khan, Muhammad Umraiz and Hyongsuk Kim
Electronics 2020, 9(3), 383; https://doi.org/10.3390/electronics9030383 - 25 Feb 2020
Cited by 22 | Viewed by 5833
Abstract
Supervised semantic segmentation algorithms have been a hot area of exploration recently, but attention is now being drawn towards completely unsupervised semantic segmentation. In an unsupervised framework, neither the targets nor the ground-truth labels are provided to the network; the network is thus unaware of any class instance or object present in the given data sample. We propose a convolutional neural network (CNN)-based architecture for unsupervised segmentation. We use the squeeze-and-excitation network, due to its peculiar ability to capture feature interdependencies, which increases the network’s sensitivity to the more salient features. We iteratively enable our CNN architecture to learn the target generated by a graph-based segmentation method, while simultaneously preventing the network from falling into the pit of over-segmentation. Along with this CNN architecture, image enhancement and refinement techniques are exploited to improve the segmentation results. Our proposed algorithm produces improved segmented regions that approach human-level segmentation results. In addition, we evaluate our approach using different metrics to quantify its outperformance. Full article
(This article belongs to the Special Issue Deep Neural Networks and Their Applications)
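One common way to couple a superpixel/graph-based target with a pixelwise CNN prediction, as in frameworks of this kind, is majority voting: every pixel inside a superpixel is reassigned the most frequent label the network predicted there, which suppresses over-segmentation noise. A sketch of that refinement step (not the authors' exact procedure):

```python
from collections import Counter

def refine_with_superpixels(pred, superpixels):
    """Majority-vote label refinement.

    pred: flat list of per-pixel predicted labels.
    superpixels: list of pixel-index lists, one per superpixel.
    Every pixel in a superpixel receives its majority label.
    """
    refined = list(pred)
    for sp in superpixels:
        majority = Counter(pred[i] for i in sp).most_common(1)[0][0]
        for i in sp:
            refined[i] = majority
    return refined
```

Iterating this with the network's own predictions as pseudo-targets is one way a CNN can be trained without ground-truth labels.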

16 pages, 7713 KiB  
Article
Early Detection of Diabetic Retinopathy Using PCA-Firefly Based Deep Learning Model
by Thippa Reddy Gadekallu, Neelu Khare, Sweta Bhattacharya, Saurabh Singh, Praveen Kumar Reddy Maddikunta, In-Ho Ra and Mamoun Alazab
Electronics 2020, 9(2), 274; https://doi.org/10.3390/electronics9020274 - 5 Feb 2020
Cited by 268 | Viewed by 14441
Abstract
Diabetic retinopathy is a major cause of vision loss and blindness affecting millions of people across the globe. Although there are established screening methods for detecting the disease, fluorescein angiography and optical coherence tomography, in the majority of cases patients remain unaware of them and fail to undergo such tests at an appropriate time. Early detection of the disease plays an extremely important role in preventing the vision loss that results when diabetes mellitus remains untreated for a prolonged period. Various machine learning and deep learning approaches have been applied to diabetic retinopathy datasets for classification and prediction of the disease, but the majority of them have neglected data pre-processing and dimensionality reduction, leading to biased results. The dataset used in the present study is a diabetic retinopathy dataset collected from the UCI machine learning repository. First, the raw dataset is normalized using the StandardScaler technique, and then Principal Component Analysis (PCA) is used to extract the most significant features in the dataset. Further, the Firefly algorithm is implemented for dimensionality reduction. This reduced dataset is fed into a deep neural network model for classification. The results generated by the model are evaluated against prevalent machine learning models, and they justify the superiority of the proposed model in terms of accuracy, precision, recall, sensitivity, and specificity. Full article
(This article belongs to the Special Issue Deep Neural Networks and Their Applications)
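The standardize-then-PCA front end described above can be sketched as follows. The power-iteration extraction of only the leading principal component is an illustrative stand-in for a full PCA, and it assumes the input has already been standardized (zero mean):

```python
import math

def standardize(X):
    """Zero-mean, unit-variance scaling per feature (the StandardScaler step)."""
    n, d = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(d)]
    stds = [math.sqrt(sum((row[j] - means[j]) ** 2 for row in X) / n) or 1.0
            for j in range(d)]
    return [[(row[j] - means[j]) / stds[j] for j in range(d)] for row in X]

def first_pc(X, iters=200):
    """Leading principal component of zero-mean data via power iteration
    on the (uncentered) covariance matrix."""
    n, d = len(X), len(X[0])
    cov = [[sum(X[i][a] * X[i][b] for i in range(n)) / n for b in range(d)]
           for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        v = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = math.sqrt(sum(x * x for x in v))
        v = [x / norm for x in v]
    return v
```

Projecting the standardized data onto the top few such components gives the reduced feature matrix that a downstream selector (here, the Firefly algorithm) and classifier would consume.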

17 pages, 618 KiB  
Article
Hybrid Translation with Classification: Revisiting Rule-Based and Neural Machine Translation
by Jin-Xia Huang, Kyung-Soon Lee and Young-Kil Kim
Electronics 2020, 9(2), 201; https://doi.org/10.3390/electronics9020201 - 21 Jan 2020
Cited by 13 | Viewed by 5055
Abstract
This paper proposes a hybrid machine-translation system that combines neural machine translation with well-developed rule-based machine translation to utilize the stability of the latter to compensate for the inadequacy of neural machine translation in rare-resource domains. A classifier is introduced to predict which translation from the two systems is more reliable. We explore a set of features that reflect the reliability of translation and its process, and training data is automatically expanded with a small, human-labeled dataset to solve the insufficient-data problem. A series of experiments shows that the hybrid system’s translation accuracy is improved, especially in out-of-domain translations, and classification accuracy is greatly improved when using the proposed features and the automatically constructed training set. A comparison between feature- and text-based classification is also performed, and the results show that the feature-based model achieves better classification accuracy, even when compared to neural network text classifiers. Full article
(This article belongs to the Special Issue Deep Neural Networks and Their Applications)

13 pages, 3004 KiB  
Article
A Deep Learning Method for 3D Object Classification Using the Wave Kernel Signature and A Center Point of the 3D-Triangle Mesh
by Long Hoang, Suk-Hwan Lee, Oh-Heum Kwon and Ki-Ryong Kwon
Electronics 2019, 8(10), 1196; https://doi.org/10.3390/electronics8101196 - 20 Oct 2019
Cited by 6 | Viewed by 8848
Abstract
Computer vision has many recent applications, such as smart cars, robot navigation, and computer-aided manufacturing. Object classification, in particular 3D classification, is a major part of computer vision. In this paper, we propose a novel method, the wave kernel signature (WKS) and center point (CP) method, which extracts color and distance features from a 3D model to tackle 3D object classification. The motivation for this idea comes from the nature of human vision: we tend to classify an object based on its color and size. First, we find a center point of the mesh to define the distance feature. Second, we calculate eigenvalues and WKS values from the 3D mesh to capture the color feature. These features are the input of a 2D convolution neural network (CNN) architecture. We use two large-scale 3D model datasets, ModelNet10 and ModelNet40, to evaluate the proposed method. Our experimental results show higher accuracy and efficiency than other methods. The proposed method could be applied to real-world problems like autonomous driving and augmented/virtual reality. Full article
(This article belongs to the Special Issue Deep Neural Networks and Their Applications)
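The center-point (CP) distance feature, each vertex's distance to a central point of the mesh, is simple to sketch. Using the vertex centroid as the center point is an assumption here; the paper may define its center point differently:

```python
import math

def distance_feature(vertices):
    """Per-vertex distance to the mesh's centroid (one choice of center point).

    vertices: list of (x, y, z) tuples.
    Returns a list of distances, one per vertex, capturing object size/extent.
    """
    n = len(vertices)
    cx = sum(v[0] for v in vertices) / n
    cy = sum(v[1] for v in vertices) / n
    cz = sum(v[2] for v in vertices) / n
    return [math.dist(v, (cx, cy, cz)) for v in vertices]
```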

11 pages, 1720 KiB  
Article
Fully Convolutional Single-Crop Siamese Networks for Real-Time Visual Object Tracking
by Dong-Hyun Lee
Electronics 2019, 8(10), 1084; https://doi.org/10.3390/electronics8101084 - 24 Sep 2019
Cited by 4 | Viewed by 2786
Abstract
The visual object tracking problem seeks to track an arbitrary object in a video, and many deep convolutional neural network-based algorithms have achieved significant performance improvements in recent years. However, most of them do not guarantee real-time operation due to the large computation overhead for deep feature extraction. This paper presents a single-crop visual object tracking algorithm based on a fully convolutional Siamese network (SiamFC). The proposed algorithm significantly reduces the computation burden by extracting multiple scale feature maps from a single image crop. Experimental results show that the proposed algorithm demonstrates superior speed performance in comparison with that of SiamFC. Full article
(This article belongs to the Special Issue Deep Neural Networks and Their Applications)
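The core inference step of a Siamese tracker is sliding the template feature map over the search-region feature map and taking the peak of the correlation response as the target location. A minimal 2D sketch (real SiamFC correlates multi-channel deep features from a shared CNN, not raw intensities as here):

```python
def cross_correlate(search, template):
    """Dense cross-correlation of a small template over a search map.

    search, template: 2D lists of floats. Returns the response map;
    its maximum marks the most likely target position.
    """
    H, W = len(search), len(search[0])
    h, w = len(template), len(template[0])
    resp = []
    for i in range(H - h + 1):
        row = []
        for j in range(W - w + 1):
            row.append(sum(search[i + a][j + b] * template[a][b]
                           for a in range(h) for b in range(w)))
        resp.append(row)
    return resp
```

The paper's speed-up comes from computing multi-scale feature maps from a single image crop before this correlation, rather than from changing the correlation itself.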

12 pages, 2095 KiB  
Article
Dynamic Deep Forest: An Ensemble Classification Method for Network Intrusion Detection
by Bo Hu, Jinxi Wang, Yifan Zhu and Tan Yang
Electronics 2019, 8(9), 968; https://doi.org/10.3390/electronics8090968 - 30 Aug 2019
Cited by 13 | Viewed by 3080
Abstract
The Network Intrusion Detection System (NIDS) is one of the key technologies for preventing network attacks and data leakage. In combination with machine learning, intrusion detection has achieved great progress in recent years. However, due to the diversity of intrusion types, the representation learning ability of existing models is still deficient, which limits further improvement of detection performance. Meanwhile, as model complexity increases, training time grows longer. In this paper, we propose a Dynamic Deep Forest method for network intrusion detection. It uses a cascade tree structure to strengthen representation learning ability, while the training process is accelerated by small-scale parameter fitting and a dynamic level-growing strategy. The proposed Dynamic Deep Forest is a tree-based ensemble approach and consists of two parts. The first part, Multi-Grained Traversing, uses selectors to pick up features as completely as possible. The selectors are constructed dynamically, so the training process stops as soon as the optimal feature combination is found. The second part, Cascade Forest, introduces level-by-level tree structures. It has fewer hyper-parameters and follows a dynamic level-growing strategy to reduce model complexity. In experiments, we evaluate our model on the KDD’99 network intrusion dataset. The results show that the Dynamic Deep Forest method obtains higher recall and precision with a short training time. Moreover, it has a lower risk of misclassification, making it more stable and reliable in a real network environment. Full article
(This article belongs to the Special Issue Deep Neural Networks and Their Applications)

19 pages, 6103 KiB  
Article
Ship Target Detection Algorithm Based on Improved Faster R-CNN
by Liang Qi, Bangyu Li, Liankai Chen, Wei Wang, Liang Dong, Xuan Jia, Jing Huang, Chengwei Ge, Ganmin Xue and Dong Wang
Electronics 2019, 8(9), 959; https://doi.org/10.3390/electronics8090959 - 29 Aug 2019
Cited by 64 | Viewed by 5962
Abstract
Ship target detection is urgently needed and has broad application prospects in military and marine transportation. To improve the accuracy and efficiency of ship target detection, an improved Faster R-CNN (Faster Region-based Convolutional Neural Network) ship target detection algorithm is proposed. In the proposed method, an image downscaling method is used to enhance the useful information of the ship image. A scene-narrowing technique is used to combine the target regional positioning network and the Faster R-CNN convolutional neural network into a hierarchical narrowing network, aiming to reduce the target detection search scale and improve the computational speed of Faster R-CNN. Furthermore, deep cooperation between the main network and the subnet is realized to optimize network parameters, by endowing Faster R-CNN with the subject-narrowing function and selecting texture features and spatial difference features for the narrowed sub-networks. The experimental results show that the proposed method can significantly shorten detection time while improving the detection accuracy of the Faster R-CNN algorithm. Full article
(This article belongs to the Special Issue Deep Neural Networks and Their Applications)
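The hierarchical narrowing idea — let a cheap positioning pass shrink the search area before the expensive detector runs — can be sketched as follows. `coarse_score` and `fine_detect` are hypothetical stand-ins for the regional positioning network and Faster R-CNN; only the control flow is illustrated, not the paper's networks.

```python
def narrow_then_detect(image, coarse_score, fine_detect, grid=4, keep=2):
    """Score coarse tiles cheaply, then run the expensive detector only on
    the top-scoring tiles, reducing the detection search scale."""
    h, w = len(image), len(image[0])
    th, tw = h // grid, w // grid
    tiles = []
    for i in range(grid):
        for j in range(grid):
            tile = [row[j * tw:(j + 1) * tw] for row in image[i * th:(i + 1) * th]]
            tiles.append(((i, j), coarse_score(tile), tile))
    tiles.sort(key=lambda t: t[1], reverse=True)   # most promising tiles first
    detections = []
    for (i, j), _, tile in tiles[:keep]:
        for y, x, score in fine_detect(tile):
            # Map tile-local coordinates back to the full image.
            detections.append((i * th + y, j * tw + x, score))
    return detections
```

The saving comes from `fine_detect` running on `keep` tiles instead of the whole frame; the coarse pass must only be cheap and rarely miss a ship-bearing region.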
18 pages, 5456 KiB  
Article
A Deep Learning-Based Scatter Correction of Simulated X-ray Images
by Heesin Lee and Joonwhoan Lee
Electronics 2019, 8(9), 944; https://doi.org/10.3390/electronics8090944 - 27 Aug 2019
Cited by 30 | Viewed by 6951
Abstract
X-ray scattering significantly limits image quality. Conventional strategies for scatter reduction based on physical equipment or measurements inevitably increase the dose to improve the image quality. In addition, scatter reduction based on a computational algorithm could take a large amount of time. We propose a deep learning-based scatter correction method, which adopts a convolutional neural network (CNN) for restoration of degraded images. Because it is hard to obtain real data from an X-ray imaging system for training the network, Monte Carlo (MC) simulation was performed to generate the training data. For simulating X-ray images of a human chest, a cone beam CT (CBCT) was designed and modeled as an example. Then, pairs of simulated images, which correspond to scattered and scatter-free images, respectively, were obtained from the model with different doses. The scatter components, calculated by taking the differences of the pairs, were used as targets to train the weight parameters of the CNN. Compared with the MC-based iterative method, the proposed one shows better results in projected images, with as much as 58.5% reduction in root-mean-square error (RMSE), and 18.1% and 3.4% increases in peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM), on average, respectively. Full article
(This article belongs to the Special Issue Deep Neural Networks and Their Applications)
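The training-target construction is simple to state in code: the scatter component is the pixelwise difference between paired simulated images, and correction subtracts the network's prediction from the degraded image. The helpers below, including RMSE and PSNR as in the reported metrics, are an illustrative sketch in pure Python on nested lists, not the paper's pipeline.

```python
import math

def scatter_target(scattered, scatter_free):
    """Training target: the scatter component is the pixelwise difference
    between a simulated scattered image and its scatter-free counterpart."""
    return [[s - f for s, f in zip(rs, rf)]
            for rs, rf in zip(scattered, scatter_free)]

def correct(scattered, predicted_scatter):
    """At inference, subtracting the CNN-predicted scatter restores the image."""
    return [[s - p for s, p in zip(rs, rp)]
            for rs, rp in zip(scattered, predicted_scatter)]

def rmse(a, b):
    n = sum(len(r) for r in a)
    return math.sqrt(sum((x - y) ** 2
                         for ra, rb in zip(a, b)
                         for x, y in zip(ra, rb)) / n)

def psnr(a, b, peak=1.0):
    e = rmse(a, b)
    return float('inf') if e == 0 else 20 * math.log10(peak / e)
```

With a perfect scatter prediction the corrected image matches the scatter-free one exactly; the reported 58.5% RMSE reduction measures how close the CNN's prediction comes to that ideal.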
18 pages, 7936 KiB  
Article
A Rapid Recognition Method for Electronic Components Based on the Improved YOLO-V3 Network
by Rui Huang, Jinan Gu, Xiaohong Sun, Yongtao Hou and Saad Uddin
Electronics 2019, 8(8), 825; https://doi.org/10.3390/electronics8080825 - 25 Jul 2019
Cited by 83 | Viewed by 11137
Abstract
Rapid object recognition in the industrial field is key to intelligent manufacturing. Fast recognition methods based on deep learning have been a research focus in recent years, but the trade-off between detection speed and accuracy has not been well resolved. In this paper, a fast recognition method for electronic components against a complex background is presented. Firstly, we built the image dataset, covering image acquisition, image augmentation, and image labeling. Secondly, a fast recognition method based on deep learning was proposed, resolving the trade-off between detection accuracy and detection speed through a lightweight improvement of the YOLO (You Only Look Once)-V3 network model. Finally, experiments were carried out and the proposed method was compared with several popular detection methods. The results showed that the accuracy reached 95.21% with a detection time of 0.0794 s, demonstrating the method's superiority for electronic component detection. Full article
(This article belongs to the Special Issue Deep Neural Networks and Their Applications)
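The abstract does not detail the lightweight modification, but a common way to slim a YOLO-V3 backbone is to replace standard convolutions with depthwise-separable ones. The arithmetic below shows why such substitutions shrink the model; it is illustrative only and not necessarily the authors' exact design.

```python
def conv_params(c_in, c_out, k):
    """Weights of a standard k x k convolution (biases ignored)."""
    return k * k * c_in * c_out

def depthwise_separable_params(c_in, c_out, k):
    """One depthwise k x k filter per input channel, then a 1 x 1
    pointwise convolution mixing channels."""
    return k * k * c_in + c_in * c_out

# A typical mid-network YOLO-V3 layer: 256 -> 512 channels, 3 x 3 kernel.
standard = conv_params(256, 512, 3)
separable = depthwise_separable_params(256, 512, 3)
```

Here the separable layer needs roughly one-ninth the weights of the standard one, which is what buys detection speed without discarding the multi-scale YOLO-V3 head.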
15 pages, 8074 KiB  
Article
Predicting Image Aesthetics for Intelligent Tourism Information Systems
by Ricardo Kleinlein, Álvaro García-Faura, Cristina Luna Jiménez, Juan Manuel Montero, Fernando Díaz-de-María and Fernando Fernández-Martínez
Electronics 2019, 8(6), 671; https://doi.org/10.3390/electronics8060671 - 13 Jun 2019
Cited by 9 | Viewed by 4251
Abstract
Image perception can vary considerably between subjects, yet some sights are regarded as aesthetically pleasant more often than others due to their specific visual content, and this is particularly true in tourism-related applications. We introduce the ESITUR project, oriented towards the development of 'smart tourism' solutions aimed at improving the touristic experience. The idea is to convert conventional tourist showcases into fully interactive information points accessible from any smartphone, enriched with contents automatically extracted from the analysis of public photos uploaded to social networks by other visitors. Our baseline, knowledge-driven system reaches a classification accuracy of 64.84 ± 4.22% in telling suitable images from unsuitable ones for a tourism guide application. As an alternative, we adopt a data-driven Mixture of Experts (MEX) approach, in which multiple learners specialize in partitions of the problem space. In our case, a location tag attached to every picture provides a criterion by which to segment the data, and the MEX model defined accordingly achieves an accuracy of 85.08 ± 2.23%. We conclude that ours is a successful approach in environments in which some kind of data segmentation can be applied, such as touristic photographs. Full article
(This article belongs to the Special Issue Deep Neural Networks and Their Applications)
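The location-tag routing can be sketched with a toy gate: partition the training data by tag and give each partition its own expert. The majority-vote "expert" below is a deliberately trivial stand-in for the per-location learners (the real system trains image classifiers per partition), and all names are hypothetical.

```python
from collections import Counter, defaultdict

def train_mex(samples):
    """samples: iterable of (location_tag, features, label). Here each
    'expert' degenerates to the majority label of its partition; a real
    MEX would fit a full classifier per location instead."""
    by_loc = defaultdict(list)
    for loc, _, label in samples:
        by_loc[loc].append(label)
    return {loc: Counter(labels).most_common(1)[0][0]
            for loc, labels in by_loc.items()}

def predict(experts, loc, default="unsuitable"):
    # Gating step: route the query to the expert for its location tag.
    return experts.get(loc, default)
```

The gate is what makes the approach work: each expert only ever sees photos of one location, so it can specialize, which is the reported jump from 64.84% to 85.08% in miniature.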
17 pages, 2681 KiB  
Article
Automatic Tool for Fast Generation of Custom Convolutional Neural Networks Accelerators for FPGA
by Miguel Rivera-Acosta, Susana Ortega-Cisneros and Jorge Rivera
Electronics 2019, 8(6), 641; https://doi.org/10.3390/electronics8060641 - 6 Jun 2019
Cited by 19 | Viewed by 6276
Abstract
This paper presents a platform that automatically generates custom hardware accelerators for convolutional neural networks (CNNs) implemented in field-programmable gate array (FPGA) devices. It includes a user interface for configuring and managing these accelerators. The platform can perform all the processes necessary to design and test CNN accelerators: describing the CNN architecture at both the layer and internal-parameter levels, training the desired architecture with any dataset, and generating the configuration files required by the platform. With these files, it can synthesize the register-transfer level (RTL) design and program the customized CNN accelerator into the FPGA device for testing, making it possible to generate custom CNN accelerators quickly and easily. All processes except the CNN architecture description are fully automated and carried out by the platform, which manages third-party software to train the CNN and to synthesize and program the generated RTL. The platform has been tested by implementing several state-of-the-art CNN architectures on freely available datasets such as MNIST, CIFAR-10, and STL-10. Full article
(This article belongs to the Special Issue Deep Neural Networks and Their Applications)
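The only manual input to such a platform is the layer-level architecture description, which the tooling turns into configuration files for RTL generation. The sketch below mimics that first step with a made-up textual format; the field names and layer keys are invented for illustration and do not reflect the platform's actual file format.

```python
def describe_cnn(layers):
    """Emit a minimal, hypothetical configuration-file text from a
    layer-level CNN description (one line per layer)."""
    lines = []
    for i, layer in enumerate(layers):
        kind = layer["type"]
        if kind == "conv":
            lines.append(f"layer{i}: conv k={layer['kernel']} out={layer['filters']}")
        elif kind == "pool":
            lines.append(f"layer{i}: pool k={layer['kernel']}")
        elif kind == "fc":
            lines.append(f"layer{i}: fc out={layer['units']}")
        else:
            raise ValueError(f"unsupported layer type: {kind}")
    return "\n".join(lines)
```

Everything downstream of a description like this — training, RTL synthesis, bitstream programming — is what the platform automates by driving third-party tools.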
16 pages, 6542 KiB  
Article
Surface Defects Recognition of Wheel Hub Based on Improved Faster R-CNN
by Xiaohong Sun, Jinan Gu, Rui Huang, Rong Zou and Benjamin Giron Palomares
Electronics 2019, 8(5), 481; https://doi.org/10.3390/electronics8050481 - 29 Apr 2019
Cited by 70 | Viewed by 6253
Abstract
Machine vision is one of the key technologies used to perform intelligent manufacturing. In order to improve the recognition rate of multi-class defects in wheel hubs, an improved Faster R-CNN method is proposed. A dataset of wheel hub defects was built, consisting of four types of defects in 2412 images of 1080 × 1440 pixels. Faster R-CNN was modified, trained, verified, and tested on this dataset, achieving an excellent recognition rate. The proposed method was compared with the popular R-CNN and YOLOv3 methods and showed simpler, faster, and more accurate defect detection, demonstrating the superiority of the improved Faster R-CNN for wheel hub defects. Full article
(This article belongs to the Special Issue Deep Neural Networks and Their Applications)
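A multi-class "recognition rate" of the kind reported here is usually per-class recall: the fraction of each defect class's ground-truth instances that the detector labels correctly. A minimal helper (illustrative, not from the paper) is:

```python
def per_class_recall(true_labels, pred_labels, classes):
    """Per-class recognition rate: correct predictions of class c
    divided by the number of ground-truth instances of class c."""
    recall = {}
    for c in classes:
        support = sum(1 for t in true_labels if t == c)
        hits = sum(1 for t, p in zip(true_labels, pred_labels)
                   if t == c and p == c)
        recall[c] = hits / support if support else 0.0
    return recall
```

Reporting recall per class rather than overall accuracy matters for defect data, where rare defect types would otherwise be masked by the common ones.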