
Computer Vision and Machine Learning for Intelligent Sensing Systems

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (15 June 2022) | Viewed by 40873

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editor

Dr. Jing Tian
Institute of Systems Science, National University of Singapore, Singapore
Interests: computer vision; machine learning; video analytics; multimedia applications

Special Issue Information

Dear Colleagues,

With the rapid development of computer vision and machine learning technology, intelligent sensing systems can increasingly make sense of visual sensory data to address complex and challenging real-world sense-making problems. This creates tremendous opportunities, and challenges, in managing and understanding visual sensory data for intelligent sensing systems. Recent advances in machine learning techniques allow us to analyze visual sensory data more effectively, and massive research effort has been devoted to challenges in this area, including visual surveillance, smart cities, and healthcare. This Special Issue aims to provide a collection of high-quality research articles that address the broad challenges in both the theoretical and application aspects of computer vision and machine learning for intelligent sensing systems.

The topics of interest include but are not limited to:

  • Computer vision for intelligent sensing systems:
    • Sensing, representation, modeling;
    • Restoration, enhancement, and super-resolution;
    • Color, multispectral, and hyperspectral imaging;
    • Stereoscopic, multiview, and 3D processing;
  • Machine learning for intelligent sensing systems:
    • Classification, detection, segmentation;
    • Action and event recognition, behavior understanding;
    • Multimodal machine learning;
  • Computer vision applications for healthcare, manufacturing, security and safety, biomedical sciences, and other emerging applications.

Dr. Jing Tian
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Machine learning
  • Deep learning
  • Computer vision
  • Image classification
  • Image analysis
  • Object detection
  • Image segmentation
  • Action recognition

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (13 papers)


Editorial


2 pages, 182 KiB  
Editorial
Computer Vision and Machine Learning for Intelligent Sensing Systems
by Jing Tian
Sensors 2023, 23(9), 4214; https://doi.org/10.3390/s23094214 - 23 Apr 2023
Cited by 3 | Viewed by 1603
Abstract
Intelligent sensing systems have been fueled to make sense of visual sensory data to handle complex and difficult real-world sense-making challenges due to the rapid growth of computer vision and machine learning technologies [...] Full article

Research


23 pages, 5622 KiB  
Article
Online Self-Calibration of 3D Measurement Sensors Using a Voxel-Based Network
by Jingyu Song and Joonwoong Lee
Sensors 2022, 22(17), 6447; https://doi.org/10.3390/s22176447 - 26 Aug 2022
Cited by 4 | Viewed by 2104
Abstract
Multi-sensor fusion is important in the field of autonomous driving. A basic prerequisite for multi-sensor fusion is calibration between sensors. Such calibrations must be accurate and need to be performed online. Traditional calibration methods have strict rules. In contrast, the latest online calibration methods based on convolutional neural networks (CNNs) have gone beyond the limits of the conventional methods. We propose a novel algorithm for online self-calibration between sensors using voxels and three-dimensional (3D) convolution kernels. The proposed approach has the following features: (1) it is intended for calibration between sensors that measure 3D space; (2) the proposed network is capable of end-to-end learning; (3) the input 3D point cloud is converted to voxel information; (4) it uses five networks that process voxel information, and it improves calibration accuracy through iterative refinement of the output of the five networks and temporal filtering. We use the KITTI and Oxford datasets to evaluate the calibration performance of the proposed method. The proposed method achieves a rotation error of less than 0.1° and a translation error of less than 1 cm on both the KITTI and Oxford datasets. Full article
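
To make the voxel input concrete, here is a minimal sketch of converting a 3D point cloud into a dense occupancy grid of the kind that 3D convolution kernels consume. The region bounds, 0.2 m resolution, and binary occupancy encoding are illustrative assumptions, not the configuration used in the paper.

```python
import numpy as np

def voxelize(points, bounds, voxel_size):
    """Convert an (N, 3) point cloud to a dense occupancy grid.

    points     : (N, 3) array of x, y, z coordinates.
    bounds     : ((xmin, ymin, zmin), (xmax, ymax, zmax)) region of interest.
    voxel_size : edge length of a cubic voxel, in the same units as points.
    """
    lo, hi = np.asarray(bounds[0]), np.asarray(bounds[1])
    # Keep only points inside the region of interest.
    mask = np.all((points >= lo) & (points < hi), axis=1)
    idx = ((points[mask] - lo) / voxel_size).astype(np.int64)
    shape = np.ceil((hi - lo) / voxel_size).astype(np.int64)
    grid = np.zeros(shape, dtype=np.float32)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0  # binary occupancy
    return grid

# Example: 10k random points in a 40 m x 40 m x 4 m region at 0.2 m resolution.
pts = np.random.uniform([-20, -20, -2], [20, 20, 2], size=(10000, 3))
grid = voxelize(pts, ((-20, -20, -2), (20, 20, 2)), voxel_size=0.2)
print(grid.shape, grid.sum())  # (200, 200, 20) and the occupied-voxel count
```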

22 pages, 1890 KiB  
Article
Hyper-Parameter Optimization of Stacked Asymmetric Auto-Encoders for Automatic Personality Traits Perception
by Effat Jalaeian Zaferani, Mohammad Teshnehlab, Amirreza Khodadadian, Clemens Heitzinger, Mansour Vali, Nima Noii and Thomas Wick
Sensors 2022, 22(16), 6206; https://doi.org/10.3390/s22166206 - 18 Aug 2022
Cited by 6 | Viewed by 1731
Abstract
In this work, a method for automatic hyper-parameter tuning of the stacked asymmetric auto-encoder is proposed. In previous work, the ability of deep learning to extract personality perception from speech was shown, but hyper-parameter tuning was attained by trial and error, which is time-consuming and requires machine learning knowledge. Therefore, obtaining hyper-parameter values is challenging and places limits on deep learning usage. To address this challenge, researchers have applied optimization methods. Although there have been successes, the search space is very large due to the large number of deep learning hyper-parameters, which increases the probability of getting stuck in local optima. Researchers have also focused on improving global optimization methods. In this regard, we suggest a novel global optimization method based on the cultural algorithm, multiple islands, and parallelism to search this large space intelligently. First, we evaluated our method on three well-known optimization benchmarks and compared the results with recently published papers. The results indicate that the proposed method converges faster due to its ability to escape from local optima, and the precision of the results improves dramatically. Afterward, we applied our method to optimize five hyper-parameters of an asymmetric auto-encoder for automatic personality perception. Since inappropriate hyper-parameters lead the network to over-fitting or under-fitting, we used a novel cost function to prevent both. The unweighted average recall (accuracy) was improved by 6.52% (9.54%) compared to our previous work, with remarkable outcomes compared to other published personality perception works. Full article
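
As a rough illustration of the multi-island idea, the sketch below evolves several populations independently on a toy benchmark and periodically migrates each island's best solution to its neighbor, the mechanism that helps escape local optima. It is a generic island-model skeleton under assumed settings, not the authors' cultural algorithm.

```python
import numpy as np

def sphere(x):                      # toy benchmark objective
    return np.sum(x ** 2, axis=-1)

def island_search(f, dim=5, islands=4, pop=20, iters=200, migrate_every=25):
    rng = np.random.default_rng(0)
    # One population per island, evolved independently.
    pops = [rng.uniform(-5, 5, size=(pop, dim)) for _ in range(islands)]
    for t in range(iters):
        for i in range(islands):
            # Simple (mu + lambda)-style step: mutate, keep the best.
            children = pops[i] + rng.normal(0, 0.3, size=(pop, dim))
            both = np.vstack([pops[i], children])
            pops[i] = both[np.argsort(f(both))[:pop]]   # sorted: best first
        if t % migrate_every == 0:
            # Ring migration: each island sends its best to replace the
            # next island's worst member.
            bests = [p[0].copy() for p in pops]
            for i in range(islands):
                pops[(i + 1) % islands][-1] = bests[i]
    best = min((p[0] for p in pops), key=f)
    return best, f(best)

x, fx = island_search(sphere)
print(fx)  # close to 0 on the sphere benchmark
```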

18 pages, 664 KiB  
Article
Multiple Attention Mechanism Graph Convolution HAR Model Based on Coordination Theory
by Kai Hu, Yiwu Ding, Junlan Jin, Min Xia and Huaming Huang
Sensors 2022, 22(14), 5259; https://doi.org/10.3390/s22145259 - 14 Jul 2022
Cited by 10 | Viewed by 1991
Abstract
Human action recognition (HAR) is the foundation of human behavior comprehension. It is of great significance and can be used in many real-world applications. From the point of view of human kinematics, the coordination of the limbs is an important intrinsic factor of motion and contains a great deal of information. In addition, for different movements, the HAR algorithm should pay a different degree of attention to each joint. Based on this analysis, this paper proposes a HAR algorithm that adopts two attention modules working together to extract the coordination characteristics of motion and to strengthen the model's attention to the more important joints during movement. Experimental data show that these two modules improve the recognition accuracy of the model on public HAR datasets (NTU-RGB+D, Kinetics-Skeleton). Full article
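
One plausible form of per-joint attention can be sketched as follows: a learned score re-weights each joint of a skeleton feature map so that the model emphasizes the more informative joints. The tensor layout and the 1x1-convolution scoring are illustrative assumptions, not the paper's exact modules.

```python
import torch
import torch.nn as nn

class JointAttention(nn.Module):
    """Re-weight per-joint features with a learned attention score.

    Input x: (batch, channels, frames, joints) skeleton feature map,
    the layout commonly used by graph-convolution HAR models.
    """
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)  # one score per joint/frame

    def forward(self, x):
        attn = torch.sigmoid(self.score(x))   # (batch, 1, frames, joints)
        return x * attn                       # emphasize informative joints

x = torch.randn(8, 64, 30, 25)   # e.g., 25 joints as in NTU-RGB+D
y = JointAttention(64)(x)
print(y.shape)                   # torch.Size([8, 64, 30, 25])
```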

20 pages, 5217 KiB  
Article
Event Collapse in Contrast Maximization Frameworks
by Shintaro Shiba, Yoshimitsu Aoki and Guillermo Gallego
Sensors 2022, 22(14), 5190; https://doi.org/10.3390/s22145190 - 11 Jul 2022
Cited by 16 | Viewed by 3203
Abstract
Contrast maximization (CMax) is a framework that provides state-of-the-art results on several event-based computer vision tasks, such as ego-motion or optical flow estimation. However, it may suffer from a problem called event collapse, which is an undesired solution where events are warped into too few pixels. As prior works have largely ignored the issue or proposed workarounds, it is imperative to analyze this phenomenon in detail. Our work demonstrates event collapse in its simplest form and proposes collapse metrics by using first principles of space–time deformation based on differential geometry and physics. We experimentally show on publicly available datasets that the proposed metrics mitigate event collapse and do not harm well-posed warps. To the best of our knowledge, regularizers based on the proposed metrics are the only effective solution against event collapse in the experimental settings considered, compared with other methods. We hope that this work inspires further research to tackle more complex warp models. Full article
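
The CMax objective itself is compact: warp the events with a candidate motion, accumulate them into an image of warped events (IWE), and score the candidate by the IWE's contrast (variance). A minimal sketch on synthetic events, assuming a single global optical-flow warp for simplicity:

```python
import numpy as np

def iwe_variance(xs, ys, ts, flow, shape):
    """Contrast (variance) of the image of warped events under a
    single global optical-flow candidate `flow` = (vx, vy)."""
    h, w = shape
    # Warp each event back to t = 0 along the candidate flow.
    wx = np.round(xs - flow[0] * ts).astype(int)
    wy = np.round(ys - flow[1] * ts).astype(int)
    ok = (wx >= 0) & (wx < w) & (wy >= 0) & (wy < h)
    iwe = np.zeros(shape)
    np.add.at(iwe, (wy[ok], wx[ok]), 1.0)   # accumulate event counts
    return iwe.var()

# Synthetic events drifting at 20 px/s: the true flow maximizes contrast.
rng = np.random.default_rng(1)
n, h, w = 5000, 64, 64
ts = rng.uniform(0, 1, n)
xs = 10 + 20 * ts + rng.normal(0, 0.5, n)
ys = rng.uniform(0, h, n)
for v in [0.0, 10.0, 20.0, 30.0]:
    print(v, round(iwe_variance(xs, ys, ts, (v, 0.0), (h, w)), 2))
```

Event collapse enters when a richer warp model can funnel many events into a few pixels, which also raises contrast; the metrics proposed in the paper penalize such degenerate space-time deformations.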

22 pages, 4466 KiB  
Article
Heuristic Attention Representation Learning for Self-Supervised Pretraining
by Van Nhiem Tran, Shen-Hsuan Liu, Yung-Hui Li and Jia-Ching Wang
Sensors 2022, 22(14), 5169; https://doi.org/10.3390/s22145169 - 10 Jul 2022
Cited by 4 | Viewed by 2761
Abstract
Recently, self-supervised learning methods have been shown to be very powerful and efficient at yielding robust representation learning by maximizing the similarity across different augmented views in an embedding vector space. However, the main challenge is generating different views with random cropping: the semantic features may differ across views, leading to an inappropriate similarity-maximization objective. We tackle this problem by introducing Heuristic Attention Representation Learning (HARL). This self-supervised framework relies on a joint embedding architecture in which two neural networks are trained to produce similar embeddings for different augmented views of the same image. HARL adopts prior visual object-level attention by generating a heuristic mask proposal for each training image and maximizes the similarity of abstract object-level embeddings in vector space instead of the whole-image representations of previous works. As a result, HARL extracts a quality semantic representation from each training sample and outperforms existing self-supervised baselines on several downstream tasks. In addition, we provide efficient techniques based on conventional computer vision and deep learning methods for generating heuristic mask proposals on natural image datasets. Our HARL achieves a +1.3% advancement on the ImageNet semi-supervised learning benchmark and a +0.9% improvement in AP50 on the COCO object detection task over the previous state-of-the-art method, BYOL. Our code implementation is available for both TensorFlow and PyTorch frameworks. Full article
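
The object-level embedding idea can be sketched as masked average pooling: the heuristic mask restricts the pooling of the backbone feature map to the object region before the similarity objective is applied. The shapes and the pooling choice here are illustrative assumptions:

```python
import torch

def masked_embedding(feat, mask, eps=1e-6):
    """Average-pool a feature map over a (heuristic) object mask.

    feat : (batch, channels, h, w) backbone feature map.
    mask : (batch, 1, h, w) binary object mask, 1 inside the object.
    """
    num = (feat * mask).sum(dim=(2, 3))
    den = mask.sum(dim=(2, 3)).clamp_min(eps)
    return num / den                       # (batch, channels) object embedding

feat = torch.randn(4, 256, 7, 7)
mask = (torch.rand(4, 1, 7, 7) > 0.5).float()
z = masked_embedding(feat, mask)
print(z.shape)   # torch.Size([4, 256])
```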

17 pages, 14065 KiB  
Article
Improved Feature-Based Gaze Estimation Using Self-Attention Module and Synthetic Eye Images
by Jaekwang Oh, Youngkeun Lee, Jisang Yoo and Soonchul Kwon
Sensors 2022, 22(11), 4026; https://doi.org/10.3390/s22114026 - 26 May 2022
Cited by 7 | Viewed by 3967
Abstract
Gaze is an excellent indicator and is useful in that it can express interest, intention, and condition. Recent deep-learning methods are mainly appearance-based methods that estimate gaze with a simple regression from entire face and eye images. However, this approach does not always give satisfactory results for gaze estimation in low-resolution and noisy images obtained in unconstrained real-world settings (e.g., places with severe lighting changes). In this study, we propose a method that estimates gaze by detecting eye region landmarks from a single eye image, and this approach is shown to be competitive with recent appearance-based methods. Our approach acquires rich information by extracting more landmarks, including the iris and eye edges, similar to existing feature-based methods. To acquire strong features even at low resolutions, we used the HRNet backbone network to learn representations of images at various resolutions. Furthermore, we used the self-attention module CBAM to obtain a refined feature map with better spatial information, which enhanced the robustness to noisy inputs and yielded a landmark localization error of 3.18%, a 4% improvement over the existing error. The acquired landmarks were then used as inputs to a lightweight neural network that estimates the gaze. We conducted a within-dataset evaluation on MPIIGaze, which was obtained in a natural environment, and achieved a state-of-the-art performance of 4.32 degrees, a 6% improvement over the existing performance. Full article
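
The reported 4.32 degrees is a mean angular error. The helper below shows how such an error between predicted and ground-truth gaze directions is typically computed; the pitch/yaw-to-vector convention is one common choice, assumed here for illustration:

```python
import numpy as np

def gaze_vector(pitch, yaw):
    """Unit 3D gaze direction from pitch/yaw in radians (one common convention)."""
    return np.array([-np.cos(pitch) * np.sin(yaw),
                     -np.sin(pitch),
                     -np.cos(pitch) * np.cos(yaw)])

def angular_error_deg(pred, true):
    cos = np.dot(pred, true) / (np.linalg.norm(pred) * np.linalg.norm(true))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

p = gaze_vector(np.radians(5.0), np.radians(10.0))
t = gaze_vector(np.radians(6.5), np.radians(13.0))
print(round(angular_error_deg(p, t), 2))  # a few degrees
```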

14 pages, 8120 KiB  
Article
DFusion: Denoised TSDF Fusion of Multiple Depth Maps with Sensor Pose Noises
by Zhaofeng Niu, Yuichiro Fujimoto, Masayuki Kanbara, Taishi Sawabe and Hirokazu Kato
Sensors 2022, 22(4), 1631; https://doi.org/10.3390/s22041631 - 19 Feb 2022
Cited by 3 | Viewed by 3553
Abstract
Truncated signed distance function (TSDF) fusion is one of the key operations in the 3D reconstruction process. However, existing TSDF fusion methods usually suffer from inevitable sensor noise. In this paper, we propose a new TSDF fusion network, named DFusion, to minimize the influence of the two most common sensor noises, i.e., depth noise and pose noise. To the best of our knowledge, this is the first depth fusion method that resolves both depth noise and pose noise. DFusion consists of a fusion module, which fuses depth maps together and generates a TSDF volume, followed by a denoising module, which takes the TSDF volume as input and removes both depth noise and pose noise. To utilize the 3D structural information of the TSDF volume, 3D convolutional layers are used in the encoder and decoder parts of the denoising module. In addition, a specially designed loss function is adopted to improve the fusion performance in object and surface regions. Experiments are conducted on a synthetic dataset as well as a real-scene dataset. The results prove that our method outperforms existing methods. Full article
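
For context, the baseline that learned fusion methods improve upon is the classic weighted running-average TSDF update (KinectFusion-style), sketched below. The truncation distance and per-voxel weights are illustrative, and this is not DFusion's network:

```python
import numpy as np

def fuse_tsdf(tsdf, weight, new_sdf, new_w, trunc=0.05):
    """Standard weighted running-average TSDF update (KinectFusion-style).

    tsdf, weight : current volume values and accumulated weights (same shape).
    new_sdf      : signed distances from the incoming depth map, per voxel.
    new_w        : per-voxel weight of the new observation (0 where unobserved).
    """
    new_sdf = np.clip(new_sdf, -trunc, trunc)          # truncate the SDF
    total = weight + new_w
    fused = np.where(total > 0,
                     (tsdf * weight + new_sdf * new_w) / np.maximum(total, 1e-9),
                     tsdf)
    return fused, total

vol = np.zeros((4, 4, 4)); w = np.zeros_like(vol)
obs = np.full_like(vol, 0.02); ow = np.ones_like(vol)
vol, w = fuse_tsdf(vol, w, obs, ow)
print(vol.mean(), w.mean())   # 0.02 1.0
```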

26 pages, 58016 KiB  
Article
Content-Aware SLIC Super-Pixels for Semi-Dark Images (SLIC++)
by Manzoor Ahmed Hashmani, Mehak Maqbool Memon, Kamran Raza, Syed Hasan Adil, Syed Sajjad Rizvi and Muhammad Umair
Sensors 2022, 22(3), 906; https://doi.org/10.3390/s22030906 - 25 Jan 2022
Cited by 2 | Viewed by 3279
Abstract
Super-pixels represent perceptually similar visual feature vectors of an image. They are meaningful groups of image pixels, bunched together based on the color and proximity of individual pixels. Computation of super-pixels is highly affected, in terms of accuracy, by unfavorable pixel intensities, i.e., when a semi-dark image is observed. A widely used method for computing super-pixels is SLIC (Simple Linear Iterative Clustering), thanks to its simplistic approach: SLIC is considerably faster than other state-of-the-art methods. However, it lacks the functionality to retain content-aware information about the image due to its constrained underlying clustering technique. Moreover, the efficiency of SLIC on semi-dark images is lower than on bright images. We extend the functionality of SLIC with several computational distance measures to identify potential substitutes resulting in regular and accurate image segments. We propose a novel SLIC extension, namely SLIC++, based on a hybrid distance measure that retains content-aware information (lacking in SLIC). This makes SLIC++ more efficient than SLIC, not only for normal images but also for semi-dark images. The hybrid content-aware distance measure effectively integrates Euclidean super-pixel calculation features with geodesic distance calculations to retain the angular movements of the components present in the visual image, exclusively targeting semi-dark images. The proposed method is quantitatively and qualitatively analyzed using the Berkeley dataset. We not only visually illustrate the benchmarking results but also report the associated accuracies against the ground-truth image segments in terms of boundary precision. SLIC++ attains high accuracy and creates content-aware super-pixels even if the images are semi-dark in nature. Our findings show that SLIC++ achieves a precision of 39.7%, outperforming the precision of SLIC by a substantial margin of up to 8.1%. Full article
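
For reference, the standard SLIC distance combines a CIELAB color term with a spatially normalized term, and SLIC++ replaces it with a hybrid Euclidean-geodesic measure. The sketch below shows the standard distance plus an illustrative linear blend; the blend weight alpha is an assumption, not the paper's formula:

```python
import numpy as np

def slic_distance(lab1, lab2, xy1, xy2, S, m=10.0):
    """Standard SLIC distance: color term plus spatially normalized term.

    S : grid interval (expected super-pixel spacing); m : compactness weight.
    """
    d_c = np.linalg.norm(np.asarray(lab1) - np.asarray(lab2))   # CIELAB distance
    d_s = np.linalg.norm(np.asarray(xy1) - np.asarray(xy2))     # pixel distance
    return np.sqrt(d_c ** 2 + (d_s / S) ** 2 * m ** 2)

def hybrid_distance(d_euclidean, d_geodesic, alpha=0.5):
    """Illustrative hybrid of Euclidean and geodesic terms; the actual
    SLIC++ weighting is defined in the paper, alpha here is an assumption."""
    return alpha * d_euclidean + (1.0 - alpha) * d_geodesic

d = slic_distance([50, 10, -5], [48, 12, -4], [100, 100], [104, 97], S=20)
print(round(d, 2))
```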

20 pages, 10490 KiB  
Article
Deep-Learning-Based Adaptive Advertising with Augmented Reality
by Marco A. Moreno-Armendáriz, Hiram Calvo, Carlos A. Duchanoy, Arturo Lara-Cázares, Enrique Ramos-Diaz and Víctor L. Morales-Flores
Sensors 2022, 22(1), 63; https://doi.org/10.3390/s22010063 - 23 Dec 2021
Cited by 9 | Viewed by 4317
Abstract
In this work, we describe a system composed of deep neural networks that analyzes characteristics of customers based on their face (age, gender, and personality), as well as the ambient temperature, with the purpose of generating a personalized signal for potential buyers who pass in front of a beverage establishment; faces are automatically detected and a recommendation is displayed using deep learning methods. In order to present suitable digital posters for each person, several technologies were used: augmented reality; estimation of age and gender; and estimation of personality through the Big Five test applied to an image. The accuracy of each of these deep neural networks is measured separately to ensure a precision above 80%. The system has been implemented as a portable solution and is able to generate a recommendation for one or more people at the same time. Full article
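
A hypothetical skeleton of such a pipeline is sketched below: stub predictors stand in for the paper's separate deep networks, and a toy rule maps the estimated profile and ambient temperature to a poster choice. All names and the decision rule are invented for illustration:

```python
from dataclasses import dataclass
import random

@dataclass
class CustomerProfile:
    age: int
    gender: str
    personality: str   # dominant Big Five trait

# Stub predictor standing in for the paper's separate deep networks;
# real implementations would each be a CNN with measured precision above 80%.
def estimate_profile(face_image) -> CustomerProfile:
    return CustomerProfile(age=random.randint(18, 65),
                           gender=random.choice(["f", "m"]),
                           personality=random.choice(
                               ["openness", "conscientiousness", "extraversion",
                                "agreeableness", "neuroticism"]))

def pick_poster(profile: CustomerProfile, temperature_c: float) -> str:
    # Toy decision rule: hot weather biases toward cold drinks,
    # personality selects the ad style.
    drink = "iced tea" if temperature_c > 25 else "hot coffee"
    style = "bold" if profile.personality == "extraversion" else "calm"
    return f"{style} poster for {drink} (age group {profile.age // 10 * 10}s)"

print(pick_poster(estimate_profile(face_image=None), temperature_c=29.0))
```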

22 pages, 2411 KiB  
Article
Human Segmentation and Tracking Survey on Masks for MADS Dataset
by Van-Hung Le and Rafal Scherer
Sensors 2021, 21(24), 8397; https://doi.org/10.3390/s21248397 - 16 Dec 2021
Cited by 6 | Viewed by 3929
Abstract
Human segmentation and tracking often build on the outcome of person detection in video; thus, segmentation and tracking results depend heavily on the person detection results. With the advent of Convolutional Neural Networks (CNNs), there are excellent results in this field. Segmentation and tracking of people in video have significant applications in monitoring and in estimating human pose in 2D images and 3D space. In this paper, we survey studies, methods, datasets, and results for human segmentation and tracking in video. We also touch upon person detection, as it affects the results of human segmentation and tracking. The survey is detailed down to source code paths. The MADS (Martial Arts, Dancing and Sports) dataset comprises fast and complex activities and was published for the task of estimating human posture. However, before determining the human pose, the person needs to be detected as a segment in the video. Moreover, we publish a mask dataset to evaluate the segmentation and tracking of people in video. For our MASK MADS dataset, we prepared 28 k mask images. We also evaluated the MADS dataset for segmenting and tracking people in video with many recently published CNN methods. Full article
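
Evaluating segmentation and tracking against a mask dataset such as MASK MADS typically reduces to per-frame mask overlap. A minimal intersection-over-union helper is sketched below; the metric choice is an assumption of this sketch, not a claim about the paper's protocol:

```python
import numpy as np

def mask_iou(pred, gt):
    """Intersection-over-union of two binary masks (H, W) of 0/1 values."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0   # both empty: perfect agreement

a = np.zeros((8, 8)); a[2:6, 2:6] = 1
b = np.zeros((8, 8)); b[3:7, 3:7] = 1
print(round(mask_iou(a, b), 3))   # 9 / 23 ≈ 0.391
```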

9 pages, 5126 KiB  
Communication
Highly Dense FBG Temperature Sensor Assisted with Deep Learning Algorithms
by Alexey Kokhanovskiy, Nikita Shabalov, Alexandr Dostovalov and Alexey Wolf
Sensors 2021, 21(18), 6188; https://doi.org/10.3390/s21186188 - 15 Sep 2021
Cited by 14 | Viewed by 3434
Abstract
In this paper, we demonstrate the application of deep neural networks (DNNs) for processing the reflectance spectrum of a fiber-optic temperature sensor composed of densely inscribed fiber Bragg gratings (FBGs). Such sensors are commonly avoided in practice, since the close arrangement of short FBGs distorts the spectrum through mutual interference between gratings. In our work, the temperature sensor contained 50 FBGs with a length of 0.95 mm and an edge-to-edge distance of 0.05 mm, arranged in the 1500–1600 nm spectral range. Instead of solving the direct peak detection problem for the distorted signal, we applied DNNs to predict the temperature distribution from the entire reflectance spectrum registered by the sensor. We propose an experimental calibration setup where the dense FBG sensor is located close to an array of sparse FBG sensors. The goal of the DNNs is to predict the positions of the reflectance peaks of the reference sparse FBG sensors from the reflectance spectrum of the dense FBG sensor. We show that a convolutional neural network is able to predict the positions of the FBG reflectance peaks of the sparse sensors with a mean absolute error of 7.8 pm, only slightly higher than the 5 pm error of the hardware interrogator. We believe that dense FBG sensors assisted by DNNs have high potential to increase the spatial resolution and extend the length of fiber-optic sensors. Full article
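
A minimal sketch of the kind of model that could map a reflectance spectrum to reference peak positions is a small 1-D CNN regressor, shown below. The spectrum length, peak count, and layer sizes are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class SpectrumToPeaks(nn.Module):
    """1-D CNN mapping a reflectance spectrum to reference peak positions.

    Spectrum length (2048 samples over 1500-1600 nm) and the number of
    reference peaks (8) are illustrative assumptions, not the paper's values.
    """
    def __init__(self, n_samples=2048, n_peaks=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, stride=2, padding=4), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=9, stride=2, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(32), nn.Flatten(),
            nn.Linear(32 * 32, n_peaks),   # regressed peak wavelengths
        )

    def forward(self, spectrum):           # spectrum: (batch, 1, n_samples)
        return self.net(spectrum)

model = SpectrumToPeaks()
out = model(torch.randn(4, 1, 2048))
print(out.shape)    # torch.Size([4, 8])
```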

14 pages, 2148 KiB  
Article
Energy Efficient SWIPT Based Mobile Edge Computing Framework for WSN-Assisted IoT
by Fangni Chen, Anding Wang, Yu Zhang, Zhengwei Ni and Jingyu Hua
Sensors 2021, 21(14), 4798; https://doi.org/10.3390/s21144798 - 14 Jul 2021
Cited by 10 | Viewed by 3101
Abstract
With the increasing deployment of IoT devices and applications, a large number of devices that can sense and monitor the environment are needed in IoT networks. This trend also brings great challenges, such as data explosion and energy insufficiency. This paper proposes a system that integrates mobile edge computing (MEC) technology and simultaneous wireless information and power transfer (SWIPT) technology to improve the service supply capability of WSN-assisted IoT applications. A novel optimization problem is formulated to minimize the total system energy consumption under data transmission rate and transmitting power constraints by jointly considering power allocation, CPU frequency, the offloading weight factor, and the energy harvesting weight factor. Since the problem is non-convex, we propose a novel alternate group iteration optimization (AGIO) algorithm, which decomposes the original problem into three subproblems and alternately optimizes each subproblem using the group interior point iterative algorithm. Numerical simulations validate that the energy consumption of our proposed design is much lower than that of the two benchmark algorithms. The relationship between the system variables and the energy consumption of the system is also discussed. Full article
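
The alternating structure can be illustrated with a generic block-coordinate loop: hold all variable groups but one fixed, optimize that group, and rotate. The toy objective and unconstrained solver below are stand-ins; the actual AGIO subproblems and their constraints are defined in the paper:

```python
import numpy as np
from scipy.optimize import minimize

# Toy coupled objective standing in for total system energy; the real
# AGIO variable groups (power allocation, CPU frequency, offloading and
# harvesting weight factors) and constraints come from the paper.
def energy(p, f, w):
    return (p - 1.0) ** 2 + (f - 2.0) ** 2 + (w - 0.3) ** 2 + 0.1 * p * f

def alternate(iters=20):
    p, f, w = 5.0, 5.0, 0.9
    for _ in range(iters):
        p = minimize(lambda x: energy(x[0], f, w), [p]).x[0]   # block 1
        f = minimize(lambda x: energy(p, x[0], w), [f]).x[0]   # block 2
        w = minimize(lambda x: energy(p, f, x[0]), [w]).x[0]   # block 3
    return p, f, w, energy(p, f, w)

print([round(v, 3) for v in alternate()])
```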
