Vision-Based Sensors and Algorithms for Food Processing

A special issue of Foods (ISSN 2304-8158). This special issue belongs to the section "Food Engineering and Technology".

Deadline for manuscript submissions: closed (30 November 2022) | Viewed by 22,460

Special Issue Editor


Dr. David I. Wilson
Guest Editor
Industrial Information & Control Center, Department of Electrical and Electronic Engineering, Auckland University of Technology, Auckland, New Zealand
Interests: industrial food processing; optimization; simulation and modelling; process control and simulation

Special Issue Information

Dear Colleagues,

The production of food at an industrial level presents various challenges that differ from those encountered when producing commodity chemicals. A product destined for human consumption must meet strict regulatory constraints and must also satisfy quality demands, many of which are nuanced and extremely difficult to quantify. In addition, there are the usual considerations of producing the food product efficiently, with low energy demands and a low environmental footprint.

At industrial scales, process engineers have historically relied on a relatively small number of transducers both to analyze quality and to control the product. However, food is a far more complex material than a commodity chemical, and standard transducers are often inadequate for today's competitive production.

Consequently, more sophisticated measurement technologies must be employed, and one promising direction is new sensor methodologies such as in situ vision sensors. Such sensors are increasingly affordable and robust, and the necessary embedded algorithmic development is eased by new, readily available image and video processing software libraries.

This Special Issue will cover applications of, and theoretical advances in, vision and image processing technology for the monitoring and subsequent control of food processing activities. This includes, for example, assessing in real time the moisture state of fruit in a dryer, or perhaps the surface morphology, and therefore the taste attributes, of baked products. Vision sensors are not limited to visible light; they also extend to wavelengths beyond human perception, such as near-infrared or ultraviolet.

Finally, this Special Issue will also cover new algorithmic developments in image processing using machine learning strategies, deep learning algorithms for classification, and modern regression techniques.

Dr. David I. Wilson 
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Foods is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2900 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • vision-based sensors and quality control
  • machine learning
  • image processing

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (7 papers)


Research

16 pages, 10092 KiB  
Article
Application of Three-Dimensional Digital Photogrammetry to Quantify the Surface Roughness of Milk Powder
by Haohan Ding, David I. Wilson, Wei Yu, Brent R. Young and Xiaohui Cui
Foods 2023, 12(5), 967; https://doi.org/10.3390/foods12050967 - 24 Feb 2023
Cited by 1 | Viewed by 1787
Abstract
The surface appearance of milk powders is a crucial quality property since the roughness of the milk powder determines its functional properties, and especially the purchaser perception of the milk powder. Unfortunately, similar spray dryers, or even the same dryer in different seasons, produce powder with a wide variety of surface roughness. To date, professional panelists are used to quantify this subtle visual metric, which is time-consuming and subjective. Consequently, developing a fast, robust, and repeatable surface appearance classification method is essential. This study proposes a three-dimensional digital photogrammetry technique for quantifying the surface roughness of milk powders. A contour slice analysis and frequency analysis of the deviations were performed on the three-dimensional models to classify the surface roughness of milk powder samples. The results show that the contours for smooth-surface samples are more circular than those for rough-surface samples, and the smooth-surface samples had a low standard deviation; thus, milk powder samples with smoother surfaces have lower Q (signal energy) values.
Lastly, the performance of the nonlinear support vector machine (SVM) model demonstrated that the technique proposed in this study is a practicable alternative technique for classifying the surface roughness of milk powders.
(This article belongs to the Special Issue Vision-Based Sensors and Algorithms for Food Processing)
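The contour-deviation idea in this abstract can be illustrated with a minimal sketch (not the authors' code; the synthetic contours and the definition of Q as the energy of the deviation signal are assumptions for illustration):

```python
import numpy as np

def roughness_metrics(radii):
    """Given contour radii sampled at equal angles around a particle
    cross-section, return (std of radial deviation, Q), where Q is
    taken here as the energy of the deviation signal's spectrum."""
    deviation = radii - radii.mean()          # deviation from a fitted circle
    std = deviation.std()
    spectrum = np.fft.rfft(deviation)
    q = np.sum(np.abs(spectrum) ** 2) / len(deviation)
    return std, q

# Synthetic example: a near-circular (smooth) and a bumpy (rough) contour.
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
smooth = 10 + 0.05 * np.sin(3 * theta)
rough = 10 + 0.8 * np.sin(3 * theta) + 0.5 * np.sin(11 * theta)

std_s, q_s = roughness_metrics(smooth)
std_r, q_r = roughness_metrics(rough)
assert std_s < std_r and q_s < q_r   # smoother surface -> lower std and Q
```

This matches the qualitative claim above: the smoother contour yields both a lower standard deviation and a lower Q value.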

14 pages, 2411 KiB  
Article
Visual Detection of Water Content Range of Seabuckthorn Fruit Based on Transfer Deep Learning
by Yu Xu, Jinmei Kou, Qian Zhang, Shudan Tan, Lichun Zhu, Zhihua Geng and Xuhai Yang
Foods 2023, 12(3), 550; https://doi.org/10.3390/foods12030550 - 26 Jan 2023
Cited by 14 | Viewed by 1799
Abstract
To realize the classification of sea buckthorn fruits with different water content ranges, a convolutional neural network (CNN) detection model of sea buckthorn fruit water content ranges was constructed. In total, 900 images of sea buckthorn fruits with different water contents were collected from 720 sea buckthorn fruits. Eight classic network models based on deep learning were used as feature extractors for transfer learning. A total of 180 images were randomly selected from the images of various water content ranges for testing. Finally, the identification accuracy of the network model for the water content range of sea buckthorn fruit was 98.69%, and the accuracy on the test set was 99.4%. The program in this study can quickly identify the moisture content range of sea buckthorn fruit by collecting images of the appearance and morphology changes during the drying process. The model has a good detection effect for sea buckthorn fruits with different moisture content ranges with slight changes in characteristics. Transfer deep learning can also be used to detect the moisture content range of other agricultural products, providing technical support for the rapid nondestructive testing of moisture contents of agricultural products.
(This article belongs to the Special Issue Vision-Based Sensors and Algorithms for Food Processing)
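The core mechanic of transfer learning used here, a frozen pretrained feature extractor with only a small new classification head trained, can be sketched in a few lines. This is purely illustrative: the random projection stands in for a pretrained CNN backbone, and all data are synthetic, not the paper's images or models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained backbone: a fixed (frozen) random projection.
# In the paper's setting this would be a CNN pretrained on a large dataset.
W_frozen = rng.normal(size=(64, 16))

def features(x):
    return np.maximum(x @ W_frozen, 0.0)   # frozen layers: never updated

# Synthetic two-class data standing in for images of two moisture ranges.
X = np.vstack([rng.normal(-2.0, 1.0, (100, 64)),
               rng.normal(+2.0, 1.0, (100, 64))])
y = np.r_[np.zeros(100), np.ones(100)]

# Only the new classification head (logistic regression) is trained.
F = features(X)
w, b = np.zeros(16), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-np.clip(F @ w + b, -30, 30)))
    w -= 0.01 * F.T @ (p - y) / len(y)
    b -= 0.01 * (p - y).mean()

p_final = 1.0 / (1.0 + np.exp(-np.clip(F @ w + b, -30, 30)))
acc = np.mean((p_final > 0.5) == (y == 1))
assert acc > 0.8   # the new head separates the classes despite the frozen backbone
```

The design point is that freezing the backbone drastically reduces the number of trainable parameters, which is why transfer learning works with small datasets such as the 900 images used here.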

19 pages, 3262 KiB  
Article
DeepMDSCBA: An Improved Semantic Segmentation Model Based on DeepLabV3+ for Apple Images
by Lufeng Mo, Yishan Fan, Guoying Wang, Xiaomei Yi, Xiaoping Wu and Peng Wu
Foods 2022, 11(24), 3999; https://doi.org/10.3390/foods11243999 - 10 Dec 2022
Cited by 7 | Viewed by 2274
Abstract
The semantic segmentation of apples from images plays an important role in the automation of the apple industry. However, existing semantic segmentation methods such as FCN and UNet have the disadvantages of a low speed and accuracy for the segmentation of apple images with complex backgrounds or rotten parts. In view of these problems, a network segmentation model based on deep learning, DeepMDSCBA, is proposed in this paper. The model is based on the DeepLabV3+ structure, and a lightweight MobileNet module is used in the encoder for the extraction of features, which reduces the number of parameters and the memory requirements. Instead of ordinary convolution, depthwise separable convolution is used in DeepMDSCBA to reduce the number of parameters and improve the calculation speed. In the feature extraction module and the atrous spatial pyramid pooling module of DeepMDSCBA, a convolutional block attention module (CBAM) is added to filter background information in order to reduce the loss of the edge detail information of apples in images, improve the accuracy of feature extraction, and effectively reduce the loss of feature details and deep information. This paper also explored the effects of rot degree, rot position, apple variety, and background complexity on the semantic segmentation performance of apple images, and then it verified the robustness of the method. The experimental results showed that the PA of this model could reach 95.3% and the MIoU could reach 87.1%, which were improved by 3.4% and 3.1% compared with DeepLabV3+, respectively, and superior to those of other semantic segmentation networks such as UNet and PSPNet. In addition, the DeepMDSCBA model proposed in this paper was shown to have a better performance than the other considered methods under different factors such as the degree or position of rotten parts, apple varieties, and complex backgrounds.
(This article belongs to the Special Issue Vision-Based Sensors and Algorithms for Food Processing)
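The parameter saving from depthwise separable convolution mentioned in this abstract is easy to quantify. A minimal sketch (the layer sizes are illustrative, not taken from the paper):

```python
def conv_params(c_in, c_out, k):
    """Weight count of a standard k x k convolution layer (biases omitted)."""
    return c_in * c_out * k * k

def dw_separable_params(c_in, c_out, k):
    """Depthwise separable convolution = a depthwise k x k convolution
    (one filter per input channel) followed by a pointwise 1 x 1
    convolution that mixes channels."""
    return c_in * k * k + c_in * c_out

# Example layer: 256 -> 256 channels with a 3 x 3 kernel.
# The reduction factor is roughly 1/c_out + 1/k^2.
std = conv_params(256, 256, 3)           # 589,824 weights
sep = dw_separable_params(256, 256, 3)   # 67,840 weights
assert sep < std / 8                     # ~8.7x fewer parameters here
```

This roughly 1/k² scaling is the reason MobileNet-style encoders cut both the parameter count and the memory footprint, as the abstract claims.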

11 pages, 2455 KiB  
Communication
Computer Vision System for Mango Fruit Defect Detection Using Deep Convolutional Neural Network
by R. Nithya, B. Santhi, R. Manikandan, Masoumeh Rahimi and Amir H. Gandomi
Foods 2022, 11(21), 3483; https://doi.org/10.3390/foods11213483 - 2 Nov 2022
Cited by 46 | Viewed by 8304
Abstract
Machine learning techniques play a significant role in agricultural applications for computerized grading and quality evaluation of fruits. In the agricultural domain, automation improves the quality, productivity, and economic growth of a country. The quality grading of fruits is an essential measure in the export market, especially defect detection of a fruit’s surface. This is especially pertinent for mangoes, which are highly popular in India. However, the manual grading of mango is a time-consuming, inconsistent, and subjective process. Therefore, a computer-assisted grading system has been developed for defect detection in mangoes. Recently, machine learning techniques, such as the deep learning method, have been used to achieve efficient classification results in digital image classification. Specifically, the convolutional neural network (CNN) is a deep learning technique that is employed for automated defect detection in mangoes. This study proposes a computer-vision system, which employs a CNN, for the quality classification of mangoes. After training and testing the system using a publicly available mango database, the experimental results show that the proposed method acquired an accuracy of 98%.
(This article belongs to the Special Issue Vision-Based Sensors and Algorithms for Food Processing)

14 pages, 1561 KiB  
Article
The Changes in Bell Pepper Flesh as a Result of Lacto-Fermentation Evaluated Using Image Features and Machine Learning
by Ewa Ropelewska, Kadir Sabanci and Muhammet Fatih Aslan
Foods 2022, 11(19), 2956; https://doi.org/10.3390/foods11192956 - 21 Sep 2022
Cited by 11 | Viewed by 2403
Abstract
Food processing allows for maintaining the quality of perishable products and extending their shelf life. Nondestructive procedures combining image analysis and machine learning can be used to control the quality of processed foods. This study was aimed at developing an innovative approach to distinguishing fresh and lacto-fermented red bell pepper samples involving selected image textures and machine learning algorithms. Before processing, the pieces of fresh pepper and samples subjected to spontaneous lacto-fermentation were imaged using a digital camera. The texture parameters were extracted from images converted to different color channels L, a, b, R, G, B, X, Y, and Z. The textures after selection were used to build models for the classification of fresh and lacto-fermented samples using algorithms from the groups of Lazy, Functions, Trees, Bayes, Meta, and Rules. The highest average accuracy of classification reached 99% for the models developed based on sets of selected textures for color space Lab using the IBk (instance-based K-nearest learner) algorithm from the group of Lazy, color space RGB using SMO (sequential minimal optimization) from Functions, and color space XYZ and color channel X using IBk (Lazy) and SMO (Functions).
The results confirmed the differences in image features of fresh and lacto-fermented red bell pepper and revealed the effectiveness of models built based on textures using machine learning algorithms for the evaluation of the changes in the pepper flesh structure caused by processing.
(This article belongs to the Special Issue Vision-Based Sensors and Algorithms for Food Processing)
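Image textures of the kind used in this study are typically derived from a gray-level co-occurrence matrix (GLCM). A minimal sketch of one such feature, contrast, computed from horizontally adjacent pixels (this is the generic GLCM idea, not the authors' exact feature set or color-channel pipeline):

```python
import numpy as np

def glcm(img, levels=8):
    """Gray-level co-occurrence matrix for horizontally adjacent pixels,
    normalized to a probability distribution."""
    m = np.zeros((levels, levels))
    for a, b in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        m[a, b] += 1
    return m / m.sum()

def contrast(p):
    """GLCM contrast: expected squared gray-level difference of neighbors."""
    i, j = np.indices(p.shape)
    return np.sum(p * (i - j) ** 2)

# A flat (uniform) region has zero contrast; a high-frequency striped
# region, like a coarser flesh structure, does not.
flat = np.full((8, 8), 3)
stripes = np.tile([0, 7], (8, 4))
assert contrast(glcm(flat)) == 0
assert contrast(glcm(stripes)) > 0
```

Features such as this, computed per color channel, are what the classifiers (IBk, SMO, and the others named above) are trained on.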

15 pages, 4865 KiB  
Article
Surface Defect Detection of Fresh-Cut Cauliflowers Based on Convolutional Neural Network with Transfer Learning
by Yaodi Li, Jianxin Xue, Kai Wang, Mingyue Zhang and Zezhen Li
Foods 2022, 11(18), 2915; https://doi.org/10.3390/foods11182915 - 19 Sep 2022
Cited by 12 | Viewed by 2285
Abstract
A fresh-cut cauliflower surface defect detection and classification model based on a convolutional neural network with transfer learning is proposed to address the low efficiency of the traditional manual detection of fresh-cut cauliflower surface defects. A total of 4790 images of fresh-cut cauliflower were collected in four categories: healthy, diseased, browning, and mildewed. In this study, the pre-trained MobileNet model was fine-tuned to improve training speed and accuracy. The model optimization was achieved by selecting the optimal combination of training hyper-parameters and adjusting the number of frozen layers; the parameters downloaded from ImageNet were optimally integrated with the parameters trained on our own model. Test results were compared against VGG19, InceptionV3, and NASNetMobile. Experimental results showed that the MobileNet model’s loss value was 0.033, its accuracy was 99.27%, and the F1 score was 99.24% on the test set when the learning rate was set as 0.001, dropout was set as 0.5, and the frozen layer was set as 80. This model had better capability and stronger robustness and was more suitable for the surface defect detection of fresh-cut cauliflower when compared with other models, and the experiment’s results demonstrated the method’s feasibility.
(This article belongs to the Special Issue Vision-Based Sensors and Algorithms for Food Processing)

19 pages, 6904 KiB  
Article
Convolutional Neural Network for Object Detection in Garlic Root Cutting Equipment
by Ke Yang, Baoliang Peng, Fengwei Gu, Yanhua Zhang, Shenying Wang, Zhaoyang Yu and Zhichao Hu
Foods 2022, 11(15), 2197; https://doi.org/10.3390/foods11152197 - 24 Jul 2022
Cited by 7 | Viewed by 2605
Abstract
Traditional manual garlic root cutting is inefficient and can cause food safety problems. To develop food processing equipment, a novel and accurate object detection method for garlic using deep learning—a convolutional neural network—is proposed in this study. The you-only-look-once (YOLO) algorithm, combined here with lightweight design and transfer learning, is an advanced computer vision method for single large-object detection. To detect the bulb, the YOLOv2 model was modified using an inverted residual module and residual structure. The modified model was trained based on images of bulbs with varied brightness, surface attachment, and shape, which enabled sufficient learning of the detector. The optimum minibatches and epochs were obtained by comparing the test results of different training parameters. The results show that IRM-YOLOv2 is superior to the SqueezeNet, ShuffleNet, and YOLOv2 models of classical neural networks, as well as the YOLOv3 and YOLOv4 algorithm models. The confidence score, average accuracy, deviation, standard deviation, detection time, and storage space of IRM-YOLOv2 were 0.98228, 99.2%, 2.819 pixels, 4.153, 0.0356 s, and 24.2 MB, respectively. In addition, this study provides an important reference for the application of the YOLO algorithm in food research.
(This article belongs to the Special Issue Vision-Based Sensors and Algorithms for Food Processing)
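Detectors such as IRM-YOLOv2 are scored by how well predicted bounding boxes overlap the ground truth, conventionally via intersection over union (IoU). A minimal sketch (the corner-coordinate box format is a common convention, assumed here for illustration):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as
    (x1, y1, x2, y2): the overlap measure used to match detections
    against ground-truth boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

assert iou((0, 0, 2, 2), (0, 0, 2, 2)) == 1.0     # perfect overlap
assert iou((0, 0, 2, 2), (1, 1, 3, 3)) == 1 / 7   # partial overlap
assert iou((0, 0, 1, 1), (2, 2, 3, 3)) == 0.0     # disjoint boxes
```

Metrics such as the average accuracy reported above are built on top of an IoU threshold that decides whether a detection counts as correct.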
