Article

Robust Iris-Localization Algorithm in Non-Cooperative Environments Based on the Improved YOLO v4 Model

by Qi Xiong, Xinman Zhang, Xingzhu Wang, Naosheng Qiao and Jun Shen
1 MOE Key Lab for Intelligent Networks and Network Security, School of Automation Science and Engineering, Faculty of Electronic and Information Engineering, Xi'an Jiaotong University, Xi'an 710049, China
2 International College, Hunan University of Arts and Sciences, Changde 415000, China
3 School of Computing and Information Technology, University of Wollongong, Wollongong, NSW 2522, Australia
* Author to whom correspondence should be addressed.
Sensors 2022, 22(24), 9913; https://doi.org/10.3390/s22249913
Submission received: 30 November 2022 / Revised: 13 December 2022 / Accepted: 13 December 2022 / Published: 16 December 2022
(This article belongs to the Collection Deep Learning in Biomedical Informatics and Healthcare)

Abstract

Iris localization in non-cooperative environments is challenging and essential for accurate iris recognition. Motivated by the traditional iris-localization algorithm and the robustness of the YOLO model, we propose a novel iris-localization algorithm. First, we design a novel iris detector with a modified you only look once v4 (YOLO v4) model, from which we approximate the position of the pupil center. Then, we use a modified integro-differential operator to precisely locate the inner and outer boundaries of the iris. Experimental results show that the iris-detection accuracy of the modified YOLO v4 model reaches 99.83%, which is higher than that of the traditional YOLO v4 model. The accuracy in locating the inner and outer boundaries of irises without glasses reaches 97.72% at a short distance and 98.32% at a long distance. With glasses, the localization accuracy reaches 93.91% and 84%, respectively, which is much higher than that of the traditional Daugman's algorithm. Extensive experiments conducted on multiple datasets demonstrate the effectiveness and robustness of our method for iris localization in non-cooperative environments.

1. Introduction

Biometric identification has emerged as a critical method for ensuring information security [1]. It is a process that uses inherent and unique physiological or behavioral features of human beings to determine their identity. The face [2,3,4], fingerprint [5], palmprint [6,7], and iris [8,9,10] are all common biometrics.
The iris is the approximately circular region of the human eye between the sclera and the pupil. Unlike other biometrics, the iris has unique characteristics, such as being hidden, contactless to acquire, and rich in texture. It is frequently used in identification and disease diagnosis [11,12]. Typically, iris segmentation and iris recognition are the two main tasks of an iris-recognition system [13,14]. The purpose of iris segmentation is to distinguish iris and non-iris regions [15,16,17,18,19]. In addition, iris recognition requires detecting the pupil and iris circles, a step called iris localization [19,20,21]. Its goal is to normalize the iris region into a rectangle to prepare for iris recognition. Iris recognition and computer-aided eye-disease diagnosis rely on accurate iris localization, and the quality of the localization directly affects the performance of these algorithms; if the localization is incorrect, iris texture information is lost, reducing the effectiveness of identification or disease diagnosis. However, iris localization is complicated by non-cooperative environments, with factors such as glasses reflections, an off-angle iris, long distances, occlusion by eyelashes or eyelids, and partially recorded irises, as shown in Figure 1 [22]. As a result, developing a robust iris-localization algorithm is a challenging task with significant theoretical and application value, and it is rapidly becoming a hotspot in iris-recognition research [23,24,25,26].
At present, there are mainly two types of iris-localization methods: traditional algorithms based on machine learning and methods based on deep learning. The traditional approach does not require training neural networks, so it is easy to implement and very fast; its disadvantages are susceptibility to noise interference, low accuracy, and limited application scenarios. Deep-learning-based methods are robust to noise interference and achieve high precision. However, deep learning models need a long training period and large amounts of labeled data, and the expense of labeling severely limits their extension to new scenarios.
Motivated by the convenience of the traditional iris-localization algorithm and the accuracy of deep learning, this paper proposes an iris-localization algorithm based on a modified YOLO v4 network and a modified integro-differential operator. The main contributions are summarized as follows:
(1) A modified YOLO v4 network is proposed to detect the iris region and locate the outer circle of the iris. We use MobileNetV2 as the backbone network in YOLO v4 for feature extraction. The modified YOLO v4 model is only 5.8 M in size, much smaller than the 21.42 M of the traditional YOLO v4-tiny model, and it also improves the mAP (mean average precision). This addresses the problem that traditional localization algorithms are prone to noise interference and suffer from low accuracy.
(2) A modified integro-differential operator is proposed to precisely locate the inner and outer boundaries of the iris. Daugman's integro-differential operator generally performs well, but its localization is prone to failure when the image is disturbed. Based on the principle of Daugman's integro-differential operator, this paper proposes a modified integro-differential operator with better robustness. After removing some noise from the image, this modified operator improves the accuracy in locating the inner and outer boundaries. The experimental data show that the proposed localization algorithm achieves high accuracy in non-cooperative environments and is robust for both short-distance and long-distance irises.
(3) Strong scalability. The iris-localization method proposed in this paper combines the benefits of deep learning and machine learning. We can achieve accurate localization of the inner and outer circles of the iris without excessive labeling; for example, we label only 5% of the CASIA-Iris-Thousand dataset. The experiments show that we can not only locate the labeled iris images, but also achieve high localization accuracy on the remaining unlabeled images.
The remaining sections are structured as follows. In Section 2, we examine some of the literature regarding iris localization. Then, in Section 3, we go over the proposed methods in greater detail. The experimental findings in Section 4 show the effectiveness of the proposed method. Finally, we conclude in Section 5.

2. Related Works

Currently, there are primarily two categories of iris-localization techniques. One is the conventional machine-learning-based iris-localization algorithm. The alternative approach relies on deep learning.

2.1. Traditional Iris-Localization Algorithm

Traditional iris-localization algorithms are mainly divided into three categories. The first is based on integro-differential operators. In this view, the iris is regarded as an approximately circular area, so iris localization reduces to calculating the centers and radii of the inner and outer boundaries. Daugman [27,28] used the integro-differential operator to search for these circle parameters. Because this method needs to traverse the entire parameter space, its computational complexity is relatively high. The second method is based on the Hough transform. Wildes [29] used an edge-detection operator to find the edge points of the binarized iris image, and then used the Hough transform to determine the parameters of the inner and outer boundaries. The third method is the gray-difference method, which exploits the gray-level changes of the iris image to locate the iris. Ma et al. [30] used abrupt changes in gray value to find three non-collinear points and then fitted a circle through them to determine the inner and outer boundaries of the iris.

2.2. Localization Algorithm Based on Deep Learning

In recent years, deep learning has developed rapidly in the field of target detection. There are currently two main approaches: one-stage detection and two-stage detection. Two-stage detection is based on the concept of candidate boxes: a series of candidate regions is generated in advance and then classified by a convolutional neural network. The region-based CNN (R-CNN) family belongs to this category. Feng et al. [19] proposed Iris R-CNN, which can simultaneously complete accurate segmentation and localization of the iris in a non-cooperative environment under visible light. Li et al. [31] designed a Faster R-CNN with only six layers to locate eyes; they used the bounding box found by the Faster R-CNN together with a Gaussian mixture model to locate the pupil area. Because R-CNN needs to evaluate thousands of proposed regions per image, it is very slow. One-stage detection is based on regression and uses no candidate boxes: image features extracted by the backbone network are regressed directly to object bounding boxes. This makes one-stage detectors, such as the YOLO models [32,33,34,35,36], faster than R-CNNs. YOLO has attracted wide attention since its inception.
Many researchers have used the YOLO algorithm to locate the iris. Naranpanawa et al. [37] proposed a light and simple object-detection model based on YOLO v3 to detect freckles in the iris. Severo et al. [38] designed an iris detector based on YOLO that uses a small rectangular box tightly enclosing the iris region; however, this method cannot determine the position of the inner circle of the iris. Garea-Llano et al. [39] implemented a real-time iris-detection and segmentation framework for video based on Tiny-YOLO. However, Tiny-YOLO is still relatively large, so it is not well suited to embedded systems.
In addition to YOLO-based iris-localization algorithms, many deep neural networks that integrate iris segmentation and localization have emerged in recent years, most of them based on U-Net. Lian et al. [40] introduced an attention mechanism into the original U-Net model to separate iris and non-iris pixels. Wang et al. [41] presented a multi-task U-Net called IrisParseNet, which can predict the iris mask, pupil mask, and iris outer boundary simultaneously. The Interleaved Residual U-Net proposed by Li et al. [42] can localize the outer and inner boundaries of the iris image. Although the accuracy of the above methods is quite convincing, they only segment and locate the iris within images of the eye region; they cannot extract the eye region from long-distance iris images.

3. Methods

We used MobileNetV2 to improve YOLO v4 and design an iris detector. The detector employs a small rectangular box that tightly surrounds the iris region. The iris region allows us to approximate the position of the pupil center. The improved integro-differential operator was used to precisely locate the inner and outer boundaries of the iris, greatly improving iris location robustness. Figure 2 depicts the flow chart of the localization algorithm.

3.1. Dataset

To train the YOLO v4 network, we used two datasets: CASIA-Iris-Thousand and CASIA-Iris-Distance [22]. The two datasets are briefly introduced below:
The CASIA-Iris-Thousand dataset includes 20,000 iris images from 1000 subjects. These images were collected at a short distance with an IKEMB-100 camera, and each image has 640 × 480 pixels. Changes in pupil size under different lighting conditions, as well as specular reflection, are the main causes of intra-class variation in CASIA-Iris-Thousand. Figure 3a shows a sample of the dataset.
The iris images in the CASIA-Iris-Distance dataset were captured by a high-resolution camera over a long distance. As a result, the image area of interest includes both binocular irises and facial patterns. The dataset contains 2567 images from 142 subjects, and each image has 2352 × 1728 pixels. Figure 3c shows a sample of the dataset. The imaging system of the high-resolution camera can actively search for patterns such as the iris and face in the field of view, identifying users from a distance of about 3 m.
The number and size of the manually labeled rectangular boxes varied between the two datasets, as shown in Figure 3b,d [22].

3.2. The Modified YOLO v4 Network

The YOLO v4 network consists of three components: the backbone, the neck, and the head. CSPDarkNet-53 serves as the backbone of the traditional YOLO v4 network and extracts features from the input images. This paper uses the lightweight network MobileNetV2 [43] as the backbone network instead of CSPDarkNet-53. The overall network structure is shown in Figure 4 [44].

3.2.1. The Backbone

MobileNet was proposed by Google in 2017. MobileNetV2 further improves network performance by using an inverted residual structure [43]. Because MobileNetV2 contains seven bottleneck modules, we use seven blocks in the backbone shown in Figure 4. The backbone outputs feature maps that are fused in the next part, the neck.

3.2.2. The Neck

The head and the backbone are joined by the neck. A spatial pyramid pool (SPP) module and a path aggregation network (PAN) make up the neck. The head receives feature maps as input from the neck, which connects feature maps from various layers of the backbone. The SPP module uses kernels of size 1 × 1, 5 × 5, 9 × 9, and 13 × 13 for the max-pooling operation. The stride value was set to 1. The receptive field of the backbone features is expanded by concatenating the feature maps, which also improves the ability of the network to identify small objects.
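The following is a minimal sketch (not the authors' implementation) of such an SPP block in PyTorch; the channel count and input size in the usage line are illustrative assumptions.
```python
# Sketch of an SPP block as described above: parallel max-pooling with 1x1, 5x5,
# 9x9, and 13x13 kernels at stride 1 (padding preserves the spatial size),
# followed by concatenation along the channel dimension.
import torch
import torch.nn as nn

class SPP(nn.Module):
    def __init__(self, kernel_sizes=(1, 5, 9, 13)):
        super().__init__()
        self.pools = nn.ModuleList(
            nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2) for k in kernel_sizes
        )

    def forward(self, x):
        # Each pooled map keeps the input resolution, so the maps can be concatenated.
        return torch.cat([pool(x) for pool in self.pools], dim=1)

x = torch.randn(1, 256, 7, 7)   # illustrative feature map
print(SPP()(x).shape)           # torch.Size([1, 1024, 7, 7])
```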

3.2.3. The Head

The head receives and processes the group of aggregated feature maps output by the PAN module. It predicts bounding boxes, classification scores, and objectness scores. The head of the traditional YOLO v4 network has three detection heads. Each detection head is a YOLO v3 head, with output sizes of 19 × 19, 38 × 38, and 76 × 76, respectively.
Figure 3b,d shows that there are no more than two detection objects in a single image, so only two detection heads are required. At the same time, because the iris sizes differ between the two datasets, two feature maps of different scales are needed for prediction. Based on these considerations, we extract feature maps from the fourth and the final bottleneck to predict small and large irises; the network outputs feature maps of size 28 × 28 and 7 × 7.
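As an illustration only, the sketch below pulls two feature maps of those sizes out of the torchvision implementation of MobileNetV2; the layer indices are assumptions chosen so that the maps come out at 28 × 28 and 7 × 7 for a 224 × 224 input, and they do not necessarily match the authors' Matlab implementation.
```python
# Sketch: extracting a 28x28 and a 7x7 feature map from a MobileNetV2 backbone
# for use by two detection heads. Layer indices refer to torchvision's
# mobilenet_v2().features and are illustrative assumptions.
import torch
import torchvision

backbone = torchvision.models.mobilenet_v2(weights=None).features

def extract_feature_maps(images):
    """Return the stride-8 and stride-32 feature maps used by the two heads."""
    feats = {}
    x = images
    for idx, layer in enumerate(backbone):
        x = layer(x)
        if idx == 6:       # higher-resolution 28x28 map (better for small irises)
            feats["p28"] = x
        elif idx == 18:    # coarse 7x7 map (better for large irises)
            feats["p7"] = x
    return feats

maps = extract_feature_maps(torch.randn(1, 3, 224, 224))
print(maps["p28"].shape, maps["p7"].shape)  # (1, 32, 28, 28) and (1, 1280, 7, 7)
```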

3.3. Denoising an Iris Image

The rectangular area generated by iris detection can be regarded as an approximation of the outer circle boundary of the iris. The center of this rectangle may not coincide with the actual pupil center, but it usually falls within the pupil. Removing noise interference without destroying the original image provides a solid foundation for subsequent accurate localization.
As can be seen from Figure 3b,d, the rectangular area contains primarily two types of noise: reflective points and eyelashes. For the reflective points, we generate a mask based on those points; after filling the regions specified by the mask using inward interpolation, the reflective points are removed. Eyelash noise can be viewed as dark detail in the image, so we use morphological closing to fill the image and reduce the interference of the eyelashes.
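A minimal sketch of these two steps is given below, using OpenCV in place of the Matlab routines implied in the text; the brightness threshold and kernel size are illustrative assumptions.
```python
# Sketch of the two denoising steps: inpaint bright specular reflections and
# remove eyelash (dark) detail with a morphological closing.
import cv2
import numpy as np

def denoise_iris_region(gray):
    """Suppress specular reflections and eyelash detail inside the detected box."""
    # 1. Reflective points: bright pixels form a mask, which is filled by
    #    inpainting (inward interpolation from the mask boundary).
    _, mask = cv2.threshold(gray, 230, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, np.ones((3, 3), np.uint8))   # cover halos around spots
    filled = cv2.inpaint(gray, mask, 3, cv2.INPAINT_TELEA)

    # 2. Eyelashes: treat them as dark detail and remove them with a morphological
    #    closing (dilation followed by erosion).
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    return cv2.morphologyEx(filled, cv2.MORPH_CLOSE, kernel)
```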

3.4. Precise Localization of Iris Inner and Outer Boundaries Based on Improved Calculus Operator

3.4.1. Daugman’s Integro-Differential Operator

The gray value of an iris image changes noticeably at the inner and outer boundaries. Daugman proposed an integro-differential iris-localization algorithm based on this feature [27,28]. The mathematical expression is shown in Formula (1).
\max_{(r, x_0, y_0)} \left| G_\sigma(r) * \frac{\partial}{\partial r} \oint_{r, x_0, y_0} \frac{I(x, y)}{2 \pi r} \, ds \right|    (1)
where I(x, y) is the gray value at the point (x, y), and the contour integral in Formula (1) is the line integral of I(x, y)/(2πr) along the circle with center (x_0, y_0) and radius r. The integration path for the inner iris boundary is the entire circle. Because the outer boundary of the iris is easily interfered with by eyelashes and eyelids, the integration path for the outer boundary covers only 90 degrees on the left and right sides of the iris. G_σ(r) is a Gaussian function with standard deviation σ, and * denotes convolution.
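For illustration, the sketch below evaluates a discretized version of Formula (1) for a fixed candidate center, searching only over the radius; the sampling density and smoothing scale are assumptions, and the full operator also searches over (x_0, y_0).
```python
# Sketch of a discretized Daugman-style response: circular line integrals of the
# gray level, a radial derivative, and Gaussian smoothing, as in Formula (1).
import numpy as np
from scipy.ndimage import gaussian_filter1d

def daugman_response(gray, x0, y0, radii, n=64, sigma=1.0):
    """Return (best_radius, peak_response) for a fixed candidate center (x0, y0)."""
    h, w = gray.shape
    thetas = 2 * np.pi * np.arange(n) / n
    integrals = []
    for r in radii:
        xs = np.clip(np.round(x0 + r * np.cos(thetas)).astype(int), 0, w - 1)
        ys = np.clip(np.round(y0 + r * np.sin(thetas)).astype(int), 0, h - 1)
        integrals.append(gray[ys, xs].mean())      # normalized circular integral
    # radial derivative, then Gaussian smoothing and absolute value
    response = np.abs(gaussian_filter1d(np.diff(np.asarray(integrals, dtype=float)), sigma))
    best = int(np.argmax(response))
    return radii[best + 1], response[best]
```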

3.4.2. The Modified Integro-Differential Operator

When the pupil center is positioned accurately, Formula (1) gives good results for localizing the inner and outer boundaries of the iris. If the pupil center positioning is skewed, however, the iris-localization result is far from ideal. Based on the principle of Daugman's integro-differential operator, we propose a modified operator with better robustness, as shown in Formula (2) [45].
\max_{(x_0, y_0, r)} \sum_{\theta = 1}^{n} \left( g_{\theta, r} - C_{\theta, r} \right)    (2)
where (x_0, y_0) and r denote the center and radius of the search starting point, respectively, and n denotes the number of points taken uniformly on the circle with center (x_0, y_0) and radius r. g_{θ,r} represents the radial gradient at the θ-th point on the circle of radius r, and C_{θ,r} is the compensation factor. g_{θ,r} and C_{θ,r} are given by Formulas (3) and (4), respectively, and Figure 5 depicts their schematic diagram.
g_{\theta, r} = I_{\theta, r + \Delta r} - I_{\theta, r}    (3)

C_{\theta, r} = \frac{1}{2} \left[ \left| g_{\theta + 1, r} - g_{\theta, r} \right| + \left| g_{\theta - 1, r} - g_{\theta, r} \right| \right]    (4)
In Figure 5, point A is the θ-th point on the circle of radius r, and point B is the θ-th point on the circle of radius r + Δr. Points C and D are the (θ - 1)-th and (θ + 1)-th points on the circle of radius r, respectively. The difference in gray value between point A and point B is g_{θ,r}.
To avoid localization errors caused by an excessively large radial gradient at an interference point on the circle, we introduce the compensation factor C_{θ,r}. If the difference in gray value between two adjacent points on the circle is large, these points are likely to be interference points in the image, and the compensation factor suppresses their contribution.
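The following sketch evaluates the objective of Formula (2) for one candidate circle; the radial step Δr and the nearest-neighbour sampling are illustrative assumptions.
```python
# Sketch of the modified operator: radial gradients g (Formula (3)) minus the
# compensation factor C (Formula (4)), summed over n points on the circle.
import numpy as np

def modified_operator_score(gray, x0, y0, r, n=32, delta_r=2):
    h, w = gray.shape
    thetas = 2 * np.pi * np.arange(n) / n

    def sample(radius):
        xs = np.clip(np.round(x0 + radius * np.cos(thetas)).astype(int), 0, w - 1)
        ys = np.clip(np.round(y0 + radius * np.sin(thetas)).astype(int), 0, h - 1)
        return gray[ys, xs].astype(np.float64)

    g = sample(r + delta_r) - sample(r)                                  # g_{theta,r}
    c = 0.5 * (np.abs(np.roll(g, -1) - g) + np.abs(np.roll(g, 1) - g))   # C_{theta,r}
    return float(np.sum(g - c))
```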

3.4.3. Localization of the Iris Inner Boundary

To find the inner boundary of the iris with Formula (2), the center and radius parameters must be varied continuously in search of the maximum value. The initial center (x_0, y_0) can be taken from the outer circle of the iris detected by YOLO v4. Because this center is very close to the real center of the pupil, a small search field can be used to reduce the number of iterations and speed up localization. The search range of the inner-circle radius can be set according to prior knowledge of the iris image. Because the circumference of the inner circle is short, n can be set to 32. When Formula (2) reaches its maximum value, we obtain the center (x_p, y_p) and radius r_p of the inner boundary of the iris.
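Building on the previous sketch, the inner-boundary search can be written as a brute-force maximization over a small window around the detector-derived center; the window size and radius bounds below are illustrative assumptions.
```python
# Sketch of the inner-boundary search (Section 3.4.3): maximize Formula (2)
# over a small neighbourhood of the YOLO-derived center and a prior radius range.
import numpy as np

def locate_inner_boundary(gray, cx, cy, r_min=20, r_max=70, window=10, n=32):
    best_score, best_circle = -np.inf, None
    for y0 in range(cy - window, cy + window + 1):
        for x0 in range(cx - window, cx + window + 1):
            for r in range(r_min, r_max + 1):
                score = modified_operator_score(gray, x0, y0, r, n=n)
                if score > best_score:
                    best_score, best_circle = score, (x0, y0, r)
    return best_circle  # (x_p, y_p, r_p)
```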

3.4.4. Localization of Iris Outer Boundary

The centers of the iris inner and outer boundaries are generally very close. When locating the outer circle, the search range of the circle center should be limited within a tiny neighborhood of the inner circle center. The radius search range can be based on the rectangular box obtained by YOLO v4. Its range is:
1.2 \, r_p < r_1 < 0.5 \times \max(rows, cols)    (5)
where r_p is the radius of the iris inner boundary obtained in Section 3.4.3, r_1 is the search range of the outer-boundary radius, and rows and cols are the length and width of the rectangular box obtained by YOLO v4 when the iris is detected. Here, n can be set to 256. When Formula (2) reaches its maximum value, we obtain the center (x_i, y_i) and radius r_i of the outer boundary.
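A corresponding sketch of the outer-boundary search is shown below; the search window around the inner-circle center is an illustrative assumption, while the radius bounds follow Formula (5).
```python
# Sketch of the outer-boundary search (Section 3.4.4): the center is confined to
# a tiny neighbourhood of the inner-circle center, and the radius range follows
# Formula (5) with n = 256 sample points.
import numpy as np

def locate_outer_boundary(gray, x_p, y_p, r_p, rows, cols, window=3):
    r_low = int(np.ceil(1.2 * r_p))
    r_high = int(0.5 * max(rows, cols))
    best_score, best_circle = -np.inf, None
    for y0 in range(y_p - window, y_p + window + 1):
        for x0 in range(x_p - window, x_p + window + 1):
            for r in range(r_low, r_high + 1):
                score = modified_operator_score(gray, x0, y0, r, n=256)
                if score > best_score:
                    best_score, best_circle = score, (x0, y0, r)
    return best_circle  # (x_i, y_i, r_i)
```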

4. Experimental Results and Analysis

4.1. Iris Images Pre-Processing

To verify the robustness of the method proposed in this paper, iris images with poor quality, insufficient clarity, eyelash occlusion, and serious eyeglass reflection were not deliberately eliminated from the experiment. From the CASIA-Iris-Thousand dataset, we selected 1000 images from the first 50 subjects; from the CASIA-Iris-Distance dataset, we selected 1000 images from the first 54 subjects. The Image Labeler provided by Matlab 2022a was used to label these 2000 images, yielding a total of 3000 labeled rectangular boxes. Of these 2000 images, 80% were randomly selected as the training set and 20% as the test set.
Because the input size of the MobileNetV2 used in this paper is 224 × 224 × 3, the pre-processing step must uniformly resize these 2000 images to 224 × 224 × 3 before the experiment. This not only meets the input requirement of MobileNetV2 but also accommodates color iris images.
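A minimal sketch of this resizing step is shown below; replicating the grayscale channel is an assumption about how grayscale CASIA images are adapted to the three-channel input.
```python
# Sketch of the pre-processing step: resize every image to 224x224 and make sure
# it has three channels, matching the MobileNetV2 input size of 224x224x3.
import cv2

def preprocess(image):
    if image.ndim == 2:                                  # grayscale iris image
        image = cv2.cvtColor(image, cv2.COLOR_GRAY2BGR)  # replicate to 3 channels
    return cv2.resize(image, (224, 224))
```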

4.2. The Experimental Platform and the Evaluation Indicators

The experimental software and hardware platforms were as follows: a 64-bit Windows 10 operating system; an Intel Core i7-8700 3.20 GHz CPU; 16 GB of RAM; and an NVIDIA GeForce RTX 3060 GPU with 12 GB of memory. The software development environment was Matlab 2022a. The hyperparameters of the training model were set as follows: training batch size BatchSize = 4; number of training epochs = 5; learning rate 0.005; and the Adam algorithm for optimization. All experiments were run on one GPU card.
To quantitatively evaluate the performance of different iris-detection algorithms, we used the average precision (AP) for comparison. The AP is related to precision and recall, which can be formulated as [38]:
Recall = TP / (TP + FN)

Precision = TP / (TP + FP)
where TP, FP, and FN denote the numbers of true positives, false positives, and false negatives, respectively. A true positive (TP) means that the prediction is consistent with the label, a false positive (FP) means that a negative case is predicted as positive, and a false negative (FN) means that a positive case is predicted as negative. Usually, all iris region proposals whose IoU with a ground-truth box is ≥ 0.5 are considered TP, while the others are considered FP [46]. The mean AP (mAP) is the average AP value across all object categories. The AP and mAP are quantitative indicators used in object detection; generally, the higher the AP, the better the detection performance.
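The sketch below shows the IoU test and the resulting precision/recall computation in simplified form (greedy matching without one-to-one assignment); boxes are assumed to be in [x, y, width, height] format.
```python
# Sketch of IoU-based matching and the precision/recall formulas above.
def iou(box_a, box_b):
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    iw = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    ih = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = iw * ih
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def precision_recall(predictions, ground_truths, iou_thr=0.5):
    tp = sum(1 for p in predictions if any(iou(p, g) >= iou_thr for g in ground_truths))
    fp = len(predictions) - tp
    fn = sum(1 for g in ground_truths if not any(iou(p, g) >= iou_thr for p in predictions))
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall
```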

4.3. Comparison Experiment with Traditional YOLO v4

There are two types of backbone in the traditional YOLO v4 network: one is csp-darknet53-coco, and the other one is tiny-yolov4-coco. The COCO dataset was used to train these two networks. The size of the detection model with csp-darknet53-coco as the backbone is usually greater than 200 M. Because iris detection is mainly used in mobile phones or embedded devices, we need a smaller detection model. Although the precision of YOLO v4-darknet53 is very high, it is not suitable for embedded devices. Therefore, the comparison experiment here did not include YOLO v4-darknet53.
With the same training hyperparameters, the experimental results show that the YOLO v4-tiny model is 21.42 M in size, while the YOLO v4-MobileNetV2 model proposed in this paper is only 5.8 M, much smaller than the traditional YOLO v4-tiny model.
Some results of iris localization in non-cooperative environments are shown in Figure 6. The yellow rectangle represents the iris-localization result. It can be seen that the iris-detection algorithm proposed in this paper has good anti-interference performance and strong robustness. Table 1 compares the mean average precision (mAP) of iris detection under different IoU thresholds.
From Table 1, we can see that when the IoU threshold was less than or equal to 0.7, the modified YOLO v4 model proposed in this paper performed better than the traditional YOLO v4-tiny model. When the IoU threshold was 0.8, the mAP of the modified YOLO v4 model dropped significantly; however, this had no effect on the subsequent localization of the iris inner and outer circles. In our experiment, we set the IoU threshold to 0.5. In this case, the precision-recall curves of the two networks are shown in Figure 7.
From Figure 7, we can see that the mAP of the modified YOLO v4 network was close to 100% at epoch = 5, and its performance was better than that of the traditional YOLO v4-tiny network.

4.4. Experiment with Inner and Outer Iris Circle Localization

To verify the scalability and robustness of the proposed method, the images used in this section do not overlap with the images used in Section 4.1, which constitutes an open-set test. For the CASIA-Iris-Thousand dataset, image data were randomly selected from the last 950 individuals; for the CASIA-Iris-Distance dataset, image data were randomly selected from the last 88 individuals. The specific composition of the dataset is shown in Table 2.
The traditional Daugman's localization algorithm iteratively searches for the center and radius of the circle over the entire iris image, which takes a very long time, even tens of seconds, so it has little practical significance. Based on this consideration, the localization comparison was conducted with the same size of search field. Under this premise, Daugman's integro-differential operator was compared with the improved operator proposed in this paper. Table 3 and Table 4 show the accuracy of the different algorithms in locating the inner and outer circles at a short distance and a long distance, respectively.
Table 3 and Table 4 demonstrate that the proposed operator achieved greater accuracy than Daugman's algorithm on images both with and without glasses, which shows that the improved operator is effective and improves the accuracy of precise iris localization. Compared with Daugman's operator, the modified operator improved the localization accuracy by 2.74% for short-distance images without glasses and by 4.06% for images with glasses. Because the compensation factors are calculated in each iteration, the localization time is slightly longer. For the long-distance images, the modified operator located the iris more accurately than at a short distance, and it was much more accurate than Daugman's operator. In particular, for the images with glasses, the localization accuracy reached 84%, far better than Daugman's algorithm, whose success rate was very low: only 14 irises were successfully located, and in no image were both eyes located successfully at the same time, so its run time is not meaningful to report. The experimental results show that the modified operator proposed in this paper improves the robustness of iris localization, especially in non-cooperative environments, as shown in Figure 8.

5. Conclusions

We propose a novel detection model and a modified integro-differential operator for accurate and robust iris localization in non-cooperative environments. We first used MobileNetV2 as the backbone network in YOLO v4 for feature extraction. The modified YOLO v4 model is only 5.8 M in size, which is significantly smaller than the traditional YOLO v4-tiny model. Then, a modified integro-differential operator was used to precisely locate the inner and outer boundaries. Extensive experiments on multiple datasets demonstrated the effectiveness and robustness of our method for iris localization in non-cooperative environments.
Owing to limited conditions, we did not find an image dataset of eye diseases, and iris localization for ocular diseases still has a low success rate; we will address this in future research. Furthermore, we are interested in investigating how to accurately localize the iris in video.

Author Contributions

Conceptualization, Q.X. and X.Z.; methodology, Q.X.; software, Q.X.; validation, J.S.; formal analysis, J.S.; investigation, Q.X. and X.Z.; resources, J.S.; data curation, Q.X.; writing—original draft preparation, Q.X.; writing—review and editing, Q.X. and J.S.; visualization, Q.X.; supervision, X.Z., X.W. and N.Q.; project administration, X.Z., X.W. and N.Q.; funding acquisition, X.Z., X.W. and N.Q. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 61673316; the Natural Science Foundation of Hunan Province, grant number 2020JJ6062 and 2021JJ50137; and the Scientific Research Project of Hunan Provincial Department of Education, grant number 22A0484.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Publicly available datasets were analyzed in this study. The data can be found here: http://biometrics.idealtest.org/findTotalDbByMode.do?mode=Iris#/ (accessed on 30 November 2022).

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
YOLO    you only look once
AP      average precision
mAP     mean AP
TP      true positive
FP      false positive
FN      false negative
IoU     intersection over union
CASIA   Chinese Academy of Sciences Institute of Automation
CNN     convolutional neural network
R-CNN   region-based CNN
SPP     spatial pyramid pool
PAN     path aggregation network

References

  1. Jain, A.K.; Ross, A.; Pankanti, S. Biometrics: A tool for information security. IEEE Trans. Inf. Forensics Secur. 2006, 1, 125–143. [Google Scholar] [CrossRef] [Green Version]
  2. Best-Rowden, L.; Jain, A. Longitudinal Study of Automatic Face Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 148–162. [Google Scholar] [CrossRef] [PubMed]
  3. He, R.; Tan, T.; Davis, L.; Sun, Z. Learning structured ordinal measures for video based face recognition. Pattern Recognit. 2018, 75, 4–14. [Google Scholar] [CrossRef] [Green Version]
  4. Xu, W.; Shen, Y.; Bergmann, N.; Hu, W. Sensor-Assisted Multi-View Face Recognition System on Smart Glass. IEEE Trans. Mob. Comput. 2018, 17, 197–210. [Google Scholar] [CrossRef]
  5. Cao, K.; Jain, A.K. Automated Latent Fingerprint Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 41, 788–800. [Google Scholar] [CrossRef] [Green Version]
  6. Zhao, S.; Zhang, B.; Chen, C.P. Joint deep convolutional feature representation for hyperspectral palmprint recognition. Inf. Sci. 2019, 489, 167–181. [Google Scholar] [CrossRef]
  7. Zhao, S.; Zhang, B. Learning salient and discriminative descriptor for palmprint feature extraction and identification. IEEE Trans. Neural Netw. Learn. Syst. 2020, 31, 5219–5230. [Google Scholar] [CrossRef]
  8. Xiong, Q.; Zhang, X.; He, S.; Shen, J. A Fractional-Order Chaotic Sparrow Search Algorithm for Enhancement of Long Distance Iris Image. Mathematics 2021, 9, 2790. [Google Scholar] [CrossRef]
  9. Xiong, Q.; Zhang, X.; Xu, X.; He, S. A modified chaotic binary particle swarm optimization scheme and its application in face-iris multimodal biometric identification. Electronics 2021, 10, 217. [Google Scholar] [CrossRef]
  10. Alwawi, B.K.O.C.; Althabhawee, A.F. Towards more accurate and efficient human iris recognition model using deep learning technology. TELKOMNIKA (Telecommun. Comput. Electron. Control.) 2022, 20, 817–824. [Google Scholar] [CrossRef]
  11. Drozdowski, P.; Rathgeb, C.; Busch, C. Computational workload in biometric identification systems: An overview. IET Biom. 2019, 8, 351–368. [Google Scholar] [CrossRef] [Green Version]
  12. Muroo, A. The human iris structure and its usages. Physica 2000, 39, 87–95. [Google Scholar]
  13. Bowyer, K.W.; Burge, M.J. Handbook of Iris Recognition; Springer: London, UK, 2016. [Google Scholar]
  14. Pillai, J.K.; Patel, V.M.; Chellappa, R.; Ratha, N.K. Secure and robust iris recognition using random projections and sparse representations. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 1877–1893. [Google Scholar] [CrossRef] [Green Version]
  15. Wu, X.; Zhao, L. Study on iris segmentation algorithm based on dense U-Net. IEEE Access 2019, 7, 123959–123968. [Google Scholar] [CrossRef]
  16. Jan, F. Segmentation and localization schemes for non-ideal iris biometric systems. Signal Process. 2017, 133, 192–212. [Google Scholar] [CrossRef] [Green Version]
  17. Arsalan, M.; Kim, D.S.; Lee, M.B.; Owais, M.; Park, K.R. FRED-Net: Fully residual encoder–decoder network for accurate iris segmentation. Expert Syst. Appl. 2019, 122, 217–241. [Google Scholar] [CrossRef]
  18. Bazrafkan, S.; Thavalengal, S.; Corcoran, P. An end to end deep neural network for iris segmentation in unconstrained scenarios. Neural Netw. 2018, 106, 79–95. [Google Scholar] [CrossRef] [Green Version]
  19. Feng, X.; Liu, W.; Li, J.; Meng, Z.; Sun, Y.; Feng, C. Iris R-CNN: Accurate iris segmentation and localization in non-cooperative environment with visible illumination. Pattern Recognit. Lett. 2022, 155, 151–158. [Google Scholar] [CrossRef]
  20. Basit, A.; Javed, M.Y. Localization of iris in gray scale images using intensity gradient. Opt. Lasers Eng. 2007, 45, 1107–1114. [Google Scholar] [CrossRef]
  21. Mei-Sen, P.; Qi, X. A New Iris Location Method. Biomed. Eng. Appl. Basis Commun. 2020, 32, 2050046. [Google Scholar] [CrossRef]
  22. CASIA Iris Image Database. Available online: http://biometrics.idealtest.org/findTotalDbByMode.do?mode=Iris#/ (accessed on 30 November 2022).
  23. Peng, H.; Li, B.; He, D.; Wang, J. End-to-End Anti-Attack Iris Location Based on Lightweight Network. In Proceedings of the 2020 IEEE International Conference on Advances in Electrical Engineering and Computer Applications (AEECA), Dalian, China, 25–27 August 2020; pp. 821–827. [Google Scholar]
  24. Yang, K.; Xu, Z.; Fei, J. Dualsanet: Dual spatial attention network for iris recognition. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 5–9 January 2021; pp. 889–897. [Google Scholar]
  25. Susitha, N.; Subban, R. Reliable pupil detection and iris segmentation algorithm based on SPS. Cogn. Syst. Res. 2019, 57, 78–84. [Google Scholar] [CrossRef]
  26. Pan, M.S.; Xiong, Q. Iris location method based on mathematical morphology and improved hough transform. Biomed. Eng. Appl. Basis Commun. 2021, 33, 2150001. [Google Scholar] [CrossRef]
  27. Daugman, J. Statistical richness of visual phase information: Update on recognizing persons by iris patterns. Int. J. Comput. Vis. 2001, 45, 25–38. [Google Scholar] [CrossRef]
  28. Daugman, J. The importance of being random: Statistical principles of iris recognition. Pattern Recognit. 2003, 36, 279–291. [Google Scholar] [CrossRef] [Green Version]
  29. Wildes, R.P. Iris recognition: An emerging biometric technology. Proc. IEEE 1997, 85, 1348–1363. [Google Scholar] [CrossRef] [Green Version]
  30. Ma, L.; Tan, T.; Wang, Y.; Zhang, D. Personal identification based on iris texture analysis. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 1519–1533. [Google Scholar]
  31. Li, Y.H.; Huang, P.J.; Juan, Y. An efficient and robust iris segmentation algorithm using deep learning. Mob. Inf. Syst. 2019, 2019, 4568929. [Google Scholar] [CrossRef]
  32. Tan, M.; Pang, R.; Le, Q.V. Efficientdet: Scalable and efficient object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 10781–10790. [Google Scholar]
  33. Cui, Y.; Yang, L.; Liu, D. Dynamic proposals for efficient object detection. arXiv 2022, arXiv:2207.05252. [Google Scholar]
  34. Bochkovskiy, A.; Wang, C.Y.; Liao, H.Y.M. YOLOv4: Optimal Speed and Accuracy of Object Detection. arXiv 2020, arXiv:2004.10934. [Google Scholar]
  35. Cai, Y.; Luan, T.; Gao, H.; Wang, H.; Chen, L.; Li, Y.; Sotelo, M.A.; Li, Z. YOLOv4–5D: An effective and efficient object detector for autonomous driving. IEEE Trans. Instrum. Meas. 2021, 70, 1–13. [Google Scholar] [CrossRef]
  36. Khasawneh, N.; Fraiwan, M.; Fraiwan, L. Detection of K-complexes in EEG signals using deep transfer learning and YOLOv3. Clust. Comput. 2022, 1–11. [Google Scholar] [CrossRef]
  37. Naranpanawa, D.N.U.; Gu, Y.; Chandra, S.S.; Betz-Stablein, B.; Sturm, R.A.; Soyer, H.P.; Eriksson, A.P. Slim-YOLO: A Simplified Object Detection Model for the Detection of Pigmented Iris Freckles as a Potential Biomarker for Cutaneous Melanoma. In Proceedings of the Digital Image Computing: Techniques and Applications (DICTA), Gold Coast, Australia, 29 November 2021–1 December 2021; pp. 1–8. [Google Scholar]
  38. Severo, E.; Laroca, R.; Bezerra, C.S.; Zanlorensi, L.A.; Weingaertner, D.; Moreira, G.; Menotti, D. A benchmark for iris location and a deep learning detector evaluation. In Proceedings of the 2018 International Joint Conference on Neural Networks (IJCNN), Rio de Janeiro, Brazil, 8–13 July 2018; pp. 1–7. [Google Scholar]
  39. Garea-Llano, E.; Morales-Gonzalez, A. Framework for biometric iris recognition in video, by deep learning and quality assessment of the iris-pupil region. J. Ambient. Intell. Humaniz. Comput. 2021, 1–13. [Google Scholar] [CrossRef]
  40. Lian, S.; Luo, Z.; Zhong, Z.; Lin, X.; Su, S.; Li, S. Attention guided U-Net for accurate iris segmentation. J. Vis. Commun. Image Represent. 2018, 56, 296–304. [Google Scholar] [CrossRef]
  41. Wang, C.; Muhammad, J.; Wang, Y.; He, Z.; Sun, Z. Towards complete and accurate iris segmentation using deep multi-task attention network for non-cooperative iris recognition. IEEE Trans. Inf. Forensics Secur. 2020, 15, 2944–2959. [Google Scholar] [CrossRef]
  42. Li, Y.H.; Putri, W.R.; Aslam, M.S.; Chang, C.C. Robust iris segmentation algorithm in non-cooperative environments using interleaved residual U-Net. Sensors 2021, 21, 1434. [Google Scholar] [CrossRef] [PubMed]
  43. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. MobileNetV2: Inverted Residuals and Linear Bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520. [Google Scholar]
  44. MathWorks Help Center: Getting Started with YOLO v4. Available online: https://ww2.mathworks.cn/help/vision/ug/getting-started-with-yolo-v4.html (accessed on 30 November 2022).
  45. Wu, Y. Research on Iris Location and Authentication. Bachelor Thesis, Xi’an Jiaotong University, Xi’an, China, 12 June 2018. [Google Scholar]
  46. Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 580–587. [Google Scholar]
Figure 1. Some typical iris images obtained in non-cooperative environments: (a) iris obscured by eyelids; (b) iris interfered by eyelashes; (c) iris obstruction due to hair; (d) glasses obstructing the iris; (e,f) off-angle iris; (g,h) iris with specular reflection [22].
Figure 2. Flow chart of the localization algorithm.
Figure 3. Sample images of the datasets used to train YOLO v4. (a) Example image from CASIA-Iris-Thousand; (b,d) manually labeled outer boundary of the iris; (c) example image from CASIA-Iris-Distance [22].
Figure 4. The overall network structure of modified YOLO v4 [44].
Figure 5. Schematic diagram of g_{θ,r} and C_{θ,r}.
Figure 6. Some results of typical iris images obtained in non-cooperative environments. (a) Off-angle iris; (b) iris obscured by eyelids; (c) iris interfered with by eyelashes; (d,e) iris with specular reflection; (f) off-angle iris at a long distance; (g) iris obstruction due to hair; (h) glasses obstructing the iris [22].
Figure 7. Precision recall curves of the two networks (IoU = 0.5).
Figure 8. Inner and outer boundaries of iris localization by the improved operator in non-cooperative environments [22]. (a) Off-angle iris; (b) iris obscured by eyelids; (c) iris interfered with by eyelashes; (d,e) iris with specular reflection; (f) off-angle iris at a long distance; (g) iris obstruction due to hair; (h) glasses obstructing the iris.
Table 1. mAP of the two detection networks under different IoU thresholds.

IoU                              0.5      0.6      0.7      0.8
mAP of YOLO v4-tiny (%)          98.66    94.80    86.31    60.44
mAP of the proposed method (%)   99.83    98.49    90.57    41.50
Table 2. Dataset composition.

Dataset                Number of Images without Glasses    Number of Images with Glasses
CASIA-Iris-Thousand    4000                                 500
CASIA-Iris-Distance    500                                  100
Table 3. Comparison results of iris localization at a short distance.

Method                Images without Glasses                Images with Glasses
                      Location Accuracy    Time Cost (s)    Location Accuracy    Time Cost (s)
Daugman's operator    94.98%               0.215            89.85%               0.216
Proposed method       97.72%               0.227            93.91%               0.196
Table 4. Comparison results of iris localization at a long distance.

Method                Images without Glasses                Images with Glasses
                      Location Accuracy    Time Cost (s)    Location Accuracy    Time Cost (s)
Daugman's operator    78.46%               2.162            7%                   N/A
Proposed method       98.32%               2.213            84%                  2.248
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
