1. Introduction
Nowadays, portrait drawing has gained significance in cultivating painting skills and personal sentiments [1]. A portrait is commonly perceived as representing a human being's features, which is an important theme in figurative art. Unfortunately, novices in portrait drawing might find it difficult to define the proportions of facial features at the beginning of a sketch. On the other hand, with great improvements in digital technologies, portrait drawing has become possible on electronic devices such as tablets and notebook PCs.
With this motivation, we recently developed a Portrait Drawing Learning Assistant System (PDLAS) to assist beginners in learning to draw portraits by themselves [2,3]. It provides auxiliary lines to guide users in drawing portraits. For this purpose, we proposed an auxiliary line generation algorithm that extracts the facial features and contours of face parts from a given face photo using OpenPose [4] and OpenCV [5].
To evaluate the effectiveness of the PDLAS, we asked students at Okayama University, Japan, to draw portraits from given face photos using the PDLAS. However, no method for evaluating the drawing accuracy of the results had been implemented in the PDLAS.
In this paper, we present a drawing accuracy evaluation method that calculates the Normalized Cross-Correlation (NCC) for each face component to quantify the similarity between a user’s drawing and its original face image. By localizing the range of the calculated NCC to each face part such as an eye, a nose, a mouth, and an eyebrow, the evaluation result can be improved. To extract a face part, the rectangular bounding box around the part is found first in our method. Then, we use the key points of facial features provided in OpenPose to define the area of the evaluation. The accuracy feedback in the proposed method not only enhances the interactivity and efficiency of portrait drawing learning but also offers valuable suggestions for improving drawing through precise similarity evaluations.
Preliminary applications of the drawing accuracy evaluation method revealed that the NCC score of the nose was consistently low across all portrait results. After analyzing the reason, we improved the auxiliary line generation algorithm for the nose in this paper.
For evaluations, we asked 13 students at Okayama University, Japan, to draw portraits using the PDLAS and applied the proposed method to their outputs. Then, from the obtained NCC scores, we analyzed issues in the portrait results and provided suggestions for improving the drawn components as user feedback. Thus, the effectiveness of the proposed method was confirmed.
For the overall project, our research objectives are divided into two levels. At the whole-project level of the Portrait Drawing Learning Assistant System (PDLAS), the objective is to present a system for improving the learning experiences of novices in portrait drawing. At the level of this paper's contribution, the drawing accuracy evaluation method, the objective is to present a method that provides feedback for improving specific parts of the drawn face. By improving them one by one, novices can gradually enhance their portrait drawing skills. This research will help more people learn drawing skills by themselves, especially in areas with fewer educational resources. It will contribute to the equalization of education, giving people in different regions and professions the opportunity to develop their artistic talents.
The rest of this paper is organized as follows: Section 2 discusses related works in the literature. Section 3 reviews the Portrait Drawing Learning Assistant System. Section 4 presents the proposed drawing accuracy evaluation method. Section 5 discusses its application to existing drawing results. Section 6 describes the improvement of the nose auxiliary line generation. Section 7 discusses the evaluation results after this improvement. Finally, Section 8 concludes this paper with future works.
2. Related Works in the Literature
In this section, we discuss works in the literature related to this paper.
2.1. Portrait Drawing
In [6], Yi R. et al. proposed a novel asymmetric cycle mapping that enforces the reconstruction information to be visible (by a truncation loss) and only embedded in selective facial regions (by a relaxed forward cycle-consistency loss).
In [7], Yi R. et al. proposed a novel method to automatically transform face photos into portrait drawings using unpaired training data with two new features; i.e., their method can (1) learn to generate high-quality portrait drawings in multiple styles using a single network and (2) generate portrait drawings in a "new style" unseen in the training data.
In [8], Tong Z. et al. proposed a novel image-to-pencil translation method that can not only generate high-quality pencil sketches but also offer the drawing process. Existing pencil sketch algorithms are based on texture rendering rather than the direct imitation of strokes, making them unable to show the drawing process but only a final result.
In [9], Looi L. et al. proposed a method to estimate drawing guidelines when given an image of a front-facing face. The portrait drawing process is complex and often requires an artist to break down a face into basic guidelines as a foundation for the artwork. They utilized a modified Histogram of Oriented Gradients and a Linear Support Vector Machine to identify the region of interest of a face in the image. An implementation of an ensemble of randomized regression trees was used to regress the locations of key facial landmarks. Guideline estimation was achieved by using specific points from the facial landmark output in combination with information about the average proportions of a human head.
2.2. Drawing Assistant System
In [10], Takagi S. et al. proposed a learning support system for beginners' pencil drawing, which is the basis of pictures. The system receives a motif data set and a user's sketch image and returns advice to the user. The processing is composed of four functions: feature extraction of motifs, feature extraction of sketches, error identification, and generation and presentation of advice. They developed and experimented with a prototype system limited to a basic motif and principal advice.
In [11], Huang Z. et al. proposed dualFace, a portrait drawing interface that assists users with different levels of drawing skills in completing recognizable and authentic face sketches. Inspired by traditional artist workflows for portrait drawing, dualFace gives two stages of drawing assistance to provide global and local visual guidance. The former helps users draw contour lines for portraits (i.e., the geometric structure), and the latter helps users draw details of facial parts that conform to the user-drawn contour lines.
In [12], Iarussi E. et al. presented an interactive drawing tool that provides automated guidance over model photographs to help people practice traditional drawing-by-observation techniques. The drawing literature describes a number of techniques to support this task and help people gain consciousness of the shapes in a scene and their relationships. They compiled these techniques and derived a set of construction lines that they automatically extract from a model photograph.
In [13], Gao F. et al. presented a drawing robot that can automatically transform a facial picture into a vivid portrait and then draw it on paper in an average of two minutes. At the heart of their system is a novel deep learning-based portrait synthesis algorithm. Innovatively, they employed a self-consistency loss, which makes the algorithm capable of generating continuous and smooth brush-strokes. In addition, they proposed a componential sparsity constraint to reduce the number of brush-strokes over insignificant areas.
In [14], Xie J. et al. presented PortraitSketch, an interactive drawing system that helps novices create pleasing, recognizable face sketches without requiring prior artistic training. As the user traces over a source portrait photograph, PortraitSketch automatically adjusts the geometry and stroke parameters (thickness, opacity, etc.) to improve the aesthetic quality of the sketch. They presented algorithms for adjusting both outlines and shading strokes based on important features of the underlying source image.
In [15], Li S. et al. proposed AgeFace, an interactive drawing interface that assists users in creating faces with age-specific features based on user input strokes. They evaluated the usability of AgeFace through a user experience experiment and a comparison experiment with baseline approaches. The results verified that AgeFace achieves better usability and better support in the creative process than the baseline systems.
In summary, while the techniques and methods described above have greatly improved the image drawing process, they focus primarily on improving the final output or providing tools to assist in drawing. They do not directly instruct novices in the learning process, nor do they provide step-by-step instructions or basic training to help novices improve their drawing skills in a systematic way. This gap provides an opportunity for further research and development of a system designed to help novices learn the fundamentals of portrait drawing. Such a system could provide tailored instruction, feedback, and educational resources.
3. Review of Portrait Drawing Learning Assistant System
In this section, we review the Portrait Drawing Learning Assistant System (PDLAS) [2].
3.1. System Overview
The PDLAS has been designed and implemented to assist novices in drawing portraits using electronic devices such as a tablet or a personal computer with a digital pen and drawing software. In our preliminary implementation [3], we adopted an iPad [16] and an Apple Pencil with the drawing software Procreate (version 5.3.10).
To guide novices in drawing a portrait, an image of the auxiliary lines of the face is generated by an auxiliary line generation algorithm and is inserted into a layer in Procreate separate from the one used for drawing the portrait. This algorithm extracts the facial features and contours of face components such as the eyes, eyebrows, nose, and mouth from a given face photo, using OpenPose and OpenCV.
OpenPose is used to find the coordinates of the keypoints that represent important locations to determine the structure of a human body. The auxiliary lines for the eyes, the mouth, the lower face contour, the nose, and the head top are extracted from them. The OpenCV library is used to extract the auxiliary lines for the hair, the eyebrows, and the eyeglasses.
3.2. OpenPose
OpenPose is a popular open-source software library that furnishes the coordinates of 70 facial keypoints, as illustrated in Figure 1. These keypoints denote the positions of the facial features.
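For illustration, a minimal sketch of obtaining these keypoints through OpenPose's Python bindings (pyopenpose) is shown below; the model path and parameter values are assumptions, not the exact configuration used in the PDLAS.

```python
# A minimal sketch of obtaining the 70 facial keypoints with OpenPose's
# Python bindings (pyopenpose); the model path and parameters are assumptions.
import cv2
import pyopenpose as op

params = {"model_folder": "openpose/models/", "face": True}

opWrapper = op.WrapperPython()
opWrapper.configure(params)
opWrapper.start()

datum = op.Datum()
datum.cvInputData = cv2.imread("face_photo.jpg")
# Newer builds wrap the datum list; older builds accept a plain Python list.
opWrapper.emplaceAndPop(op.VectorDatum([datum]))

face_kps = datum.faceKeypoints[0]  # shape (70, 3): (x, y, confidence) per keypoint
```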
3.3. Auxiliary Lines
The auxiliary lines assist a user in drawing a portrait by making it easy to grasp the overall structure of the face and the position of each facial feature. They include the following:
Triangle auxiliary lines widely used in drawing methods [17];
Outlines of the eyes, the shape of the mouth, and the lower face contour;
Three circle auxiliary lines for the nose;
Outlines of the hair, the eyebrows, and the eyeglasses.
The first three items are extracted from the corresponding keypoints by OpenPose, and the last item by OpenCV library functions.
Figure 2 illustrates auxiliary lines generated by OpenPose and by OpenCV, which will be combined into one image for the complete auxiliary lines.
Figure 3 illustrates the complete auxiliary lines.
3.4. Auxiliary Line Generation by OpenPose
First, we review the procedure of generating the auxiliary lines using keypoints from OpenPose in the auxiliary line generation algorithm.
Table 1 shows the keypoint indices that are used to generate the corresponding auxiliary lines.
3.5. Auxiliary Line Generation by OpenCV
Next, we review the procedure of generating the remaining auxiliary lines using functions in the OpenCV library. Specifically, the functions for grayscale conversion, Gaussian blur, thresholding, color inversion, mean filtering, edge expansion (dilation), and the Canny algorithm are used to process the image for edge detection.
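As an illustration, this chain of OpenCV calls might look like the following sketch; all kernel sizes and threshold values here are assumptions for illustration, not the exact parameters of the PDLAS.

```python
# Sketch of the edge-detection preprocessing chain named above; all kernel
# sizes and threshold values are illustrative assumptions.
import cv2

img = cv2.imread("face_photo.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)                     # grayscale conversion
blurred = cv2.GaussianBlur(gray, (5, 5), 0)                      # Gaussian blur
_, binary = cv2.threshold(blurred, 127, 255, cv2.THRESH_BINARY)  # thresholding
inverted = cv2.bitwise_not(binary)                               # color inversion
smoothed = cv2.blur(inverted, (3, 3))                            # mean filtering
dilated = cv2.dilate(smoothed, None, iterations=1)               # edge expansion
edges = cv2.Canny(dilated, 50, 150)                              # Canny edge detection
```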
In this procedure, the coordinates of the 17th–26th, 37th, 38th, 40th, 41st, 43rd, 44th, 46th, and 47th keypoints from OpenPose are used to locate the positions of the eyebrows. In addition, they are used to eliminate unnecessary lines.
For the eyeglasses in the image, first, we extract a specific area from the grayscale image as the Region of Interest (ROI) for the eyeglasses [18]. Then, we apply mean filtering to blur the extracted eyeglass region to smooth the image. After blurring, we dilate the image to highlight the frame of the eyeglasses. Finally, we use the Canny algorithm to detect the edges of the eyeglasses.
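A minimal sketch of this eyeglass pipeline is shown below; the ROI bounds are hypothetical placeholders, since the actual region is derived from the detected keypoints.

```python
# Sketch of the eyeglass edge extraction; the ROI bounds are hypothetical
# placeholders, not the exact coordinates used by the PDLAS.
import cv2

gray = cv2.cvtColor(cv2.imread("face_photo.jpg"), cv2.COLOR_BGR2GRAY)
x0, y0, x1, y1 = 80, 120, 320, 200      # assumed eyeglass region bounds
roi = gray[y0:y1, x0:x1]                # extract the eyeglass ROI
roi = cv2.blur(roi, (5, 5))             # mean filtering to smooth the region
roi = cv2.dilate(roi, None)             # dilation to highlight the frame
glass_edges = cv2.Canny(roi, 50, 150)   # detect the eyeglass edges
```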
Figure 4 shows an example of generating auxiliary lines for the eyeglass.
4. Proposal of Drawing Accuracy Evaluation Method
In this section, we propose the drawing accuracy evaluation method using Normalized Cross-Correlation (NCC) [19] for the PDLAS.
4.1. Normalized Cross-Correlation (NCC)
The proposed method adopts Normalized Cross-Correlation (NCC) to evaluate the similarity of the user's drawing result to the given face photo. NCC can measure the similarity between two signals and is particularly well suited for image processing [20,21,22,23,24]. Correlation is widely used as an effective similarity measure in matching tasks, since it is the simplest but most effective method for similarity measurement [25]. Technically, NCC determines the matching point between the template and the image by searching for the location of the maximum value in the image matrices [26,27].
NCC can provide a robust framework for evaluating likeness on a pixel-by-pixel basis by primarily reflecting the accuracy in replicating the shapes and positions of the facial features.
The NCC calculation formula is given as follows [28]:
$$\mathrm{NCC}(x, y) = \frac{\sum_{x', y'} \left( T(x', y') - \bar{T} \right) \left( I(x + x', y + y') - \bar{I}_{x, y} \right)}{\sqrt{\sum_{x', y'} \left( T(x', y') - \bar{T} \right)^{2} \, \sum_{x', y'} \left( I(x + x', y + y') - \bar{I}_{x, y} \right)^{2}}}$$
where $\bar{T}$ is the mean pixel value of the template image and $\bar{I}_{x, y}$ is the mean pixel value of the input image region under the template at offset $(x, y)$.
4.1.1. Template Matching
In the NCC formula, the term $T(x', y')$ represents the pixel value of the template image at the coordinates $(x', y')$, $I(x + x', y + y')$ is the pixel value of the input image, and $(x, y)$ is the offset of the template relative to the input image. The core idea of this template matching is to find the parts of an input image that match a template image [29]. The template image $T$ slides across the input image $I$ at all possible offsets $(x, y)$ while calculating the similarity score at each point [30].
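For illustration, OpenCV exposes this sliding-window matching directly; the following sketch uses cv2.matchTemplate with the TM_CCOEFF_NORMED method, which computes the zero-mean normalized cross-correlation described above (the file names are placeholders).

```python
# Sketch of NCC-based template matching with OpenCV; TM_CCOEFF_NORMED is the
# zero-mean normalized cross-correlation. File names are placeholders.
import cv2

image = cv2.imread("drawing.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("photo_eye.png", cv2.IMREAD_GRAYSCALE)

scores = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(scores)  # best score and its offset
print(f"best NCC score {max_val:.3f} at offset {max_loc}")
```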
4.1.2. Numerator Calculation
The numerator in the NCC formula represents the direct correlation between the template and a specific area of the input image. It is calculated by summing the products of the mean-centered pixel values at the corresponding positions in the template and the input image, considering the relative offset. This sum reflects the degree to which the two sets of pixel values co-vary. A high sum suggests a strong positive correlation, indicating a potential match.
4.1.3. Denominator Calculation
The denominator in the NCC formula normalizes the values, considering the brightness range of each image [31]. It serves to normalize the correlation, accounting for brightness and contrast differences between the template and input images, by considering the standard deviations of the pixel values in both the template and the examined region of the input image. This normalization ensures that the NCC score is not biased by differences in image lighting or exposure, providing a more accurate measure of similarity. The NCC score ranges from −1 to 1, where 1, −1, and 0 indicate complete similarity, complete dissimilarity, and no correlation, respectively.
4.2. NCC Score for Face Component
For accurate and detailed feedback to a user, the NCC score is calculated for each specific face component, such as an eye, the nose, an eyebrow, or the mouth, in the proposed method. For this purpose, the bounding box surrounding each component with a rectangular frame is first extracted using the 70 keypoints from OpenPose [32]. Then, the NCC score is calculated for the image inside the bounding box.
The following procedure describes the outline of the proposed NCC score calculation for each face component (a minimal code sketch follows this list):
Select Region: we select the region in the face image that corresponds to the component to be evaluated.
Extract Feature Vector: we extract the pixel values of the region to form the feature vector. It may be necessary to pre-process the data to reduce the effects of noises and variations.
Normalize Feature Vector: we normalize the feature vector so that it has a zero mean and unit variance. This can reduce the effect of lighting or exposure differences between images.
Calculate Dot Product: we calculate the dot product of the two normalized feature vectors, where the result is a measure of their similarity.
Normalize Dot Product: we divide the dot product result by the product of the lengths of the feature vectors for the normalized correlation score to obtain the final NCC value.
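A minimal NumPy sketch of these five steps for a single face component is shown below; it is an illustration under the assumption of equally sized grayscale regions, not the exact PDLAS implementation.

```python
# A minimal sketch of the five NCC steps above for one face component,
# assuming equally sized grayscale regions in the photo and the drawing.
import numpy as np

def component_ncc(photo: np.ndarray, drawing: np.ndarray, box) -> float:
    x0, y0, x1, y1 = box
    # Steps 1-2: select the component region and flatten it into a feature vector.
    t = photo[y0:y1, x0:x1].astype(np.float64).ravel()
    i = drawing[y0:y1, x0:x1].astype(np.float64).ravel()
    # Step 3: normalize each vector to zero mean and unit variance.
    t = (t - t.mean()) / t.std()
    i = (i - i.mean()) / i.std()
    # Steps 4-5: the normalized dot product gives the NCC score in [-1, 1].
    return float(np.dot(t, i) / (np.linalg.norm(t) * np.linalg.norm(i)))
```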
4.3. Bounding Box for Face Component
The bounding box for each face component is extracted using the coordinates of the related keypoints from OpenPose. Since a bounding box is rectangular, the leftmost x-coordinate, the rightmost x-coordinate, the topmost y-coordinate, and the bottommost y-coordinate should be extracted.
Table 2 shows the keypoints used to extract each face component’s bounding box.
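For illustration, these bounds can be computed directly from the keypoint coordinates, as in the following sketch; the eye indices and the margin are assumptions based on the common 70-point layout, and the placeholder keypoints stand in for real OpenPose output.

```python
# Sketch of deriving a component's rectangular bounding box from its OpenPose
# keypoints; the indices and margin are assumptions for illustration.
import numpy as np

def bounding_box(face_kps: np.ndarray, indices, margin: int = 5):
    pts = face_kps[list(indices), :2]                # (x, y) of the component's keypoints
    x0, y0 = pts.min(axis=0).astype(int) - margin    # leftmost x, topmost y
    x1, y1 = pts.max(axis=0).astype(int) + margin    # rightmost x, bottommost y
    return x0, y0, x1, y1

face_kps = np.random.rand(70, 3) * 400               # placeholder keypoints
left_eye_box = bounding_box(face_kps, range(36, 42)) # assumed eye indices
```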
5. Application to Existing Drawing Results
In this section, we apply the proposed method to the drawing results of the PDLAS in [33] that were obtained previously and discuss the results.
5.1. NCC Score Results
Table 3 shows the NCC scores calculated by the proposed method for both eyes, the nose, the mouth, and the eyebrows in the drawing results of seven students at Okayama University, Japan.
The table indicates the following:
- (1) The right eye gives the highest score, while the nose gives the lowest.
- (2) User 1 gives the highest score, while User 7 gives the lowest.
We will analyze these results in the following subsections.
5.2. Analysis of Application Result of Top-Score User
First, we discuss the application result for the portrait drawing of User 1, which gives the highest average NCC score. Figure 5 illustrates the face photo and the drawing result.
The highest score is achieved for the right eye, suggesting that the user’s depiction of the right eye is more similar to the reference image compared to the left eye. This user might have captured the shape and position of the right eye better. The score for the mouth is 0.59, indicating a moderate level of similarity with the reference image, but the shape or position of the mouth does not perfectly match the original image.
Notably, the NCC score for the nose is zero. Observing the user’s rendition of the nose, we found that the auxiliary lines for the nose do not match the size and the proportion of the original nose image, due to the complexity of rendering the nose accurately among all the facial features.
5.3. Analysis of Application Result of Bottom-Score User
Next, we discuss the application result for the portrait drawing of User 7, which gives the lowest average NCC score. Figure 6 illustrates the face photo and the drawing result.
User 7 achieved low scores across all features. Notably, negative scores were obtained for the eyes and the right eyebrow. The left eye and the right eye both show a low positive correlation. From the drawing result, we observed that the user's linework for the eyes is rather stiff and that the user did not make good use of the auxiliary lines provided by the system, resulting in the overall low score. Overall, we found the following issues in the user's drawing:
- (i) The user's lines when drawing facial features are stiff.
- (ii) There is a notable difference in the NCC scores between the left and right sides.
5.4. Analysis of Application Result of Second User
Analyzing the results of the second user, we notice that the NCC scores for the mouth and the nose are the lowest. The score for the left eye is close to 0.5, which indicates a moderate level of similarity to the left eye in the reference image.
Similarly, the NCC score for the right eye is slightly higher than that for the left eye, suggesting that this user may have been more accurate or meticulous in depicting the right eye. The user’s representation of the nose shows a significant difference in proportion compared to the original image, which can also be seen in the picture. This may be due to the user’s lack of skill in using shadows.
We found the following issues in the user's drawing:
- (i) The mastery of the proportions of the facial features, especially the width of the nostrils, needs improvement.
- (ii) The overall drawing level of the right side of the face is higher than that of the left side.
6. Improvement of Nose Auxiliary Line Generation
In this section, we present an improvement of the auxiliary line generation algorithm for the nose, whose NCC score is very low compared with the scores of the other face components.
6.1. Limitation of Current Auxiliary Lines
As shown in Figure 7, the current auxiliary lines for the nose cover only its bottom boundary. The other boundaries are not covered due to the three-dimensional nature of the nose, whose boundary may not be clear and may change depending on the viewpoint [34]. As a result, in the drawing, the size of the upper part of the nose becomes different from that in the original photo image.
6.2. Auxiliary Lines by Bezier Curve
To solve this problem, we introduce the Bezier curve [35] for the auxiliary lines that represent the other boundaries of the nose. The Bezier curve is a parametric curve used in computer graphics and related fields. It needs contour points considering global shape information, with the curve passing through the first and last control points [36]. In this paper, the following quadratic Bezier curve is adopted, where $t \in [0, 1]$ is a parameter, $P_0$ and $P_2$ are the first and last control points, respectively, and $P_1$ is the middle control point:
$$B(t) = (1 - t)^{2} P_0 + 2(1 - t)t \, P_1 + t^{2} P_2.$$
For $P_0$ and $P_2$ of the Bezier curve for the left nose boundary, keypoint 21 at the left eyebrow and keypoint 31 at the left nose end are used. For $P_0$ and $P_2$ of the curve for the right nose boundary, keypoint 22 at the right eyebrow and keypoint 35 at the right nose end are used.
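As an illustration, the quadratic curve can be sampled as in the following sketch; the placement of the middle control point $P_1$ and the placeholder coordinates are assumptions, since the paper specifies only the first and last control points.

```python
# Sketch of sampling the quadratic Bezier curve for the left nose boundary.
# The middle control point P1 is an assumption; only P0 and P2 come from keypoints.
import numpy as np

def quadratic_bezier(p0, p1, p2, n=50):
    t = np.linspace(0.0, 1.0, n)[:, None]
    return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2

p0 = np.array([180.0, 210.0])   # placeholder for keypoint 21: left eyebrow
p2 = np.array([170.0, 330.0])   # placeholder for keypoint 31: left nose end
p1 = (p0 + p2) / 2.0            # assumed middle control point
left_boundary = quadratic_bezier(p0, p1, p2)  # (50, 2) sampled curve points
```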
Figure 8 shows the improved auxiliary lines by the proposed method.
7. Evaluations After Improvement
In this section, we evaluate the improved auxiliary lines for the nose and the drawing accuracy evaluation method using Normalized Cross-Correlation (NCC).
7.1. Comparison of Nose NCC Score
First, to evaluate the improved auxiliary lines for the nose, we generated them and asked the same students to draw only the nose on the PDLAS using the same face photos. Table 4 compares the NCC scores before and after the improvement of the nose auxiliary lines. It shows that the NCC score is improved by 222.22% on average.
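As a purely hypothetical illustration of how such a percentage is computed (the actual per-user scores appear in Table 4), a nose score rising from 0.09 to 0.29 would give
$$\frac{0.29 - 0.09}{0.09} \times 100\% \approx 222.22\%.$$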
7.2. Application to Drawing Results for Diverse Face Photos
Second, to evaluate the drawing accuracy evaluation method under diverse face photos, we collected photos from various countries covering a range of ages and skin colors. Then, we asked new participants to draw the portraits using the PDLAS. Afterward, we calculated the NCC scores of the drawing results. Table 5 shows the results. We found that the users' scores with the improved auxiliary lines were generally higher. Also, the average scores for both eyes are the same, showing no significant asymmetry problem.
Unlike Table 3, Table 5 shows the results of the first use of the improved auxiliary lines by the new user group. This helps in understanding the initial adaptation of new users to the new auxiliary lines. Although the user groups are different, the overall mean scores of the new users in Table 5 increased, indicating the general effectiveness of the improved auxiliary lines. The NCC scores for different facial features also varied slightly due to the different drawing abilities of each user. This suggests that the PDLAS can detect a user's drawing ability in different parts of the face through the scores, which also informs subsequent evaluations.
7.3. Evaluation of System Usability Scale
To evaluate the usability of the PDLAS and the drawing accuracy evaluation method, we asked the users to respond to the following 10 questions using a five-point scale: 1 (strongly disagree), 2 (disagree), 3 (neutral), 4 (agree), and 5 (strongly agree).
I think the auxiliary lines are easy to see.
I found the auxiliary lines do not match the image.
I think the auxiliary lines are useful for drawing.
I think this system is difficult to use.
It was enjoyable to use this system.
I feel it takes a long time to draw with this system.
I want to use the system to draw other faces.
I think the NCC scoring feedback is NOT helpful for improving drawing skills.
I am satisfied with the accuracy of the NCC scores for different features.
I think the size and position of each rectangle accurately reflect the area of each feature.
Then, the System Usability Scale (SUS) score is calculated from the answer results. The SUS score is an efficient and inexpensive tool for assessing the usability of a system, especially for small samples. It evaluates the system usability based on validity, frequency of use, and satisfaction. The SUS final score for each user is obtained through the following procedure:
Obtain the score for questions 1, 3, 5, 7, and 9 by subtracting 1 from the answer.
Obtain the score for questions 2, 4, 6, 8, and 10 by subtracting the answer from 5.
Obtain the total score by summing all the scores.
Obtain the SUS final score by multiplying the total score by 2.5.
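A minimal sketch of this scoring procedure for one respondent's ten answers is shown below; the example answers are hypothetical.

```python
# Sketch of the SUS scoring procedure above for one respondent's ten answers.
def sus_score(answers):  # answers[0] is Q1, ..., answers[9] is Q10
    total = 0
    for q, a in enumerate(answers, start=1):
        total += (a - 1) if q % 2 == 1 else (5 - a)  # odd items: a-1, even: 5-a
    return total * 2.5  # scale the 0-40 total to the 0-100 SUS range

print(sus_score([5, 1, 5, 2, 4, 2, 5, 1, 4, 2]))  # hypothetical answers -> 87.5
```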
Table 6 shows the questionnaire and SUS score results. The highest score is …, the lowest score is …, and the average score is 84. Therefore, the proposed method has good usability [37].
7.4. Limitations of Proposed Method
Here, we discuss the limitations of the proposed drawing accuracy evaluation method for the PDLAS that we identified through experiments and evaluations.
7.4.1. Sensitivity to Local Changes
The adopted NCC algorithm relies on the local similarity between two images. This means that even small displacements, scaling, or rotations of a component in an image can result in a large drop in the NCC score. In particular, this can easily happen for face components with a small size such as the eyes and nose.
7.4.2. Insufficient Shading Evaluation
The NCC algorithm is sensitive to changes in image brightness and contrast [38]. In particular, for a component with a three-dimensional structure such as the nose, NCC cannot evaluate its shading correctly.
7.4.3. Single Evaluation Dimension
The NCC algorithm focuses on structural similarities between two images and is weak at evaluating other artistic dimensions such as technical accuracy, creativity, and emotional expression. Evaluations adopting AI or assessments from professional artists should be considered for these other dimensions of the artwork, which we outline as future works.
8. Conclusions
This paper presented a drawing accuracy evaluation method using a Localized Normalized Cross-Correlation (NCC) algorithm in a Portrait Drawing Learning Assistant System (PDLAS). The PDLAS has been developed as a self-study tool to make art education more accessible and interesting for drawing beginners. It simplifies the process of learning portrait drawing and integrates technologies into traditional education.
The proposed algorithm gives a similarity score based on the correlation between the original face photo and the drawing result image for each face component. In addition, the auxiliary lines for the nose were improved.
For evaluations, the proposed algorithm was applied to the drawing results of students at Okayama University, Japan, using the PDLAS, where the results validated the effectiveness of suggesting improvements in drawing components. The system usability was confirmed through the questionnaire with the SUS score.
However, our study of the PDLAS still has some limitations. Firstly, drawing results that are filled in with color receive higher scores than those that use lines only, so coloring affects the results. Secondly, evaluations of other facial features, such as glasses and accessories, have not yet been realized. Thirdly, the system has been used by a limited number of young students from Asia; it should be used by a wider population including different ethnicities, cultures, and ages. Moreover, the system currently provides no user feedback; specific suggestions on how to improve the drawing should be provided. In the future, we will continue the PDLAS study to address these limitations.
Author Contributions
Y.Z. wrote the code according to the research objectives, and was mainly responsible for writing part of the auxiliary line algorithm section and the evaluation section, and collecting user results. N.F. led the research from the beginning and developed the research objectives and the system functionality. Z.K. wrote the code according to the research objectives, and was mainly responsible for writing the auxiliary line algorithm section. C.-C.H. refined the paper writing. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Institutional Review Board Statement
The authors would like to assure readers that all participants involved in this research were informed, and all photographs used were supplied voluntarily by the participants themselves. The authors have ensured that all ethical considerations, particularly regarding participant consent and data protection, have been rigorously adhered to.
Informed Consent Statement
Informed consent was obtained from all subjects involved in the study.
Data Availability Statement
No new data were created or analyzed during this study.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Blake, W.; Lawn, J. Portrait Drawing; Watson-Guptill Publications Inc.: New York, NY, USA, 1981; pp. 4–5. [Google Scholar]
- Kong, Z.; Zhang, Y.; Funabiki, N.; Huo, Y.; Kuribayashi, M.; Harahap, D.P. A Proposal of Auxiliary Line Generation Algorithm for Portrait Drawing Learning Assistant System Using OpenPose and OpenCV. In Proceedings of the 2023 IEEE 12th Global Conference on Consumer Electronics (GCCE), Nara, Japan, 10–13 October 2023. [Google Scholar]
- Zhang, Y.; Kong, Z.; Huo, Y.; Funabiki, N.; Harahap, D.P. A Preliminary Evaluation of Portrait Drawing Learning Assistant System. In Proceedings of the 2023 IEEE 12th Global Conference on Consumer Electronics (GCCE), Nara, Japan, 10–13 October 2023. [Google Scholar]
- CMU Perceptual Computing Lab. Available online: https://cmu-perceptual-computing-lab.github.io/openpose/web/html/doc/index.html (accessed on 30 June 2024).
- OpenCV. Available online: https://opencv.org/ (accessed on 30 June 2024).
- Yi, R.; Liu, Y.; Lai, Y.; Rosin, P.L. Unpaired Portrait Drawing Generation via Asymmetric Cycle Mapping. In Proceedings of the IEEE CVPR, Seattle, WA, USA, 13–19 June 2020. [Google Scholar]
- Yi, R.; Liu, Y.-J.; Lai, Y.-K.; Rosin, P.L. Quality Metric Guided Portrait Line Drawing Generation From Unpaired Training Data. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 905–918. [Google Scholar] [CrossRef] [PubMed]
- Tong, Z.; Chen, X.; Ni, B.; Wang, X. Sketch Generation with Drawing Process Guided by Vector Flow and Grayscale. arXiv 2020, arXiv:2012.09004. [Google Scholar] [CrossRef]
- Looi, L.; Green, R. Estimating Drawing Guidelines for Portrait Drawing. In Proceedings of the IVCNZ, Tauranga, New Zealand, 9–10 December 2021. [Google Scholar]
- Takagi, S.; Matsuda, N.; Soga, M.; Taki, H.; Shima, T.; Yoshimoto, F. An educational tool for basic techniques in beginner’s pencil drawing. In Proceedings of the PCGI, Tokyo, Japan, 9–11 July 2003. [Google Scholar]
- Huang, Z.; Peng, Y.; Hibino, T.; Zhao, C.; Xie, H. DualFace: Two-stage drawing guidance for freehand portrait sketching. Comp. Visual Media 2022, 8, 63–77. [Google Scholar] [CrossRef]
- Iarussi, E.; Bousseau, A.; Tsandilas, T. The drawing assistant: Automated drawing guidance and feedback from photographs. In Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology, St. Andrews, Scotland, UK, 8–11 October 2013. [Google Scholar]
- Gao, F.; Zhu, J.; Yu, Z.; Li, P.; Wang, T. Making Robots Draw A Vivid Portrait In Two Minutes. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Las Vegas, NV, USA, 24 October 2020–24 January 2021. [Google Scholar]
- Xie, J.; Hertzmann, A.; Li, W.; Winnemöller, H. PortraitSketch: Face sketching assistance for novices. In Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology, Honolulu, HI, USA, 5–8 October 2014. [Google Scholar]
- Li, S.; Xie, H.; Yang, X.; Chang, C.-M.; Miyata, K. A Drawing Support System for Sketching Aging Anime Faces. In Proceedings of the International Conference on Cyberworlds (CW), Kanazawa, Japan, 27–29 September 2022. [Google Scholar]
- Hebsi, M.A.A.; Akre, V.L. A framework for integrating iPAD as a pedagogical device in higher education. In Proceedings of the ICEDSA, Ras Al Khaimah, United Arab Emirates, 6–8 December 2016. [Google Scholar]
- Hiromasa, U. Hiromasa’s Drawing Course: How to Draw Faces; Kosaido Pub.: Tokyo, Japan, 2014. [Google Scholar]
- OpenCV: Open Source Computer Vision. Available online: https://docs.opencv.org/4.x/d3/df2/tutorial_py_basic_ops.html (accessed on 30 June 2024).
- Gonzalez, R.C.; Woods, R.E.; Eddins, S.L. Digital Image Processing with Matlab; Gatesmark Publishing: Knoxville, TN, USA, 2020; pp. 311–315. [Google Scholar]
- Sarvaiya, J.; Patnaik, S.; Bombaywala, S. Image Registration by Template Matching Using Normalized Cross-Correlation. In Proceedings of the International Conference on Advances in Computing, Control, and Telecommunication Technologies, Bangalore, India, 28–29 December 2009. [Google Scholar]
- Pele, O.; Werman, M. Robust Real-Time Pattern Matching Using Bayesian Sequential Hypothesis Testing. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 30, 1427–1443. [Google Scholar] [CrossRef] [PubMed]
- Li, G. Stereo matching using normalized cross-correlation in LogRGB space. In Proceedings of the International Conference on Computer Vision in Remote Sensing, Xiamen, China, 16–18 December 2012. [Google Scholar]
- Dawoud, N.N.; Samir, B.; Janier, J. N-mean kernel filter and normalized correlation for face localization. In Proceedings of the IEEE 7th International Colloquium on Signal Processing and its Applications, Penang, Malaysia, 4–6 March 2011. [Google Scholar]
- Chandran, S.; Mogiloju, S. Algorithm for Face Matching Using Normalized Cross-Correlation. Int. J. Eng. Adv. Technol. 2013, 2, 2249–8958. [Google Scholar]
- Feng, Z.; Qingming, H.; Wen, G. Image matching by normalized cross-correlation. In Proceedings of the ICASSP, Toulouse, France, 14–19 May 2006. [Google Scholar]
- Zhang, B.; Yang, H.; Yin, Z. A region-based normalized cross correlation algorithm for the vision-based positioning of elongated IC chips. IEEE Trans. Semicond. Manuf. 2015, 28, 345–352. [Google Scholar] [CrossRef]
- Hisham, M.B.; Yaakob, S.N.; Raof, R.A.A.; Nazren, A.B.A.; Wafi, N.M. Template matching using sum of squared difference and normalized cross correlation. In Proceedings of the SCOReD, Kuala Lumpur, Malaysia, 13–14 December 2015. [Google Scholar]
- Ban, K.D.; Lee, J.; Hwang, D.H.; Chung, Y.K. Face image registration methods using normalized cross correlation. In Proceedings of the 2008 International Conference on Control, Automation and Systems, Seoul, Republic of Korea, 14–17 October 2008. [Google Scholar]
- Fouda, Y.; Ragab, K. An efficient implementation of normalized cross-correlation image matching based on pyramid. In Proceedings of the iCAST 2013 & UMEDIA 2013, Aizu-Wakamatsu, Japan, 2–4 November 2013. [Google Scholar]
- Cui, Z.; Qi, W.; Liu, Y. A Fast Image Template Matching Algorithm Based on Normalized Cross Correlation. J. Phys. Conf. Ser. 2020, 1693, 012163. [Google Scholar] [CrossRef]
- Baloch, S.; Ghazali, F.; Mir, N.; Farrukh, F.; Ismail, H.; Qadri, M.Y. Hardware realization of locally normalized cross correlation algorithm. In Proceedings of the ICSAI, Shanghai, China, 15–17 November 2014. [Google Scholar]
- Gao, Q.; Chen, H.; Yu, R.; Yang, J.; Duan, X. A robot portraits pencil sketching algorithm based on face component and texture segmentation. In Proceedings of the ICIT, Melbourne, VIC, Australia, 13–15 February 2019. [Google Scholar]
- Zhang, Y.; Kong, Z.; Funabiki, N.; Puspitaningayu, P. A Drawing Accuracy Evaluation Method with Localized Normalized Cross-Correlation for Portrait Drawing Learning Assistant System. In Proceedings of the ITET, Tottori, Japan, 10–12 May 2024. [Google Scholar]
- Civardi, G. Drawing Portraits: Faces and Figures, 1st ed.; Search Press Limited: Tunbridge Wells, Kent, UK, 2002; p. 20. [Google Scholar]
- Hearn, D.; Baker, P.M. Computer Graphics, C Version, 2nd ed.; Prentice Hall: Upper Saddle River, NJ, USA, 1997; pp. 27–331. [Google Scholar]
- Lee, Y.-H.; Lee, Y.; Park, J.-H.; Han, W.; Kim, Y. Emotional Recognition System Using Curve Fitting with Bezier Cubics. In Proceedings of the 2013 International Conference on Information Science and Applications (ICISA), Jeju Island, Republic of Korea, 18–22 June 2013; pp. 1–2. [Google Scholar]
- Sharfina, Z.; Santoso, H.B. An Indonesian adaptation of the System Usability Scale (SUS). In Proceedings of the ICACSIS, Malang, Indonesia, 15–16 October 2016. [Google Scholar]
- Abdulsada, N.T.; Ali, S. Automatic face recognition using normalized cross correlation (NCC) function with variable template size. AIP Conf. Proc. 2022, 2437, 020116. [Google Scholar]