Article

An Image Processing Approach to Quality Control of Drop-on-Demand Electrohydrodynamic (EHD) Printing

Yahya Tawhari, Charchit Shukla and Juan Ren
1. Mechanical Engineering Department, Iowa State University, Ames, IA 50011, USA
2. Department of Mechanical Engineering, College of Engineering and Computer Sciences, Jazan University, Jazan 45142, Saudi Arabia
* Authors to whom correspondence should be addressed.
Micromachines 2024, 15(11), 1376; https://doi.org/10.3390/mi15111376
Submission received: 16 October 2024 / Revised: 7 November 2024 / Accepted: 10 November 2024 / Published: 14 November 2024

Abstract

Droplet quality in drop-on-demand (DoD) Electrohydrodynamic (EHD) inkjet printing plays a crucial role in influencing the overall performance and manufacturing quality of the operation. The current approach to droplet printing analysis involves manually outlining/labeling the printed dots on the substrate under a microscope and then using microscope software to estimate the dot sizes by assuming the dots have a standard circular shape. Therefore, it is prone to errors. Moreover, the dot spacing information is missing, which is also important for EHD DoD printing processes, such as manufacturing micro-arrays. In order to address these issues, the paper explores the application of feature extraction methods aimed at identifying characteristics of the printed droplets to enhance the detection, evaluation, and delineation of significant structures and edges in printed images. The proposed method involves three main stages: (1) image pre-processing, where edge detection techniques such as Canny filtering are applied for printed dot boundary detection; (2) contour detection, which is used to accurately quantify the dot sizes (such as dot perimeter and area); and (3) centroid detection and distance calculation, where the spacing between neighboring dots is quantified as the Euclidean distance of the dot geometric centers. These stages collectively improve the precision and efficiency of EHD DoD printing analysis in terms of dot size and spacing. Edge and contour detection strategies are implemented to minimize edge discrepancies and accurately delineate droplet perimeters for quality analysis, enhancing measurement precision. The proposed image processing approach was first tested using simulated EHD printed droplet arrays with specified dot sizes and spacing, and the achieved quantification accuracy was over 98% in analyzing dot size and spacing, highlighting the high precision of the proposed approach. This approach was further demonstrated through dot analysis of experimentally EHD-printed droplets, showing its superiority over conventional microscope-based measurements.

1. Introduction

Electrohydrodynamic (EHD) inkjet printing is a non-contact, additive printing technique that uses electrical force to drive the printing liquid [1,2]. In the drop-on-demand (DoD) printing process, the ink is drawn out and ejected as tiny droplets onto the printing substrate [3,4,5]. The benefits of this process range from flexibility and functionality to lower downtimes and mass personalization. EHD DoD is one of the most promising additive manufacturing (AM) techniques owing to its excellent resolution [4,6,7]: the produced ink droplets are small, making the process suitable for fabricating micro/nanoscale designs such as solar cells, micro-array sensors, and micro-LED displays [8,9,10].
The precision of dot profiles in EHD printing is critical, as it significantly impacts the quality and functionality of manufactured products [11]. In EHD printing, a high voltage is used to create a jet of liquid that deposits material at high resolution [5]. The size of the dot, the basic unit of material deposited onto the substrate, is crucial because it directly affects the final product’s resolution and smoothness. Smaller droplets enable higher resolution, producing micro- and nanoscale structures with finer details, such as microarray sensors [12]. Moreover, consistency in droplet size across the printing process ensures uniformity, which is crucial for applications like micro-LED displays, where any variation can affect the overall efficiency and performance [13]. The spacing between the dots in DoD EHD printing must likewise be carefully managed to optimize the quality of the final product: improper spacing may lead to overlapping dots or gaps, compromising the overall structural integrity and disrupting the mechanical and/or electrical connectivity in circuitry applications [10,14].
Current approaches to dot analysis in DoD EHD printing mostly depend on manual inspection and estimation using microscope software, a procedure fraught with limitations [15]. These methods involve manually identifying and tracing the boundaries of the printed dots on the substrate under a microscope. The analysis assumes that the dots are perfectly round in order to estimate their sizes, such as diameter and area. This assumption introduces considerable potential for inaccuracy, because actual printed droplets can deviate from perfect circles due to many factors, such as the properties of the surface they land on, the fluid dynamics of the ink, or the environmental temperature. In addition, these approaches frequently omit essential information regarding the spacing between dots, which is necessary to guarantee precise DoD printing functionality. The lack of accurate and automated measurement tools in these conventional processes hinders the ability to consistently replicate high-quality prints and restricts the thorough examination needed to advance printing techniques, especially in applications that demand precise and uniform results, such as micro/nanoscale production. For example, quantifying the size of micro-LED dots in DoD EHD printing faces significant limitations with these techniques: low resolution restricts the ability to detect minute size variations; dot shape and uniformity, which are critical for achieving consistent brightness and color across a display, are poorly assessed; and overlay registration is difficult to quantify accurately [16,17,18]. These quantification challenges can compromise the final product’s overall performance and aesthetic quality.
Furthermore, the manual nature of current dot analysis approaches introduces additional errors due to human mistakes and subjective interpretation of a dot’s boundary. The manual approach is also time-consuming, restricting production throughput in DoD EHD manufacturing. Therefore, there is a vital need for advanced, automated image processing methods that can precisely detect and analyze printed droplets without the subjective biases and limits of human operators. This demand is especially pressing for applications that require high-resolution, flawless printing, and meeting it would expand the capability of EHD inkjet technology. In recent years, recognition applications in the image processing domain have become essential in multiple fields, serving as practical tools for automated monitoring, quality control, and real-time decision-making. In civil engineering, Structural Health Monitoring (SHM) systems aided by computer vision can detect structural events or phenomena, such as cracks or deformations, on bridges and other vital structures to enhance safety and maintenance efficiency [19]. Similarly, in agriculture, lightweight object detection models based on modified YOLO (You Only Look Once) architectures are used to detect and classify crops such as pitaya under different lighting conditions to optimize yield estimation and harvesting [20]. In the medical field, recognition techniques help detect tumors, abnormal tissue, and other health indicators, shortening diagnosis time and improving treatment strategies [21]. Robotics also incorporates object recognition algorithms to navigate or manage robotic tasks in complex environments, such as warehouses or disaster response situations, enhancing operations and safety [22]. These advancements underscore the need for automated image analysis in high-precision applications, including the quality control required in DoD EHD manufacturing and other specialized domains.
Advances in computer vision techniques, such as edge detection, pattern recognition, object detection, and image segmentation, have been broadly used in image processing applications and can potentially address the issues mentioned above [23,24,25,26]. Therefore, this work aims to develop an automatic quality analysis approach for DoD EHD printing. Because EHD DoD printed dots do not have fixed shapes (e.g., circles or ellipses), dot recognition in the proposed method does not rely on techniques designed for fixed-shape recognition, such as the Fast Radial Symmetry Transform [27,28] and the Hough transform [29]. Specifically, we propose a feature-extraction technique that combines edge and contour detection in image processing to address the challenges associated with dot measurement in DoD EHD printing. Edge detection identifies the intensity and texture discontinuities that distinguish droplet boundaries from the background [30,31], while contour detection traces the entire closed shape of each droplet’s outer boundary. Employing these techniques makes it possible to enhance the detection of significant structural boundaries of individual EHD droplets and to visualize them within printed patterns. Edge and contour representations enable accurate measurement of essential features such as droplet size, morphology, spacing, and overall pattern configuration, and integrating the two detection methods helps overcome resolution and precision limitations. This work therefore focuses on improving sensing accuracy: an edge detection algorithm enhances droplet edges, and a contour algorithm detects outlines to determine the shape and position of the droplets in the image, enabling analysis of the printed droplets during operation.
The proposed image processing method consists of three key steps: image pre-processing with Canny edge detection, boundary detection and localization, and dot distance calculation and categorization. In the pre-processing stage, images of DoD EHD printing are first filtered for noise reduction and then processed with a Canny edge detector to find the edges of the printed dots. Then, the detected edges are used as the input for boundary detection to identify the centroid of each dot for dot size calculation. Finally, dot spacings are calculated based on the distance of the identified centroids and categorized using pre-chosen distance thresholds. This approach is entirely software-based and thus does not involve manual handling. The proposed method was first tested using simulated EHD printed droplet arrays with specified dot sizes and spacing, and the achieved quantification accuracy was over 98% in analyzing dot size and spacing, highlighting the high precision of the proposed approach. This approach was further demonstrated through dot analysis of experimentally EHD-printed droplets, showing its superiority over conventional microscope-based measurements.
The paper is structured as follows: Section 2 explains the methodology. Section 3 describes the experiments. Section 4 presents the results of the algorithm validation, Section 5 discusses the comparison with the conventional manual method and the limitations, and Section 6 concludes the paper.

2. Methodology

This study utilized feature extraction to identify and analyze microscale shapes of DoD EHD printing and to compute dot distances. The implementation integrates OpenCV packages with Python 3.10, and the code was executed from the PyCharm command prompt. The framework is illustrated in Figure 1.

2.1. Pre-Processing with Canny Edge Detection

The initial phase is to enhance the image’s features and reduce noise in order to facilitate the detection and analysis of the printed dots within the image, as illustrated in Figure 2. First, the image contrast and brightness were adjusted through GUI trackbars [32,33,34] based on user requirements, followed by the pre-processing procedures below.
Assumptions of Proposed Detection Techniques: The proposed detection techniques assume that the printing environment is controlled and that image quality is sufficient for accurate edge and contour detection. In particular, it is assumed that the images used in the analysis have low noise levels and good lighting conditions, yielding better Canny edge detection and contour tracing results. Furthermore, the technique assumes that the printed droplets exhibit consistent contrast against the substrate, so that the edge detection algorithm can accurately differentiate between the edges of the dots and the background.
Noise Reduction via Image Filtering: This step applies an image filter, such as a Gaussian filter, to smooth the image, which is essential for minimizing noise and eliminating irrelevant details from the input. A Gaussian filter is chosen because Gaussian noise, which is uniformly spread across an image, best reflects the noise typically encountered in EHD printing images, and Gaussian filtering provides effective noise suppression while maintaining the integrity of the dot edges. The following Gaussian kernel is used to produce an output image with less noise [35]:
$$G(x, y) = \frac{1}{2 \pi \sigma^2} \exp\left( -\frac{x^2 + y^2}{2 \sigma^2} \right),$$
where $x$ and $y$ represent the distances from the origin along the horizontal and vertical axes, respectively, and $\sigma$ represents the standard deviation of the Gaussian distribution.
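As a minimal sketch, this smoothing step maps directly onto OpenCV’s Gaussian blur; the file name is a placeholder, and the 7 × 7 kernel (the size used later in the experiments) is one possible choice:

```python
import cv2

# Load a printed-dot image in grayscale (the file name is a placeholder).
image = cv2.imread("printed_dots.png", cv2.IMREAD_GRAYSCALE)

# Smooth with a 7x7 Gaussian kernel. Passing sigmaX=0 lets OpenCV derive
# sigma from the kernel size; the kernel weights follow G(x, y) above.
blurred = cv2.GaussianBlur(image, (7, 7), 0)
```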
Edge Detection for Identifying Object Edges: Following image smoothing, the edge detection approach is applied to identify the edges of the printed dots precisely. This step is crucial in the overall edge detection process and is achieved using the Canny Edge Detection method [36,37,38], which consists of multiple steps, including the following:
  • Apply the Sobel operator [39] to calculate the gradient magnitude $G$ and direction $\theta$ at every pixel:
    $$G = \sqrt{G_x^2 + G_y^2},$$
    $$\theta = \tan^{-1}\left( \frac{G_y}{G_x} \right),$$
    where the gradients in the horizontal and vertical directions, denoted as $G_x$ and $G_y$, respectively, are calculated by the Sobel operator.
  • Use Non-maximum Suppression [40] to narrow edge widths to a single pixel, retaining only those pixels at the peak of the gradient magnitude.
  • Apply Hysteresis thresholding to distinguish strong, weak, and non-edge pixels. This step ensures that only strong and weak pixels connected to well-defined edges are identified, i.e., true edges.
The threshold settings directly determine the edge detection sensitivity, influencing the algorithm’s ability to distinguish true and false edges. They can be determined using a trial-and-error method or histogram analysis. For this step, the input is the blurred image. After processing, the output will have sharper and more connected edges. This is achieved by applying non-maximum suppression and hysteresis thresholding techniques, as shown in Figure 2c.
Enhancement of Detected Features Through Morphological Dilation and Closing: Dilation and closure are two morphological processes employed for image enhancement [41], as depicted in Figure 2d. Dilation is used to increase the size of the edges by adding pixels, which enhances the visibility of the image’s features. Subsequently, the closing method applies morphological closure to connect small gaps and fill in holes in the detected edges, leading to a more cohesive representation of the image’s features.
Algorithm 1 provides a clear plan for executing the pre-processing procedures iteratively, image by image; a minimal Python sketch of this pipeline is given after the algorithm. The final output of pre-processing is an image with well-defined pattern edges and reduced noise, setting the foundation for the detailed analysis that follows.
Algorithm 1: Pre-processing
 procedure Analysis(image, feature extraction)
        import OpenCV libraries
        Create GUI trackbars for Canny thresholds
        Create placeholder windows for the image
        Create a trackbar to tune image edges
        Define Pre-Processing function
              Noise reduction
              Edge detection
              Dilation
              Morphological closing
              return closed
        while True do
              if webcam is capturing video from the camera then
                    load video frame
              else
                    load image from file
              Pre-Processing(image)
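The pseudocode above translates into a short Python/OpenCV sketch under a few stated assumptions: a static image file stands in for the webcam branch, the Canny thresholds default to the values used later in the experiments (220 and 255), and the 3 × 3 structuring element is an illustrative choice rather than a value reported here:

```python
import cv2
import numpy as np

def pre_process(image, low_thresh=220, high_thresh=255):
    """Noise reduction, Canny edge detection, dilation, and closing."""
    # Noise reduction with a 7x7 Gaussian filter.
    blurred = cv2.GaussianBlur(image, (7, 7), 0)
    # Canny edge detection with hysteresis thresholds.
    edges = cv2.Canny(blurred, low_thresh, high_thresh)
    # Dilation thickens the detected edges; closing bridges small gaps.
    kernel = np.ones((3, 3), np.uint8)  # assumed structuring element
    dilated = cv2.dilate(edges, kernel, iterations=1)
    closed = cv2.morphologyEx(dilated, cv2.MORPH_CLOSE, kernel)
    return closed

image = cv2.imread("printed_dots.png", cv2.IMREAD_GRAYSCALE)
closed = pre_process(image)
```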

2.2. Boundaries Extraction and Localization

This phase performs dot analysis and localization by utilizing contour detection and moment calculation. The principal aim of this methodology is to identify and analyze dot shapes and to approximate their centroids by employing their boundaries together with key measured features. To achieve accurate identification and analysis of dot features, the contour detection procedure (as shown in Algorithm 2) combines the following steps.
Algorithm 2: Boundaries and Localization
Apply contours to the pre-processed image
Find contours
Loop over each contour for analysis
for each detected contour do
       Contour area
       Draw contour
       Contour perimeter
       Contour approximation
       Calculate centroid
       Apply moments function to find shape center
       Append shape center coordinates
Dot identification through finding contour points: This step identifies shape regions and boundaries in the image enhanced by pre-processing. The “Find Contours” function is used to extract the edges of dots within an image [42]. It involves extracting the boundaries of each dot and saving them as an array containing the coordinates of the vertices. This enables the rendering of contours and the quantification of the dimensions of each individual dot. By selecting the retrieval mode, the algorithm emphasizes the detection of the outermost external outlines, which is crucial for distinguishing individual dots without considering their internal features.
Additionally, a contour approximation method is used to accurately capture every point that lies on the boundary of the contour. This is essential for identifying all boundaries in order to subsequently calculate the area of each dot. The contours are stored as a Python list of NumPy arrays containing (x, y) coordinates that collectively define the entire contour. This setup facilitates the process of identifying and preparing dots within the image for further actions.
Dot area analysis through stored contour points: Analysis is then performed for each detected contour. Specifically, a contour area function is used to calculate the area of each dot, as defined by its vertex coordinates along the dot boundary. The dot area is calculated using the following Shoelace formula, which sums the cross-products of consecutive vertex coordinates and divides the result by two [43].
$$A = \frac{1}{2} \left| \sum_{i=1}^{n} \left( x_i y_{i+1} - x_{i+1} y_i \right) \right|,$$
where $A$ represents the area of the contour. The points $(x_i, y_i)$ and $(x_{i+1}, y_{i+1})$ are consecutive vertices of the polygon, and the summation is performed over all vertices of the polygon. The absolute value is taken to ensure the calculated area is positive. This method is particularly advantageous when it comes to precisely quantifying the areas of dots that possess asymmetrical shapes.
Moreover, the “Draw Contour” function is applied to trace and outline the dots (by connecting the contour vertices using eye-catching colors, as shown in the Image Output in Figure 1). This feature visually enhances the demonstration of the proposed method; a sketch of these contour steps follows.
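Continuing from the pre-processing sketch above (so `cv2`, `image`, and `closed` are in scope), these steps might look as follows; the external retrieval mode and full chain approximation match the behavior described in the text, while the green outline color is illustrative:

```python
# Extract only the outermost dot boundaries from the pre-processed image,
# keeping every boundary point (no chain approximation).
contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_NONE)

annotated = cv2.cvtColor(image, cv2.COLOR_GRAY2BGR)
areas = []
for contour in contours:
    # cv2.contourArea evaluates the polygon area (Green's/Shoelace formula).
    areas.append(cv2.contourArea(contour))
    # Outline each detected dot for visual inspection.
    cv2.drawContours(annotated, [contour], -1, (0, 255, 0), 1)
```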
Dot centroid through estimation of its center: Once the dot area is determined, the next step is to estimate the shape of each dot identified in the image and calculate its centroid. This stage first uses the contour perimeter function to determine the perimeter of each identified contour. Contour approximation is then applied to approximate the contour shape, resulting in a simpler dot shape with fewer vertices. An approximation parameter, essentially a fraction of the contour’s perimeter, determines how closely the approximated shape should adhere to the contour [44].
Next, the centroids of the validated shapes are computed using the moments function. The moments function is a statistical measurement that outputs the center, i.e., the geometric center, of any validated shape by calculating the coordinates of the center $(c_x, c_y)$, as follows:
$$c_x = \frac{M_{10}}{M_{00}}, \qquad c_y = \frac{M_{01}}{M_{00}},$$
where $M_{10}$, $M_{01}$, and $M_{00}$ are the spatial moments of the shape.
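A minimal sketch of the perimeter, approximation, and centroid steps, continuing from the contour list above; the 1% epsilon factor for the approximation parameter is an assumed example value:

```python
centers = []
for contour in contours:
    # Perimeter of the closed contour.
    perimeter = cv2.arcLength(contour, True)
    # Simplify the boundary; epsilon (here 1% of the perimeter) controls how
    # closely the approximated polygon follows the original contour.
    approx = cv2.approxPolyDP(contour, 0.01 * perimeter, True)
    # Spatial moments M00, M10, M01 give the geometric center of the shape.
    M = cv2.moments(contour)
    if M["m00"] > 0:  # skip degenerate contours with zero area
        centers.append((M["m10"] / M["m00"], M["m01"] / M["m00"]))
```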

2.3. Distance Estimation and Categorization

The quality of DoD EHD printing is closely related to the spacing of the printed dots. The calculated centroids of the validated dots can be used for quantifying the dot spacing and classifying the printing quality. Before proceeding with the printing quality analysis, the centroids are sorted based on their coordinates and then stored in a list. The sorting ensures that the distances calculated are those of neighboring dots. The algorithm (see Algorithm 3) then iterates through the sorted centroids, computes the distances between them, and outputs a table containing the distance between each pair of neighboring centroids.
Algorithm 3: Distance and Categorization
Sort shape centers into an ordered list
for (i, center) in sorted centers do
       print(i + 1, center)
Calculate distance between consecutive centroids
for i in range(length of sorted centers − 1) do
       C1 = sorted centers[i + 1][0] − sorted centers[i][0]
       C2 = sorted centers[i + 1][1] − sorted centers[i][1]
       distance = sqrt(C1² + C2²)
       Distance categorization:
       if distance < 100 μm then
             classify as close distance
       else if distance > 200 μm then
             classify as far distance
       else
             classify as satisfactory distance
Stack and display the processed images from each stage
Apply image stacking function
Use image display function
Dot distance calculation: The primary analytical part of this step computes the Euclidean distance between consecutive centroids from the sorted list. The calculation is executed within a loop that iterates through the sorted centroids, utilizing the mathematical formula for Euclidean distance, i.e.,
$$d_{ij} = \sqrt{\left( c_{x_i} - c_{x_j} \right)^2 + \left( c_{y_i} - c_{y_j} \right)^2}.$$
Dot Distance categorization: The calculated distances are categorized into three distinct groups: close, far, and satisfactory. These categories are determined by user-selected thresholds. If the distance is less than the chosen threshold’s lower bound, it is considered to be close. If the distance remains within the threshold range, it is considered satisfactory. If the distance exceeds the upper bound of the threshold, it is considered far. These categories provide a simple yet effective way to present the printing quality and can help in the printing parameter optimization. For example, too many “Close”/“Far” distances indicate slow/fast lateral stage movements or high/low jetting frequencies during the printing, respectively.
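A minimal sketch of this distance and categorization step, continuing from the `centers` list above; the top-down sort key, the pixel-to-micrometer calibration, and the 100/200 μm thresholds (the values used later in the experiments) are assumptions for illustration:

```python
import math

# Sort so that consecutive entries are printing neighbors; a column-major,
# top-down order is assumed here (adapt the key to the printing path).
sorted_centers = sorted(centers, key=lambda c: (c[0], c[1]))

# Centroid coordinates are assumed to be already converted from pixels
# to micrometers using the microscope calibration.
for i in range(len(sorted_centers) - 1):
    (x1, y1), (x2, y2) = sorted_centers[i], sorted_centers[i + 1]
    distance = math.hypot(x2 - x1, y2 - y1)  # Euclidean distance
    if distance < 100:        # user-selected lower bound (um)
        label = "close"
    elif distance > 200:      # user-selected upper bound (um)
        label = "far"
    else:
        label = "satisfactory"
    print(f"{i + 1}-{i + 2}: {distance:.1f} um -> {label}")
```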
Displaying the quantification results: Besides the aforementioned dot analysis, the proposed algorithm also displays the results for visual inspection and interpretation, which include both the identified shape contour and centroid for each dot on the same image.

3. Experiments

The proposed algorithm was initially validated using artificially generated test images, which contained dots with known sizes (areas) and spacing information. SolidWorks was used to create these images (for two different printing patterns) of varied dot sizes and spacings. The first test image (as shown in Figure 3), mimicking a line-by-line printing pattern, contained 100 dots with different sizes and distances. Another test image used a circular printing pattern, which had 12 dots, all with different sizes. The centroids of neighboring dots along the printing direction were selected to determine the dot distances. Moreover, for demonstration, the proposed approach was compared with the broadly used microscope-based manual quantification method for analyzing experimental EHD DoD printing results in terms of speed and accuracy.
In the experiment, a GaussianBlur filter with a kernel size of 7 × 7 was applied to the images for optimal performance. We also used GUI trackbars, with the lower and upper pixel-gradient thresholds set to 220 and 255, respectively. Note that the Canny threshold settings can be determined using a trial-and-error method or histogram analysis. For dot distance analysis, the thresholds were chosen as follows: a distance of less than 100 μm is classified as “close”; a distance greater than 200 μm is classified as “far”; and distances between 100 μm and 200 μm are classified as “satisfactory”. These distance values were determined based on the desired printing requirement.
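As a minimal sketch of how these thresholds might be tuned interactively with OpenCV trackbars (the window and file names are illustrative; the initial slider positions reflect the 220/255 values reported above):

```python
import cv2

# Pre-blur once with the 7x7 kernel used in the experiments.
blurred = cv2.GaussianBlur(
    cv2.imread("printed_dots.png", cv2.IMREAD_GRAYSCALE), (7, 7), 0)

def on_change(_):
    # Re-run Canny whenever either slider moves.
    low = cv2.getTrackbarPos("low", "edges")
    high = cv2.getTrackbarPos("high", "edges")
    cv2.imshow("edges", cv2.Canny(blurred, low, high))

cv2.namedWindow("edges")
cv2.createTrackbar("low", "edges", 220, 255, on_change)
cv2.createTrackbar("high", "edges", 255, 255, on_change)
on_change(0)        # render once with the initial thresholds
cv2.waitKey(0)
cv2.destroyAllWindows()
```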

4. Results

Image Analysis Algorithm Validation

Validation for line pattern test image: Figure 3 shows the first test image, which contains 100 circular dots with different sizes (radii ranging from 15 to 60 μm) and distances, mimicking the lines printed by EHD DoD printing. Each column represents a printed line, with sequential numbering from top to bottom. Figure 3a presents the original image pattern, while Figure 3b displays the image processed by the proposed analysis algorithm. Once feature extraction was applied, the dots were identified and annotated, along with their boundaries. This annotation helps in visualizing and verifying the accuracy of the proposed approach and provides a clear comparison between the true and estimated dot areas. The sizes of the dots were quantified as the dot areas, and the dot distances were estimated as those of the centroids of neighboring dots along the printing direction (top–down direction).
The proposed approach was able to quantify the dot sizes accurately. The overall quantification accuracy for all 100 dots was 92.5 ± 3.9%. For example, Table 1 presents data on generated dots with various radii ranging from 15 to 60 μm. It lists the true area (i.e., $\pi r^2$) and the values calculated using the proposed approach, along with the calculation accuracy with respect to the true values. For instance, a circle with a radius of 35 μm has a true area of 3848.45 μm², and the estimated area of 3846.09 μm² yields an accuracy rate of 99.94%. For most of the dot sizes, the image processing algorithm yielded similar area estimation accuracy. However, the area quantification error became notably larger for smaller dots, such as the one with a radius of 20 μm. This discrepancy is primarily due to the pixel-based detection mechanism of the algorithm, where the pixel-to-feature size ratio significantly impacts the accuracy of smaller features. In such cases, even a single miscounted pixel can lead to a notable percentage estimation error. This can be easily avoided by capturing images with high-resolution cameras.
The distances of the neighboring dots in each line (i.e., column) were quantified for analyzing dot spacing, as the assumed printing direction was vertical. The overall distance quantification accuracy was high, 98 ± 1.6%. For example, Table 2 shows the distance information of the dots labeled in Figure 3. The algorithm also classified the distances based on pre-chosen criteria: dot spacing in the range of 100 to 200 μm was considered satisfactory, for example. Furthermore, Figure 4 illustrates the dot distance analysis results, which reflect the overall printing quality. Of the ninety distances quantified (ten lines and ten dots in each line), sixty-eight satisfied the chosen criteria, four were too close, and eighteen were too far. This information is important in optimizing the EHD printing parameters, such as jetting frequency and stage moving speed.
Validation for circular pattern test image: Figure 5 shows the second test image, with 12 dots, which mimics the circular pattern of EHD DoD printing (counterclockwise direction). Figure 5a presents the original image pattern, while Figure 5b displays the image processed by the proposed analysis algorithm. Sizes of the dots were quantified as the dot areas, and the dot distances were estimated as that of the centroids of neighboring dots along the printing direction (counterclockwise direction).
The proposed approach was able to quantify the dot sizes accurately. The overall quantification accuracy for all 12 dots was 98.8 ± 0.7%. For example, Table 3 presents data on generated dots with various radii ranging from 6 to 12 μm. It lists the true area (i.e., $\pi r^2$) and the values calculated using the proposed approach, along with the calculation accuracy with respect to the true values. For instance, a dot with a radius of 10 μm has a true area of 314.159 μm², and the estimated area of 313.2576 μm² yields an accuracy rate of 99.7%. The proposed algorithm yielded similar area estimation accuracy for all dot sizes.
The distance of the neighboring dots along the counterclockwise printing direction was quantified for analyzing dot spacing. The overall distance quantification accuracy was 94.7 ± 1.2%. For example, Table 4 shows the distance information of the dots labeled in Figure 5. As can be seen in Table 4, the distance quantification also achieved high accuracy.

5. Discussion

After the accuracy of the proposed algorithm was validated, we demonstrated the efficacy of this approach in analyzing experimental EHD DoD printed results and compared it with the conventional manual microscope analysis method. The EHD DoD printed dots are shown in Figure 6.
Traditionally, microscope analysis of EHD DoD printing is accomplished by manually outlining the dot contours using default circular shapes, allowing the radius and area calculations as shown in Figure 6b. For the same image of printed dots, the proposed analysis approach found the exact shape of the dots automatically using feature extraction and edge detection methods to extract and trace boundaries, as shown in Figure 6c. During the edge detection step, the image was processed to highlight the edges between the dots and the background. Following edge detection, the borders of the dots were detected and tracked. The contour points of the edges were then sorted into a vertex list containing the boundary coordinates. The dot areas were then calculated. Meanwhile, a contour shape was drawn to visualize the detected boundaries surrounding all connecting curves. It is clear that the proposed approach is based on the true shape of the printed dots rather than assuming circular shapes as in the conventional manual microscope method.
A detailed comparison of the two methods is shown in Figure 6 and Table 5. Ten dots were selected from the image of the EHD DoD printing results; the size calculation comparison for these two methods is shown in Table 5. The manual microscope results were generated by the experimentalist carefully outlining each dot and then computing the areas of the outlined circles. Using the manual microscope results as a reference, the dot areas quantified by the proposed approach are very close (see the last column of Table 5). The manual approach could achieve reasonable accuracy because the experimentalist drew each circular outline to match the dot edge as closely as possible. However, there are cases where this precision cannot be guaranteed.
As highlighted in Figure 6d, the yellow contour shows the portion of the dot left undetected by the manual approach, whose detection result is the red circle. This is because the accuracy of the manual approach relies entirely on human eye inspection, and the detection is restricted to round circles. Therefore, the manual process may lead to an inaccurate estimation of the dot shape and size. Moreover, such a detection usually takes at least several seconds for each dot, and thereby at least several minutes for the entire image. In contrast, the proposed approach detected each dot based on its actual shape with no pre-defined profile, and the detected contour precisely matched the edge of the dot. The close match of the outline with the printed dot edge across all samples indicates high precision and reliability. Analyzing the entire image took only a few milliseconds.
Therefore, the proposed approach outperforms the conventional manual microscope analysis approach in terms of both accuracy and efficiency. To further improve the capability of the proposed method, we will optimize the algorithm by exploring more shape detection options, such as RANSAC [45], J-Linkage [46], and CNNs [47], to further improve its quantification efficiency. It is noted that the proposed method is not limited to the analysis of EHD DoD printing results; it can be easily adapted to any application that involves isolated pattern detection and size quantification.
Although the proposed method is demonstrated to have high accuracy under controlled conditions, several limitations may impact its effectiveness in broader applications. One major limitation is the dependence on high-quality imaging: changes in lighting conditions, focus inconsistencies, or limited image resolution can affect the edge and contour detection processes, potentially reducing accuracy. External conditions, such as the temperature or humidity of the environment, could further influence the formation of the ink droplets, resulting in droplet shapes that make accurate boundary tracing and centroid identification difficult for the algorithm.

6. Conclusions

This paper presented an algorithm that uses feature extraction techniques to address the printing result analysis challenges of EHD DoD manufacturing. The proposed technique, which employs edge and contour detection methods, facilitates significant improvements in detecting printed dot boundaries, resulting in high accuracies in dot size and spacing quantification. The proposed approach was first validated using simulated EHD DoD dot arrays. The achieved high accuracy in dot area and distance quantification demonstrated the reliability of our approach. Comparison with the conventional manual microscope-based method on EHD DoD-printed images further demonstrated the efficacy of the proposed approach.
For future applications, the proposed method can be integrated into real-time EHD printing systems to improve printing quality analysis. For example, the dot quantification results obtained from the proposed algorithm can be used as the feedback data for closed-loop EHD printing system control. The difference between the desired dot specifications and the quantified results will be used by the controller to adjust the printing parameters (such as jetting frequency/voltage, printing nozzle-substrate distance, and printing stage moving speed) to optimize the printing performance in real time. Potential challenges involve computational speed optimization for achieving real-time processing and accurate modeling of the printing parameters vs. dot specification relation. Future work will focus on seeking both software and hardware approaches to address these challenges toward real-time EHD DoD printing optimization.

Author Contributions

Conceptualization, Y.T. and J.R.; methodology, Y.T., J.R. and C.S.; software, Y.T.; validation, Y.T. and C.S.; formal analysis, Y.T.; investigation, Y.T.; resources, Y.T., J.R. and C.S.; data curation, Y.T. and J.R.; writing—original draft preparation, Y.T.; writing—review and editing, Y.T., J.R. and C.S.; visualization, Y.T.; supervision, J.R.; project administration, J.R.; funding acquisition, Y.T. and J.R. All authors have read and agreed to the published version of the manuscript.

Funding

The authors gratefully acknowledge the funding of the Deanship of Graduate Studies and Scientific Research at Iowa State University and Jazan University, Saudi Arabia, through Project Number: GSSRD-24.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
EHD: Electrohydrodynamic
DoD: Drop-on-Demand
AM: Additive Manufacturing
GUI: Graphical User Interface
OpenCV: Open Source Computer Vision Library
SolidWorks: Solid Modeling Computer-Aided Design and Computer-Aided Engineering Program

References

  1. Raje, P.V.; Murmu, N.C. A review on electrohydrodynamic-inkjet printing technology. Int. J. Emerg. Technol. Adv. Eng. 2014, 4, 174–183. [Google Scholar]
  2. Yan, Q.; Dong, H.; Su, J.; Han, J.; Song, B.; Wei, Q.; Shi, Y. A review of 3D printing technology for medical applications. Engineering 2018, 4, 729–742. [Google Scholar] [CrossRef]
  3. Cummins, G.; Kay, R.; Terry, J.; Desmulliez, M.P.; Walton, A.J. Optimization and characterization of drop-on-demand inkjet printing process for platinum organometallic inks. In Proceedings of the 2011 IEEE 13th Electronics Packaging Technology Conference, Singapore, 7–9 December 2011; pp. 256–261. [Google Scholar]
  4. Khan, S.; Lorenzelli, L.; Dahiya, R.S. Technologies for printing sensors and electronics over large flexible substrates: A review. IEEE Sens. J. 2014, 15, 3164–3185. [Google Scholar] [CrossRef]
  5. Shah, M.A.; Lee, D.G.; Lee, B.Y.; Hur, S. Classifications and applications of inkjet printing technology: A review. IEEE Access 2021, 9, 140079–140102. [Google Scholar] [CrossRef]
  6. Li, H.; Yang, W.; Duan, Y.; Chen, W.; Zhang, G.; Huang, Y.; Yin, Z. Residual oscillation suppression via waveform optimization for stable electrohydrodynamic drop-on-demand printing. Addit. Manuf. 2022, 55, 102849. [Google Scholar] [CrossRef]
  7. He, Y.; Chen, H.; Li, L.; Liu, J.; Guo, M.; Su, Z.; Duan, B.; Zhao, Y.; Sun, D.; Hai, Z. Electrohydrodynamic Printed Ultra-micro AgNPs Thin Film Temperature Sensor. IEEE Sens. J. 2023, 23, 21018–21028. [Google Scholar] [CrossRef]
  8. Jang, Y.; Kim, J.; Byun, D. Invisible metal-grid transparent electrode prepared by electrohydrodynamic (EHD) jet printing. J. Phys. D Appl. Phys. 2013, 46, 155103. [Google Scholar] [CrossRef]
  9. Park, J.U.; Lee, J.H.; Paik, U.; Lu, Y.; Rogers, J.A. Nanoscale patterns of oligonucleotides formed by electrohydrodynamic jet printing with applications in biosensing and nanomaterials assembly. Nano Lett. 2008, 8, 4210–4216. [Google Scholar] [CrossRef]
  10. Fan, X.; Yang, X.; Kong, X.; Zhang, T.; Wang, S.; Lin, Y.; Chen, Z. Recent progresses on perovskite quantum dots patterning techniques for color conversion layer in micro-LED displays. Next Nanotechnol. 2024, 5, 100045. [Google Scholar] [CrossRef]
  11. Kim, H.; Lin, Y.; Tseng, T.L.B. A review on quality control in additive manufacturing. Rapid Prototyp. J. 2018, 24, 645–669. [Google Scholar] [CrossRef]
  12. Han, Y.; Dong, J. Electrohydrodynamic printing for advanced micro/nanomanufacturing: Current progresses, opportunities, and challenges. J. Micro-Nano-Manuf. 2018, 6, 040802. [Google Scholar] [CrossRef]
  13. Ding, K.; Avrutin, V.; Izyumskaya, N.; Özgür, Ü.; Morkoç, H. Micro-LEDs, a manufacturability perspective. Appl. Sci. 2019, 9, 1206. [Google Scholar] [CrossRef]
  14. Park, J.; Kim, B.; Kim, S.Y.; Hwang, J. Prediction of drop-on-demand (DOD) pattern size in pulse voltage-applied electrohydrodynamic (EHD) jet printing of Ag colloid ink. Appl. Phys. A 2014, 117, 2225–2234. [Google Scholar] [CrossRef]
  15. Zhang, X.; Lies, B.; Lyu, H.; Qin, H. In-situ monitoring of electrohydrodynamic inkjet printing via scalar diffraction for printed droplets. J. Manuf. Syst. 2019, 53, 1–10. [Google Scholar] [CrossRef]
  16. Huang, Y.; Hsiang, E.L.; Deng, M.Y.; Wu, S.T. Mini-LED, Micro-LED and OLED displays: Present status and future perspectives. Light. Sci. Appl. 2020, 9, 105. [Google Scholar] [CrossRef]
  17. Wu, Y.; Ma, J.; Su, P.; Zhang, L.; Xia, B. Full-color realization of micro-LED displays. Nanomaterials 2020, 10, 2482. [Google Scholar] [CrossRef]
  18. Huang, Y.M.; Chen, J.H.; Liou, Y.H.; James Singh, K.; Tsai, W.C.; Han, J.; Lin, C.J.; Kao, T.S.; Lin, C.C.; Chen, S.C.; et al. High-uniform and high-efficient color conversion nanoporous GaN-based micro-LED display with embedded quantum dots. Nanomaterials 2021, 11, 2696. [Google Scholar] [CrossRef]
  19. Flah, M.; Nunez, I.; Ben Chaabene, W.; Nehdi, M.L. Machine learning algorithms in civil structural health monitoring: A systematic review. Arch. Comput. Methods Eng. 2021, 28, 2621–2643. [Google Scholar] [CrossRef]
  20. Li, H.; Gu, Z.; He, D.; Wang, X.; Huang, J.; Mo, Y.; Li, P.; Huang, Z.; Wu, F. A lightweight improved YOLOv5s model and its deployment for detecting pitaya fruits in daytime and nighttime light-supplement environments. Comput. Electron. Agric. 2024, 220, 108914. [Google Scholar] [CrossRef]
  21. Jia, J.; Li, Y. Deep learning for structural health monitoring: Data, algorithms, applications, challenges, and trends. Sensors 2023, 23, 8824. [Google Scholar] [CrossRef]
  22. Zhang, J.; Chen, Z.; Yan, G.; Wang, Y.; Hu, B. Faster and Lightweight: An Improved YOLOv5 Object Detector for Remote Sensing Images. Remote Sens. 2023, 15, 4974. [Google Scholar] [CrossRef]
  23. Ganesan, P.; Sajiv, G. A comprehensive study of edge detection for image processing applications. In Proceedings of the 2017 International Conference on Innovations in Information, Embedded and Communication Systems (ICIIECS), Coimbatore, India, 17–18 March 2017; pp. 1–6. [Google Scholar]
  24. Azzopardi, G.; Petkov, N. Trainable COSFIRE filters for keypoint detection and pattern recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 35, 490–503. [Google Scholar] [CrossRef] [PubMed]
  25. Zou, Z.; Chen, K.; Shi, Z.; Guo, Y.; Ye, J. Object detection in 20 years: A survey. Proc. IEEE 2023, 111, 257–276. [Google Scholar] [CrossRef]
  26. Minaee, S.; Boykov, Y.; Porikli, F.; Plaza, A.; Kehtarnavaz, N.; Terzopoulos, D. Image segmentation using deep learning: A survey. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 3523–3542. [Google Scholar] [CrossRef]
  27. Loy, G.; Zelinsky, A. A fast radial symmetry transform for detecting points of interest. In Proceedings of the Computer Vision—ECCV 2002: 7th European Conference on Computer Vision, Copenhagen, Denmark, 28–31 May 2002, Proceedings, Part I 7; Springer: Berlin/Heidelberg, Germany, 2002; pp. 358–368. [Google Scholar]
  28. Ni, J.; Singh, M.K.; Bahlmann, C. Fast radial symmetry detection under affine transformations. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 932–939. [Google Scholar] [CrossRef]
  29. Shriwas, R.; Bodkhe, Y.; Mane, A.; Kulkarni, R. Overview of Canny Edge Detection and Hough Transform for Lane Detection. In Proceedings of the 2024 OPJU International Technology Conference (OTCON) on Smart Computing for Innovation and Advancement in Industry 4.0, Raigarh, India, 5–7 June 2024; pp. 1–5. [Google Scholar] [CrossRef]
  30. Melin, P.; Gonzalez, C.I.; Castro, J.R.; Mendoza, O.; Castillo, O. Edge-detection method for image processing based on generalized type-2 fuzzy logic. IEEE Trans. Fuzzy Syst. 2014, 22, 1515–1525. [Google Scholar] [CrossRef]
  31. Mittal, M.; Verma, A.; Kaur, I.; Kaur, B.; Sharma, M.; Goyal, L.M.; Roy, S.; Kim, T.H. An efficient edge detection approach to provide better edge connectivity for image analysis. IEEE Access 2019, 7, 33240–33255. [Google Scholar] [CrossRef]
  32. Hesar, M.E.; Masouleh, M.; Kalhor, A.; Menhaj, M.; Kashi, N. Ball tracking with a 2-DOF spherical parallel robot based on visual servoing controllers. In Proceedings of the 2014 Second RSI/ISM International Conference on Robotics and Mechatronics (ICRoM), Tehran, Iran, 15–17 October 2014; pp. 292–297. [Google Scholar]
  33. Du, Y.; Mallajosyula, B.; Sun, D.; Chen, J.; Zhao, Z.; Rahman, M.; Quadir, M.; Jawed, M.K. A low-cost robot with autonomous recharge and navigation for weed control in fields with narrow row spacing. In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 27 September–1 October 2021; pp. 3263–3270. [Google Scholar]
  34. Revathi, A.; Modi, N.A. Comparative analysis of text extraction from color images using tesseract and opencv. In Proceedings of the 2021 8th International Conference on Computing for Sustainable Global Development (INDIACom), New Delhi, India, 17–19 March 2021; pp. 931–936. [Google Scholar]
  35. Kumain, S.C.; Singh, M.; Singh, N.; Kumar, K. An efficient Gaussian noise reduction technique for noisy images using optimized filter approach. In Proceedings of the 2018 First International Conference on Secure Cyber Computing and Communication (ICSCCC), Jalandhar, India, 15–17 December 2018; pp. 243–248. [Google Scholar]
  36. Canny, J. A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 1986, 6, 679–698. [Google Scholar] [CrossRef]
  37. Cheng, C.Y.; Lin, Y.B.; Wang, K. Using a canny-edge-detection based method to characterize in-plane micro-actuators. In Proceedings of the 2012 7th IEEE International Conference on Nano/Micro Engineered and Molecular Systems (NEMS), Kyoto, Japan, 5–8 March 2012; pp. 729–732. [Google Scholar]
  38. Qu, Z.; Yang, Z.; Ru, C. Edges Detection of Nanowires and Adaptively Denoising with Deep Convolutional Neural Network from SEM Images. In Proceedings of the 2020 IEEE 20th International Conference on Nanotechnology (IEEE-NANO), Montreal, QC, Canada, 29–31 July 2020; pp. 146–149. [Google Scholar]
  39. Jin-Yu, Z.; Yan, C.; Xian-Xiang, H. Edge detection of images based on improved Sobel operator and genetic algorithms. In Proceedings of the 2009 International Conference on Image Analysis and Signal Processing, Linhai, China, 11–12 April 2009; pp. 31–35. [Google Scholar]
  40. Al-Furaiji, O.J.; Anh Tuan, N.; Tsviatkou, V.Y. A new fast efficient non-maximum suppression algorithm based on image segmentation. Indones. J. Electr. Eng. Comput. Sci. 2020, 19, 1062–1070. [Google Scholar] [CrossRef]
  41. Sreedhar, K.; Panlal, B. Enhancement of images using morphological transformation. arXiv 2012, arXiv:1203.2514. [Google Scholar] [CrossRef]
  42. Mandaliana, K.A.; Harsono, T.; Sigit, R. 3D Visualization and Reconstruction of Lung Cancer Images using Marching Cubes Algorithm. In Proceedings of the 2019 International Electronics Symposium (IES), Surabaya, Indonesia, 27–28 September 2019; pp. 143–147. [Google Scholar]
  43. Bresch, E.; Narayanan, S. Region segmentation in the frequency domain applied to upper airway real-time magnetic resonance images. IEEE Trans. Med. Imaging 2008, 28, 323–338. [Google Scholar] [CrossRef]
  44. Li, L.; Jiang, W. An improved Douglas-Peucker algorithm for fast curve approximation. In Proceedings of the 2010 3rd International Congress on Image and Signal Processing, Yantai, China, 16–18 October 2010; Volume 4, pp. 1797–1802. [Google Scholar]
  45. Xu, B.; Chen, Z.; Zhu, Q.; Ge, X.; Huang, S.; Zhang, Y.; Liu, T.; Wu, D. Geometrical Segmentation of Multi-Shape Point Clouds Based on Adaptive Shape Prediction and Hybrid Voting RANSAC. Remote Sens. 2022, 14, 2024. [Google Scholar] [CrossRef]
  46. Amerini, I.; Ballan, L.; Caldelli, R.; Del Bimbo, A.; Del Tongo, L.; Serra, G. Copy-move forgery detection and localization by means of robust clustering with J-Linkage. Signal Process. Image Commun. 2013, 28, 659–669. [Google Scholar] [CrossRef]
  47. Adorni, G.; D’Andrea, V.; Destri, G.; Mordonini, M. Shape searching in real world images: A CNN-based approach. In Proceedings of the 1996 Fourth IEEE International Workshop on Cellular Neural Networks and Their Applications Proceedings (CNNA-96), Seville, Spain, 24–26 June 1996; pp. 213–218. [Google Scholar] [CrossRef]
Figure 1. Overview of the proposed framework for DoD EHD printing analysis. (a) The proposed method takes a microscope image of printed droplets as the input, outputs the detection results, and calculates the dot sizes (areas) and spacing. (b) Visual analysis of the printing quality provides the size distribution of the printed dots. The x-axis represents the three categories of dot size range, and the y-axis indicates the percentage within each size category.
Figure 2. An image (a) is used as input for the proposed pre-processing method; (b) the image after applying a Gaussian filter; (c) the image after applying Canny edge detection; (d) the image after applying dilation and morphological closing.
Figure 3. Line dot printing pattern. (a) original pattern. (b) processed pattern.
Figure 4. Distance Quantification Analysis.
Figure 5. Circular dot printing pattern. (a) original pattern. (b) processed pattern.
Figure 6. Experimental EHD DoD print result analysis. (a) The original image of the printed droplets. (b) The printed droplets analyzed offline using the manual microscope approach. (c) The analyzed image using the proposed approach. (d) Detailed comparison of the two analysis methods.
Table 1. Dot area calculation accuracy for simulated EHD DoD line pattern printing.

| Dot | Radius (μm) | True Area (μm²) | Estimated Dot Area (μm²) | Accuracy % |
|-----|-------------|-----------------|--------------------------|------------|
| 1   | 30          | 2827.4          | 2880.1                   | 98.1       |
| 2   | 35          | 3848.4          | 3846.1                   | 99.9       |
| 3   | 40          | 5026.5          | 4952.7                   | 98.5       |
| 4   | 45          | 6361.7          | 6159.2                   | 96.7       |
| 5   | 50          | 7853.9          | 7491.6                   | 95.1       |
| 6   | 25          | 1963.5          | 2081.1                   | 94.3       |
| 7   | 20          | 1256.6          | 1396.4                   | 89.9       |
| 8   | 15          | 706.8           | 837.8                    | 94.3       |
| 9   | 55          | 9503.3          | 8959.5                   | 93.9       |
| 10  | 60          | 11,309.7        | 10,581.7                 | 93.2       |
Table 2. Dot distance calculation accuracy and categorization for simulated EHD DoD line pattern printing.

| Distance from Center to Center | True Distance (μm) | Estimated Distance (μm) | Distance Profile | Overall Accuracy % |
|---------|-------|-------|--------------|------------|
| 1–2     | 83.9  | 82.1  | Close        | 97.8       |
| 2–3     | 97.0  | 96.1  | Close        | 98.9       |
| 3–4     | 100.3 | 99.2  | Close        | 98.8       |
| 4–5     | 120.1 | 117.8 | Satisfactory | 98.0       |
| 5–6     | 123.3 | 120.9 | Satisfactory | 97.9       |
| 6–7     | 90.4  | 89.9  | Close        | 99.3       |
| 7–8     | 80.6  | 79.0  | Close        | 98.0       |
| 8–9     | 118.4 | 116.2 | Satisfactory | 98.1       |
| 9–10    | 156.2 | 153.4 | Satisfactory | 98.1       |
| 11–12   | 136.7 | 134.8 | Satisfactory | 98.5       |
| Average |       |       |              | 98.3 ± 0.4 |
Table 3. Dot size calculation accuracy for simulated EHD DoD circular pattern printing.

| Dot | Radius (μm) | True Area (μm²) | Estimated Dot Area (μm²) | Accuracy % |
|-----|-------------|-----------------|--------------------------|------------|
| 1   | 12          | 452.389         | 446.114                  | 98.6       |
| 2   | 11          | 380.1           | 375.9                    | 98.9       |
| 3   | 10.5        | 346.3           | 342.7                    | 98.9       |
| 4   | 10          | 314.1           | 313.2                    | 99.7       |
| 5   | 9.5         | 283.5           | 282.7                    | 99.7       |
| 6   | 9           | 254.4           | 255.9                    | 99.4       |
| 7   | 8.5         | 226.9           | 227.7                    | 99.6       |
| 8   | 8           | 201.0           | 202.1                    | 99.4       |
| 9   | 7.5         | 176.7           | 178.4                    | 99.0       |
| 10  | 7           | 153.9           | 155.9                    | 98.6       |
| 11  | 6.5         | 132.7           | 136.2                    | 97.3       |
| 12  | 6           | 113.1           | 115.8                    | 97.5       |
Table 4. Dot distance calculation accuracy for simulated EHD DoD circular pattern printing.

| Distance from Center to Center | Location (μm) | True Distance (μm) | Estimated Distance (μm) | Overall Accuracy % |
|---------|-------------|-------|-------|------------|
| 1–2     | (364, 438)  | 37.06 | 39.02 | 94.7       |
| 2–3     | (436, 671)  | 30.53 | 32.03 | 95.0       |
| 3–4     | (625, 737)  | 28.10 | 29.53 | 94.9       |
| 4–5     | (807, 768)  | 28.39 | 29.66 | 95.5       |
| 5–6     | (992, 756)  | 34.31 | 36.09 | 94.8       |
| 6–7     | (1201, 671) | 39.05 | 40.86 | 95.3       |
| 7–8     | (1291, 432) | 38.87 | 40.73 | 95.2       |
| 8–9     | (1184, 201) | 25.90 | 27.19 | 95.0       |
| 9–10    | (1033, 123) | 29.00 | 30.29 | 95.5       |
| 10–11   | (844, 112)  | 34.05 | 35.74 | 95.0       |
| 11–12   | (621, 125)  | 30.15 | 31.63 | 95.0       |
| 12–1    | (446, 217)  | 41.64 | 37.71 | 90.5       |
| Average |             |       |       | 94.7 ± 1.2 |
Table 5. Size Detection of EHD Printed Dots.

| Dot | Microscope Radius (μm) | Microscope Area (μm²) | Area from Proposed Algorithm (μm²) | Difference in Percentage % |
|-----|-------|-----------|-----------|------|
| 1   | 47.90 | 7208.75   | 6885.90   | 95.3 |
| 2   | 56.80 | 10,135.65 | 10,422.83 | 97.1 |
| 3   | 59.86 | 11,255.59 | 11,800.95 | 95.1 |
| 4   | 63.54 | 12,682.60 | 12,537.01 | 98.8 |
| 5   | 63.59 | 12,704.69 | 12,875.63 | 98.6 |
| 6   | 54.96 | 9490.40   | 10,051.13 | 94.4 |
| 7   | 57.61 | 10,428.20 | 10,419.68 | 99.9 |
| 8   | 58.97 | 10,856.40 | 10,790.33 | 99.3 |
| 9   | 72.94 | 16,715.53 | 17,123.40 | 97.5 |
| 10  | 78.14 | 19,182.89 | 18,756.68 | 97.7 |
