Article

Transfer of Periodic Phenomena in Multiphase Capillary Flows to a Quasi-Stationary Observation Using U-Net

Laboratory of Equipment Design, Department of Biochemical and Chemical Engineering, TU Dortmund University, Emil-Figge-Straße 68, 44227 Dortmund, Germany
*
Author to whom correspondence should be addressed.
Computers 2024, 13(9), 230; https://doi.org/10.3390/computers13090230
Submission received: 24 July 2024 / Revised: 4 September 2024 / Accepted: 11 September 2024 / Published: 13 September 2024

Abstract

Miniaturization promotes efficiency and expands the exploration domain in scientific fields such as computer science, engineering, medicine, and biotechnology. In particular, the field of microfluidics is a flourishing technology, which deals with the manipulation of small volumes of liquid. Dispersed droplets or bubbles in a second immiscible liquid are of great interest for screening applications or chemical and biochemical reactions. However, since very small dimensions are characterized by phenomena that differ from those at macroscopic scales, a deep understanding of the physics is crucial for effective device design. Due to the small volumes in miniaturized systems, common measurement techniques are not applicable as they exceed the dimensions of the device by orders of magnitude. Hence, image analysis is commonly chosen as a method to understand the ongoing phenomena. Artificial Intelligence is now the state of the art for recognizing patterns in images or analyzing datasets that are too large for humans to handle. X-ray-based Computed Tomography adds a third dimension to images, which results in more information, but ultimately, also in more complex image analysis. In this work, we present the application of the U-Net neural network to extract certain states during droplet formation in a capillary, a constantly repeated process that is captured on tens of thousands of CT images. The experimental setup features a co-flow configuration based on 3D-printed capillaries with two different cross-sections, with an inner diameter and edge length, respectively, of 1.6 mm. For droplet formation, water was dispersed in silicone oil. The classification into different droplet states allows for 3D reconstruction and a time-resolved 3D analysis of the present phenomena. The original U-Net was modified to process input images of 688 × 432 pixels, while the encoder and decoder paths together feature 23 convolutional layers.
The U-Net consists of four max-pooling layers and four upsampling layers. The training was performed on 90% and validated on 10% of a dataset containing 492 images showing different states of droplet formation. A mean Intersection over Union of 0.732 was achieved after training for 50 epochs, which is considered a good performance. The presented U-Net needs 120 ms per image to process 60,000 images and categorize emerging droplets into 24 states at 905 angles. Once the model is trained sufficiently, it provides accurate segmentation for various flow conditions. The selected images are used for 3D reconstruction, enabling the 2D and 3D quantification of emerging droplets in capillaries that feature circular and square cross-sections. By applying this method, a temporal resolution of 25–40 ms was achieved. Under the same flow conditions, droplets emerging in capillaries with a square cross-section become larger than those in capillaries with a circular cross-section. The presented methodology is promising for other periodic phenomena in different scientific disciplines that rely on imaging techniques.

1. Introduction

In the 1970s, it became possible for the first time to create machines that combine electrical and mechanical elements below the millimeter scale. These are referred to as Micro Electromechanical Systems (MEMS). When the field expanded in the 1990s and started to include fluids, it gave rise to the field of microfluidics [1]. Although microfluidics have been advancing for several decades, the development of real-world applications has long been slow due to the lack of a physical understanding of flow and interfacial phenomena present, which is crucial for efficient device design. Recently, the COVID-19 pandemic demonstrated the importance of fast and accessible diagnostic methods, where microfluidic applications have been at the forefront of these developments, enabling the rapid development of SARS-CoV-2 antibody kits with high sensitivity and specificity [2,3].
An important field in microfluidics is the formation and flow behavior of multiphase flows, where at least two immiscible fluids, which are referred to as the dispersed phase and the continuous, wetting phase, flow as segments through small channels or capillaries [2,4]. These multiphase flows are characterized by large surface-to-volume ratios and good mixing to promote transport phenomena. Among the most important applications, the screening of chemical reactions [5,6,7], drug efficacy [8,9], or antibodies [10,11] is investigated by many research groups. To establish highly periodic segmented flows, two or more immiscible liquids need to be brought into contact while flowing at a constant velocity in such a way that the dispersed phase is exposed to the shear forces of the continuous phase.
Multiphase flows in microchannels can be categorized into several flow regimes, namely, droplet flow, slug flow, and parallel flow [4]. The emerging flow regime depends on fluid properties, flow parameters, the wetting behavior of the fluids, and the setup itself. Due to the segmented flow and its high controllability, droplet flow and slug flow are the preferred flow regimes in most microfluidic applications. The velocity and the ratio of the involved fluids promote either of these two flow regimes [12]. In droplet flow, the dispersed phase forms droplets that are smaller than the channel dimensions, whereas slugs have a length that exceeds the channel diameter.
Achieving a specific flow within a capillary with defined volume of the dispersed phase and spacing between segments is of great interest. The volume of the segments of both the dispersed phase and the continuous phase depends on the volumetric flow rate and the phase ratio of both phases as well as geometry factors, which are associated with the given flow system. When the dispersed phase is emerging as a segment in the continuous phase stream, it begins to grow simultaneously in the radial and axial directions to the continuous flow, while the continuous phase continues to bypass the forming segment (filling stage). When the segment has grown to the size of the channel and is thus blocking the entire cross-section, it keeps growing in the axial direction until the force acting on the dispersed phase is great enough to form a neck on the segment and pinch it off (necking stage). The stable and reproducible slug flow formation with only negligible deviations make it especially well-suited for a multitude of applications in which high precision is demanded. Another decisive advantage that can only be achieved with slug flow is the presence of inner vortices, known as Taylor vortices, in each segment, which further promote mixing and consequently heat and mass transfer during flow [13].
Common experiments and evaluations of microfluidics are based on image analysis. However, the phenomena that can be observed during multiphase flow formation are of a three-dimensional nature [12,14]. The reliable periodicity of the process makes it suitable for transference to a temporally resolved analysis, as can be seen in Figure 1.
Simple two-dimensional investigations and the assumption of symmetry are not able to adequately represent the complex process, and important details may not be noticed [15]. This applies in particular to devices with rectangular channels, which are popular due to established manufacturing techniques. X-ray-based Computed Tomography (CT) is an indispensable imaging method in modern clinical routines. With the ever-growing interest in miniaturization and more detailed information, desktop CT scanners for insights into technical, anthropomorphic, forensic, and archeological, as well as paleontological, applications were developed around the turn of the millennium. These applications extend the use of the method as a versatile diagnostic tool for non-destructive analysis and three-dimensional imaging in the micrometer (µ-CT) and even nanometer range (nano-CT). Unlike in clinical scanners, in desktop scanners, the X-ray source and the detector stay in place during a scan while the specimen is rotated. The X-ray source emits X-rays that are attenuated when passing through the investigated object, and their remaining intensity is measured by a detector, which in modern CTs is usually a solid-state scintillator [16]. This process results in 2D X-ray projection images that are acquired at different rotation angles ω until a range of at least 180° is covered. These sets of 2D X-ray images are then reconstructed into a virtual volume consisting of voxels, the three-dimensional equivalent of pixels. The resulting 3D representation provides a comprehensive view with a high spatial resolution of the sample without intrusive or destructive methods, enabling the creation of cross-sectional images at different angles [14].
Artificial Intelligence (AI) has been thriving in the last decade in the detection of certain patterns in thousands or even millions of images, a task that is relatively straightforward for computers but intellectually difficult for humans [17]. To succeed at these tasks, computers had to be taught to teach themselves, a capability known as Machine Learning (ML). In ML, the selection and weight of patterns become part of the training process. This can be further promoted by adding more layers, which are capable of selecting features from the previous ones, which allows the computer to build complex systems out of very simple building blocks—an approach that is known as Deep Learning (DL). ML algorithms are commonly trained with datasets labeled by humans, which also serve as the ground truth. The main goal of this supervised learning is to create a model that is capable of growing more accurate the more data it is given. Due to the high level of parallelization in these algorithms, graphics processing units (GPUs) are commonly used to execute them [18].
Computer vision tasks are usually performed by Convolutional Neural Networks (CNNs). One of the first object detection tasks using CNNs was proposed in 1995, where the authors developed a four-layer CNN to detect nodules in X-ray images [19]. CNNs are DL architectures that draw their inspiration from the mechanisms of the visual perception of living creatures. This architecture, derived from the functionality of neurons, is divided into different layers performing a multitude of operations. These layers are connected in a sequential manner in which the output of one layer is the input of the next. The exact order in which these layers are stacked depends on the specific model. The input layer holds the pixel values of the processed image. Convolutional layers compute the neurons’ output based on the scalar product of the layers’ weights and the connected input as the layer slides across the spatial dimensions of the image with a specific stride and applies each of its functions elementwise to the selected input matrix. The network learns kernels, known as activations, that ’fire’ when a specific feature is present at a specific spatial position. The number of different features to be detected is determined by the depth of the convolutional layer. Pooling layers perform a downsampling operation along the dimensions of the given input, resulting in a reduction of parameters and computational complexity. Fully-connected layers produce class scores from the activations. These class scores are used to classify either the image in its entirety or segments of it. The neurons are directly connected to those in the adjacent layers; as the input passes through these layers, it is transformed to produce class scores, which reflect the AI’s confidence in the classification of the image or its segments. This represents the final layer, the output layer.
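The mechanics of the convolutional and pooling layers described above can be illustrated with a minimal sketch; the kernel and input values are purely illustrative and unrelated to the network used in this study:

```python
import numpy as np

def conv2d(image, kernel, stride=1):
    """Slide a kernel over the image and take the elementwise
    scalar product at each position (valid padding)."""
    kh, kw = kernel.shape
    h = (image.shape[0] - kh) // stride + 1
    w = (image.shape[1] - kw) // stride + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = image[i*stride:i*stride+kh, j*stride:j*stride+kw]
            out[i, j] = np.sum(patch * kernel)
    return out

def max_pool(image, size=2):
    """Max pooling with stride equal to the pool size, reducing
    each spatial dimension by that factor."""
    h, w = image.shape[0] // size, image.shape[1] // size
    return image[:h*size, :w*size].reshape(h, size, w, size).max(axis=(1, 3))

img = np.arange(36, dtype=float).reshape(6, 6)
edge = np.array([[1.0, -1.0]])   # simple horizontal-gradient kernel
feat = conv2d(img, edge)         # (6, 5) feature map
pooled = max_pool(feat)          # (3, 2) after 2x2 pooling
```

A real convolutional layer holds many such kernels in parallel (its depth) and learns their weights during training rather than fixing them by hand.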
For a long time, the main use for CNNs has been classification with a single class as the output. However, in many visual tasks, the desired output includes localization. Instead of just knowing what is in the picture, the position of the objects is just as important, as is the case for medical images, for example. For this purpose, the U-Net was developed at the University of Freiburg [20]. The U-Net is a fully convolutional network architecture providing fast and precise object segmentation in both 2D and 3D images. Its robustness even for small training datasets makes it one of the state-of-the-art CNNs for segmentation. While a classical CNN is limited to a contracting network of layers, the U-Net supplements this contracting path with successive layers in which pooling operators are replaced by upsampling operators, so that the resolution of the output is increased. Localization is achieved by combining the high-resolution features from the contracting path with the upsampled output. Based on this information, a successive convolution layer can learn to assemble a more precise output. Although the U-Net was originally developed for segmentation tasks in biomedical images [21,22], the authors claim that it can be useful in other areas [23,24,25,26,27].
In this work, we present the usage of the U-Net architecture to transfer dynamic, but highly periodic flow phenomena in capillaries into a quasi-stationary observation. Projection images, which are captured with a micro-CT, show emerging droplets that are generated in a co-flow configuration. The high spatial resolution of the µ-CT (∼10 µm) paired with the application of the U-Net enables time-resolved 3D analysis of relevant droplet quantities. Although a similar approach was proposed by Schuler et al. [15], the application of the U-Net ties in with the state of the art in image segmentation and outperforms the vgg16 net used in previous studies. The presented Angular Resolve Method (ARM) is a post-acquisition synchronization approach that is also known from biomedical imaging [28,29,30] and technical applications [31]. The focus of this work is slug flow formation as it is a robust and controllable process.

2. Materials and Methods

Due to the U-Net’s capabilities of fast and precise object detection and localization, a wide range of applications in computer vision tasks becomes available. Datasets of X-ray images, in the tens of thousands per scan, are far too large for manual evaluation. In this work, the U-Net, which was originally developed to process biomedical images, was adapted to recognize certain states in a repeated process at different angles, which are then sorted and reconstructed to enable the temporally resolved evaluation of dynamic phenomena. In this way, a dynamic process can be turned into a quasi-stationary observation at a certain number of time steps. These capabilities are utilized to evaluate segmented flow formation in microfluidic channels.

2.1. Experimental Setup

To gain a closer insight into the formation of segmented flows, two different geometries of microfluidic channels were used, one with a square cross-section and the second with a circular cross-section. The mechanisms of flow formation in microchannels with these cross-sections have been previously investigated in [12,15]. By considering the two most popular cross-sections used in microfluidics, we aim to understand how geometry affects flow formation. Since the dispersed phase always contracts to a round shape due to its surface tension, the corners of the square geometry enable a bypassing flow of the continuous phase and thus change the forces acting on the dispersed phase during its formation in the radial direction.
To guarantee comparability and rapid prototyping with minimum tolerances, the capillaries are manufactured using stereo-lithographic 3D printing (SLA) [32,33]. The Formlabs Form 3+ (Somerville, MA, USA) SLA printer used features an X- and Y-axis resolution as well as a layer height of 25 µm and thus provides the necessary precision to fabricate microfluidic devices. The capillaries are made from Formlabs Clear v4 resin; although the X-ray images do not depend on optical visibility, the transparent material facilitates troubleshooting when establishing stable flow conditions. To fit the entire droplet-generator setup inside the µ-CT, the capillaries are kept to a length of 50 mm. The circular channel has an inner diameter of 1.6 mm, while the square channel has an equal edge length, resulting in a hydraulic diameter of 1.6 mm. After printing the channels, a thorough cleaning is essential. Due to the small internal diameter and ultimately high capillary forces, the cleaning of the desired structure from excessive uncured resin requires multiple cleaning steps, as described in detail in [34]. Since the resin is of a rather hydrophilic nature, the inner surfaces of the capillaries are treated so that the contact angle of the dispersed phase is increased and stable fluid flow can be established. For this purpose, a selective fluorination process for SLA-printed microfluidic devices proposed by Catterton et al. is used [35]. In this process, the 3D-printed capillaries are filled with a 10% (v/v) solution of tridecafluoro-1,1,2,2-tetrahydrooctyl dissolved in FC-40 oil for 30 min. After the silanization, the capillaries are rinsed with 95% (v/v) ethanol in deionized water and dried with pressurized nitrogen.
The segmented flow formation is realized using a co-flow setup. A polyurethane medical intravenous cannula (B. Braun Medical Inc., Melsungen, Germany) with an inner diameter of 0.8 mm and a wall thickness of 0.15 mm is inserted straight through a polyetheretherketone (PEEK) T-junction with an inner diameter of 1.6 mm (IDEX corporation, Northbrook, IL, USA). The dispersed phase is introduced via the cannula and the continuous phase enters in the perpendicular direction. The capillary, which is printed with a thread to enable simple arrangement of laboratory peripherals, is screwed on top of the T-junction. The cannula reaches into the channel, keeping the dispersed and continuous phases separate until its tip. To ensure a reproducible experimental procedure, small guides were printed into the capillary to center the cannula during each assembly. The outlet tube is screwed on top of the capillary to drain both phases out of the µ-CT and into a beaker. Deionized water is used as the dispersed phase and the continuous, wetting phase is polydimethylsiloxane (PDMS) (PDMS-1cSt, ELBESIL-Öle B, L. Böwig GmbH, Hofheim, Germany).
The µ-CT scanner that was used to perform the experiments is a Bruker Skyscan 1275 (RJL Micro & Analytic GmbH, Karlsdorf-Neuthard, Germany) that has been modified to contain routing fluorinated ethylene propylene (FEP) tubes with an inner diameter of 1.6 mm, allowing for continuous fluid supply and drainage while scanning. The scanner and its peripherals are shown in Figure 2a.
The specimen is centered between the X-ray source (100 W, 20–100 kV) and the distortion-free 3 Mp flat panel detector (1944 × 1536 px). The total specimen size is limited to a diameter of 96 mm and a height of 120 mm. The attenuation of X-rays follows Beer–Lambert’s Law [36], where I is the X-ray intensity, μ is the attenuation coefficient, and δ is the specimen thickness. The attenuation coefficient results from scattering and absorption, whereas the latter is mainly dependent on material density and the atomic number of the atoms in the specimen.
I(μ, δ) = I₀ · exp(−μ · δ)
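The Beer–Lambert relation can be evaluated numerically; the attenuation coefficient and thickness below are illustrative placeholders, not values measured in this study:

```python
import math

def transmitted_intensity(i0, mu, delta):
    """Beer-Lambert law: remaining X-ray intensity after passing
    through a specimen of thickness delta (attenuation coefficient mu)."""
    return i0 * math.exp(-mu * delta)

# Hypothetical values: mu in 1/mm, delta in mm (1.6 mm channel width).
i = transmitted_intensity(100.0, 0.5, 1.6)   # intensity drops below I0
```

Because the exponent is negative, the transmitted intensity always lies below the source intensity I₀, and denser or thicker material attenuates more strongly.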
Two syringe pumps (VIT-FIT, Lambda Instruments GmbH, Baar, Switzerland) supply the dispersed and continuous phases separately so that the relevant volumetric flow rates in the range of 0.1–0.3 mL min⁻¹ can be set independently and different volumetric flow rate ratios can be achieved. The pumps were calibrated gravimetrically. To ensure regular droplet formation, the continuous phase is supplied first, until complete wetting of the capillary, before the dispersed phase is introduced. The water and the PDMS are equilibrated for at least 24 h before conducting the experiments. The tubes are fastened to the rotation stage that holds the specimen, allowing for the full range of motion of 227° necessary to create a complete scan.
The capillary that is to be analyzed is installed in the experimental setup, as can be seen in Figure 2b. Before the scan is started, it is ensured that regular droplet formation is observable. A single scan takes roughly three hours, depending on the scanning parameters, which are the angular step width of 0.25°, the number of images obtained per angle N_ipa = 65, and the exposure time of 30 ms. The most time-consuming factor during the scans is the time it takes to save the images, which accounts for almost 85% of the scanning time. For the optimum image contrast, a voltage of 30 kV and a current of 210 µA are set for the X-ray source. The resolution of the scans is 14 µm, which represents a trade-off between the installation of the experimental setup, the size of the observation window, and the resolution itself. A flat-field correction, which calibrates the background image, is updated before each experimental run to avoid poor imaging.

2.2. U-Net Implementation

Technically, the temporal resolution of tomographic imaging is low as a whole set of aligning projection images is required to perform a 3D reconstruction, limiting the application to only static or slow processes. Since it is not possible to ‘freeze’ an emerging droplet and then start a scan of a stationary state, the U-Net is implemented as a post-processing step to segment the projection images of emerging droplets. The segmented images that capture the droplets at a certain state for each angle of the scan are then automatically selected to enable 3D reconstruction and a temporally resolved evaluation.
The mechanism of droplet formation is known to be of great periodicity, and consecutively emerging droplets pass through the same stages of formation again and again. As a consequence, there will always be a CT image that pictures the droplet at a certain state of its formation if the number of images per angle (N_ipa) is large enough to capture it. For a given rotational step width of 0.25° over 227° of total rotation and N_ipa = 65, the procedure generates roughly 60,000 images per scan. These images contain almost every possible stage of droplet formation for each rotational step view.
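The image count per scan follows directly from the scan parameters; the short estimate below uses the nominal values stated above:

```python
step_width_deg = 0.25      # angular step width
total_rotation_deg = 227   # total rotation of the stage
images_per_angle = 65      # N_ipa

angular_positions = int(total_rotation_deg / step_width_deg)
total_images = angular_positions * images_per_angle
# on the order of 60,000 projection images per scan
```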
The U-Net, which has achieved state-of-the-art quality in X-ray image segmentation, is trained to detect the position of the emerging droplet by searching for its vertex and determining its distance to the tip of the cannula, l_d. All other, already detached droplets are neglected as they are not part of the region of interest (ROI). In this way, the U-Net classifies the states of the emerging droplets from the beginning of their formation to fully formed and detached droplets. Inadequate segmentation would result in large offsets between the droplet states, which, in consequence, would cause incorrect representations after the reconstruction process. The temporal resolution results from the total number of steps into which the U-Net divides the droplet formation. In the case of this study, a temporal resolution of about 25–40 ms is obtained, as the U-Net is set to differentiate between 24 steps and the formation of each droplet takes roughly 0.6–1.0 s.
The U-Net architecture that was applied for the ARM is given in Figure 3a, and an example CT image, the ground truth, and the U-Net output are given in Figure 3b. The U-Net model consists of an encoder–decoder architecture with four contractions and four expansions for pixelwise segmentation. The decoder layers are used to map the low-resolution features of the encoder layers to the initial input resolution. This enables a 2D mask to be recovered that contains the droplet segmentation with the same size as the input image. A detailed description of the U-Net implementation and the training is given in Appendix A.
The encoder (downsampling) module is a contracting network that consists of a sequence of 3 × 3 convolutional layers followed by a Rectified Linear Unit (ReLU) activation function and a 2 × 2 max-pooling operation with stride = 2 for each downsampling. The number of feature maps is doubled after each segment. The decoder (upsampling) module is characterized by 2 × 2 upsampling, whereas the number of feature maps is halved after each segment and concatenated with the corresponding feature map from the encoder part so that the spatial resolution is doubled. Afterward, a 3 × 3 convolution and a ReLU activation function are applied. A 1 × 1 convolution is used as a final layer, followed by a sigmoid activation function to associate each component feature vector to the desired number of classes.
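The shape bookkeeping through the four pooling and four upsampling stages can be traced for the 688 × 432 input, which is conveniently divisible by 2⁴ = 16. The starting filter count of 64 is an assumption for illustration (the paper does not state it explicitly):

```python
# Trace feature-map shapes through the encoder-decoder; filter
# counts double on each pooling and halve on each upsampling.
h, w, features = 688, 432, 64
encoder = []
for level in range(4):                 # four 2x2 max-pooling steps
    encoder.append((h, w, features))
    h, w, features = h // 2, w // 2, features * 2
bottleneck = (h, w, features)          # 43 x 27 at the deepest level
decoder = []
for level in range(4):                 # four 2x2 upsampling steps
    h, w, features = h * 2, w * 2, features // 2
    decoder.append((h, w, features))   # concatenated with encoder skip
```

The decoder ends back at the 688 × 432 input resolution, so the final 1 × 1 convolution can emit a pixelwise mask of the same size as the input image.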
Figure 3c emphasizes the working principle of the ARM as a whole. The datasets that are fed to the U-Net consist of 65 images per angular step with a step width of 0.25°, covering 227° in total. Based on the U-Net output, the image that represents the droplet in a defined state according to its length l_d is selected for each angle. As a result, the selected images for one step capture the droplet at a certain state from each angular position, enabling the 3D reconstruction and a quasi-stationary observation of the droplet. Ultimately, a temporal dimension is added to obtain a temporally resolved 3D analysis of the emerging droplets.
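The per-angle selection step can be sketched as follows; this is a hypothetical simplification in which the U-Net output has already been reduced to a measured droplet length per image, and the function name and toy data are illustrative:

```python
# For every angular position, pick the projection whose droplet
# length l_d is closest to the target length of the desired state.
def select_images(droplet_lengths, target_length):
    """droplet_lengths: {angle: [l_d for each image at that angle]}.
    Returns {angle: index of the best-matching image}."""
    selection = {}
    for angle, lengths in droplet_lengths.items():
        selection[angle] = min(
            range(len(lengths)),
            key=lambda i: abs(lengths[i] - target_length),
        )
    return selection

# Toy data: two angles, three images each (lengths in mm).
measured = {0.0: [0.2, 0.8, 1.4], 0.25: [1.3, 0.1, 0.7]}
chosen = select_images(measured, target_length=0.75)
```

Repeating this selection for each of the 24 target lengths yields one image per angle per state, which is exactly the input a standard reconstruction expects for a stationary object.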

2.3. Image Analysis

By applying the ARM to the total amount of images obtained during one experimental run inside the µ-CT, a temporal resolution is added to the µ-CT’s high spatial resolution to observe droplet formation, covering the emerging droplet from its very first appearance until its detachment. It is worth mentioning that, in fact, an averaged droplet is analyzed, as the scanner takes 65 images in succession before the sample stage is rotated to the next angular position. These 65 images capture several emerging droplets at various states, covering the whole range of droplet formation. Since the formation mechanism results in droplets of the same dimensions with negligible tolerances, we obtain a quasi-stationary state out of a periodic process. Each of the 24 steps holds 905 images showing the droplet in the same state at 905 different angular positions, so that a 3D reconstruction of the µ-CT images is enabled. The 3D reconstruction is done using NRecon (Bruker Corporation, Billerica, MA, USA), a commercially available reconstruction software that applies a filtered back-projection algorithm, which was developed by Feldkamp, Davis, and Kress (FDK) [37]. During the reconstruction process, scanning artifacts, particularly ring artifacts and beam-hardening artifacts, are compensated. The reconstruction provides 688 slices, which is equal to the number of pixels of the ROI in the Y-direction, with a resolution of 14 µm, which is equivalent to the edge length of the reconstructed voxels.
As the next post-processing step, the gray-scale image slices are converted according to their gray values into binary images, in which the emerging droplet is assigned a value of 1 and all other phases are assigned a value of 0. For an ideal case, multi-class segmentation of both fluids, the capillary material, and the background is required. However, for the sake of less computational effort, binary segmentation is applied, so that the pixels either belong to the dispersed phase or to the background. For the conversion into binary images, several approaches exist, for example, threshold-based methods [38], edge detection [39], and watershed segmentation [40,41], to name just a few.
As the different phases (water, oil, air, and resin) are quite easy to distinguish, the watershed algorithm has proven to be suitable for binarization. Watershed segmentation is a technique in image processing used to identify and separate different regions in a gray-scale image by treating the image as a topographic map in which the pixel values represent elevation. The algorithm simulates the process of water filling valleys starting from the local minima in pixel values. The boundaries where water from different valleys would meet define the segmentation lines. After applying the watershed segmentation, the binary image only holds the information about the emerging droplet to keep the computational effort as low as necessary to perform the dimensional analysis, since each 3D array contains roughly 130 million voxels per step per experiment.
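For comparison, the simplest of the binarization approaches named above, a threshold-based method, can be sketched in a few lines; the study itself used watershed segmentation, and the threshold value here is illustrative rather than taken from the experiments:

```python
import numpy as np

def binarize(slice_8bit, threshold=128):
    """Map an 8-bit gray-scale slice to a binary mask: 1 where the
    gray value is taken to indicate the dispersed phase, 0 elsewhere."""
    return (slice_8bit >= threshold).astype(np.uint8)

# Toy 2x2 slice: two pixels above and two below the threshold.
gray = np.array([[10, 200], [130, 90]], dtype=np.uint8)
mask = binarize(gray)
```

A plain threshold works only when the gray-value histograms of the phases are well separated; watershed segmentation is more robust when gradients rather than absolute values delineate the phase boundaries.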
Each reconstructed 3D image is 432 × 432 × 688 voxels with a bit depth of 8 bits per voxel. By converting the dataset into an array, it can be treated as a set of 688 matrices 432 × 432 in size containing values that are either 0 or 1. Based on the binary images, relevant quantities that characterize the mechanism of droplet formation and that are crucial to any application of segmented flows in microchannels are calculated and tracked over time. These include the length, the minimum and maximum diameter, the volume, and the interfacial area. Another characteristic quantity is the necking volume that accumulates behind the droplet’s maximum diameter and shears off the emerging segment. In addition, the formation mechanism can be shown in a three-dimensional view and the change in a droplet’s shape during its emergence can be pursued.
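From the binary voxel array, the basic droplet quantities follow from voxel counting; the sketch below assumes axis 0 is the axial (flow) direction and approximates the maximum diameter from the largest cross-sectional area, which are illustrative conventions rather than the paper’s exact procedure:

```python
import numpy as np

VOXEL_EDGE_MM = 0.014   # 14 um voxel edge length from the scans

def droplet_quantities(binary_volume):
    """Length, volume, and approximate maximum diameter of the
    droplet in a binary voxel array (1 = dispersed phase)."""
    occupied = np.flatnonzero(binary_volume.any(axis=(1, 2)))
    length = len(occupied) * VOXEL_EDGE_MM             # axial extent
    volume = binary_volume.sum() * VOXEL_EDGE_MM ** 3  # voxel count
    # maximum diameter from the largest cross-section, assuming a
    # roughly circular cross-section of the droplet
    areas = binary_volume.sum(axis=(1, 2))
    d_max = 2 * np.sqrt(areas.max() / np.pi) * VOXEL_EDGE_MM
    return length, volume, d_max

# Toy volume: two dispersed-phase voxels stacked along the axis.
toy = np.zeros((4, 3, 3), dtype=np.uint8)
toy[1:3, 1, 1] = 1
length, volume, d_max = droplet_quantities(toy)
```

The interfacial area and the necking volume require additionally identifying surface voxels and the neck position, but they build on the same voxel-counting idea.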

3. Results

To quantify the performance of image segmentation and classification by CNNs, several metrics exist. The most commonly used metrics include precision, recall, accuracy, the Dice coefficient, and Intersection over Union (IoU). All metrics are a measure of how well the applied model performs in the intended image segmentation task. Figure 4 shows the accuracy and the IoU of the applied U-Net in detecting emerging droplets versus the number of epochs used in the model training. It can be seen that high accuracies for the training (0.934) and the validation (0.932) are reached right after the first epoch. After six epochs, they significantly increase, as the validation accuracy reaches a value of 0.963 and the training accuracy a value of 0.982. Afterward, both metrics converge to a constant value greater than 0.99, before they drop to a local minimum at epoch 22. Finally, both accuracies increase again, whereas the validation accuracy fluctuates around a mean value of 0.984.
The shape of the curve is to be interpreted in such a way that there is a strong overfitting at the beginning due to the simplicity of the task. The AI then starts to recognize the actual pattern as it finds a way to regularize itself. This can be attributed to the dropout layers. Dropout is a regularization technique that is often used in neural networks to prevent overfitting and improve the generalization capability of the model. Dropout randomly deactivates a certain number of neurons in the network during the training process. This means that during each training epoch, some neurons and their connections are not active and in consequence a slightly different network is trained for each epoch. Each neuron is deactivated with a certain probability, the so-called dropout rate. By randomly deactivating neurons during training, dropout prevents the model from being adapted to certain features of the training data. This improves the model’s ability to generalize to new, unseen data [42]. The small fluctuations at the end are an indication that the model is more robust to different states contained in the input data. However, due to the definition of the accuracy function, the high accuracy values need to be viewed with great caution:
$$\text{accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$
The accuracy is the ratio of the sum of True Positive (TP) and True Negative (TN) pixels to the sum of TP, TN, False Positive (FP), and False Negative (FN) pixels. Simply put, it is the number of correct predictions divided by the total number of predictions. For images with a very high proportion of background, the accuracy will always be high, as it counts correctly predicted background pixels. So for images where the object of interest is small compared to the whole image, this metric is high even when the object is not segmented correctly, which is emphasized in Appendix B. Nonetheless, based on the progression of the accuracy function, it can be concluded that the network has genuinely learned the task and performs reliably after the second convergence interval. In images with imbalanced classes between the background and the ROIs, as in medical images, the accuracy metric is less useful because it gives equal weight to the model’s ability to predict all classes. For X-ray images, the IoU is one of the most widely used state-of-the-art metrics for semantic segmentation because it penalizes false positives, a common factor in highly class-imbalanced datasets. The IoU is calculated by dividing the TP pixels by the sum of the TP, FP, and FN pixels:
$$\text{IoU} = \frac{TP}{TP + FP + FN}$$
The IoU, also known as the Jaccard similarity coefficient [43], rewards the overlap between the ground-truth mask and the predicted output mask. Predicted bounding boxes that heavily overlap with the ground-truth bounding boxes score higher than those with less overlap, which makes the IoU an excellent metric for evaluating custom object detectors, as it takes into account how exactly the X- and Y-coordinates match. In Figure 4, the IoU score is equal or close to 0 for the first 10 epochs, even though the accuracy is high. After 20 epochs of training, the IoU reaches 0.587, indicating that the U-Net recognizes the emerging droplet as a pattern in the images. The IoU score increases to over 0.612 (30 epochs) until it reaches 0.732 (50 epochs), which can be considered a very good result for the given segmentation task. The U-Net takes roughly 120 ms per image to process the 60,000 images and categorize the emerging droplet into 24 states at 905 different angles.
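The difference between the two metrics can be made concrete with a toy example in which a small droplet occupies 1% of the frame (the pixel counts here are illustrative, not taken from the dataset):

```python
import numpy as np

def accuracy(pred, truth):
    """(TP + TN) / all pixels -- background pixels count as correct hits."""
    return np.mean(pred == truth)

def iou(pred, truth):
    """TP / (TP + FP + FN) -- only overlap with the object is rewarded."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    return np.logical_and(pred, truth).sum() / union if union else 1.0

# Toy 100x100 frame with a 10x10 "droplet" (1% of the pixels).
truth = np.zeros((100, 100), dtype=np.uint8)
truth[:10, :10] = 1

# A prediction that misses the droplet entirely still scores 99%
# accuracy thanks to the dominant background, while the IoU drops to 0.
empty = np.zeros_like(truth)
print(accuracy(empty, truth), iou(empty, truth))
```

This is exactly the behavior seen in Figure 4, where the accuracy exceeds 0.93 after one epoch while the IoU remains near zero.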
Once trained, the model provides accurate segmentation for various flow conditions and geometries without adjustment of any parameters. Figure 5a gives 2D representations and Figure 5b gives 3D representations of the emerging droplet after the U-Net segmentation, reconstruction process, and image analysis. As can be seen in the 2D plots, it is possible to track the development of a droplet’s shape based on its averaged contour, with rotational symmetry around the X-axis. During the filling stage (light-gray dotted lines), the droplet expands equally in the radial and axial directions. Once the necking stage (dark-gray dashed lines) has been reached, the droplet continues to grow in the axial direction until it detaches (solid black line). At first glance, the 2D plots show that the droplets in circular capillaries are roughly 1 mm shorter than in square capillaries under the same flow conditions and for the same capillary dimensions. The rear part of the detached droplet is at a constant position, indicating that the pinch-off of the droplet happens at the same axial coordinate. The reason for the different droplet sizes can be derived from the geometry of the capillaries. As the dispersed phase tends to contract to a minimum surface area due to its surface tension, the droplet always takes a rounded shape. This enables the continuous phase to bypass the emerging droplet in the corners of the square capillary that are not filled with the dispersed phase. Hence, the forces acting on the droplet and responsible for the pinch-off are reduced in this geometry.
For validation of the analyzed data, the capillary walls at r_d = 0.8 mm are added to the diagram to show that the determined droplet radii are accurate and hence the dimensional analysis of the droplets is valid. In the square capillary, the droplet diameter exceeds the actual capillary wall, which can be explained by the fact that the droplet has rounded corners, whereas on the straight edges of the channel it adheres to the wall apart from a thin wall film. Since the radii are calculated from the droplet area in each slice, the droplet’s radius appears slightly larger in this case.
Furthermore, when looking at the foremost point of the emerging droplet, it is noticeable that the step size provided by the U-Net is not entirely uniform. Possible reasons are the final IoU score of 0.732 and the associated deviation from the droplet state, or the fact that the U-Net searches for discrete points: it divides the distance between the detached droplet’s foremost point and the cannula tip into 24 equal distances, but this conversion is limited by the pixel size. However, this is not an issue for the dimensional analysis of the droplets, as each state can be traced back to a certain point in time.
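The quantization effect can be sketched as follows; the 500 px travel distance is an assumed value for illustration, not a measured one:

```python
import numpy as np

# Hypothetical illustration: the distance between the cannula tip and the
# detached droplet's foremost point (assumed to span 500 px) is split into
# 24 equal intervals; rounding each target position to a whole pixel makes
# the realized step size slightly non-uniform.
targets = np.linspace(0.0, 500.0, 25)    # 25 points -> 24 equal intervals
steps = np.diff(np.round(targets))
print(sorted(set(steps)))                # a mixture of 20 px and 21 px steps
```

Even with a perfect segmentation, the step sizes would therefore vary by one pixel, consistent with the non-uniformity visible in Figure 5a.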
As previously mentioned, image acquisition with a µ-CT enables 3D reconstruction and analysis of the droplets, as can be seen in Figure 5b. Ultimately, not only one-dimensional variables such as radii or lengths can be determined but also two- and three-dimensional variables such as the volume and surface area of the droplets. The latter are of great interest in droplet-based microfluidic applications as transport phenomena are promoted for large surface-to-volume ratios. The 3D plots also help to emphasize the differences in droplet formation mechanisms for varying capillary geometries as the filling stage, the necking stage, and the pinch-off of the droplet can be resolved and presented in a comprehensible manner.
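As a hedged sketch of how such 3D quantities follow from the reconstructed slices (the slice thickness and radii below are assumed toy values, not measured data), the droplet volume can be obtained by stacking circular discs under the same rotational-symmetry assumption used for the 2D contours:

```python
import numpy as np

# With one averaged radius per reconstructed slice and rotational
# symmetry, the droplet volume is the sum of circular discs whose
# thickness equals the slice spacing.
dz = 0.02                          # assumed slice thickness in mm
radii = np.full(50, 0.8)           # toy droplet: 1 mm long, r = 0.8 mm
volume = np.sum(np.pi * radii**2) * dz
print(round(volume, 3))            # ~pi * 0.8**2 * 1.0 mm^3
```

The surface area follows analogously from the contour length of each slice, which is why the slice-wise U-Net segmentation is sufficient for the full dimensional analysis.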

4. Conclusions

This contribution presents the successful application of a U-Net architecture to transfer periodic flow phenomena into a quasi-stationary observation. For the image acquisition of highly periodic droplet formation, µ-Computed Tomography was used. The droplet formation was realized in a co-flow setup in which a cannula was inserted into 3D-printed capillaries with two different cross-sections, one circular and one square. Both feature an inner diameter, and thus a hydraulic diameter, of 1.6 mm. Other than the cross-section of the capillaries, all other influential parameters were kept constant. Water was dispersed in silicone oil, which was used as the continuous phase. The droplet formation was captured on 65 X-ray images at 905 different angular positions, resulting in about 60,000 images showing the droplet in different states during its formation. The proposed Angular Resolve Method selects the X-ray images for each angular position that match one of 24 desired droplet states. As a result, 24 sets of images that capture the emerging droplet in its different states are generated and reconstructed afterwards. The reconstructed X-ray images hold 3D information of the droplet, and the application of the ARM adds a temporal resolution of about 25–40 ms. An Intersection over Union score of 0.732 was reached by the applied U-Net. The proposed method enables the fast and precise 4D evaluation of process-relevant droplet quantities and helps to make the phenomena present during droplet formation more comprehensible.
Future work will focus on the evaluation of different droplet quantities to resolve the droplet formation mechanism and the effect of capillary geometry and size. Furthermore, the Angular Resolve Method will be applied to different material systems such as gas–liquid flows to analyze the underlying physics of bubble formation. Another focus is to transfer the U-Net to the resulting cross-sectional images after reconstruction, enabling faster and more automated 3D image evaluation. The Angular Resolve Method could be easily transferred to similar technical investigations in which a certain periodicity is present. Examples include pool boiling or flow boiling experiments that rely on AI-assisted image detection, or chemical and biochemical reactions in which a phase change occurs frequently. In addition, the Angular Resolve Method could be used in other research fields that meet the necessary criteria, such as contracting heart cells or inflating and deflating lungs. As cells-on-chips and organs-on-chips are trending, a 4D evaluation could especially help to add new information to ongoing phenomena. However, depending on the application, a new specified training of the U-Net might be necessary.

Author Contributions

Conceptualization, B.O.; methodology, B.O. and P.W.; software, P.W. and B.O.; validation, B.O., P.W., and N.K.; investigation, B.O. and P.W.; data curation, B.O. and P.W.; writing—original draft preparation, B.O.; writing—review and editing, N.K.; visualization, B.O.; supervision, N.K.; funding acquisition, N.K. All authors have read and agreed to the published version of the manuscript.

Funding

The Bruker Skyscan 1275 µ-CT scanner that was used for this research was funded by the German Research Foundation (DFG, grant number INST 212/397-1).

Data Availability Statement

The associated codes for this article are available on GitHub at https://github.com/BasOld/Angular-Resolve-Method, accessed on 23 July 2024.

Acknowledgments

The authors thank RJL Micro & Analytic, Karlsdorf-Neuthard, Germany and C. Schrömges (Laboratory of Equipment Design, TU Dortmund University, Germany) for technical support and advice. B.O. thanks the networking program Sustainable Chemical Synthesis 2.0 (SusChemSys 2.0) for the support and valuable discussions across disciplines.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
API: Application Programming Interface
ARM: Angular Resolve Method
CNN: Convolutional Neural Network
CT: Computed Tomography
DL: Deep Learning
FDK: Feldkamp, Davis, and Kress algorithm
FEP: Fluorinated ethylene propylene
FN: False negative
FP: False positive
GPUs: Graphics processing units
IoU: Intersection over Union
JSON: JavaScript Object Notation
MEMS: Micro Electromechanical Systems
ML: Machine Learning
PDMS: Polydimethylsiloxane
PEEK: Polyetheretherketone
ReLU: Rectified linear unit
ROI: Region of interest
SLA: Stereolithography
TN: True negative
TP: True positive
VRAM: Video Random Access Memory

Appendix A

To implement the training and the final U-Net architecture in Python (version 3.11), TensorFlow (version 2.16.1) serves as the main ML library. This library is focused on training deep neural networks and offers a flexible ecosystem and a large amount of community resources. Keras is the Application Programming Interface (API) that comes with TensorFlow, allowing for the creation and modification of models. For image handling and to support interoperability with other image processing libraries, Scikit-image and Scikit-learn are used as well. Before the actual training process of the network can start, human-labeled data are necessary. For this purpose, the online labeling platform Kili (Kili Technology, Paris, France) was used. Labeling images for segmentation involves manually delineating the boundaries of the emerging droplets within the µ-CT images. A total of 492 images showing different states of droplet formation and at different angles were labeled semantically, and the resulting labels for all images were exported as a JavaScript Object Notation (JSON) file. The JSON file is used to create masks representing the ground truth of the segmentation. The images corresponding to the labels are loaded as well and, together with the masks, are converted to NumPy arrays and stacked in a single dataset. It is important to ensure that the images and masks are correctly aligned and that images and their corresponding masks are in the same position in the dataset. The pixel values of the images and of the masks are normalized and rescaled to values between 0 and 1, since neural networks usually converge faster when using normalized input data. The images themselves are rescaled to a resolution of 688 × 432 pixels. As every MaxPooling operation halves the number of pixels in both dimensions, the image dimensions must be evenly divisible by 2⁴ = 16 to ensure that four MaxPooling operations can be applied to the height and width of the image.
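The preprocessing constraints described above can be checked with a short sketch (the random array stands in for a µ-CT projection):

```python
import numpy as np

TARGET = (432, 688)   # (height, width) of the U-Net input images

def normalize(img):
    """Rescale raw gray values linearly to [0, 1], which typically lets
    the network converge faster than unscaled 8- or 16-bit inputs."""
    img = img.astype(np.float32)
    return (img - img.min()) / (img.max() - img.min())

# Four MaxPooling steps each halve height and width, so both dimensions
# must be evenly divisible by 2**4 = 16.
assert all(d % 2**4 == 0 for d in TARGET)
print([d // 2**4 for d in TARGET])   # feature-map size at the bottleneck

raw = np.random.default_rng(1).integers(0, 256, size=TARGET)
img = normalize(raw)
```

With 688 × 432 inputs, the bottleneck feature maps measure 43 × 27 pixels, which is why the resolution was chosen as a multiple of 16 in both directions.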
Scikit-learn, which also allows the randomization of the order in which the data are fed to the learning model, is used to split the data into training and validation data with a ratio of 90/10. The total training time for 50 epochs was about 13 min, averaging 13.5 s per epoch after the first epoch, which took 116.2 s. The learning parameters are given in Table A1. Data augmentation was not performed, as the training data already contain droplets at different stages and rotation angles.
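A dependency-free sketch of this shuffled 90/10 split (mirroring what scikit-learn's `train_test_split` does in the actual pipeline; the integer arrays are stand-ins for the stacked image and mask arrays) looks as follows:

```python
import numpy as np

def split_90_10(images, masks, seed=42):
    """Shuffle image/mask pairs jointly (each mask stays aligned with its
    image) and split 90% training / 10% validation."""
    idx = np.random.default_rng(seed).permutation(len(images))
    cut = int(0.9 * len(images))
    tr, va = idx[:cut], idx[cut:]
    return images[tr], masks[tr], images[va], masks[va]

imgs = np.arange(492)   # stand-ins for the 492 labeled images
msks = np.arange(492)   # their aligned masks
x_tr, y_tr, x_va, y_va = split_90_10(imgs, msks)
print(len(x_tr), len(x_va))   # 442 training / 50 validation samples
```

Shuffling images and masks with the same index permutation is what preserves the alignment requirement stated above.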
Table A1. Learning parameters with which the U-Net was trained.
Parameter        Value
Learning rate    0.001
Batch size       16
Optimizer        Adam
Momentum rate    0.99
Weight decay     none
Dropout rate     0.1, 0.2, 0.3
Epochs           1, 5, 10, 20, 30, 50
The Keras model.fit function, which allows the batch size and number of epochs to be set, was used to train the model. The batch size is the number of image-and-mask sets the model is fed per cycle; while larger batches speed up the learning process, they necessitate more Video Random Access Memory (VRAM). The number of epochs determines how often the entire dataset is iterated over. Before the actual segmentation can be performed, the input data need to be preprocessed. The images are loaded and processed in batches, much like during the training. Pyvips and NumPy are used to load the images and convert them into a single stacked NumPy array per batch. The images that are fed to the model for segmentation need to be the same size as the images used for training, so they are resized to an ROI that measures 688 × 432 pixels. The training and the actual application of the U-Net are performed on a personal computer with two Intel(R) Xeon(R) Gold 6130 CPU 2.10 GHz processors with 128 GB RAM (Intel Corporation, Santa Clara, CA, USA) and an NVIDIA Quadro P5000 GPU (Nvidia Corporation, Santa Clara, CA, USA).
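Assuming the 90/10 split described above, the parameters from Table A1 translate into the following number of gradient updates per epoch (the Keras calls are sketched in comments only, since model and data are not defined here):

```python
import math

# Hyperparameters from Table A1.
BATCH_SIZE, EPOCHS, LEARNING_RATE = 16, 50, 1e-3

n_train = 442   # 90% of the 492 labeled images
steps_per_epoch = math.ceil(n_train / BATCH_SIZE)
print(steps_per_epoch)

# The corresponding Keras calls (sketch only):
# model.compile(optimizer=tf.keras.optimizers.Adam(LEARNING_RATE),
#               loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(x_train, y_train, batch_size=BATCH_SIZE, epochs=EPOCHS,
#           validation_data=(x_val, y_val))
```

At 28 steps per epoch, the small batch size keeps VRAM demand on the Quadro P5000 modest while still allowing 50 epochs in roughly 13 minutes.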

Appendix B

Figure A1. Comparison of the input image, the ground truth, and the U-Net output for 1, 5, 10, 20, 30, and 50 epochs of training.

References

  1. Ho, C.-M.; Tai, Y.-C. Micro-Electro-Mechanical-Systems (MEMS) and Fluid Flows. Annu. Rev. Fluid Mech. 1998, 30, 579–612. [Google Scholar]
  2. Tabeling, P. Introduction to Microfluidics; Oxford University Press: Oxford, UK, 2023. [Google Scholar]
  3. Jamiruddin, M.R.; Meghla, B.A.; Islam, D.Z.; Tisha, T.A.; Khandker, S.S.; Khondoker, M.U.; Haq, M.A.; Adnan, N.; Haque, M. Microfluidics Technology in SARS-CoV-2 Diagnosis and Beyond: A Systematic Review. Life 2022, 12, 649. [Google Scholar] [CrossRef] [PubMed]
  4. Kashid, M.; Kiwi-Minsker, L. Quantitative prediction of flow patterns in liquid–liquid flow in micro-capillaries. Chem. Eng. Process. Process Intensif. 2011, 50, 972–978. [Google Scholar] [CrossRef]
  5. Dinter, R.; Willems, S.; Hachem, M.; Streltsova, Y.; Brunschweiger, A.; Kockmann, N. Development of a two-phase flow reaction system for DNA-encoded amide coupling. React. Chem. Eng. 2023, 8, 1334–1340. [Google Scholar] [CrossRef]
  6. Jankowski, P.; Kutaszewicz, R.; Ogończyk, D.; Garstecki, P. A microfluidic platform for screening and optimization of organic reactions in droplets. J. Flow Chem. 2019, 10, 397–408. [Google Scholar] [CrossRef]
  7. Churski, K.; Korczyk, P.; Garstecki, P. High-throughput automated droplet microfluidic system for screening of reaction conditions. Lab Chip 2010, 10, 816. [Google Scholar] [CrossRef]
  8. Sun, J.; Warden, A.R.; Ding, X. Recent advances in microfluidics for drug screening. Biomicrofluidics 2019, 13, 061503. [Google Scholar] [CrossRef]
  9. De Stefano, P.; Bianchi, E.; Dubini, G. The impact of microfluidics in high-throughput drug-screening applications. Biomicrofluidics 2022, 16, 031501. [Google Scholar] [CrossRef]
  10. Sun, H.; Hu, N.; Wang, J. Application of microfluidic technology in antibody screening. Biotechnol. J. 2022, 17, 2100623. [Google Scholar] [CrossRef]
  11. Gérard, A.; Woolfe, A.; Mottet, G.; Reichen, M.; Castrillon, C.; Menrath, V.; Ellouze, S.; Poitou, A.; Doineau, R.; Briseno-Roa, L.; et al. High-throughput single-cell activity-based screening and sequencing of antibodies using droplet microfluidics. Nat. Biotechnol. 2020, 38, 715–721. [Google Scholar] [CrossRef]
  12. Korczyk, P.M.; van Steijn, V.; Blonski, S.; Zaremba, D.; Beattie, D.A.; Garstecki, P. Accounting for corner flow unifies the understanding of droplet formation in microfluidic channels. Nat. Commun. 2019, 10, 2528. [Google Scholar] [CrossRef] [PubMed]
  13. Angeli, P.; Gavriilidis, A. Taylor Flow in Microchannels. In Encyclopedia of Microfluidics and Nanofluidics; Springer: Boston, MA, USA, 2008; pp. 1971–1976. [Google Scholar] [CrossRef]
  14. Kockmann, N.; Schuler, J.; Oldach, B. X-ray-Based Investigations on Multiphase Capillary Flows. In Handbook of Multiphase Flow Science and Technology; Springer: Singapore, 2023; pp. 1145–1181. [Google Scholar] [CrossRef]
  15. Schuler, J.; Neuendorf, L.M.; Petersen, K.; Kockmann, N. Micro-computed tomography for the 3D time-resolved investigation of monodisperse droplet generation in a co-flow setup. AIChE J. 2021, 67, e17111. [Google Scholar] [CrossRef]
  16. Buzug, T. Computed Tomography; Springer: Berlin/Heidelberg, Germany, 2008. [Google Scholar] [CrossRef]
  17. Manakitsa, N.; Maraslidis, G.; Moysis, L.; Fragulis, G. A Review of Machine Learning and Deep Learning for Object Detection, Semantic Segmentation, and Human Action Recognition in Machine and Robotic Vision. Technologies 2024, 12, 15. [Google Scholar] [CrossRef]
  18. Li, Z.; Liu, F.; Yang, W.; Peng, S.; Zhou, J. A Survey of Convolutional Neural Networks: Analysis, Applications, and Prospects. IEEE Trans. Neural Netw. Learn. Syst. 2022, 33, 6999–7019. [Google Scholar] [CrossRef] [PubMed]
  19. Lo, S.; Lou, S.; Lin, J.S.; Freedman, M.; Chien, M.; Mun, S. Artificial convolution neural network techniques and applications for lung nodule detection. IEEE Trans. Med. Imaging 1995, 14, 711–718. [Google Scholar] [CrossRef]
  20. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015, Munich, Germany, 5–9 October 2015; pp. 234–241. [Google Scholar] [CrossRef]
  21. Arora, A.; Jayal, A.; Gupta, M.; Mittal, P.; Satapathy, S.C. Brain tumor segmentation of MRI images using processed image driven u-net architecture. Computers 2021, 10, 139. [Google Scholar] [CrossRef]
  22. Gavade, A.B.; Nerli, R.; Kanwal, N.; Gavade, P.A.; Pol, S.S.; Rizvi, S.T.H. Automated Diagnosis of Prostate Cancer Using mpMRI Images: A Deep Learning Approach for Clinical Decision Support. Computers 2023, 12, 152. [Google Scholar] [CrossRef]
  23. Seong, J.H.; Ravichandran, M.; Su, G.; Phillips, B.; Bucci, M. Automated bubble analysis of high-speed subcooled flow boiling images using U-net transfer learning and global optical flow. Int. J. Multiph. Flow 2023, 159, 104336. [Google Scholar] [CrossRef]
  24. Varfolomeev, I.; Yakimchuk, I.; Safonov, I. An application of deep neural networks for segmentation of microtomographic images of rock samples. Computers 2019, 8, 72. [Google Scholar] [CrossRef]
  25. Siavashi, J.; Mahdaviara, M.; Shojaei, M.J.; Sharifi, M.; Blunt, M.J. Segmentation of two-phase flow X-ray tomography images to determine contact angle using deep autoencoders. Energy 2024, 288, 129698. [Google Scholar] [CrossRef]
  26. Espinosa-Bernal, O.A.; Pedraza-Ortega, J.C.; Aceves-Fernandez, M.A.; Martínez-Suárez, V.M.; Tovar-Arriaga, S.; Ramos-Arreguín, J.M.; Gorrostieta-Hurtado, E. Quasi/Periodic Noise Reduction in Images Using Modified Multiresolution-Convolutional Neural Networks for 3D Object Reconstructions and Comparison with Other Convolutional Neural Network Models. Computers 2024, 13, 145. [Google Scholar] [CrossRef]
  27. Adoui, M.E.; Mahmoudi, S.A.; Larhmam, M.A.; Benjelloun, M. MRI breast tumor segmentation using different encoder and decoder CNN architectures. Computers 2019, 8, 52. [Google Scholar] [CrossRef]
  28. Habib Geryes, B.; Calmon, R.; Donciu, V.; Khraiche, D.; Warin-Fresse, K.; Bonnet, D.; Boddaert, N.; Raimondi, F. Low-dose paediatric cardiac and thoracic computed tomography with prospective triggering: Is it possible at any heart rate? Phys. Medica 2018, 49, 99–104. [Google Scholar] [CrossRef] [PubMed]
  29. Hsieh, J.; Londt, J.; Vass, M.; Li, J.; Tang, X.; Okerlund, D. Step-and-shoot data acquisition and reconstruction for cardiac X-ray computed tomography. Med. Phys. 2006, 33, 4236–4248. [Google Scholar] [CrossRef]
  30. Schoenhagen, P. Back to the future: Coronary CT angiography using prospective ECG triggering. Eur. Heart J. 2007, 29, 153–154. [Google Scholar] [CrossRef]
  31. Bieberle, A.; Neumann, M.; Hampel, U. Advanced process-synchronized computed tomography for the investigation of periodic processes. Rev. Sci. Instruments 2018, 89, 073111. [Google Scholar] [CrossRef]
  32. Au, A.K.; Huynh, W.; Horowitz, L.F.; Folch, A. 3D-Printed Microfluidics. Angew. Chem. Int. Ed. 2016, 55, 3862–3881. [Google Scholar] [CrossRef]
  33. Bhattacharjee, N.; Urrios, A.; Kang, S.; Folch, A. The upcoming 3D-printing revolution in microfluidics. Lab Chip 2016, 16, 1720–1742. [Google Scholar] [CrossRef]
  34. Oldach, B.; Chiang, Y.Y.; Ben-Achour, L.; Chen, T.J.; Kockmann, N. Performance of different microfluidic devices in continuous liquid–liquid separation. J. Flow Chem. 2024, 14, 547–557. [Google Scholar] [CrossRef]
  35. Catterton, M.A.; Montalbine, A.N.; Pompano, R.R. Selective Fluorination of the Surface of Polymeric Materials after Stereolithography 3D Printing. Langmuir 2021, 37, 7341–7348. [Google Scholar] [CrossRef]
  36. Beer, A. Bestimmung der Absorption des rothen Lichts in farbigen Flüssigkeiten. Ann. Phys. 1852, 162, 78–88. [Google Scholar] [CrossRef]
  37. Feldkamp, L.A.; Davis, L.C.; Kress, J.W. Practical cone-beam algorithm. J. Opt. Soc. Am. A 1984, 1, 612–619. [Google Scholar] [CrossRef]
  38. Otsu, N. A Threshold Selection Method from Gray-Level Histograms. IEEE Trans. Syst. Man, Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef]
  39. Canny, J. A Computational Approach to Edge Detection. IEEE Trans. Pattern Anal. Mach. Intell. 1986, PAMI-8, 679–698. [Google Scholar] [CrossRef]
  40. Vincent, L.; Soille, P. Watersheds in Digital Spaces: An Efficient Algorithm Based on Immersion Simulations. IEEE Trans. Pattern Anal. Mach. Intell. 1991, 13, 583–598. [Google Scholar] [CrossRef]
  41. Najman, L.; Schmitt, M. Geodesic Saliency of Watershed Contours and Hierarchical Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 1996, 18, 1163–1173. [Google Scholar] [CrossRef]
  42. Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958. [Google Scholar]
  43. Jaccard, P. Lois de distribution florale dans la zone alpine. Bull. Société Vaudoise Des Sci. Nat. 1902, 38, 69–130. [Google Scholar]
Figure 1. Principle sketch of how a periodic process like repeated slug formation can be classified according to the state of droplet formation. The left side shows a regular slug flow, which is typical for multiphase flows in capillaries, and the droplet formation mechanism with the steps from l_1 to l_5. On the right side, the repeated slug flow formation is temporally resolved for the steps l_1 to l_5 to obtain a series of stationary states that enable 3D analysis.
Figure 2. (a) The µ-CT used for the experiments and its surrounding peripherals. (b) A close-up view of the specimen chamber with an installed capillary under investigation.
Figure 3. (a) The U-Net architecture that was used for this work, with an input image and the classification of the image as the output. (b) An example CT image of an emerging droplet, the human-labeled ground truth, and the output of the U-Net that was trained for 50 epochs. (c) A sketch of the ARM. The datasets are fed to the U-Net and are classified according to the defined states. Projection images are acquired at each angular position from 0° to 227° in 0.25° increments. The U-Net is applied to select one image for each angular position that captures a desired droplet state. The selected projection images are then used to reconstruct a 3D volume for image analysis.
Figure 4. The training accuracy (dotted gray line) and validation accuracy (solid black line) for 50 epochs can be tracked on the left Y-axis. The corresponding IoU (red markers) is given on the right Y-axis for 1, 5, 10, 20, 30, and 50 epochs of training.
Figure 5. (a) The 2D droplet contours tracked over the 24 steps provided by the U-Net classification by plotting droplet radii over the droplet length. The diagram emphasizes the droplet evolution starting at the filling stage (dotted light-gray lines) and over the necking stage (dashed dark-gray lines), until the droplet detaches (solid black line) in the circular capillary (top) with an inner diameter d_c,i of 1.6 mm and for the square capillary (bottom) with d_h = 1.6 mm. (b) The reconstructed 3D representation of the different droplet states in the circular (top) and square (bottom) capillary for the filling stage (left), necking stage (middle), and the detached droplet (right) for a constant Weber number We.

Share and Cite

MDPI and ACS Style

Oldach, B.; Wintermeyer, P.; Kockmann, N. Transfer of Periodic Phenomena in Multiphase Capillary Flows to a Quasi-Stationary Observation Using U-Net. Computers 2024, 13, 230. https://doi.org/10.3390/computers13090230

