Article

Improving Blood Vessel Segmentation and Depth Estimation in Laser Speckle Images Using Deep Learning

by Eduardo Morales-Vargas 1, Hayde Peregrina-Barreto 2,*, Rita Q. Fuentes-Aguilar 1, Juan Pablo Padilla-Martinez 3, Wendy Argelia Garcia-Suastegui 3 and Julio C. Ramirez-San-Juan 2,*

1 Tecnológico de Monterrey, Institute of Advanced Materials for Sustainable Manufacturing, Av. Gral. Ramón Corona No 2514, Zapopan 45201, Mexico
2 Instituto Nacional de Astrofísica, Óptica y Electrónica, Luis Enrique Erro 1, Santa Maria Tonantzintla, San Andres Cholula 72840, Mexico
3 Instituto de Ciencias, Benemérita Universidad Autónoma de Puebla, Puebla 72000, Mexico
* Authors to whom correspondence should be addressed.
Information 2024, 15(4), 185; https://doi.org/10.3390/info15040185
Submission received: 22 January 2024 / Revised: 23 February 2024 / Accepted: 5 March 2024 / Published: 29 March 2024
(This article belongs to the Special Issue Applications of Deep Learning in Bioinformatics and Image Processing)

Abstract

Microvasculature analysis is an important task in the medical field due to its various applications. It has been used for the diagnosis and treatment of diseases in fields such as ophthalmology, dermatology, and neurology by measuring relative blood flow or blood vessel morphological properties. However, light scattering at the periphery of a blood vessel decreases contrast around the vessel borders and increases image noise, making the localization of blood vessels a challenging task. This work therefore proposes integrating known information from the experimental setup into a deep learning architecture with multiple inputs to improve the generalization of a computational model that performs blood vessel segmentation and depth estimation in a single inference step. The proposed R-UNET + ET + LA obtained an intersection over union of 0.944 ± 0.065 and 0.812 ± 0.080 in the classification task for the validation (in vitro) and test (in vivo) sets, respectively, and a root mean squared error of 0.0085 ± 0.0275 μm in the depth estimation. This approach improves the generalization of current solutions by pre-training with in vitro data and adding information from the experimental setup. Additionally, the method can infer the depth of a blood vessel pixel by pixel instead of by regions, as the current state of the art does.

1. Introduction

Microvasculature analysis is essential due to its relevance in medical areas such as ophthalmology [1,2,3], dermatology [4], and neurosciences [5,6,7,8], among others. It helps to visualize and measure blood flow in order to diagnose and treat medical conditions such as port-wine stains and retinopathy, monitor lesions, and assess the effectiveness of techniques such as photodynamic therapy [9,10,11,12] or evaluate correct skin suturing [13]. Current research often sets aside known limitations of the technique, such as blood flow measurement under the ergodicity and depth problems, aiming instead to study blood flow during neurosurgical interventions in order to monitor it and eliminate disturbances that reduce its efficacy [14,15]. In this respect, the visualization and localization of blood vessels in Laser Speckle Contrast Imaging (LSCI) may be important for treating pathologies such as tumors [16,17,18], malignant strokes [19], and aneurysms [20,21]. Several techniques support microvascular analysis, such as multiphoton tomography [22,23], Doppler fluxometry, optical coherence tomography [24], and magnetic resonance [5,25]. However, these techniques are expensive, invasive, or have low spatial resolution [26,27]. The LSCI technique, in contrast, offers improved spatial resolution, but its depth resolution is low because the tissue surrounding the blood vessels can interact with the light [28], introducing noise and mixing information from the two regions in what are called combined pixels. The technique relies on the statistics of the bright and dark speckles formed by the random interference of coherent light with the tissue, capturing their movement as a blurred pattern in a Raw Speckle Image (RSI) due to the momentum transfer phenomenon [29]; its improved spatial resolution and simple instrumentation are its main advantages.
The blurred pattern in the RSIs is commonly studied by a contrast analysis, obtaining a Contrast Image (CI). The resulting contrast maps associate a relative blood flow value with a contrast value using a point-to-point analysis of the RSI [30,31]. The CI is computed with an analysis window that gathers the statistics of a sample in the image. However, this process reduces the spatial resolution because information from blood vessels and tissue can be combined whenever the analysis window is large enough to sample both regions. As a result, the calculation of the CI degrades the main strength of the technique, making it difficult to determine characteristics of a blood vessel such as its diameter and the blood flow within its limits.
Adaptive processing of the RSI is one of the most studied approaches to the loss of spatial resolution in the CI, alongside noise reduction by adding invasive agents to the bloodstream [32,33,34,35,36] or by filtering [37,38]. Adaptive methods aim to select analysis windows suited to the morphology of the blood vessels in the image, obtaining a larger and more representative sample from which to compute the statistics [39,40,41,42,43]. Although adaptive methods have improved the quality of CIs, the pixel selection process still needs improvement. However, segmenting blood vessels in LSCI is difficult because the high amount of noise in the RSIs limits the robustness of current models, making it an open problem both in adaptive processing and in the medical measurement of microvasculature properties [44,45,46].
Given the various elements and conditions this problem covers, several computational methods have been proposed. Algorithms for segmenting blood vessels in LSCI fall into two types, traditional and deep learning approaches, the main difference being the type of computational model used.
Traditional methods either produce many false positives or over-segment the peripheral blood vessels due to the combined information in that region, yielding a segmentation with small blobs corresponding to the speckles. The most common methods in the literature include global thresholding, which assumes a bimodal distribution in the image to discriminate between the blood vessel and the static region [47]. The k-means method can be applied either to the contrast image directly or to features derived from it, such as the range, standard deviation, or entropy (k-means of features) [48,49]. The morphological approach, in turn, combines a two-step methodology that first segments the blood vessels and then reduces the blobs in the image using morphological operations [44]. These works remain relevant because of the lack of data for training deep learning approaches such as Convolutional Neural Networks (CNNs); labeling in vivo samples is tedious and often does not receive enough attention and research effort.
While there are numerous Deep Learning (DL) efforts to locate blood vessels in CIs, these works often resort to complex architectures and techniques such as weakly supervised learning or domain transfer to mitigate the lack of labeled data [45]. Although CNNs are powerful tools for image classification and semantic segmentation, they require a large amount of data to train effectively and avoid overfitting, which otherwise leads to poor generalization on unseen samples. At the same time, the study of DL in LSCI has added new capabilities to the technique, enabling 3D blood flow reconstruction in transmissive LSCI, albeit with drawbacks such as the need for two models: one for blood vessel segmentation and another for depth regression [46].
This study explores the potential of training a segmentation model with in vitro data to obtain a pre-trained model that can later be fine-tuned without large amounts of data. In addition, experiments with multi-input networks explore how fusing the acquisition parameters, exposure time and lens aperture, affects the segmentation of blood vessels. We hypothesized that information from the image acquisition could be exploited through deep learning and feature fusion to improve current segmentation rates and to estimate blood vessel depth with a single DL model. The proposal avoids the use of two models (UNet + ResNet) for semantic segmentation of blood vessels and depth regression. The results suggest that our approach improves the generalization of current models through pre-training with in vitro data, feature fusion, and a regression network, achieving higher classification rates and the ability to estimate the blood vessel depth for each pixel in the image.

2. Laser Speckle Contrast Imaging

If an object with regions containing moving particles, such as blood vessels, is illuminated by a coherent light source, the movement produces a random pattern of bright and dark speckles. If the movement is fast enough, a momentum transfer phenomenon occurs, generating faster oscillations between bright and dark speckles in the moving regions. Consequently, a blurred pattern forms when a camera captures the laser light with an exposure time slower than the fastest oscillation, due to the averaging of the speckles. In this way, blood flow in tissue can be detected and measured through the obtained image, called an RSI. The generated image is useful for locating blood vessels and measuring their relative blood flow after the RSI is analyzed to produce a contrast representation (K). The standard deviation (σ) and the mean (μ) are calculated from the pixels neighboring (x, y) within a radius r, with a sample size of n or d², and the contrast value (Equation (1)) is assigned to those coordinates in the output representation. The contrast representation takes values between 0 and 1, where a value close to 0 (low contrast) is associated with higher speeds (movement in a blood vessel) and values close to 1 with low speeds (tissue).
$$K_{x,y} = \frac{\sigma_{x,y}}{\mu_{x,y}} \quad (1)$$
There are two approaches to computing the contrast representation from the RSI: traditional and adaptive. Traditional methods obtain the statistics with a square analysis window $W_p$ of size $d \times d$ around the analyzed pixel. The main difference between methods is the dimension in which the sample is obtained: spatial (Spatial Contrast (sK), defined in Equation (2)), temporal (Temporal Contrast (tK), defined in Equation (3)), or combined (Spatial–Temporal Contrast (stK)) [2,50,51]. In the sK and the stK, the size of the analysis window plays an important role in the quality of the CI. A 5 × 5 analysis window gives the best trade-off between spatial resolution and noise reduction, while a 7 × 7 window improves noise attenuation. The main drawbacks are the loss of spatial resolution and the imprecision of the contrast value at the periphery of the blood vessels when the sample contains pixels from both static and dynamic regions [52].
$$sK_{x,y} = \frac{\sqrt{\sum_{i=x-r}^{x+r} \sum_{j=y-r}^{y+r} \frac{1}{d^2}\left[RSI(i,j) - \mu^{s}_{x,y}\right]^2}}{\mu^{s}_{x,y}}, \quad (2)$$

where

$$\mu^{s}_{x,y} = \sum_{i=x-r}^{x+r} \sum_{j=y-r}^{y+r} \frac{1}{d^2}\, RSI(i,j), \qquad r = \frac{d-1}{2}.$$
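As an illustrative sketch (not the authors' implementation), the spatial contrast of Equation (2) can be computed for every pixel at once with sliding-window means; the use of SciPy's `uniform_filter` and the small guard against division by zero are our own implementation choices:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def spatial_contrast(rsi: np.ndarray, d: int = 5) -> np.ndarray:
    """Spatial contrast sK: local standard deviation over local mean,
    computed in a d x d window around every pixel (Equation (2))."""
    rsi = rsi.astype(np.float64)
    mu = uniform_filter(rsi, size=d)          # local mean over the d x d window
    mu2 = uniform_filter(rsi * rsi, size=d)   # local mean of the squared values
    var = np.clip(mu2 - mu * mu, 0.0, None)   # clip tiny negatives from rounding
    return np.sqrt(var) / np.maximum(mu, 1e-12)
```

A uniform region of the phantom yields a contrast near 0 everywhere, consistent with the interpretation of low contrast as movement and values near 1 as static tissue.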
On the other hand, the tK uses an analysis window of 1 × d along n frames (at least n ≥ 15 [50]), maximizing the spatial resolution but reducing the temporal resolution. The tK is the noisiest method but, in contrast to the sK and the stK, it offers improved visualization of the smaller blood vessels.
$$tK_{x,y,n} = \frac{\sqrt{\sum_{f=1}^{n} \frac{1}{n}\left[RSI(x,y,f) - \mu^{t}_{x,y,n}\right]^2}}{\mu^{t}_{x,y,n}} \quad (3)$$

$$\mu^{t}_{x,y,n} = \sum_{f=1}^{n} \frac{1}{n}\, RSI(x,y,f)$$
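The temporal contrast of Equation (3) reduces, per pixel, to the standard deviation over the mean across the stack of frames. A minimal sketch (our own, for illustration), with the frame index on axis 0:

```python
import numpy as np

def temporal_contrast(stack: np.ndarray) -> np.ndarray:
    """Temporal contrast tK: per-pixel std / mean across n frames.

    `stack` has shape (n, H, W); the statistics of Equation (3) are the
    population standard deviation and mean along the frame axis."""
    stack = stack.astype(np.float64)
    mu = stack.mean(axis=0)
    sigma = stack.std(axis=0)                 # 1/n normalization, as in Eq. (3)
    return sigma / np.maximum(mu, 1e-12)
```

Because no spatial averaging is involved, the spatial resolution is fully preserved, at the cost of needing n frames per output image.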
The most used methods for visualization and relative blood flow estimation are the sK and the stK with d = 5, but recent works suggest that bigger analysis windows with a more representative sample selection improve the quality of the CIs [39,43,53], as well as the temporal resolution and the localization of blood vessels. The most common adaptive methods are the Anisotropic Contrast (aK), Space Directional Contrast (sdK), and Adaptive Window Contrast (awK). The aK and the sdK use an analysis window of 1 × n in a direction a, selected according to a criterion. The aK estimates the direction of the blood flow by minimizing the contrast over a set of given directions {0°, 45°, 90°, 135°}. Equation (4) depicts how the angle for computing the contrast with the anisotropic analysis windows is selected, where a is the analyzed angle, l the set of pixels p belonging to that direction, and p₀ the central pixel with coordinates (x, y) [43].
$$\arg[a_0] = \operatorname*{argmin}_{a \in [0°, 180°]} \left[ \sum_{p \in l} \left(K_p - K_{p_0}\right)^2 \right] \quad (4)$$
On the other hand, the sdK performs the selection by maximizing the variance over the set V of directional windows (Equation (5)). The analysis is performed frame by frame on the RSI, improving blood vessel visualization by increasing the contrast between regions. However, line-like artifacts are introduced; this effect can also be seen in the aK [53].
$$a = \operatorname*{argmax}_{v_i \in V} \left( \mathrm{var}(v_i) \right) \quad (5)$$
The awK solved the artifact problem with a region-growing process. This allows a bigger and more representative sample selection, reducing the noise and obtaining a more stable contrast representation at the periphery of the blood vessels [39]. The region-growing process is performed over a reference image, a grouped version of a CI. The pixels in each analysis window are then selected using Equation (6), where R is the reference image used to select the pixels involved in the statistics calculation. One motivation for studying the segmentation of blood vessels is to obtain a better reference image, taking advantage of supervised learning and DL over unsupervised learning, as explained above.
$$S_{p_d} = \begin{cases} 1, & \text{if } R(p_0) = R(p_i) \\ 0, & \text{otherwise} \end{cases} \quad (6)$$
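The selection rule of Equation (6) amounts to keeping, inside each analysis window, only the pixels whose label in the reference image matches that of the central pixel. A small sketch (function name and interior-pixel assumption are ours):

```python
import numpy as np

def select_window_pixels(R: np.ndarray, x: int, y: int, d: int) -> np.ndarray:
    """Boolean mask of Equation (6): inside the d x d window centred at
    (x, y), keep only pixels p_i whose reference label R(p_i) equals the
    central label R(p_0). Assumes (x, y) is far enough from the border
    for the full window to fit."""
    r = d // 2
    window = R[x - r:x + r + 1, y - r:y + r + 1]
    return window == R[x, y]
```

The contrast statistics are then computed only over the `True` pixels, so samples never mix the vessel and tissue regions defined by the reference image.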

3. Experimental Setup for Data Acquisition

Two databases were used in this study to train and validate the segmentation models: one with in vitro images and the other with in vivo images. The experimental setup used to acquire the in vitro RSIs consists of a CCD camera (Retiga 2000R, QImaging, Burnaby, BC, Canada) equipped with a zoom lens (Navitar Zoom 700, New York, NY, USA) with a polarizing filter in front of the zoom lens, oriented perpendicular to the polarization of the incident light to mitigate specular reflectance from the samples, which were illuminated by a He-Ne laser at 632.8 nm with 15 mW. A skin phantom of epoxy resin with titanium dioxide (TiO₂, 1.45 mg/mL) was used to simulate the dermis, and a silicone layer of polydimethylsiloxane (TiO₂ powder, 2 mg/mL) was used for the epidermis. An infusion pump, Model NE-500 (New Era Pump Systems Inc., New York, NY, USA), simulated the blood flow by passing a solution (1% concentration) through a glass capillary (thinXXS Microtechnology AG, Zweibrücken, Germany) with an inner diameter of 700 μm. The in vivo images, on the other hand, were acquired from one male rat (Rattus norvegicus) weighing 120 g, anesthetized intraperitoneally with Xylazine and Ketamine hydrochloride in doses of 0.1 and 0.7 mg/100 g of body weight. A wound was created on the dorsal skin of the rat using sterile surgical scissors and forceps and outlined by a circular plate 1 cm in diameter; the wound was made down to the fascial layer to expose the blood vessels. The wound was then sandwiched between two aluminum plates with a perforation of 1.1 cm in diameter coinciding with the plate's perforation area, and saline solution was applied periodically to the subdermal layer to prevent dehydration. Consequently, a laser beam (633 nm wavelength) can pass through the wound so that the blood vessels of the rat are projected onto the CCD camera.

4. Experiments and Results

The images used in the experiments comprise in vivo and in vitro RSIs with straight and bifurcated blood vessels, obtained with the experimental setups introduced in Section 3.
The in vitro set consists of images with depths dp = {0, 190, 200, 310, 400, 500, 510, 600, 700, 900} μm, exposure times et = {70, 138, 256, 500, 980, 1883, 3949, 5908, 8204, 11,062, 12,200, 20,885, 26,481, 31,760, 32,789} ms, and flow velocities v = {3, 6, 9, 12, 15, 18} mm/s. The images were processed using different contrast methods (contrast = {sK3, sK5, aK, sdK, awK9, awK11}). Patches of 192 × 192 were obtained from the original images, and data augmentation generated more variability to avoid overfitting. The augmentation strategies include random X reflection (probability 0.5), random Y reflection (probability 0.5), random rotation in the range [−180, 180], and random X translation, X shear, and Y shear in the range [−15, 15]. Furthermore, patches with the size of the nearest integer value of 192rₙ × 192rₙ were acquired, where rₙ is in the range [1, 2]. These patches were resized with bicubic interpolation to 192 × 192, generating greater variability in blood vessel size and diameter. Thus, the in vitro dataset used for training and validation contains 19,390 images with variable contrast representation (3435 ± 82 samples), lens aperture (2945 ± 1208 samples), blood flow velocity (1527 ± 108 samples), mote size (1553 ± 95 samples), and depth (1717.917 ± 4776). On the other hand, the in vivo dataset for validation contains 1225 samples with uncontrolled depth and blood flow, variable exposure time (306 ± 37), and a lens aperture of 4. Each sample contains an image representing the raw data, contrast, depth, exposure time, flow velocity, lens aperture, and segmentation mask to use as input or output for the computational methods; representative examples are depicted in Figure 1.
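The scale-augmentation step described above can be sketched as follows. This is our own minimal reconstruction, not the authors' code: a patch of side round(192·rₙ) with rₙ ∈ [1, 2] is cropped at a random location and resized back to 192 × 192 (nearest-neighbour here to stay dependency-free; the paper uses bicubic interpolation):

```python
import numpy as np

def random_scaled_patch(img: np.ndarray, out: int = 192, rng=None) -> np.ndarray:
    """Crop a square patch of side round(out * r), r ~ U[1, 2], at a random
    position, then resize it to out x out (nearest-neighbour index sampling)."""
    if rng is None:
        rng = np.random.default_rng()
    r = rng.uniform(1.0, 2.0)
    size = int(round(out * r))
    x = rng.integers(0, img.shape[0] - size + 1)
    y = rng.integers(0, img.shape[1] - size + 1)
    patch = img[x:x + size, y:y + size]
    idx = (np.arange(out) * size / out).astype(int)   # nearest-neighbour resize
    return patch[np.ix_(idx, idx)]
```

Shrinking a larger crop to a fixed 192 × 192 input makes the vessels appear thinner, which is what produces the extra diameter variability in the training set.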
Four deep learning models were explored to perform two tasks: assigning a label to each pixel in the image (blood vessel: 1, tissue: 0) and assigning a depth (mm) to pixels belonging to blood vessels in a CI. In the speckle domain, these problems have been addressed by image processing techniques (global thresholding, the morphological approach), traditional machine learning (k-means, k-means of features), and DL via UNet and other architectures such as CycleGAN and FURNet [45]. The UNet architecture was used as the base of the DL models because it is well known and widely used in the medical field, and its foundations serve for comparison purposes. It is important to remark that the objective is not to find the model that maximizes the Intersection over Union (IoU) but to test whether adding the acquisition parameters to the model can improve generalization. An experiment was performed to optimize the hyperparameters of the UNet for this application; they were selected after training and testing 600 models that searched to maximize the IoU with a Bayesian optimization strategy. The experiment used the Adam optimizer, a mini-batch size of 32, a validation patience of 3, and a maximum of 10 epochs. The dataset was split into 70% for training and 30% for validation. The parameters involved in the experiments were the number of filters in the first layer (numFilters), the encoder depth of the architecture (encoderDepth), the filter size (filterSize), the learning rate drop factor that controls the learning of general and specific features (learnRateDropFactor), the initial learning rate (initialLearnRate), and the loss function, which includes cross-entropy, weighted cross-entropy for unbalanced classes, and the Dice loss. See Table 1 for a summary of the studied values for each parameter.
The statistical analysis of the results with a factorial design suggests a model with two interactions, as seen in Figure 2. The loss type is the factor with more effect size (F = 35.45, p = 0.000), followed by the filter size (F = 8.20, p = 0.001) and the encoder depth (F = 5.03, p = 0.009). On the other hand, the filter size does not significantly impact the IoU, but a size of 7 improves the precision of the model, seen as a reduction in the standard deviation (Figure 3).
Then, a statistical response optimization method that maximizes the IoU for the repetitions of the experiments was used to select the hyperparameters. A filter size of 7, an encoder depth of 6, 16 initial filters, a cross-entropy loss, an initial learning rate of 0.02, and a learning drop factor of 0.85 maximize the evaluation metric with a fit standard error of 0.00680, as can be seen from Figure 3. This model and hyperparameters will be used for subsequent experiments because they maximize the IoU and minimize the standard deviation.
The UNet architecture was modified to expand the basic UNet by incorporating extra data that may influence the segmentation results, such as exposure time and lens aperture, aiming to improve the generalization of the model. The rationale is that the mean value of the regions in the image, and the difference between them, can change depending on the acquisition parameters. The model receives the exposure time (et) and the lens aperture (la) in the second stage of the encoder, allowing the backbone to extract features dependent on and independent of the added information. The architecture (shown in Figure 4a) receives a CI, together with the acquisition data, and outputs an image with a label for each pixel.
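The text does not spell out the fusion mechanism, so the sketch below is one plausible realization (an assumption of ours, not the paper's definition): each acquisition scalar is broadcast to a constant feature map and concatenated to the encoder features at the injection stage, after which ordinary convolutions can mix it with the image features.

```python
import numpy as np

def inject_acquisition(features: np.ndarray, et: float, la: float) -> np.ndarray:
    """Fuse exposure time (et) and lens aperture (la) into a channels-first
    feature tensor of shape (C, H, W) by appending two constant channels.
    Hypothetical fusion scheme; the paper only states that et and la enter
    at the second encoder stage."""
    c, h, w = features.shape
    et_map = np.full((1, h, w), et, dtype=features.dtype)  # constant et channel
    la_map = np.full((1, h, w), la, dtype=features.dtype)  # constant la channel
    return np.concatenate([features, et_map, la_map], axis=0)
```

Constant-channel broadcasting is a common way to feed scalar conditioning into a convolutional backbone, since every spatial location then sees the acquisition parameters alongside the local image evidence.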
Furthermore, this manuscript introduces a regression UNet (R-UNet) (Figure 4b) designed to address two critical challenges identified in previous studies. First, it aims to streamline the tasks of segmenting blood vessels in an image and estimating their depth, improving the overall efficiency of current solutions by using one architecture instead of two separate models. Second, it addresses the issue of spatial resolution in patch-based methods: instead of using a 32 × 32 patch to infer a single depth, the depth can be inferred pixel by pixel. The R-UNet is based on the UNet, but a regression layer replaces the softmax and final classification layers. The output that the R-UNet aims to regress is a depth map in which each pixel belonging to a blood vessel contains the depth obtained from the simulated blood vessels of the experimental setup. Each pixel that is not a blood vessel contains −1. Thus, if the regression output is close to −1, the pixel is considered tissue; otherwise, the pixel is a blood vessel, and the value at that coordinate is its depth. A thresholding process converts the depth map (D) into a segmentation map, an image containing the two labels specified in the task: blood vessel and tissue. This process assigns a label to each pixel p using Equation (7).
$$S_p = \begin{cases} \text{blood vessel (1)}, & \text{if } D_p \geq t_r \\ \text{tissue (0)}, & \text{otherwise} \end{cases} \quad (7)$$
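Equation (7) is a single comparison per pixel, so turning a regressed depth map into a segmentation mask is one vectorized operation (the function name is ours):

```python
import numpy as np

def depth_to_mask(D: np.ndarray, tr: float) -> np.ndarray:
    """Equation (7): pixels whose regressed depth D_p is at least the
    threshold t_r are labeled blood vessel (1), the rest tissue (0).
    Non-vessel pixels are trained to regress -1, so they fall below
    any threshold above -1."""
    return (D >= tr).astype(np.uint8)
```

With background pixels regressing toward −1 and real depths being non-negative, any threshold between −1 and the smallest vessel depth separates the two classes; the next section describes how t_r is actually chosen.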
The main factor determining the quality of the segmentation is the threshold t_r. A permissive threshold can lead to over-segmentation, and a restrictive one to under-segmentation. Thus, an ROC curve analysis was performed to select the threshold that maximizes the segmentation quality (see Equation (8)). Maximizing Youden's J index selects a threshold that balances under- and over-segmentation. The analysis was performed on the training set, and the obtained threshold was used for validation and testing. A more appropriate threshold could be obtained by repeating the analysis for each partition, but it was selected on the training set to test the generalization capabilities.
$$t_r = \operatorname*{arg\,max}_{t_r} \left( \mathrm{Sensitivity} + \mathrm{Specificity} - 1 \right) \quad (8)$$
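Threshold selection by Youden's J (Equation (8)) can be sketched with a plain sweep over candidate thresholds; this is our illustrative implementation, not the authors' ROC tooling:

```python
import numpy as np

def youden_threshold(scores: np.ndarray, labels: np.ndarray) -> float:
    """Return the threshold maximizing Youden's J = sensitivity + specificity - 1.

    `scores` are per-pixel values (e.g. regressed depths), `labels` the
    ground-truth classes (1 = vessel, 0 = tissue)."""
    thresholds = np.unique(scores)
    P = int((labels == 1).sum())          # positives (vessel pixels)
    N = labels.size - P                   # negatives (tissue pixels)
    best_t, best_j = float(thresholds[0]), -1.0
    for t in thresholds:
        pred = scores >= t                # Equation (7) at threshold t
        tp = int(np.logical_and(pred, labels == 1).sum())
        tn = int(np.logical_and(~pred, labels == 0).sum())
        j = tp / P + tn / N - 1.0         # sensitivity + specificity - 1
        if j > best_j:
            best_j, best_t = j, float(t)
    return best_t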
The parameters and training hyperparameters of the UNet variants compared (UNet: classical architecture; UNet + ET + LA: classical architecture with multiple inputs; R-UNet: classical architecture with a regression output; and R-UNet + ET + LA: regression architecture with multiple inputs) were selected according to the application. A mini-batch size of 32 and a 70%/30% training/validation split were used. The regression task was trained with a cross-entropy loss and the Adam optimizer.
Table 2 shows the results on the validation set and on the test set, which consists of in vivo CIs unseen during model training; the training and validation sets consist of in vitro images. The methods compared include global thresholding (GT), k-means, the morphological approach, the UNet, and the modified UNets. In the validation step, the multiple-input networks and the regression model improved the IoU, increasing from 0.8228 ± 0.075 to 0.844 ± 0.065 for the multiple-input regression model. Further experiments assessed the generalization of the segmentation models: R-UNet + ET + LA achieved the highest performance, with a mean score of 0.812 ± 0.080 on the test set. In general terms, adding the exposure time and lens aperture helps improve the segmentation IoU, but the regression task with them significantly increased the segmentation quality compared to the UNet (multiple comparisons, δ = 0.05, F-value = 6.48, p-value = 0.00).
Representative in vivo images of the dataset used for validating the segmentation methods are shown in Figure 5. This qualitative analysis is performed on the test set because in the in vitro images the differences are less noticeable, as the models were trained on them; the aim is to assess the generalization capabilities on a set unseen during training. As can be seen, the traditional image processing methods, global thresholding and k-means, often over-segment the image because of the noise and the combined information at the periphery of the blood vessels, as is also the case for the morphological approach. Machine learning and deep learning methods, on the other hand, alleviated the blob problem, but with low sensitivity as their main drawback. It is important to note that the trained models did not use images of tissue surrounded by blood vessels, as in the case shown in Figure 5, yet the R-UNet + ET + LA learned to locate blood vessels, improving the spatial resolution of the segmentation results.
Further analyses determined the effectiveness of the regression model in estimating the depth of the blood vessel. Table 3 shows the comparative analysis with different contrast representations. The R-UNet with multiple inputs obtained a mean Root Mean Squared Error (RMSE) of 0.0061 ± 0.0163, the lowest error compared with the single-input network, which obtained 0.0144 ± 0.0423 over all the CIs that feed the models (multiple comparisons, δ = 0.005, F = 26.05, p = 0.00). Among the contrast methods, the adaptive aK with d = 11 obtained the lowest RMSE (0.0095 ± 0.0281). This may be because the method generates more stable images in which the contrast values change less at the center of the blood vessel, as discussed in [49]. In addition, the reference that the ET and LA provide to the network may explain the improvement in RMSE. The linear regression between the estimated blood vessel depth and the ground truth of the validation set obtained an R² of 0.8299 for the single input and 0.9295 for the multiple input.

5. Conclusions and Future Work

The experiments presented in this work focused on enhancing blood vessel segmentation in CIs of LSCI. The UNet, UNet + ET + LA, R-UNet, and R-UNet + ET + LA were evaluated on in vitro and in vivo datasets to address challenges in blood vessel segmentation and depth estimation, improving on current state-of-the-art architectures that do not include data from the acquisition setup, such as Exposure Time (ET) or Lens Aperture (LA), as inputs. In this sense, the regression models (R-UNet and R-UNet + ET + LA) obtained a competitive RMSE in depth estimation. In addition, integrating ET and LA in the classification tasks enhanced the ability of the model to generalize when training with in vitro and testing with in vivo images, highlighting the potential of multi-input architectures that combine additional information with the CI to enhance the accuracy of blood vessel segmentation. The results also suggest that models can be pre-trained with in vitro images to initialize the weights of a DL architecture for transfer learning, reducing the amount of data needed to properly train these types of models.
Two main limitations of the LSCI technique are the inherent noise present in the resulting CIs and the depth at which the blood vessel lies [54,55]. The first can be tackled by selecting analysis windows larger than 3 × 3 to compute the representation, at the cost of spatial resolution, or by selecting an appropriate representation, i.e., an adaptive method, as seen in Table 3. The second is that increasing blood vessel depth increases both the noise and the combined information in the image. This problem is inherent to the technique because the tissue interacts with the light before acquisition; the light scattered by the tissue changes its spatial localization and is sensed together with information from other regions. An analysis of variance (ANOVA) of the effect of depth on the evaluation metric showed a significant contribution for both the segmentation (F-value = 152.04, p-value = 0.00) and the regression (F-value = 270.22, p-value) problems.
Although the results are not directly comparable with previous works, because differences in spatial resolution would make the comparison unfair, they provide a baseline for future work on this task. Future work may explore models more complex than the UNet architecture to test whether the hypothesis of including additional information is independent of the DL architecture. In addition, it will be interesting to use the developed supervised models to select the pixels involved in computing the CIs, which was the principal motivation for this research group to develop this model.

Author Contributions

Conceptualization, J.C.R.-S.-J.; Data curation, J.P.P.-M., W.A.G.-S. and J.C.R.-S.-J.; Formal analysis, E.M.-V.; Investigation, E.M.-V.; Methodology, E.M.-V.; Project administration, H.P.-B.; Resources, H.P.-B., R.Q.F.-A., J.P.P.-M., W.A.G.-S. and J.C.R.-S.-J.; Software, E.M.-V.; Supervision, H.P.-B. and J.C.R.-S.-J.; Validation, E.M.-V.; Visualization, R.Q.F.-A.; Writing—original draft, E.M.-V.; Writing—review and editing, H.P.-B., R.Q.F.-A., J.P.P.-M., W.A.G.-S. and J.C.R.-S.-J. All authors have read and agreed to the published version of the manuscript.

Funding

The authors gratefully acknowledge the financial support of Intel Rise ID 78663-2023.

Institutional Review Board Statement

All animal procedures were performed following the Mexican Norm NOM-062-ZOO-1999, and the experimental protocol (GASW/VIEP/497/2021) was approved by the Internal Committee for Care and Use of Laboratory Animals (CICUAL) at Benemérita Universidad Autónoma de Puebla (BUAP).

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study may be made available upon public release.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Nagahara, M.; Tamaki, Y.; Araie, M.; Umeyama, T. The acute effects of stellate ganglion block on circulation in human ocular fundus. Acta Ophthalmol. Scand. 2001, 79, 45–48. [Google Scholar] [CrossRef] [PubMed]
  2. Paul, J.S.; Luft, A.R.; Yew, E.; Sheu, F.S. Imaging the development of an ischemic core following photochemically induced cortical infarction in rats using Laser Speckle Contrast Analysis (LASCA). NeuroImage 2006, 29, 38–45. [Google Scholar] [CrossRef] [PubMed]
  3. Basak, K.; Manjunatha, M.; Dutta, P.K. Review of laser speckle-based analysis in medical imaging. Med. Biol. Eng. Comput. 2012, 50, 547–558. [Google Scholar] [CrossRef] [PubMed]
  4. Moy, W.J.; Patel, S.J.; Lertsakdadet, B.S.; Arora, R.P.; Nielsen, K.M.; Kelly, K.M.; Choi, B. Preclinical in vivo evaluation of Npe6-mediated photodynamic therapy on normal vasculature. Lasers Surg. Med. 2012, 44, 158–162. [Google Scholar] [CrossRef] [PubMed]
  5. Smith, M.S.D.; Packulak, E.F.; Sowa, M.G. Development of a laser speckle imaging system for measuring relative blood flow velocity. In Proceedings of the SPIE—The International Society for Optical Engineering, Quebec City, QC, Canada, 5–8 June 2006; Developpement Economique Canada: Montreal, QC, Canada; TeraXion: Quebec City, QC, Canada, 2006; Volume 6343. [Google Scholar] [CrossRef]
  6. Kozlov, I.O.; Stavtcev, D.D.; Konovalov, A.N.; Grebenev, F.V.; Piavchenko, G.A.; Meglinski, I. Real-Time Mapping of Blood Perfusion during Neurosurgical Interventions. In Proceedings of the 2023 IEEE 24th International Conference of Young Professionals in Electron Devices and Materials (EDM), Novosibirsk, Russia, 29 June–3 July 2023; pp. 1460–1463. [Google Scholar] [CrossRef]
  7. Piavchenko, G.; Kozlov, I.; Dremin, V.; Stavtsev, D.; Seryogina, E.; Kandurova, K.; Shupletsov, V.; Lapin, K.; Alekseyev, A.; Kuznetsov, S.; et al. Impairments of cerebral blood flow microcirculation in rats brought on by cardiac cessation and respiratory arrest. J. Biophotonics 2021, 14, e202100216. [Google Scholar] [CrossRef] [PubMed]
  8. Konovalov, A.; Grebenev, F.; Stavtsev, D.; Kozlov, I.; Gadjiagaev, V.; Piavchenko, G.; Telyshev, D.; Gerasimenko, A.Y.; Meglinski, I.; Zalogin, S.; et al. Real-time laser speckle contrast imaging for intraoperative neurovascular blood flow assessment: Animal experimental study. Sci. Rep. 2024, 14, 1735. [Google Scholar] [CrossRef]
  9. Flammer, J.; Orgül, S.; Costa, V.P.; Orzalesi, N.; Krieglstein, G.K.; Serra, L.M.; Renard, J.P.; Stefánsson, E. The impact of ocular blood flow in glaucoma. Prog. Retin. Eye Res. 2002, 21, 359–393. [Google Scholar] [CrossRef]
  10. Postnov, D.D.; Tuchin, V.V.; Sosnovtseva, O. Estimation of vessel diameter and blood flow dynamics from laser speckle images. Biomed. Opt. Express 2016, 7, 2759. [Google Scholar] [CrossRef]
  11. Bernard, C.; Wenbin, T.; Wangcun, J. The Role of Laser Speckle Imaging in Port-Wine Stain Research: Recent Advances and Opportunities. IEEE J. Sel. Top. Quantum Electron. 2017, 4, 6800812. [Google Scholar] [CrossRef]
  12. Sharif, S.A.; Taydas, E.; Mazhar, A.; Rahimian, R.; Kelly, K.M.; Choi, B.; Durkin, A.J. Noninvasive clinical assessment of port-wine stain birthmarks using current and future optical imaging technology: A review. Br. J. Dermatol. 2012, 167, 1215–1223. [Google Scholar] [CrossRef]
  13. Carlson, A.P.; Denezpi, T.; Akbik, O.S.; Mohammad, L.M. Laser speckle imaging to evaluate scalp flap blood flow during closure in neurosurgical procedures. Surg. Neurol. Int. 2021, 12, 632. [Google Scholar] [CrossRef] [PubMed]
  14. Konovalov, A.; Gadzhiagaev, V.; Grebenev, F.; Stavtsev, D.; Piavchenko, G.; Gerasimenko, A.; Telyshev, D.; Meglinski, I.; Eliava, S. Laser Speckle Contrast Imaging in Neurosurgery: A Systematic Review. World Neurosurg. 2023, 171, 35–40. [Google Scholar] [CrossRef] [PubMed]
  15. Spetzler, R.F.; Yashar, M.; Kalani, S.; Nakaji, P. Neurovascular Surgery; Georg Thieme Verlag: New York, NY, USA, 2015. [Google Scholar] [CrossRef]
  16. Richards, L.M.; Towle, E.L.; Fox, D.J.; Dunn, A.K. Intraoperative laser speckle contrast imaging with retrospective motion correction for quantitative assessment of cerebral blood flow. Neurophotonics 2014, 1, 1. [Google Scholar] [CrossRef] [PubMed]
  17. Parthasarathy, A.B.; Weber, E.L.; Richards, L.M.; Fox, D.J.; Dunn, A.K. Laser speckle contrast imaging of cerebral blood flow in humans during neurosurgery: A pilot clinical study. J. Biomed. Opt. 2010, 15, 066030. [Google Scholar] [CrossRef] [PubMed]
  18. Ideguchi, M.; Kajiwara, K.; Yoshikawa, K.; Goto, H.; Sugimoto, K.; Inoue, T.; Nomura, S.; Suzuki, M. Avoidance of ischemic complications after resection of a brain lesion based on intraoperative real-time recognition of the vasculature using laser speckle flow imaging. J. Neurosurg. 2017, 126, 274–280. [Google Scholar] [CrossRef] [PubMed]
  19. Hecht, N.; Woitzik, J.; König, S.; Horn, P.; Vajkoczy, P. Laser Speckle Imaging Allows Real-Time Intraoperative Blood Flow Assessment During Neurosurgical Procedures. J. Cereb. Blood Flow Metab. 2013, 33, 1000–1007. [Google Scholar] [CrossRef] [PubMed]
  20. Nomura, S.; Inoue, T.; Ishihara, H.; Koizumi, H.; Suehiro, E.; Oka, F.; Suzuki, M. Reliability of Laser Speckle Flow Imaging for Intraoperative Monitoring of Cerebral Blood Flow During Cerebrovascular Surgery: Comparison with Cerebral Blood Flow Measurement by Single Photon Emission Computed Tomography. World Neurosurg. 2014, 82, e753–e757. [Google Scholar] [CrossRef]
  21. Miller, D.R.; Ashour, R.; Sullender, C.T.; Dunn, A.K. Continuous blood flow visualization with laser speckle contrast imaging during neurovascular surgery. Neurophotonics 2022, 9, 021908. [Google Scholar] [CrossRef]
  22. Werkmeister, E.; Kerdjoudj, H.; Marchal, L.; Stoltz, J.F.; Dumas, D. Multiphoton microscopy for blood vessel imaging: New non-invasive tools (Spectral, SHG, FLIM). In Clinical Hemorheology and Microcirculation; IOS Press: Amsterdam, The Netherlands, 2007; Volume 37, pp. 77–88. [Google Scholar]
  23. Xi, G.; Cao, N.; Guo, W.; Kang, D.; Chen, Z.; He, J.; Ren, W.; Shen, T.; Wang, C.; Chen, J. Label-free imaging of blood vessels in human normal breast and breast tumor tissue using multiphoton microscopy. Scanning 2019, 2019, 5192875. [Google Scholar] [CrossRef]
  24. Siegel, A.M.; Boas, D.A.; Stott, J.J.; Culver, J.P. Volumetric diffuse optical tomography of brain activity. Opt. Lett. 2003, 28, 2061–2063. [Google Scholar] [CrossRef]
  25. Ogawa, S.; Lee, T.M.; Kay, A.R.; Tank, D.W. Brain magnetic resonance imaging with contrast dependent on blood oxygenation. Proc. Natl. Acad. Sci. USA 1990, 87, 9868–9872. [Google Scholar] [CrossRef] [PubMed]
  26. Humeau-Heurtier, A.; Abraham, P.; Mahé, G. Analysis of laser speckle contrast images variability using a novel empirical mode decomposition: Comparison of results with laser doppler flowmetry signals variability. IEEE Trans. Med. Imaging 2015, 34, 618–627. [Google Scholar] [CrossRef] [PubMed]
  27. Fredriksson, I.; Larsson, M.; Strömberg, T. Measurement depth and volume in laser Doppler flowmetry. Microvasc. Res. 2009, 78, 4–13. [Google Scholar] [CrossRef] [PubMed]
  28. Draijer, M.; Hondebrink, E.; van Leeuwen, T.; Steenbergen, W. Review of laser speckle contrast techniques for visualizing tissue perfusion. Lasers Med. Sci. 2009, 24, 639–651. [Google Scholar] [CrossRef] [PubMed]
  29. Regan, C.; Hayakawa, C.; Choi, B. Momentum transfer Monte Carlo for the simulation of laser speckle imaging and its application in the skin. Biomed. Opt. Express 2017, 8, 5708. [Google Scholar] [CrossRef] [PubMed]
  30. Basak, K.; Dey, G.; Mahadevappa, M.; Mandal, M.; Dutta, P.K. In vivo laser speckle imaging by adaptive contrast computation for microvasculature assessment. Opt. Lasers Eng. 2014, 62, 87–94. [Google Scholar] [CrossRef]
  31. Briers, J.D.; Webster, S. Laser speckle contrast analysis (LASCA): A nonscanning, full-field technique for monitoring capillary blood flow. J. Biomed. Opt. 1996, 1, 174. [Google Scholar] [CrossRef]
  32. Verkruysse, W.; Choi, B.; Zhang, J.R.; Kim, J.; Nelson, J.S. Thermal depth profiling of vascular lesions: Automated regularization of reconstruction algorithms. Phys. Med. Biol. 2008, 53, 1463–1474. [Google Scholar] [CrossRef]
  33. Kim, J.; Oh, J.; Choi, B. Magnetomotive laser speckle imaging. J. Biomed. Opt. 2010, 15, 011110. [Google Scholar] [CrossRef]
  34. Son, T.; Yoon, J.; Ko, C.Y.; Lee, Y.H.; Kwon, K.; Kim, H.S.; Lee, K.J.; Jung, B. Contrast enhancement of laser speckle skin image: Use of optical clearing agent in conjunction with micro-needling. J. Opt. Soc. Korea 2008, 17, 86–90. [Google Scholar] [CrossRef]
  35. Kalchenko, V.; Israeli, D.; Kuznetsov, Y.; Meglinski, I.; Harmelin, A. A simple approach for non-invasive transcranial optical vascular imaging (nTOVI). J. Biophotonics 2015, 8, 897–901. [Google Scholar] [CrossRef] [PubMed]
  36. Kalchenko, V.; Meglinski, I.; Sdobnov, A.; Kuznetsov, Y.; Harmelin, A. Combined laser speckle imaging and fluorescent intravital microscopy for monitoring acute vascular permeability reaction. J. Biomed. Opt. 2019, 24, 1. [Google Scholar] [CrossRef] [PubMed]
  37. Molodij, G.; Sdobnov, A.; Kuznetsov, Y.; Harmelin, A.; Meglinski, I.; Kalchenko, V. Time-space Fourier κω′ filter for motion artifacts compensation during transcranial fluorescence brain imaging. Phys. Med. Biol. 2020, 65, 075007. [Google Scholar] [CrossRef] [PubMed]
  38. Kalchenko, V.; Sdobnov, A.; Meglinski, I.; Kuznetsov, Y.; Molodij, G.; Harmelin, A. A robust method for adjustment of laser speckle contrast imaging during transcranial mouse brain visualization. Photonics 2019, 6, 80. [Google Scholar] [CrossRef]
  39. Morales-Vargas, E.; Peregrina-Barreto, H.; Ramirez-San-Juan, J.C. Adaptive processing for noise attenuation in laser speckle contrast imaging. Comput. Methods Programs Biomed. 2021, 212, 106486. [Google Scholar] [CrossRef] [PubMed]
  40. Han, G.; Li, D.; Wang, J.; Guo, Q.; Yuan, J.; Chen, R.; Wang, J.; Wang, H.; Zhang, J. Adaptive window space direction laser speckle contrast imaging to improve vascular visualization. Biomed. Opt. Express 2023, 14, 3086–3099. [Google Scholar] [CrossRef] [PubMed]
  41. Ren, C.; Chen, T.; Chen, M.; Shen, Y.; Li, B. Enhancing laser speckle contrast imaging based on adaptive scale and directional kernel during V-PDT. In Proceedings of the Sixteenth International Conference on Photonics and Imaging in Biology and Medicine (PIBM 2023), Haikou, China, 29 March–1 April 2023; Luo, Q., Wang, L.V., Tuchin, V.V., Eds.; International Society for Optics and Photonics, SPIE: Bellingham, WA, USA, 2023; Volume 12745, p. 1274509. [Google Scholar] [CrossRef]
  42. Han, G.; Li, D.; Yuan, J.; Lu, J.; Wang, H.; Wang, J. Study of adaptive window space direction contrast method in transmission speckle contrast imaging. In Proceedings of the Optics in Health Care and Biomedical Optics XIII, Beijing, China, 14–16 October 2023; Luo, Q., Li, X., Gu, Y., Zhu, D., Eds.; International Society for Optics and Photonics, SPIE: Bellingham, WA, USA, 2023; Volume 12770, p. 1277028. [Google Scholar] [CrossRef]
  43. Rege, A.; Senarathna, J.; Li, N.; Thakor, N.V. Anisotropic processing of laser speckle images improves spatiotemporal resolution. IEEE Trans. Bio-Med Eng. 2012, 59, 1272–1280. [Google Scholar] [CrossRef]
  44. Morales-Vargas, E.; Sosa-Martinez, J.; Peregrina-Barreto, H.; Rangel-Magdaleno, J.; Ramirez-San-Juan, J. A morphological approach for locating blood vessels in laser contrast speckle imaging. In Proceedings of the I2MTC 2018—2018 IEEE International Instrumentation and Measurement Technology Conference: Discovering New Horizons in Instrumentation and Measurement, Houston, TX, USA, 14–17 May 2018. [Google Scholar] [CrossRef]
  45. Fu, S.; Xu, J.; Chang, S.; Yang, L.; Ling, S.; Cai, J.; Chen, J.; Yuan, J.; Cai, Y.; Zhang, B.; et al. Robust vascular segmentation for raw complex images of laser speckle contrast based on weakly supervised learning. IEEE Trans. Med Imaging 2023, 43, 39–50. [Google Scholar] [CrossRef]
  46. Chen, R.; Tong, S.; Tong, S.; Miao, P. Deep-learning-based 3D blood flow reconstruction in transmissive laser speckle imaging. Opt. Lett. 2023, 48, 2913–2916. [Google Scholar] [CrossRef]
  47. Zhu, J.Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2242–2251. [Google Scholar] [CrossRef]
  48. Lopez-Tiro, F.; Peregrina-Barreto, H.; Rangel-Magdaleno, J.; Ramirez-San-Juan, J.C. Localization of blood vessels in in-vitro LSCI images with K-means. In Proceedings of the Conference Record—IEEE Instrumentation and Measurement Technology Conference, Glasgow, UK, 17–20 May 2021. [Google Scholar] [CrossRef]
  49. Morales-Vargas, E.; Padilla-Martinez, J.P.; Peregrina-Barreto, H.; Garcia-Suastegui, W.A.; Ramirez-San-Juan, J.C. Adaptive Feature Extraction for Blood Vessel Segmentation and Contrast Recalculation in Laser Speckle Contrast Imaging. Micromachines 2022, 13, 1788. [Google Scholar] [CrossRef]
  50. Cheng, H.; Duong, T.Q. Simplified laser-speckle-imaging analysis method and its application to retinal blood flow imaging. Opt. Lett. 2007, 32, 2188. [Google Scholar] [CrossRef]
  51. Qiu, J. Spatiotemporal laser speckle contrast analysis for blood flow imaging with maximized speckle contrast. J. Biomed. Opt. 2010, 15, 016003. [Google Scholar] [CrossRef]
  52. Duncan, D.D.; Kirkpatrick, S.J. Spatio-temporal algorithms for processing laser speckle imaging data. In Proceedings of the Optics in Tissue Engineering and Regenerative Medicine II, San Jose, CA, USA, 20–21 January 2008; Kirkpatrick, S.J., Wang, R.K., Eds.; SPIE: Bellingham, WA, USA, 2008; Volume 6858, p. 685802. [Google Scholar] [CrossRef]
  53. Perez-Corona, C.E.; Peregrina-Barreto, H.; Rangel-Magdaleno, J.; Ramos-Garcia, R.; Ramirez-San-Juan, J.C. Space-directional laser speckle contrast imaging to improve blood vessels visualization. In Proceedings of the I2MTC 2018—2018 IEEE International Instrumentation and Measurement Technology Conference: Discovering New Horizons in Instrumentation and Measurement, Houston, TX, USA, 14–17 May 2018; pp. 1–5. [Google Scholar] [CrossRef]
  54. Morales-Vargas, E.; Peregrina-Barreto, H.; Ramirez-San-Juan, J.C. Exposure Time and Depth Effect in Laser Speckle Contrast Images under an Adaptive Processing. In Proceedings of the 2022 IEEE International Autumn Meeting on Power, Electronics and Computing (ROPEC), Ixtapa, Mexico, 9–11 November 2022; Volume 6, pp. 1–6. [Google Scholar] [CrossRef]
  55. Zherebtsov, E.; Dremin, V.; Popov, A.; Doronin, A.; Kurakina, D.; Kirillin, M.; Meglinski, I.; Bykov, A. Hyperspectral imaging of human skin aided by artificial neural networks. Biomed. Opt. Express 2019, 10, 3545–3559. [Google Scholar] [CrossRef]
Figure 1. Example of representative images generated to train the models. From top to bottom: RSI with values between 0 and 255, CI ranging from 0 to 1, and binary mask indicating blood vessel presence with values of 0 or 1.
Figure 2. Pareto diagram of the standardized effects for each factor with two interactions ( σ = 0.005 ).
Figure 3. Box plots varying the parameters of the UNet architecture. The red line represents the median of the distribution, the blue box represents the interquartile range, and the red markers represent the outlier values. The statistics were computed for the 600 partially trained models of the optimization strategy.
Figure 4. Multi-input models for (a) segmentation of blood vessels in RSIs and (b) regression + segmentation of the depth in CIs.
Figure 5. Two in vivo representative RSIs and their corresponding segmentation varying the algorithm. From left to right: RSI, global, k-means, morphological, k-means with features, UNet, UNet + ET + LA, R-UNet, and R-UNet + ET + LA.
Table 1. Parameters to optimize by a Bayesian optimization strategy. c: Cross-entropy, w: weighted cross-entropy for unbalanced classes, and d: dice loss.
Parameter | Range | Transform
numFilters | {8, 16, 32} | none
encoderDepth | {2, 4, 6} | none
filterSize | {3, 5, 7} | none
learnRateDropFactor | [0.5–0.9] | log
initialLearnRate | [0.001–0.1] | log
loss | {c, w, d} | none
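The candidate loss functions c, w, and d in Table 1 can be illustrated with a minimal NumPy sketch (the function names and the positive-class weight `w_pos` are illustrative assumptions, not the exact training implementation):

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    # Soft Dice loss (d) for probabilities and binary masks in [0, 1].
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def weighted_cross_entropy(pred, target, w_pos=10.0, eps=1e-12):
    # Weighted binary cross-entropy (w); w_pos up-weights the rare vessel
    # class. With w_pos = 1 this reduces to plain cross-entropy (c).
    p = np.clip(pred, eps, 1.0 - eps)
    return -np.mean(w_pos * target * np.log(p)
                    + (1.0 - target) * np.log(1.0 - p))
```

Since vessel pixels are a small fraction of each image, the weighted and Dice variants counteract the class imbalance that plain cross-entropy ignores.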
Table 2. Comparison of the results obtained in the segmentation task for the validation and test datasets. The regression models obtained the highest accuracy.
Method | Validation Set (IoU) | Test Set (IoU)
global | 0.760 ± 0.190 | 0.595 ± 0.149
k-means | 0.762 ± 0.189 | 0.598 ± 0.148
morphological | 0.882 ± 0.119 | 0.723 ± 0.119
k-means with features | 0.883 ± 0.117 | 0.725 ± 0.117
UNet | 0.928 ± 0.075 | 0.784 ± 0.090
UNet + ET + LA | 0.940 ± 0.054 | 0.795 ± 0.091
R-UNet | 0.943 ± 0.073 | 0.803 ± 0.080
R-UNet + ET + LA | 0.944 ± 0.065 | 0.812 ± 0.080
REG: Regression; ET: Exposure time; LA: Lens aperture; IoU: Intersection over Union.
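The IoU metric reported in Table 2 compares a predicted binary mask with its ground truth; a minimal NumPy sketch (the empty-mask convention is an assumption):

```python
import numpy as np

def iou(pred, target):
    # Intersection over Union for binary segmentation masks.
    pred = pred.astype(bool)
    target = target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, target).sum() / union

pred = np.array([1, 1, 0, 0])
target = np.array([1, 0, 1, 0])
print(iou(pred, target))  # 1 pixel in common, 3 in the union -> 1/3
```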
Table 3. Root Mean Squared Error (RMSE) of the regression models for pixel-by-pixel depth estimation; results are grouped by the contrast method used as input to the model.
k | R-UNet + ET + LA | R-UNet | Total
k3 | 0.0077 ± 0.0247 | 0.0137 ± 0.0367 | 0.0107 ± 0.0314
k5 | 0.0100 ± 0.0270 | 0.0161 ± 0.0407 | 0.1300 ± 0.0347
ak | 0.0084 ± 0.0329 | 0.0172 ± 0.0519 | 0.0128 ± 0.0437
sdk | 0.0096 ± 0.0277 | 0.0120 ± 0.0383 | 0.0108 ± 0.0335
ak9 | 0.0096 ± 0.0328 | 0.0149 ± 0.0480 | 0.0122 ± 0.0412
ak11 | 0.0061 ± 0.0163 | 0.0128 ± 0.0359 | 0.0095 ± 0.0281
Total | 0.0085 ± 0.0275 | 0.0144 ± 0.0423 | 0.0114 ± 0.0358
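The per-pixel RMSE reported in Table 3 can be sketched as follows, restricting the error to vessel pixels (a minimal NumPy sketch; the masking convention is an illustrative assumption):

```python
import numpy as np

def masked_rmse(pred_depth, true_depth, vessel_mask):
    # RMSE of per-pixel depth predictions, restricted to vessel pixels.
    diff = (pred_depth - true_depth)[vessel_mask.astype(bool)]
    return float(np.sqrt(np.mean(diff ** 2)))

pred = np.array([[1.0, 2.0], [3.0, 4.0]])
true = np.array([[1.0, 2.0], [5.0, 4.0]])
mask = np.ones((2, 2))
print(masked_rmse(pred, true, mask))  # 1.0
```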