Breast Delineation in Full-Field Digital Mammography Using the Segment Anything Model
Abstract
1. Introduction
2. Materials and Methods
2.1. Datasets
- GVA (proprietary): A multi-center dataset covering 11 medical centers of the Generalitat Valenciana (GVA), part of the Spanish breast cancer screening network. It includes 1785 women aged 45 to 70. Both the CC and MLO views were available for 10 of the 11 centers, while one center collected only the CC view. This dataset was used for training, validation, and testing: 75% of the mammograms (2492) were randomly assigned to training and validation, with 10% of that subset (250 mammograms) held out for validation, and the remaining 25% (844 mammograms) to testing. All mammograms are of the type “for presentation”. Further details about this dataset can be found in our previous work [9].
- IMIM (proprietary): A dataset of 881 images obtained at the Hospital del Mar Research Institute (IMIM), included solely for testing to better evaluate the generalization performance of the models. The images come from three different acquisition devices. One of them (Hologic Lorad Selenia, Marlborough, MA, USA) is an older device used to acquire images in 2012; its lower image quality makes the segmentation task more challenging. Only CC views were provided for this dataset, also of the type “for presentation”.
- CSAW-S (public): The CSAW-S dataset, released by Matsoukas et al. [19], is a companion subset of CSAW [20]. This subset contains mammograms with expert radiologist labels for cancer and complementary labels of breast anatomy made by non-experts. The anonymized dataset contains mammograms from 150 cases of breast cancer, some of them including both MLO and CC views. We generated the breast masks for our experiments by combining the provided mammary gland and pectoral muscle labels, thus obtaining a total of 270 images with breast mask segmentations.
- InBreast (public): A well-known publicly available dataset [21] with ground truth annotations for the pectoral muscle in MLO views. We used these annotations to generate ground truth breast masks for a total of 200 images.
- Mini-MIAS (public): The Mini-MIAS database [22], created by the Mammographic Image Analysis Society (MIAS), is a resource that has been extensively used in prior research. It contains 322 digitized mammographic films in MLO view. The breast masks that we used for evaluation were obtained from Verboom et al. [23].
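For CSAW-S, the breast masks were generated by combining the mammary gland and pectoral muscle labels. The exact combination rule is not spelled out above, so the sketch below (the function name and the rule options are our own assumptions, not the authors' code) simply shows the two plausible boolean combinations on binary label arrays:

```python
import numpy as np

def combine_labels(gland: np.ndarray, pectoral: np.ndarray,
                   include_pectoral: bool = False) -> np.ndarray:
    """Hypothetical helper: combine per-structure binary labels into one
    breast mask. Depending on the annotation convention, the breast mask
    either includes the pectoral muscle (union of the two labels) or
    excludes it (gland pixels with pectoral pixels removed)."""
    if include_pectoral:
        return gland | pectoral          # breast region spans both labels
    return gland & ~pectoral             # breast tissue without the muscle
```

Either variant yields one binary breast mask per image, which is how the 270 CSAW-S masks used for evaluation were obtained.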
2.2. Models
2.2.1. Thresholding
2.2.2. MedSegDiff
2.2.3. SAM
2.2.4. SAM-Adapter
2.2.5. SAM-Breast
2.2.6. Training Strategy
2.3. Evaluation
- Dice Similarity Coefficient (DSC): The DSC is a widely used metric for assessing the similarity between the predicted segmentation and the ground truth. It is the ratio of twice the intersection of the two segmentations to the sum of the number of pixels in each segmentation. We express this metric as a percentage, ranging from 0 to 100%, with higher values signifying better performance. For a predicted pixel set X and a ground-truth pixel set Y, the DSC can be expressed as: DSC(X, Y) = 2|X ∩ Y| / (|X| + |Y|) × 100.
- Intersection over Union (IoU): The IoU, also known as the Jaccard index, measures the overlap between two segmentations. While it bears resemblance to the DSC, it is a more stringent measure of segmentation performance: errors weigh more heavily relative to the overlap, so the IoU is never higher than the DSC for the same segmentation. It also ranges from 0 to 100%, with higher values indicating better performance. The formula for the IoU is: IoU(X, Y) = |X ∩ Y| / |X ∪ Y| × 100.
- Hausdorff Distance (HD): The HD is a metric based on spatial distance. It is the maximum distance from a point on one contour to the closest point on the other contour, so it scores localization similarity by focusing on contour delineation. Compared to the overlap metrics (DSC and IoU), the HD is more sensitive to the boundary. It is measured in pixels, with lower values indicating better performance. The formula is: HD(X, Y) = max{h(X, Y), h(Y, X)}, where h(X, Y) = max over x in X of the distance from x to its closest point in Y, taken over the contour points of each segmentation.
- Average Surface Distance (ASD): This metric calculates the average distance from each point on the boundary of the predicted segmentation to the closest point on the boundary of the ground truth segmentation. The ASD is less sensitive to outliers than the HD, as it averages over all boundary points rather than taking only the worst one. It is also measured in pixels, with lower values indicating better performance. The formula for the ASD is: ASD(X, Y) = (1 / |∂X|) Σ over x in ∂X of the distance from x to its closest point in ∂Y, where ∂X and ∂Y denote the segmentation boundaries.
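The four metrics above can be sketched in a few lines of NumPy/SciPy. This is an illustrative implementation (function names our own, with the symmetric variants of HD and ASD), not the paper's actual evaluation code, which may rely on library implementations such as MONAI's:

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice Similarity Coefficient as a percentage (0-100)."""
    inter = np.logical_and(pred, gt).sum()
    return 100.0 * 2.0 * inter / (pred.sum() + gt.sum())

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over Union (Jaccard index) as a percentage (0-100)."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return 100.0 * inter / union

def _boundary(mask: np.ndarray) -> np.ndarray:
    # Boundary pixels: foreground pixels that disappear under erosion.
    return mask & ~binary_erosion(mask)

def _surface_distances(pred: np.ndarray, gt: np.ndarray):
    # Distance from each boundary pixel of one mask to the nearest
    # boundary pixel of the other, via Euclidean distance transforms.
    pb, gb = _boundary(pred), _boundary(gt)
    d_to_gt = distance_transform_edt(~gb)    # distance map to gt boundary
    d_to_pred = distance_transform_edt(~pb)  # distance map to pred boundary
    return d_to_gt[pb], d_to_pred[gb]

def hausdorff(pred: np.ndarray, gt: np.ndarray) -> float:
    """Symmetric Hausdorff Distance in pixels (assumes non-empty masks)."""
    d_pg, d_gp = _surface_distances(pred, gt)
    return float(max(d_pg.max(), d_gp.max()))

def asd(pred: np.ndarray, gt: np.ndarray) -> float:
    """Average Surface Distance in pixels, averaged over both boundaries."""
    d_pg, d_gp = _surface_distances(pred, gt)
    return float(np.concatenate([d_pg, d_gp]).mean())
```

With identical boolean masks these return 100% overlap and zero distance; shifting a mask by a pixel lowers the DSC/IoU and raises the HD/ASD accordingly.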
3. Results
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
1. Gegios, A.R.; Peterson, M.S.; Fowler, A.M. Breast Cancer Screening and Diagnosis: Recent Advances in Imaging and Current Limitations. PET Clin. 2023, 18, 459–471.
2. Advani, S.M.; Zhu, W.; Demb, J.; Sprague, B.L.; Onega, T.; Henderson, L.M.; Buist, D.S.M.; Zhang, D.; Schousboe, J.T.; Walter, L.C.; et al. Association of Breast Density with Breast Cancer Risk among Women Aged 65 Years or Older by Age Group and Body Mass Index. JAMA Netw. Open 2021, 4, e2122810.
3. Gudhe, N.R.; Behravan, H.; Sudah, M.; Okuma, H.; Vanninen, R.; Kosma, V.M.; Mannermaa, A. Area-based breast percentage density estimation in mammograms using weight-adaptive multitask learning. Sci. Rep. 2022, 12, 12060.
4. Lopez-Almazan, H.; Pérez-Benito, F.J.; Larroza, A.; Perez-Cortes, J.C.; Pollan, M.; Perez-Gomez, B.; Trejo, D.S.; Casals, M.; Llobet, R. A deep learning framework to classify breast density with noisy labels regularization. Comput. Methods Programs Biomed. 2022, 221, 106885.
5. Michael, E.; Ma, H.; Li, H.; Kulwa, F.; Li, J. Breast cancer segmentation methods: Current status and future potentials. Biomed Res. Int. 2021, 2021, 9962109.
6. Hazarika, M.; Mahanta, L.B. A New Breast Border Extraction and Contrast Enhancement Technique with Digital Mammogram Images for Improved Detection of Breast Cancer. Asian Pac. J. Cancer Prev. 2018, 19, 2141–2148.
7. Bora, V.B.; Kothari, A.G.; Keskar, A.G. Robust automatic pectoral muscle segmentation from mammograms using texture gradient and Euclidean Distance Regression. J. Digit. Imaging 2016, 29, 115–125.
8. Sansone, M.; Marrone, S.; Di Salvio, G.; Belfiore, M.P.; Gatta, G.; Fusco, R.; Vanore, L.; Zuiani, C.; Grassi, F.; Vietri, M.T.; et al. Comparison between two packages for pectoral muscle removal on mammographic images. Radiol. Medica 2022, 127, 848–856.
9. Larroza, A.; Pérez-Benito, F.J.; Perez-Cortes, J.C.; Román, M.; Pollán, M.; Pérez-Gómez, B.; Salas-Trejo, D.; Casals, M.; Llobet, R. Breast Dense Tissue Segmentation with Noisy Labels: A Hybrid Threshold-Based and Mask-Based Approach. Diagnostics 2022, 12, 1822.
10. Kirillov, A.; Mintun, E.; Ravi, N.; Mao, H.; Rolland, C.; Gustafson, L.; Xiao, T.; Whitehead, S.; Berg, A.C.; Lo, W.Y.; et al. Segment Anything. arXiv 2023, arXiv:2304.02643.
11. Mazurowski, M.A.; Dong, H.; Gu, H.; Yang, J.; Konz, N.; Zhang, Y. Segment anything model for medical image analysis: An experimental study. Med. Image Anal. 2023, 89, 102918.
12. He, S.; Bao, R.; Li, J.; Stout, J.; Bjornerud, A.; Grant, P.E.; Ou, Y. Computer-Vision Benchmark Segment-Anything Model (SAM) in Medical Images: Accuracy in 12 Datasets. arXiv 2023, arXiv:2304.09324.
13. Deng, G.; Zou, K.; Ren, K.; Wang, M.; Yuan, X.; Ying, S.; Fu, H. SAM-U: Multi-box prompts triggered uncertainty estimation for reliable SAM in medical image. arXiv 2023, arXiv:2307.04973.
14. Zhang, K.; Liu, D. Customized segment anything model for medical image segmentation. arXiv 2023, arXiv:2304.13785.
15. Hu, X.; Xu, X.; Shi, Y. How to Efficiently Adapt Large Segmentation Model (SAM) to Medical Images. arXiv 2023, arXiv:2306.13731.
16. Ma, J.; Wang, B. Segment anything in medical images. arXiv 2023, arXiv:2304.12306.
17. Wu, J.; Zhang, Y.; Fu, R.; Fang, H.; Liu, Y.; Wang, Z.; Xu, Y.; Jin, Y. Medical SAM Adapter: Adapting Segment Anything Model for Medical Image Segmentation. arXiv 2023, arXiv:2304.12620.
18. Cheng, J.; Ye, J.; Deng, Z.; Chen, J.; Li, T.; Wang, H.; Su, Y.; Huang, Z.; Chen, J.; Jiang, L.; et al. SAM-Med2D. arXiv 2023, arXiv:2308.16184.
19. Matsoukas, C.; Hernandez, A.B.I.; Liu, Y.; Dembrower, K.; Miranda, G.; Konuk, E.; Haslum, J.F.; Zouzos, A.; Lindholm, P.; Strand, F.; et al. Adding Seemingly Uninformative Labels Helps in Low Data Regimes. arXiv 2020, arXiv:2008.00807.
20. Dembrower, K.; Lindholm, P.; Strand, F. A multi-million mammography image dataset and population-based screening cohort for the training and evaluation of deep neural networks—The cohort of Screen-Aged Women (CSAW). J. Digit. Imaging 2020, 33, 408–413.
21. Moreira, I.C.; Amaral, I.; Domingues, I.; Cardoso, A.; Cardoso, M.J.; Cardoso, J.S. INbreast: Toward a Full-field Digital Mammographic Database. Acad. Radiol. 2012, 19, 236–248.
22. Suckling, J.; Parker, J.; Dance, D.; Astley, S.; Hutt, I. Mammographic Image Analysis Society (MIAS) Database v1.21. Apollo—University of Cambridge Repository. 2015. Available online: http://peipa.essex.ac.uk/info/mias.html (accessed on 11 January 2024).
23. Verboom, S.D.; Caballo, M.; Peters, J.; Gommers, J.; van den Oever, D.; Broeders, M.; Teuwen, J.; Sechopoulos, I. Segmentation Masks Mini-MIAS [Data Set]. 2023. Available online: https://zenodo.org/records/10149914 (accessed on 11 January 2024).
24. Wu, J.; Ji, W.; Fu, H.; Xu, M.; Jin, Y.; Xu, Y. MedSegDiff-V2: Diffusion based Medical Image Segmentation with Transformer. Proc. AAAI Conf. Artif. Intell. 2024, 38, 6030–6038.
25. Jadon, S. A survey of loss functions for semantic segmentation. In Proceedings of the 2020 IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology (CIBCB), Via del Mar, Chile, 27–29 October 2020; pp. 1–7.
26. Müller, D.; Soto-Rey, I.; Kramer, F. Towards a guideline for evaluation metrics in medical image segmentation. BMC Res. Notes 2022, 15, 1–11.
27. Project MONAI Maintainers. MONAI: A PyTorch-based, open-source framework for deep learning in healthcare imaging. arXiv 2020, arXiv:2003.05597.
28. Zhou, K.; Li, W.; Zhao, D. Deep learning-based breast region extraction of mammographic images combining pre-processing methods and semantic segmentation supported by Deeplab v3+. Technol. Health Care 2022, 30, S173–S190.
29. Lbachir, I.A.; Es-Salhi, R.; Daoudi, I.; Tallal, S. A New Mammogram Preprocessing Method for Computer-Aided Diagnosis Systems. In Proceedings of the 2017 IEEE/ACS 14th International Conference on Computer Systems and Applications (AICCSA), Hammamet, Tunisia, 30 October–3 November 2017; pp. 166–171.
30. Taghanaki, S.A.; Liu, Y.; Miles, B.; Hamarneh, G. Geometry-Based Pectoral Muscle Segmentation from MLO Mammogram Views. IEEE Trans. Biomed. Eng. 2017, 64, 2662–2671.
31. Rahman, M.A.; Jha, R.K.; Gupta, A.K. Gabor phase response based scheme for accurate pectoral muscle boundary detection. IET Image Process. 2019, 13, 771–778.
Dataset | Split | #MLO | #CC | Total |
---|---|---|---|---|
GVA | Train | 1052 | 1190 | 2242 |
GVA | Validation | 111 | 139 | 250 |
GVA | Test | 395 | 449 | 844 |
IMIM | Test | 0 | 881 | 881 |
CSAW-S | Test | 126 | 144 | 270 |
InBreast | Test | 200 | 0 | 200 |
Mini-MIAS | Test | 322 | 0 | 322 |
Dataset | Model | DSC | IoU | HD | ASD
---|---|---|---|---|---
GVA | Thresholding | 95.81 ± 4.80 | 92.31 ± 7.58 | 101.87 ± 91.44 | 10.21 ± 16.66
GVA | MedSegDiff | 98.40 ± 1.18 | 96.88 ± 2.17 | 5.96 ± 7.58 | 0.96 ± 0.89
GVA | SAM | 95.32 ± 5.73 | 91.60 ± 9.80 | 111.70 ± 130.12 | 19.78 ± 40.17
GVA | SAM-Adapter | 99.21 ± 0.87 | 98.45 ± 1.65 | 20.88 ± 27.95 | 1.85 ± 2.21
GVA | SAM-Breast | 99.37 ± 0.86 | 98.76 ± 1.61 | 18.72 ± 30.19 | 1.64 ± 2.95
IMIM | Thresholding | 97.14 ± 8.00 | 95.32 ± 11.24 | 58.20 ± 100.52 | 13.88 ± 45.56
IMIM | MedSegDiff | 98.91 ± 1.45 | 97.87 ± 2.59 | 5.77 ± 9.14 | 0.77 ± 1.12
IMIM | SAM | 99.22 ± 1.04 | 98.47 ± 1.95 | 42.10 ± 69.11 | 4.14 ± 15.83
IMIM | SAM-Adapter | 99.49 ± 0.59 | 98.98 ± 1.14 | 18.20 ± 21.84 | 1.45 ± 1.90
IMIM | SAM-Breast | 99.54 ± 0.52 | 99.10 ± 1.00 | 15.09 ± 19.31 | 1.27 ± 1.74
CSAW-S | Thresholding | 92.48 ± 8.36 | 86.94 ± 12.30 | 152.50 ± 104.63 | 23.04 ± 21.50
CSAW-S | MedSegDiff | 97.19 ± 3.98 | 94.80 ± 6.74 | 15.26 ± 23.05 | 2.83 ± 7.06
CSAW-S | SAM | 94.26 ± 6.70 | 89.85 ± 11.10 | 129.10 ± 126.87 | 20.39 ± 29.58
CSAW-S | SAM-Adapter | 98.72 ± 1.41 | 97.52 ± 2.60 | 58.15 ± 75.96 | 3.21 ± 3.81
CSAW-S | SAM-Breast | 99.10 ± 0.96 | 98.22 ± 1.83 | 51.89 ± 75.16 | 2.54 ± 3.22
InBreast | Thresholding | 94.70 ± 2.74 | 90.06 ± 4.77 | 146.33 ± 72.53 | 12.66 ± 7.02
InBreast | MedSegDiff | 98.37 ± 1.23 | 96.82 ± 2.30 | 7.56 ± 8.27 | 1.10 ± 0.88
InBreast | SAM | 92.04 ± 5.16 | 85.65 ± 8.48 | 170.89 ± 71.21 | 28.77 ± 16.92
InBreast | SAM-Adapter | 99.05 ± 1.18 | 98.13 ± 2.15 | 30.98 ± 36.24 | 2.59 ± 3.29
InBreast | SAM-Breast | 99.27 ± 0.85 | 98.55 ± 1.62 | 25.37 ± 32.76 | 2.08 ± 2.42
Mini-MIAS | Thresholding | 81.30 ± 7.36 | 69.11 ± 10.05 | 187.58 ± 57.46 | 52.02 ± 13.64
Mini-MIAS | MedSegDiff | 87.81 ± 6.59 | 78.85 ± 9.98 | 46.18 ± 16.24 | 10.79 ± 5.38
Mini-MIAS | SAM | 89.04 ± 6.24 | 80.78 ± 9.61 | 199.71 ± 79.20 | 40.45 ± 26.70
Mini-MIAS | SAM-Adapter | 95.35 ± 4.37 | 91.43 ± 7.43 | 81.66 ± 60.26 | 13.68 ± 12.47
Mini-MIAS | SAM-Breast | 98.07 ± 2.11 | 96.29 ± 3.82 | 46.06 ± 41.70 | 6.55 ± 8.13
Dataset | Model | DSC | IoU | HD | ASD
---|---|---|---|---|---
GVA | Thresholding | 98.17 ± 3.65 | 96.59 ± 4.88 | 42.57 ± 56.90 | 6.14 ± 19.33
GVA | MedSegDiff | 98.73 ± 0.68 | 97.49 ± 1.31 | 4.70 ± 6.96 | 0.77 ± 0.50
GVA | SAM | 99.25 ± 1.39 | 98.54 ± 2.42 | 45.80 ± 104.89 | 6.73 ± 36.58
GVA | SAM-Adapter | 99.49 ± 0.48 | 98.99 ± 0.94 | 15.02 ± 24.52 | 1.21 ± 1.31
GVA | SAM-Breast | 99.60 ± 0.42 | 99.21 ± 0.82 | 12.96 ± 23.97 | 0.96 ± 1.08
IMIM | Thresholding | 97.14 ± 8.00 | 95.32 ± 11.24 | 58.20 ± 100.52 | 13.88 ± 45.56
IMIM | MedSegDiff | 98.91 ± 1.45 | 97.87 ± 2.59 | 5.77 ± 9.14 | 0.77 ± 1.12
IMIM | SAM | 99.22 ± 1.04 | 98.47 ± 1.95 | 42.10 ± 69.11 | 4.14 ± 15.83
IMIM | SAM-Adapter | 99.49 ± 0.59 | 98.98 ± 1.14 | 18.20 ± 21.84 | 1.45 ± 1.90
IMIM | SAM-Breast | 99.54 ± 0.52 | 99.10 ± 1.00 | 15.09 ± 19.31 | 1.27 ± 1.74
CSAW-S | Thresholding | 89.72 ± 9.14 | 82.37 ± 12.57 | 193.76 ± 99.24 | 33.17 ± 21.63
CSAW-S | MedSegDiff | 97.18 ± 4.00 | 94.76 ± 6.66 | 16.13 ± 21.39 | 2.59 ± 5.91
CSAW-S | SAM | 91.59 ± 7.05 | 85.21 ± 11.37 | 167.77 ± 108.24 | 26.39 ± 22.34
CSAW-S | SAM-Adapter | 98.56 ± 1.39 | 97.20 ± 2.54 | 79.14 ± 86.93 | 3.53 ± 3.71
CSAW-S | SAM-Breast | 98.99 ± 0.80 | 98.02 ± 1.53 | 71.05 ± 84.74 | 2.86 ± 3.23
Dataset | Model | DSC | IoU | HD | ASD
---|---|---|---|---|---
GVA | Thresholding | 93.13 ± 4.54 | 87.44 ± 7.15 | 169.28 ± 75.13 | 14.84 ± 11.33
GVA | MedSegDiff | 98.03 ± 1.49 | 96.18 ± 2.69 | 7.38 ± 8.01 | 1.17 ± 1.15
GVA | SAM | 90.86 ± 5.53 | 83.70 ± 9.03 | 186.62 ± 114.64 | 34.62 ± 38.94
GVA | SAM-Adapter | 98.90 ± 1.08 | 97.84 ± 2.03 | 27.56 ± 30.07 | 2.57 ± 2.74
GVA | SAM-Breast | 99.11 ± 1.11 | 98.25 ± 2.07 | 25.89 ± 34.87 | 2.40 ± 4.02
CSAW-S | Thresholding | 95.62 ± 6.01 | 92.16 ± 9.66 | 105.36 ± 89.95 | 11.47 ± 14.36
CSAW-S | MedSegDiff | 97.22 ± 3.97 | 94.84 ± 6.86 | 14.25 ± 24.87 | 3.11 ± 8.19
CSAW-S | SAM | 97.32 ± 4.68 | 95.15 ± 8.03 | 84.91 ± 132.46 | 13.53 ± 34.98
CSAW-S | SAM-Adapter | 98.91 ± 1.41 | 97.88 ± 2.62 | 34.15 ± 51.81 | 2.83 ± 3.90
CSAW-S | SAM-Breast | 99.21 ± 1.11 | 98.46 ± 2.09 | 30.00 ± 55.11 | 2.17 ± 3.17
InBreast | Thresholding | 94.70 ± 2.74 | 90.06 ± 4.77 | 146.33 ± 72.53 | 12.66 ± 7.02
InBreast | MedSegDiff | 98.37 ± 1.23 | 96.82 ± 2.30 | 7.56 ± 8.27 | 1.10 ± 0.88
InBreast | SAM | 92.04 ± 5.16 | 85.65 ± 8.48 | 170.89 ± 71.21 | 28.77 ± 16.92
InBreast | SAM-Adapter | 99.05 ± 1.18 | 98.13 ± 2.15 | 30.98 ± 36.24 | 2.59 ± 3.29
InBreast | SAM-Breast | 99.27 ± 0.85 | 98.55 ± 1.62 | 25.37 ± 32.76 | 2.08 ± 2.42
Mini-MIAS | Thresholding | 81.30 ± 7.36 | 69.11 ± 10.05 | 187.58 ± 57.46 | 52.02 ± 13.64
Mini-MIAS | MedSegDiff | 87.81 ± 6.59 | 78.85 ± 9.98 | 46.18 ± 16.24 | 10.79 ± 5.38
Mini-MIAS | SAM | 89.04 ± 6.24 | 80.78 ± 9.61 | 199.71 ± 79.20 | 40.45 ± 26.70
Mini-MIAS | SAM-Adapter | 95.34 ± 4.38 | 91.41 ± 7.44 | 81.86 ± 60.24 | 13.71 ± 12.47
Mini-MIAS | SAM-Breast | 98.07 ± 2.11 | 96.29 ± 3.82 | 46.06 ± 41.10 | 6.55 ± 8.13
Dataset | Device | DSC | IoU | HD | ASD
---|---|---|---|---|---
GVA | 01-FUJIFILM | 99.12 ± 1.64 | 98.30 ± 2.98 | 19.98 ± 26.20 | 1.94 ± 3.07
GVA | 02-FUJIFILM | 99.59 ± 0.44 | 99.19 ± 0.86 | 10.41 ± 14.68 | 0.96 ± 1.07
GVA | 04-IMS Giotto | 99.37 ± 1.08 | 98.78 ± 2.03 | 12.85 ± 10.22 | 1.25 ± 1.32
GVA | 05-FUJIFILM | 99.55 ± 0.34 | 99.10 ± 0.67 | 13.98 ± 26.48 | 1.05 ± 1.06
GVA | 07-HOLOGIC | 99.27 ± 1.12 | 98.58 ± 2.14 | 27.21 ± 62.35 | 3.95 ± 10.97
GVA | 10-SIEMENS | 99.06 ± 1.26 | 98.16 ± 2.33 | 27.94 ± 33.87 | 2.48 ± 4.33
GVA | 11-FUJIFILM | 99.44 ± 0.54 | 98.90 ± 1.05 | 14.40 ± 24.22 | 1.37 ± 1.94
GVA | 13-FUJIFILM | 99.43 ± 0.84 | 98.87 ± 1.59 | 19.72 ± 40.30 | 1.54 ± 3.47
GVA | 18-IMS Giotto | 99.14 ± 0.67 | 98.30 ± 1.30 | 19.43 ± 22.83 | 2.16 ± 2.02
GVA | 20-Unknown | 99.13 ± 0.73 | 98.29 ± 1.41 | 28.27 ± 41.29 | 2.04 ± 2.34
GVA | 99-GE | 99.46 ± 0.38 | 98.93 ± 0.76 | 17.72 ± 32.02 | 1.52 ± 1.10
IMIM | FUJIFILM | 99.34 ± 0.47 | 98.69 ± 0.92 | 16.25 ± 18.37 | 1.86 ± 1.55
IMIM | GE | 99.28 ± 0.71 | 98.59 ± 1.38 | 23.98 ± 30.83 | 1.97 ± 2.91
IMIM | HOLOGIC | 99.31 ± 0.55 | 98.64 ± 1.05 | 14.70 ± 17.75 | 1.71 ± 1.61
Dataset | Method | Year | Images | Accuracy
---|---|---|---|---
InBreast | Lbachir et al. [29] | 2017 | 40 | 90%
InBreast | Taghanaki et al. [30] | 2017 | 197 | 96%
InBreast | Rahman et al. [31] | 2019 | 200 | 94.50%
InBreast | Zhou et al. [28] | 2022 | 410 | 99.12%
InBreast | SAM-Breast (proposed) | 2024 | 200 | 99.56%
Mini-MIAS | Lbachir et al. [29] | 2017 | 322 | 98.75%
Mini-MIAS | Taghanaki et al. [30] | 2017 | 322 | 95%
Mini-MIAS | Rahman et al. [31] | 2019 | 200 | 97.50%
Mini-MIAS | Zhou et al. [28] | 2022 | 102 | 98.98%
Mini-MIAS | SAM-Breast (proposed) | 2024 | 322 | 98.76%
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Larroza, A.; Pérez-Benito, F.J.; Tendero, R.; Perez-Cortes, J.C.; Román, M.; Llobet, R. Breast Delineation in Full-Field Digital Mammography Using the Segment Anything Model. Diagnostics 2024, 14, 1015. https://doi.org/10.3390/diagnostics14101015