Deep Learning Based Detection of Missing Tooth Regions for Dental Implant Planning in Panoramic Radiographic Images
Round 1
Reviewer 1 Report
Dear Authors,
The aim of the article 'Deep Learning based Missing Tooth Regions Detection for Dental Implant Planning in Panoramic Radiographic Images' was to propose a method for detecting missing tooth regions for implant placement planning in panoramic radiographic images.
The examination is based on the analysis of a panoramic radiograph. Although there are few studies on applying deep neural networks to implant placement planning, this study is not revealing. Planning using cone beam computed tomography (CBCT) is the standard of contemporary implantology. Current treatment should aim to maximize planning through the use of CBCT and surgical guides.
It does not make sense to promote an inferior diagnostic tool when CBCT is available.
This summary is a mistake:
'For automatic implant placement planning, future work is required to develop a method that creates the position, axis, and size of the implant for the missing tooth regions. Through this method, we expect it to support the diagnosis of the clinician. Furthermore, we expect implant placement to be possible using only panoramic radiographic images without CBCT.'
To sum up, the article must be rejected.
Author Response
Please see the attachment.
We sincerely thank you for your thorough review. We are happy that our study could be improved through your comments. Along with our answers to your comments, we did our best to address the points you raised, and we hope that this is sufficient. Thank you for taking the time to review our paper.
Author Response File: Author Response.docx
Reviewer 2 Report
This manuscript provides a clear and reproducible method to improve case study before implant placement. The algorithm can in the future be widely used to make OPTs a valuable method of analysis also in implant surgery. I recommend the acceptance of this work in the way it is reported.
Author Response
Please see the attachment.
We sincerely thank you for your thorough review. We are happy that our study could be improved through your comments. The contents of the paper were revised in order to explain our research clearly, and we hope that the content is sufficient. Thank you for taking the time to review our paper.
Author Response File: Author Response.docx
Reviewer 3 Report
The authors have reported a study on the detection of missing tooth regions based on deep-learning-based object detection, which achieves an mAP of 92.14%. In general, the main conclusions presented in the paper are supported by the figures and supporting text. However, to meet the journal's quality standards, the following comments need to be addressed.
- Page 2: “However, existing automatic methods using deep learning can only detect the missing region of the specific tooth and have limitations in detecting multiple missing tooth regions simultaneously.” It is not quite clear why this is the case. If object detection models such as R-CNN, SSD, or YOLO can be employed, they should be able to detect multiple classes in the same image (a minimal multi-class inference sketch is included after this list). The authors should clarify this point with references.
- Page 2: “Deep learning is widely used for automation in various fields[19–22]”. However, the authors did not cover different object detection models, which are the core theme of their work. It would be necessary to discuss different CNN-based object detection models [see: Neural Comput & Applic (2022), https://doi.org/10.1007/s00521-021-06651-x; Sensors 2021, 21(9), 3263, https://doi.org/10.3390/s21093263; AI 2021, 2(3), 413-428, https://doi.org/10.3390/ai2030026]. Hence, this should be addressed in the introduction.
- Page 3: “As a result, the dataset is split into train: val: test = 348: 35: 72”. It would be better to provide percentages (the conversion is worked out after this list). Do the authors really think the volume of data is sufficient to train the model properly, considering there are 32 classes?
- Page 3: “The segmentation model’s backbone is a ResNet-101”. It would be better to provide a schematic of the network architecture for non-specialist readers.
- Section 2.4: what about precision, recall, and F1 score? (The standard definitions are recalled after this list for reference.)
- Did the authors employ any data augmentation methods before training? If so, they should be mentioned.
- The authors should provide the dataset arrangements, such as the train-test split and cross-validation (if applicable).
- Also, all hyperparameters (learning rate, mini-batch size, number of epochs, optimizer) and the model complexity should be reported.
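Regarding the multi-class point above, the following is a minimal sketch, not the authors' code: it uses a torchvision Mask R-CNN with a ResNet-50 FPN backbone as an illustrative stand-in (the manuscript's model reportedly uses a ResNet-101 backbone), and the class count of 33 (32 tooth positions plus background) is an assumption. It only illustrates that an off-the-shelf detector returns a class label per detection, so several classes can in principle be reported for one image in a single pass.

```python
# Minimal sketch (not the authors' code): an off-the-shelf instance detector
# assigns a class label to every detection, so multiple classes can be
# reported for a single image in one forward pass.
import torch
import torchvision

# Mask R-CNN with a ResNet-50 FPN backbone as an illustrative stand-in;
# the manuscript's segmentation model reportedly uses ResNet-101.
# num_classes = 33 assumes 32 tooth positions plus background.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(num_classes=33)
model.eval()

image = torch.rand(3, 512, 1024)  # stand-in for a panoramic radiograph tensor
with torch.no_grad():
    pred = model([image])[0]  # dict with "boxes", "labels", "scores", "masks"

# Each detection carries its own label, so several different tooth-position
# classes could be detected simultaneously in the same radiograph.
for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
    if score > 0.5:
        print(int(label), float(score), [round(v, 1) for v in box.tolist()])
```

The reviewer's question is therefore why the existing methods cited in the manuscript cannot exploit this per-detection labelling to find multiple missing tooth regions at once.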
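For reference on the split and metrics comments above: the reported split 348:35:72 corresponds to roughly 76.5% / 7.7% / 15.8% of the 455 images, and the detection metrics asked about are conventionally defined as follows (a general reminder, not taken from the manuscript):

```latex
\[
\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad
\mathrm{Recall} = \frac{TP}{TP + FN}, \qquad
F_1 = \frac{2\,\mathrm{Precision}\cdot\mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}
\]
\[
\mathrm{AP}_c = \int_0^1 p_c(r)\,dr, \qquad
\mathrm{mAP} = \frac{1}{|C|}\sum_{c \in C}\mathrm{AP}_c
\]
```

Here p_c(r) is the precision-recall curve for class c, so the mAP of 92.14% quoted above is the per-class average precision averaged over all classes C.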
Author Response
Please see the attachment.
We sincerely thank you for your thorough review. We are happy that our study could be improved through your comments. Along with our answers to your comments, we did our best to address the points you raised, and we hope that this is sufficient. Thank you for taking the time to review our paper.
Author Response File: Author Response.docx
Reviewer 4 Report
I would like to congratulate the authors for conducting the present study, which addresses a model to detect areas suitable for implant placement.
Here go a few of my concerns:
TITLE
The title brings some confusion or has grammar issues, especially the “Deep learning based missing…” part. This connection seems wrong somehow.
ABSTRACT
May the authors provide the meaning of the “mAP” abbreviation?
KEYWORDS
I recommend that the authors add the term “dental implant”.
INTRODUCTION
The authors are giving the aim of the study in the first paragraph. The aim should appear only in the last sentence(s) of the Introduction. Please reformulate and eliminate the aim sentence from the first paragraph. Aims also appear at the end of the third and fourth paragraphs. Please avoid this and combine the aim into a single final sentence/paragraph of the Introduction.
Regarding the sentence: “Panoramic x-ray is one of the most commonly used diagnostic tools in modern dentistry along with CBCT[10–13]. Panoramic x-ray has the advantage of being less expensive and time efficient than other tools, such as CBCT. Many dental studies have been conducted to diagnose dental disease using panoramic radiographic imaging[14–16]. However, implant placement is currently diagnosed using both a panoramic radiographic image and a CBCT image[9,17].” The authors are repeating themselves a lot. Try to say the same thing in a single sentence.
“Due to the lack of automated technology, implant placement was traditionally planned by clinicians through panoramic radiographic images and CBCT images.”… once again, a repetition of the last comment.
The authors try to debate “deep learning” but never define it, and it may not be clear to all readers what it means. Try to define it (“Deep learning may be defined as…”).
The aim paragraph (fifth paragraph) and the three bullet points are providing part of the results. The authors are missing the point of the aim sentence. Try the following: “The aim of the present study is to… (and complete).”
METHODS
Is it possible to know the panoramic device brand and settings? Were all examinations conducted on the same device?
RESULTS
May the authors define “mAP”?
DISCUSSION
The authors are not being clear enough on the real need to develop this deep learning process, especially because the figures provided are quite straight to the point. Even a minimally trained professional can understand where there are spaces to place an implant or not. What are the major benefits of this process, which appears much more complicated and time-consuming than a pure clinical analysis? How do clinicians and patients benefit after all?
May the authors debate the external validity and generalization?
May the authors debate the study strengths and limitations?
The following sentence should be removed, since the 3D nature of CBCT will remain a must in implantology planning: “Furthermore, we expect implant placement to be possible using only panoramic radiographic images without CBCT.”
REFERENCES
According to the journal's instructions for authors, the references should use the abbreviated journal name, not the full name.
Author Response
Please see the attachment.
We sincerely thank you for your thorough review. We are happy that our study could be improved through your comments. Along with our answers to your comments, we did our best to address the points you raised, and we hope that this is sufficient. Thank you for taking the time to review our paper.
Author Response File: Author Response.docx
Round 2
Reviewer 1 Report
Possible acceptance; I leave the decision to the editor.
Author Response
We sincerely thank you for your review. We are happy that our study could be improved through your comments. We also believe that your review will be very useful in future studies. Thank you for taking the time to review our paper.
Reviewer 3 Report
Although most of the reviewer's comments were addressed satisfactorily, the authors should carefully incorporate the reviewer's previous comment #2. The proper background and current state of the art are still missing. Hence, they should be incorporated and the suggested references should be mentioned. The manuscript cannot be accepted in its current form.
Author Response
We sincerely thank you for your review. We are happy that our study could be improved through your comments. We also believe that your review will be very useful in future studies.
As you mentioned in the previous comment #2, we did not cover different object detection models in our paper; therefore, we added the following paper in the last revision.
- Roy, A. M., & Bhaduri, J. (2021). A deep learning enabled multi-class plant disease detection model based on computer vision. AI, 2(3), 413-428.
However, we noticed that there were no references to state-of-the-art research on instance segmentation and object detection. Therefore, we added the following state-of-the-art papers to the reference list to provide the proper background.
- Xu, M., Zhang, Z., Hu, H., Wang, J., Wang, L., Wei, F., ... & Liu, Z. (2021). End-to-End Semi-Supervised Object Detection with Soft Teacher. arXiv preprint arXiv:2106.09018.
- Dai, X., Chen, Y., Xiao, B., Chen, D., Liu, M., Yuan, L., & Zhang, L. (2021). Dynamic Head: Unifying Object Detection Heads with Attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 7373-7382).
- Liu, Z., Hu, H., Lin, Y., Yao, Z., Xie, Z., Wei, Y., ... & Guo, B. (2021). Swin Transformer V2: Scaling Up Capacity and Resolution. arXiv preprint arXiv:2111.09883.
Also, please understand that it is difficult to directly compare our work to other studies, because the dataset we used was built by us and our work is a newly defined task.
Furthermore, please understand that the papers suggested in the previous comment #2 could not be included in the reference list due to the decision of the editorial board.
Once again, thank you for taking the time to review our study.
Reviewer 4 Report
Dear authors, I have no more concerns. Thank you.
Author Response
We sincerely thank you for your review. We are happy that our study could be improved through your comments. We also believe that your review will be very useful in future studies. Thank you for taking the time to review our paper.