A Survey of Comics Research in Computer Science
Abstract
1. Introduction
1.1. Comics and Society
1.2. Research and Open Problems
- Content analysis: extracting information from raw images and building structured descriptions, from low level to high level.
- Content generation and adaptation: comics can serve as input or output for creating or modifying content. Content conversion and augmentation are possible from comics to comics, from comics to other media, and from other media to comics.
- User interaction: analyzing human reading behavior and internal states (emotions, interests, etc.) from comic content and, reciprocally, analyzing comic content from human behavior and interactions.
2. What Is Comics?
3. Research on Content Analysis
3.1. Textures, Screentones, and Structural Lines
3.2. Text
3.3. Faces and Pose
3.4. Balloons
3.5. Panel
3.6. High Level Understanding
3.7. Applications
3.8. Conclusions
4. Content Generation
4.1. Vectorization
4.2. Colorization
4.3. Comics and Character Generation
4.4. Animation
4.5. Media Conversion
4.6. Content Adaptation
4.7. Conclusions
5. User Interaction
5.1. Eye Gaze and Reading Behavior
5.2. Emotion
5.3. Visualization and Interaction
5.4. Education
5.5. Conclusions
6. Available Materials
6.1. Tools
6.2. Datasets
7. General Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Augereau, O.; Iwata, M.; Kise, K. A Survey of Comics Research in Computer Science. J. Imaging 2018, 4, 87. https://doi.org/10.3390/jimaging4070087