Article

An Effective Authentication Algorithm for Polygonal Models with High Embedding Capacity

1 Department of M-Commerce and Multimedia Applications, Asia University, No. 500, Lioufeng Rd., Wufeng District, Taichung 41354, Taiwan
2 Department of Medical Research, China Medical University Hospital, China Medical University, No. 2, Yude Road, North District, Taichung 40447, Taiwan
* Author to whom correspondence should be addressed.
This paper is an extended version of our paper published in the 2018 IEEE International Conference on Applied System Innovation, entitled “An Improved Region-based Tampered Detection Algorithm for 3D Polygonal Models”.
Symmetry 2018, 10(7), 290; https://doi.org/10.3390/sym10070290
Submission received: 20 June 2018 / Revised: 13 July 2018 / Accepted: 17 July 2018 / Published: 18 July 2018

Abstract

Fragile watermarking algorithms for 3D models have attracted extensive research attention in recent years. Although many studies have proposed solutions to this problem, low embedding capacity and inaccurately located tampered regions persist. Higher embedding capacity reduces the probability of false alarms during authentication, while accurate tamper localization detects all modified vertices together with fewer unaltered ones. This study proposes three strategies to enhance the embedding capacity and detection accuracy of our previous algorithm. First, the modified message-digit substitution table increases the embedding capacity by 11.5%. Second, the modified embedding ratio generation method can be integrated to raise the embedding capacity by about 47.74%. Finally, the strategy of adopting a reduced number of reference vertices for authentication code generation, combined with the two strategies above, improves the embedding capacity by up to 123.74%. Extensive experiments show that the proposed algorithm achieves superior performance in terms of embedding capacity and tamper localization accuracy. Furthermore, the distortion between the original and marked models is small.

1. Introduction

The rise of the Internet has changed human behavior significantly. With advanced network technology, multimedia content can be rapidly shared and dispersed all over the world. However, the Internet also breeds new information security problems, such as piracy, unauthorized reproduction, and tampering. Copyright marking algorithms [1] can prevent these problems, and robust watermarking and fragile watermarking are their two main branches, distinguished by their applications.
Robust watermarking algorithms [2,3,4] can detect copyright information and protect intellectual property rights even when the protected multimedia content undergoes various malicious attacks. In contrast, fragile watermarking algorithms [5,6,7] can effectively detect and locate tampered regions even when the protected multimedia content has only minimal modifications. Some fragile watermarking algorithms even support self-recovery. Recently, Bhowmik and Feng [8] proposed a distributed and tamper-proof media transaction framework. Its unique watermark contains a cryptographic hash of the transaction histories in the blockchain and an image hash preserving retrievable original media content. The first part of the watermark is used to retrieve the historical transaction trail, while the latter part is used to identify tampered regions. Consequently, copyright marking remains a research topic that deserves attention.
Whereas fragile watermarking has been extensively studied for images [9,10,11,12,13,14], videos [15,16], and audio [17,18], few studies have discussed its application to 3D models. According to the granularity of tamper localization, 3D fragile watermarking algorithms can be classified as either vertex-based or region-based. A vertex-based algorithm can locate an individual modified vertex, whereas a region-based one can only locate a rough region containing some unaltered vertices. However, a region-based fragile watermarking algorithm can also detect modifications of the topological relationships between vertices.
The first fragile watermarking algorithm for 3D polygonal models was proposed by Yeo and Yeung [19]. They iteratively moved vertices to new positions until the location and value indexes, calculated from pre-defined hash functions, became consistent. Lin et al. [20] proposed a local mesh parametrization approach that perturbs the coordinates of invalid vertices, eliminating the causality problem in Yeo and Yeung's method. Molaei et al. [21] proposed a QIM-based fragile watermarking algorithm for 3D triangular mesh models, in which watermark data are embedded at the midpoints of the three sides of a marked triangle in the spherical coordinate system. The recognition accuracy and embedding capacity can be enhanced by increasing the number of marked triangles. To address low embedding capacity, Tsai et al. [6] proposed a low-complexity, high-capacity, distortion-controllable, region-based fragile watermarking algorithm with a high embedding rate for 3D polygonal models. A vertex traversal scheme subdivides the input polygonal model into several overlapping embedding blocks, each with one embeddable vertex and its reference vertices. The authentication code is generated by feeding the local geometrical properties of each block into a hash function. The embeddable vertex is then embedded with the authentication code, and its coordinates are modified based on a simple message-digit substitution scheme.
Tsai et al.'s algorithm achieved a higher embedding rate and higher embedding capacity than previous methods. However, the problems of low embedding capacity and inaccurately located tampered regions were not fully resolved. The proposed algorithm comprehensively considers three issues not discussed in our previous scheme and thus offers three advantages over the previous method: higher embedding capacity, a higher embedding rate, and more accurate tamper localization. In this paper, we employ modified message grabbing and modified embedding ratio generation methods to resolve the problem of low embedding capacity. In addition, we adopt a reduced number of reference vertices for authentication code generation, significantly improving the embedding capacity and embedding rate; the accuracy of tamper localization is consequently raised as well. Finally, extensive experimental results demonstrate the feasibility of the proposed algorithm.
The rest of this paper is organized as follows. Section 2 reviews Tsai et al.'s algorithm. Section 3 gives detailed descriptions of our three strategies for improving the embedding capacity. Section 4 presents the experimental evaluations. Finally, conclusions and future work are given in Section 5.

2. Tsai et al.’s Proposed Algorithm

This section reviews Tsai et al.'s algorithm, whose flowchart is shown in Figure 1. The algorithm takes as input a polygonal model with vertices and their topological information and constructs a vertex neighboring table in a preprocessing step. Based on an iterative vertex traversal mechanism, the polygonal model is subdivided into many overlapping embedding blocks, each with one embeddable vertex and its reference vertices. Each block is the basic element of the authentication code generation, embedding, reconstruction, and tamper localization processes. Taking the embedding block in Figure 2 as an example, V1 is the embeddable vertex, R1, R2, R3, and R4 are the reference vertices of V1, and G1 is the center of the reference vertices of V1.
For each embedding block, the local geometrical property describing the topological relationship is fed into a hash function to obtain the authentication code. The authors then calculate the embedding ratio between the distance from the embeddable vertex to the center of its reference vertices and the summation of all sides of the reference polygon, which carries the embedded message. Continuing the example in Figure 2, the embedding ratio A_V1 is defined in (1), where S_V1 is the summation of the side lengths |R1R2|, |R2R3|, |R3R4|, and |R4R1|.
$$A_{V_1} = \begin{cases} \overline{V_1 G_1} / S_{V_1}, & \text{if } \overline{V_1 G_1} \le S_{V_1} \\ S_{V_1} / \overline{V_1 G_1}, & \text{if } \overline{V_1 G_1} > S_{V_1} \end{cases} \tag{1}$$
Finally, parts of the authentication code are selected and embedded into the above ratio. The data-embedded ratio A′_V1 is derived by modifying the next few decimal digits after the first non-zero digit of A_V1 using the simple message-digit substitution mechanism shown in Table 1. The data-embedded vertex is then obtained by moving along the direction of V1G1 according to the data-embedded ratio. Figure 3 shows a modification example, where V′1 is the vertex with the authentication code embedded. Experimental results demonstrate the feasibility of the algorithm.
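To make the ratio computation of Equation (1) concrete, the following sketch computes the embedding ratio and the reference center for one block. This is illustrative Python (the paper's implementation is in Visual C++), and the function names are ours, not the authors':

```python
import math

def embedding_ratio(v1, g1, s_v1):
    """Embedding ratio of Eq. (1): the distance from the embeddable
    vertex V1 to the center G1 of its reference vertices, normalized by
    the perimeter S_V1 of the reference polygon (or its inverse, so the
    ratio always falls in (0, 1])."""
    d = math.dist(v1, g1)  # |V1 G1|
    return d / s_v1 if d <= s_v1 else s_v1 / d

def reference_center(refs):
    """Center G1 of the reference vertices (arithmetic mean)."""
    n = len(refs)
    return tuple(sum(c) / n for c in zip(*refs))
```

For example, with V1 = (0, 0, 0), G1 = (3, 4, 0), and a reference-polygon perimeter of 20, the ratio is 5/20 = 0.25; if |V1G1| exceeded S_V1, the reciprocal would be used instead.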

3. The Proposed Algorithm

This section describes three improvements that provide higher embedding capacity than Tsai et al.'s algorithm: modified message grabbing, modified embedding ratio generation [22], and a reduced number of reference vertices for authentication code generation. The proposed algorithm also supports a higher embedding rate and more accurate tamper localization.

3.1. Modified Message Grabbing Method

The authentication code embedding process in Tsai et al.'s algorithm uses the simple message-digit substitution scheme shown in Table 1 to modify the embedding ratio; the distance from each embeddable vertex to the center of its reference vertices is modified accordingly. A three-bit authentication code is grabbed at a time, and one more bit is grabbed if the three-bit code equals 110 or 111. In our tests, the embedding capacity rises effectively if a four-bit authentication code is grabbed first and the last bit is left for the next iteration when the grabbed four-bit binary code is larger than one decimal digit (i.e., larger than nine). Table 2 shows our message-digit substitution table, in which a struck-out bit means that the bit is left for the next iteration because the decimal value of the grabbed four-bit binary code is larger than nine.
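Under one consistent reading of this grabbing rule (our interpretation; the paper specifies the rule only through Table 2 and the prose above), the digit stream can be generated as follows, with four-bit values above nine mapping to digits 2–7 and returning their last bit to the stream:

```python
def grab_digits(bits):
    """Map an authentication-code bitstream to decimal digits per the
    proposed Table 2: grab four bits; values 0-9 become that digit and
    consume all four bits, while values 10-15 (1010-1111) become digits
    2-7 (value - 8) and leave the last grabbed bit for the next round."""
    digits, i = [], 0
    while i + 4 <= len(bits):
        value = int(bits[i:i + 4], 2)
        if value <= 9:
            digits.append(value)
            i += 4
        else:
            digits.append(value - 8)  # 1010 -> 2, ..., 1111 -> 7
            i += 3                    # last bit deferred to next iteration
    return digits
```

On average this consumes 4 bits for 10 of the 16 patterns and 3 bits for the other 6, i.e., 3.625 bits per digit versus 3.25 for Table 1, which matches the reported 11.5% capacity gain.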

3.2. Modified Embedding Ratio Generation Method

From the above message-digit substitution table, one decimal digit can carry a three- to four-bit authentication code. However, the experimental results of Tsai et al.'s algorithm show that the number of usable decimal digits is mostly three. This means most embedding ratios have the form 0.0XEEEN, where X is the first non-zero digit, E is a decimal digit that can carry the authentication code, and N is a digit left without embedded data to avoid extraction errors. If an embedding ratio of the form 0.0XEEEN can be changed to 0.XEEEEN, one more decimal digit becomes available, further raising the embedding capacity.
We therefore modify the embedding ratio generation method in (1), replacing the summation of all side lengths S_V1 with the maximum, minimum, or average side length (see the blue lines in Figure 4) between the reference vertices of each embeddable vertex. The experimental results show that the embedding ratio using the minimum side length achieves the highest embedding capacity.
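A sketch of this variant (illustrative Python, with the side lengths between the reference vertices passed in precomputed):

```python
import math

def embedding_ratio_min_side(v1, g1, side_lengths):
    """Variant of Eq. (1) that normalizes |V1 G1| by the minimum side
    length between the reference vertices instead of the perimeter; a
    smaller denominator pushes the ratio toward the 0.X... form, which
    exposes one more decimal digit for embedding."""
    s = min(side_lengths)
    d = math.dist(v1, g1)
    return d / s if d <= s else s / d
```

Replacing `min` with `max` or the arithmetic mean of `side_lengths` gives the MAX and AVE variants compared in Section 4.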

3.3. Reduced Number of Reference Vertices for Authentication Code Generation

Tsai et al.'s algorithm uses all the reference vertices for authentication code generation. Therefore, only 24 to 30 percent of the vertices [6] can carry messages. This improvement adopts a user-defined embedding parameter p between 0 and 1 to control the fraction of reference vertices used for each embeddable vertex. Equation (2) shows the controlling mechanism, where ON_Vi and FN_Vi are the original and final numbers of used reference vertices for the embeddable vertex Vi. We then randomly select FN_Vi reference vertices for the remaining processes.
$$FN_{V_i} = p \times ON_{V_i} \tag{2}$$
However, not every vertex pair in the original input model is connected by an edge. For the example shown in Figure 2, assume that R2, R3, and R4 are the selected reference vertices for the embeddable vertex V1. Comparing Figure 2 with Figure 5, no edge connects the vertices R2 and R4. To allow the subsequent processes to be performed correctly, we construct a 'virtual' edge connecting the two unconnected vertices. G1 then becomes the center of the used reference vertices R2, R3, and R4. Furthermore, the minimum number of used reference vertices is set to three.
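The selection step of Equation (2) can be sketched as follows; the rounding of p × ON and the use of a seeded generator are our assumptions for illustration, as the paper states only the product and the minimum of three:

```python
import random

def select_reference_vertices(refs, p, rng=None):
    """Section 3.3: keep FN = p x ON randomly chosen reference vertices
    (rounding is our assumption; the paper states only Eq. (2)), with a
    minimum of three so a reference center and closed reference polygon
    can still be formed, inserting virtual edges where needed."""
    rng = rng or random.Random(0)
    fn = max(3, round(p * len(refs)))
    return rng.sample(refs, min(fn, len(refs)))
```

For the marked model to be verifiable, the random selection must be reproducible on the extraction side, e.g., driven by a shared key or seed.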

4. Experimental Evaluations

This section presents the experimental results obtained from the twenty-five 3D polygonal models shown in Figure 6, where NV and NF denote the numbers of vertices and faces, respectively. The proposed algorithm was implemented in Microsoft Visual C++ on a personal computer with an Intel Core i7-6700K 4.00 GHz processor and 16 GB RAM. The distortion between the original and marked models was measured by the normalized root-mean-squared error, derived by dividing the root-mean-squared error by the model size.
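The distortion metric can be sketched as follows; taking the bounding-box diagonal as the "model size" is our assumption, since the paper does not define the normalization term precisely:

```python
import math

def normalized_rmse(original, marked):
    """RMSE of per-vertex displacement divided by the model size, here
    taken as the bounding-box diagonal of the original model (an
    assumption; the paper only says RMSE is divided by model size)."""
    n = len(original)
    rmse = math.sqrt(sum(math.dist(a, b) ** 2
                         for a, b in zip(original, marked)) / n)
    lo = [min(c) for c in zip(*original)]
    hi = [max(c) for c in zip(*original)]
    return rmse / math.dist(lo, hi)
```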
First, this section presents the embedding capacity comparison of each improved strategy against Tsai et al.'s algorithm. We also show the model distortion for the different embedding ratio generation methods. Finally, we present the tamper detection results to demonstrate the feasibility of the proposed algorithm.
Table 3 shows the embedding capacity comparison under different embedding thresholds between Tsai et al.'s and our message grabbing method. The experimental results show that our message grabbing method effectively raises the embedding capacity, by more than 11.5% on average for all embedding thresholds.
Table 4 shows the embedding capacity comparison for the different embedding ratio generation methods, including the maximum (MAX), minimum (MIN), and average (AVE) of all the side lengths, with the embedding threshold T = 1. The embedding capacity using the summation (SUM) method for each model is shown in Table 3. Clearly, the embedding ratio generation method using the minimum side length yields the highest embedding capacity, improving by about 47.74% on average when combined with our modified message grabbing method. From the experimental results, we also found that the highest embedding capacity per embeddable vertex in each 3D polygonal model is between 14 and 16 bits: four decimal digits of the calculated embedding ratio are used for data embedding, each carrying three or four bits. Table 5 shows the model distortion for each test model under the different embedding ratio generation methods. The model distortion using the MIN method is only 0.009% of the model size, on average.
This subsection shows the experimental results of adopting a reduced number of reference vertices during authentication code generation. When not all reference vertices are used, the residual unused reference vertices may become embeddable ones in later iterations. Because the number of embeddable vertices increases, the total embedding capacity can be effectively raised. Table 6 shows the embedding rate under different embedding parameters for each test model; the embedding rate rises from 23.93~31.25% to 35.32~42.80%. Table 7, Table 8 and Table 9 show the embedding capacity comparison for each model under different embedding parameters. A smaller embedding parameter p means that a smaller fraction of reference vertices is used, so the total embedding capacity increases with more embeddable vertices. Using the minimum side length, the average improvement rises from 47.74% (see Table 4) to 94.13%, 121.37%, and 123.74% under the different embedding parameters.
Finally, Table 10 shows the number of detected suspicious vertices for each model under different embedding parameters. We randomly add or subtract 0.01% of the length, width, and height of the bounding volume of the input model to or from the x, y, and z coordinates, respectively, of 50 vertices. The proposed algorithm is then used to perform tamper localization. Recall that a region-based tamper detection algorithm can only locate a rough region with some unaltered vertices inside; therefore, any altered vertex in an embedding block may cause all vertices within the block to be regarded as suspicious. As the embedding parameter decreases, the embedding block becomes smaller with fewer vertices, so the number of suspicious vertices decreases and the accuracy of the located tampered region rises. The only exception is between embedding parameters 0.50 and 0.25, because their numbers of reference vertices per embeddable vertex are similar.
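The tamper simulation described above can be sketched as follows (illustrative Python; the seeded generator and helper name are ours):

```python
import random

def perturb_vertices(vertices, k=50, eps=0.0001, rng=None):
    """Tamper simulation of Section 4: for k randomly chosen vertices,
    add or subtract 0.01% (eps) of the bounding-box length, width, and
    height to the x, y, and z coordinates, respectively."""
    rng = rng or random.Random(1)
    # Bounding-box extent along each axis.
    dims = [max(c) - min(c) for c in zip(*vertices)]
    out = list(vertices)
    for i in rng.sample(range(len(out)), k):
        out[i] = tuple(c + rng.choice((-1.0, 1.0)) * eps * d
                       for c, d in zip(out[i], dims))
    return out
```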

5. Conclusions and Future Studies

This paper proposed three strategies to improve Tsai et al.'s algorithm. First, we modified the message grabbing method to raise the embedding capacity by more than 11.5%. Second, we modified Tsai et al.'s embedding ratio generation method, increasing the embedding capacity by 47.74%. Finally, the strategy of adopting a reduced number of reference vertices for authentication code generation, combined with the two strategies above, improves the embedding capacity by up to 123.74%. The experimental results demonstrate the feasibility of the proposed algorithm, with higher embedding capacity, a higher embedding rate, and accurate tamper localization. Furthermore, the distortion between the original and marked models is small.
Future studies could extend the proposed algorithm to point geometries without topological relationships between vertices. Providing self-recovery ability is another important research issue to be explored.

Author Contributions

Y.-Y.T. supervised the whole work, designed the experiments, analyzed the performance of the proposed idea, and wrote the paper. Y.-S.T. and C.-C.C. performed the experimental simulations and collected the results.

Funding

This research was funded by the Ministry of Science and Technology of Taiwan under grant numbers MOST 106-2221-E-468-022, MOST 106-2218-E-468-001, and MOST 106-2632-E-468-003.

Acknowledgments

The authors would like to thank the editor and three anonymous reviewers for their constructive suggestions in improving the paper significantly.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Petitcolas, F.A.P.; Anderson, R.J.; Kuhn, M.G. Information Hiding—A Survey. Proc. IEEE 1999, 87, 1062–1078. [Google Scholar] [CrossRef]
  2. Bors, A.G.; Luo, M. Optimized 3D Watermarking for Minimal Surface Distortion. IEEE Trans. Image Process. 2013, 22, 1822–1835. [Google Scholar] [CrossRef] [PubMed]
  3. Chen, H.K.; Chen, W.S. GPU-accelerated Blind and Robust 3D Mesh Watermarking by Geometry Image. Multimedia Tools Appl. 2016, 75, 10077–10096. [Google Scholar] [CrossRef]
  4. Hou, J.U.; Kim, D.G.; Lee, H.K. Blind 3D Mesh Watermarking for 3D Printed Model by Analyzing Layering Artifact. IEEE Trans. Inf. Forensics Secur. 2017, 12, 2712–2725. [Google Scholar] [CrossRef]
  5. Chen, T.Y.; Hwang, M.S.; Jan, J.K. Adaptive Authentication Schemes for 3D Mesh Models. Int. J. Innov. Comput. Inf. Control 2009, 5, 4561–4572. [Google Scholar]
  6. Tsai, Y.Y.; Chen, T.C.; Huang, Y.H. A Low-complexity Region-based Authentication Algorithm for 3D Polygonal Models. Secur. Commun. Netw. 2017, 2017, 1096463. [Google Scholar] [CrossRef]
  7. Wang, J.T.; Chang, Y.C.; Yu, C.Y.; Yu, S.S. Hamming Code Based Watermarking Scheme for 3D Model Verification. Math. Probl. Eng. 2014, 2014, 241093. [Google Scholar]
  8. Bhowmik, D.; Feng, T. The Multimedia Blockchain: A Distributed and Tamper-proof Media Transaction Framework. In Proceedings of the 22nd International Conference on Digital Signal Processing, London, UK, 23–25 August 2017. [Google Scholar]
  9. Fan, M.; Wang, H. An Enhanced Fragile Watermarking Scheme to Digital Image Protection and Self-Recovery. Signal Process. Image Commun. 2018, 66, 19–29. [Google Scholar] [CrossRef]
  10. Huynh-The, T.; Banos, O.; Lee, S.; Yoon, Y.; Le-Tien, T. Improving Digital Image Watermarking by Means of Optimal Channel Selection. Expert Syst. Appl. 2016, 62, 177–189. [Google Scholar] [CrossRef]
  11. Lo, C.C.; Hu, Y.C. A Novel Reversible Image Authentication Scheme for Digital Images. Signal Process. 2014, 98, 174–185. [Google Scholar] [CrossRef]
  12. Qin, C.; Zhang, X.; Dong, J.; Wang, J. Fragile Image Watermarking with Pixel-wise Recovery Based on Overlapping Embedding Strategy. Signal Process. 2017, 138, 280–293. [Google Scholar] [CrossRef]
  13. Singh, D.; Singh, S.K. DCT Based Efficient Fragile Watermarking Scheme for Image Authentication and Restoration. Multimedia Tools Appl. 2017, 76, 953–977. [Google Scholar] [CrossRef]
  14. Wang, X.Y.; Zhang, J.M. A Novel Image Authentication and Recovery Algorithm Based on Chaos and Hamming Code. Acta Phys. Sin. 2014, 63, 020701. [Google Scholar]
  15. Asikuzzaman, M.; Pickering, M.R. An Overview of Digital Video Watermarking. IEEE Trans. Circuits Syst. Video Technol. 2017. accepted. [Google Scholar] [CrossRef]
  16. Fallahpour, M.; Shirmohammadi, S.; Semsarzadeh, M.; Zhao, J. Tampering Detection in Compressed Digital Video Using Watermarking. IEEE Trans. Instrum. Meas. 2014, 63, 1057–1072. [Google Scholar] [CrossRef]
  17. Qian, Q.; Wang, H.X.; Hu, Y.; Zhou, L.N.; Li, J.F. A Dual Fragile Watermarking Scheme for Speech Authentication. Multimedia Tools Appl. 2016, 75, 13431–13450. [Google Scholar] [CrossRef]
  18. Renza, D.; Ballesteros, L.D.M.; Lemus, C. Authenticity Verification of Audio Signals Based on Fragile Watermarking for Audio Forensics. Expert Syst. Appl. 2018, 91, 211–222. [Google Scholar] [CrossRef]
  19. Yeo, B.L.; Yeung, M.M. Watermarking 3-D Objects for Verification. IEEE Comput. Graph. Appl. 1999, 19, 36–45. [Google Scholar]
  20. Lin, H.Y.S.; Liao, H.Y.M.; Lu, C.S.; Lin, J.C. Fragile Watermarking for Authenticating 3-D Polygonal Meshes. IEEE Trans. Multimedia 2005, 7, 997–1006. [Google Scholar] [CrossRef]
  21. Molaei, A.M.; Ebrahimnezhad, H.; Sedaaghi, M.H. A Blind Fragile Watermarking Method for 3D Models Based on Geometric Properties of Triangles. 3D Res. 2013, 4, 4. [Google Scholar] [CrossRef]
  22. Tsai, Y.Y.; Tsai, Y.S.; Chang, C.C. An Improved Region-based Tampered Detection Algorithm for 3D Polygonal Models. In Proceedings of the 4th IEEE International Conference on Applied System Innovation, Chiba, Japan, 13–17 April 2018. [Google Scholar]
Figure 1. Flowchart of Tsai et al.’s proposed algorithm.
Figure 2. One embedding block of Tsai et al.’s proposed algorithm.
Figure 3. The modification example for the embeddable vertex after the authentication code is embedded: (a) |V′1G1| ≥ |V1G1|; (b) |V′1G1| < |V1G1|.
Figure 4. Modified embedding ratio generation method: (a) Maximum side length; (b) Minimum side length.
Figure 5. One example of a reduced number of reference vertices for authentication code generation.
Figure 6. Visual effects of our twenty-five test models.
Table 1. Tsai et al.’s message-digit substitution table.
Decimal Digit | Binary Authentication Code
0 | 000
1 | 001
2 | 010
3 | 011
4 | 100
5 | 101
6 | 1100
7 | 1101
8 | 1110
9 | 1111
Table 2. Our proposed message-digit substitution table.
Decimal Digit | Binary Authentication Code
0 | 0000
1 | 0001
2 | 0010 / 101(0)
3 | 0011 / 101(1)
4 | 0100 / 110(0)
5 | 0101 / 110(1)
6 | 0110 / 111(0)
7 | 0111 / 111(1)
8 | 1000
9 | 1001
A bit in parentheses (shown with a strikeout in the original table) is left for the next iteration.
Table 3. Experimental results for our message grabbing method.
Model | [6], T = 1 | [6], T = 2 | [6], T = 3 | Proposed, T = 1 | Proposed, T = 2 | Proposed, T = 3
Armadillo | 414,512 | 269,128 | 147,280 | 462,775 | 300,427 | 164,377
Ateneam | 18,161 | 12,046 | 6231 | 20,263 | 13,449 | 6941
Brain | 53,982 | 37,438 | 21,010 | 60,231 | 41,792 | 23,501
Brain2 | 766,719 | 482,906 | 284,062 | 855,862 | 539,170 | 317,113
Bunny | 79,115 | 49,020 | 31,137 | 88,273 | 54,709 | 34,704
Cow | 108,200 | 62,873 | 45,415 | 120,695 | 70,198 | 50,650
Dinosaur | 141,047 | 90,351 | 50,857 | 157,553 | 100,938 | 56,807
Dragon | 1,199,940 | 806,740 | 435,755 | 1,337,835 | 899,538 | 485,751
DragonFusion | 1,177,286 | 785,816 | 440,208 | 1,313,028 | 876,273 | 490,908
Elephant | 46,107 | 30,401 | 15,883 | 51,461 | 33,992 | 17,758
Gear | 625,266 | 409,512 | 219,449 | 697,417 | 456,854 | 244,805
Golfball | 352,285 | 227,500 | 124,838 | 392,886 | 253,655 | 139,112
Hand | 868,284 | 580,290 | 315,408 | 968,111 | 646,857 | 351,560
HappyBuddha | 1,484,268 | 996,728 | 534,871 | 1,656,155 | 1,111,984 | 596,324
Hippo | 58,474 | 38,900 | 20,092 | 65,037 | 43,206 | 22,305
Horse | 117,450 | 75,291 | 43,598 | 131,063 | 83,978 | 48,625
Lion | 430,769 | 278,123 | 157,266 | 480,906 | 310,553 | 175,720
Lucy | 652,499 | 433,616 | 224,752 | 727,550 | 483,643 | 250,656
Maxplanck | 117,011 | 75,540 | 41,556 | 130,232 | 84,161 | 46,302
Rabbit | 162,576 | 102,799 | 60,551 | 181,361 | 114,605 | 67,502
RockerArm | 98,948 | 63,072 | 36,546 | 110,314 | 70,262 | 40,665
Screw | 121,398 | 81,681 | 44,396 | 135,366 | 91,167 | 49,568
Teeth | 278,982 | 175,831 | 105,332 | 311,230 | 196,141 | 117,493
VenusBody | 45,654 | 30,275 | 16,018 | 51,088 | 33,817 | 17,872
VenusHead | 314,810 | 197,871 | 120,740 | 351,333 | 220,870 | 134,751
Average Improvement | | | | 11.56% | 11.56% | 11.53%
Table 4. Experimental results for our embedding ratio generation method.
Model | MAX | Ratio | AVE | Ratio | MIN | Ratio
Armadillo | 592,331 | 42.90% | 605,771 | 46.14% | 616,170 | 48.65%
Ateneam | 24,908 | 37.15% | 26,248 | 44.53% | 26,987 | 48.60%
Brain | 72,116 | 33.59% | 73,231 | 35.66% | 72,569 | 34.43%
Brain2 | 1,007,684 | 31.43% | 1,070,707 | 39.65% | 1,191,820 | 55.44%
Bunny | 114,739 | 45.03% | 117,149 | 48.07% | 120,612 | 52.45%
Cow | 152,848 | 41.26% | 156,472 | 44.61% | 168,732 | 55.94%
Dinosaur | 191,920 | 36.07% | 200,944 | 42.47% | 209,391 | 48.45%
Dragon | 1,660,464 | 38.38% | 1,705,093 | 42.10% | 1,656,703 | 38.07%
DragonFusion | 1,663,575 | 41.31% | 1,688,508 | 43.42% | 1,671,791 | 42.00%
Elephant | 64,613 | 40.14% | 67,376 | 46.13% | 69,126 | 49.93%
Gear | 873,125 | 39.64% | 910,416 | 45.60% | 932,665 | 49.16%
Golfball | 471,018 | 33.70% | 505,152 | 43.39% | 532,600 | 51.18%
Hand | 1,235,389 | 42.28% | 1,250,350 | 44.00% | 1,246,830 | 43.60%
HappyBuddha | 2,062,769 | 38.98% | 2,121,269 | 42.92% | 2,067,096 | 39.27%
Hippo | 80,343 | 37.40% | 84,051 | 43.74% | 85,824 | 46.77%
Horse | 169,728 | 44.51% | 173,291 | 47.54% | 176,905 | 50.62%
Lion | 621,351 | 44.24% | 634,770 | 47.36% | 643,836 | 49.46%
Lucy | 912,968 | 39.92% | 951,119 | 45.77% | 971,868 | 48.95%
Maxplanck | 158,746 | 35.67% | 164,168 | 40.30% | 168,967 | 44.40%
Rabbit | 237,878 | 46.32% | 241,277 | 48.41% | 246,362 | 51.54%
RockerArm | 142,768 | 44.29% | 145,529 | 47.08% | 148,664 | 50.24%
Screw | 171,429 | 41.21% | 172,941 | 42.46% | 171,053 | 40.90%
Teeth | 406,523 | 45.72% | 413,817 | 48.33% | 423,267 | 51.72%
VenusBody | 63,749 | 39.64% | 66,681 | 46.06% | 68,058 | 49.07%
VenusHead | 459,561 | 45.98% | 469,800 | 49.23% | 480,740 | 52.71%
Average | | 40.27% | | 44.60% | | 47.74%
Table 5. Model distortion between the original and marked models.
Model | SUM | MAX | AVE | MIN
Armadillo | 0.003% | 0.006% | 0.005% | 0.004%
Ateneam | 0.066% | 0.032% | 0.027% | 0.026%
Brain | 0.080% | 0.062% | 0.043% | 0.047%
Brain2 | 0.004% | 0.003% | 0.003% | 0.004%
Bunny | 0.008% | 0.013% | 0.011% | 0.007%
Cow | 0.003% | 0.002% | 0.002% | 0.003%
Dinosaur | 0.003% | 0.004% | 0.005% | 0.004%
Dragon | 0.006% | 0.006% | 0.005% | 0.004%
DragonFusion | 0.006% | 0.006% | 0.005% | 0.004%
Elephant | 0.014% | 0.027% | 0.019% | 0.011%
Gear | 0.004% | 0.007% | 0.006% | 0.004%
Golfball | 0.004% | 0.005% | 0.007% | 0.005%
Hand | 0.005% | 0.005% | 0.004% | 0.003%
HappyBuddha | 0.005% | 0.006% | 0.004% | 0.003%
Hippo | 0.019% | 0.026% | 0.019% | 0.013%
Horse | 0.007% | 0.011% | 0.009% | 0.006%
Lion | 0.018% | 0.022% | 0.017% | 0.013%
Lucy | 0.003% | 0.005% | 0.004% | 0.003%
Maxplanck | 0.005% | 0.006% | 0.007% | 0.007%
Rabbit | 0.005% | 0.011% | 0.009% | 0.006%
RockerArm | 0.008% | 0.014% | 0.012% | 0.009%
Screw | 0.016% | 0.014% | 0.011% | 0.011%
Teeth | 0.005% | 0.010% | 0.008% | 0.006%
VenusBody | 0.039% | 0.029% | 0.020% | 0.015%
VenusHead | 0.003% | 0.008% | 0.007% | 0.005%
Average | 0.014% | 0.014% | 0.011% | 0.009%
Table 6. Embedding rate under different embedding parameters for each test model.
Model | p = 1.00 | p = 0.75 | p = 0.50 | p = 0.25
Armadillo | 26.18% | 32.73% | 36.13% | 36.54%
Ateneam | 24.98% | 33.46% | 38.15% | 38.88%
Brain | 26.68% | 33.07% | 36.06% | 36.82%
Brain2 | 29.74% | 33.59% | 35.55% | 35.62%
Bunny | 27.45% | 33.24% | 35.59% | 35.73%
Cow | 30.08% | 33.62% | 35.17% | 35.32%
Dinosaur | 27.85% | 37.30% | 42.48% | 42.56%
Dragon | 27.58% | 36.05% | 40.21% | 40.55%
DragonFusion | 27.50% | 35.53% | 40.02% | 40.13%
Elephant | 24.38% | 32.28% | 35.70% | 36.15%
Gear | 26.79% | 34.71% | 38.93% | 39.28%
Golfball | 31.25% | 30.91% | 34.57% | 36.91%
Hand | 27.42% | 35.38% | 39.30% | 39.41%
HappyBuddha | 27.63% | 36.16% | 40.30% | 40.65%
Hippo | 26.10% | 32.83% | 36.21% | 37.17%
Horse | 27.40% | 37.23% | 41.91% | 41.97%
Lion | 26.32% | 32.77% | 35.98% | 36.39%
Lucy | 26.17% | 32.92% | 36.52% | 37.37%
Maxplanck | 26.01% | 32.70% | 35.99% | 36.40%
Rabbit | 27.60% | 37.41% | 42.35% | 42.41%
RockerArm | 27.73% | 38.30% | 42.65% | 42.73%
Screw | 27.53% | 35.44% | 39.56% | 39.93%
Teeth | 27.62% | 36.39% | 42.73% | 42.80%
VenusBody | 23.93% | 33.06% | 38.02% | 38.64%
VenusHead | 27.62% | 37.97% | 42.62% | 42.66%
Average | 27.18% | 34.60% | 38.51% | 38.92%
Table 7. The embedding capacity comparison for each model under p = 0.75 .
Model | SUM | Ratio | MAX | Ratio | AVE | Ratio | MIN | Ratio
Armadillo | 620,651 | 49.73% | 771,150 | 86.04% | 789,114 | 90.37% | 801,150 | 93.28%
Ateneam | 28,848 | 58.85% | 33,833 | 86.29% | 35,245 | 94.07% | 36,113 | 98.85%
Brain | 77,514 | 43.59% | 87,689 | 62.44% | 89,430 | 65.67% | 89,719 | 66.20%
Brain2 | 1,113,234 | 45.19% | 1,323,805 | 72.66% | 1,364,716 | 77.99% | 1,402,490 | 82.92%
Bunny | 126,784 | 60.25% | 154,571 | 95.38% | 159,829 | 102.02% | 161,943 | 104.69%
Cow | 172,822 | 59.72% | 213,148 | 96.99% | 217,408 | 100.93% | 220,346 | 103.65%
Dinosaur | 226,448 | 60.55% | 270,542 | 91.81% | 279,177 | 97.93% | 284,736 | 101.87%
Dragon | 1,897,529 | 58.14% | 2,189,923 | 82.50% | 2,233,088 | 86.10% | 2,230,750 | 85.91%
DragonFusion | 1,910,059 | 62.24% | 2,247,132 | 90.87% | 2,277,575 | 93.46% | 2,284,915 | 94.08%
Elephant | 73,733 | 59.92% | 88,401 | 91.73% | 90,172 | 95.57% | 91,357 | 98.14%
Gear | 989,284 | 58.22% | 1,173,128 | 87.62% | 1,211,001 | 93.68% | 1,238,561 | 98.09%
Golfball | 418,875 | 18.90% | 502,197 | 42.55% | 522,089 | 48.20% | 537,986 | 52.71%
Hand | 1,332,624 | 53.48% | 1,607,963 | 85.19% | 1,629,149 | 87.63% | 1,638,186 | 88.67%
HappyBuddha | 2,364,308 | 59.29% | 2,729,198 | 83.88% | 2,784,286 | 87.59% | 2,784,237 | 87.58%
Hippo | 86,695 | 48.26% | 104,253 | 78.29% | 106,992 | 82.97% | 108,839 | 86.13%
Horse | 193,485 | 64.74% | 232,812 | 98.22% | 241,042 | 105.23% | 244,307 | 108.01%
Lion | 654,428 | 51.92% | 816,358 | 89.51% | 835,455 | 93.95% | 846,985 | 96.62%
Lucy | 978,076 | 49.90% | 1,172,694 | 79.72% | 1,211,375 | 85.65% | 1,239,243 | 89.92%
Maxplanck | 175,509 | 49.99% | 215,308 | 84.01% | 221,682 | 89.45% | 227,816 | 94.70%
Rabbit | 263,592 | 62.13% | 312,833 | 92.42% | 327,974 | 101.74% | 334,099 | 105.50%
RockerArm | 162,807 | 64.54% | 196,435 | 98.52% | 202,342 | 104.49% | 205,530 | 107.72%
Screw | 191,189 | 57.49% | 221,447 | 82.41% | 224,922 | 85.28% | 225,048 | 85.38%
Teeth | 452,926 | 62.35% | 545,497 | 95.53% | 566,572 | 103.09% | 575,483 | 106.28%
VenusBody | 76,738 | 68.09% | 89,767 | 96.62% | 92,430 | 102.46% | 94,199 | 106.33%
VenusHead | 517,689 | 64.44% | 627,006 | 99.17% | 651,104 | 106.82% | 660,952 | 109.95%
Average | | 55.68% | | 86.01% | | 91.29% | | 94.13%
Table 8. The embedding capacity comparison for each model under p = 0.50 .
Model | SUM | Ratio | MAX | Ratio | AVE | Ratio | MIN | Ratio
Armadillo | 779,943 | 88.16% | 885,585 | 113.65% | 890,101 | 114.73% | 894,890 | 115.89%
Ateneam | 35,858 | 97.45% | 40,180 | 121.24% | 40,925 | 125.35% | 41,456 | 128.27%
Brain | 90,430 | 67.52% | 96,232 | 78.27% | 97,276 | 80.20% | 97,200 | 80.06%
Brain2 | 1,339,809 | 74.75% | 1,455,695 | 89.86% | 1,480,357 | 93.08% | 1,492,992 | 94.72%
Bunny | 155,324 | 96.33% | 175,873 | 122.30% | 177,556 | 124.43% | 177,989 | 124.98%
Cow | 213,088 | 96.94% | 232,068 | 114.48% | 234,133 | 116.39% | 234,979 | 117.17%
Dinosaur | 282,584 | 100.35% | 338,080 | 139.69% | 342,089 | 142.54% | 343,994 | 143.89%
Dragon | 2,264,305 | 88.70% | 2,485,317 | 107.12% | 2,510,444 | 109.21% | 2,491,536 | 107.64%
DragonFusion | 2,323,078 | 97.32% | 2,567,574 | 118.09% | 2,585,866 | 119.65% | 2,581,596 | 119.28%
Elephant | 91,355 | 98.14% | 99,234 | 115.23% | 100,394 | 117.74% | 101,377 | 119.87%
Gear | 1,222,781 | 95.56% | 1,366,716 | 118.58% | 1,386,046 | 121.67% | 1,399,906 | 123.89%
Golfball | 540,521 | 53.43% | 592,094 | 68.07% | 602,896 | 71.14% | 608,555 | 72.75%
Hand | 1,636,256 | 88.45% | 1,829,274 | 110.68% | 1,844,593 | 112.44% | 1,849,223 | 112.97%
HappyBuddha | 2,819,907 | 89.99% | 3,094,219 | 108.47% | 3,127,214 | 110.69% | 3,107,199 | 109.34%
Hippo | 105,985 | 81.25% | 117,936 | 101.69% | 119,370 | 104.14% | 120,406 | 105.91%
Horse | 239,368 | 103.80% | 289,708 | 146.66% | 291,669 | 148.33% | 292,486 | 149.03%
Lion | 823,877 | 91.26% | 935,470 | 117.16% | 939,658 | 118.14% | 944,095 | 119.17%
Lucy | 1,210,175 | 85.47% | 1,347,293 | 106.48% | 1,366,173 | 109.38% | 1,381,562 | 111.73%
Maxplanck | 219,920 | 87.95% | 250,033 | 113.68% | 252,351 | 115.66% | 254,049 | 117.12%
Rabbit | 335,830 | 106.57% | 401,153 | 146.75% | 404,071 | 148.54% | 405,735 | 149.57%
RockerArm | 204,968 | 107.15% | 243,940 | 146.53% | 245,722 | 148.33% | 246,705 | 149.33%
Screw | 227,399 | 87.32% | 250,753 | 106.55% | 252,551 | 108.04% | 251,622 | 107.27%
Teeth | 591,592 | 112.05% | 705,657 | 152.94% | 711,382 | 154.99% | 714,163 | 155.99%
VenusBody | 95,500 | 109.18% | 105,800 | 131.74% | 107,417 | 135.28% | 108,608 | 137.89%
VenusHead | 666,516 | 111.72% | 813,406 | 158.38% | 817,330 | 159.63% | 819,809 | 160.41%
Average | | 92.67% | | 118.17% | | 120.39% | | 121.37%
Table 9. The embedding capacity comparison for each model under p = 0.25.

Model | SUM | Ratio | MAX | Ratio | AVE | Ratio | MIN | Ratio
Armadillo | 790,660 | 90.74% | 898,171 | 116.68% | 902,038 | 117.61% | 905,992 | 118.57%
Ateneam | 36,753 | 102.37% | 40,922 | 125.33% | 41,663 | 129.41% | 42,195 | 132.34%
Brain | 92,907 | 72.11% | 98,315 | 82.13% | 99,335 | 84.02% | 99,268 | 83.89%
Brain2 | 1,342,154 | 75.05% | 1,458,624 | 90.24% | 1,483,496 | 93.49% | 1,496,203 | 95.14%
Bunny | 156,111 | 97.32% | 176,644 | 123.27% | 178,348 | 125.43% | 178,757 | 125.95%
Cow | 213,704 | 97.51% | 233,022 | 115.36% | 235,147 | 117.33% | 235,891 | 118.01%
Dinosaur | 283,536 | 101.02% | 339,191 | 140.48% | 343,144 | 143.28% | 345,000 | 144.60%
Dragon | 2,298,323 | 91.54% | 2,509,489 | 109.13% | 2,532,754 | 111.07% | 2,512,309 | 109.37%
DragonFusion | 2,336,787 | 98.49% | 2,576,554 | 118.86% | 2,594,226 | 120.36% | 2,589,329 | 119.94%
Elephant | 92,850 | 101.38% | 100,541 | 118.06% | 101,598 | 120.35% | 102,602 | 122.53%
Gear | 1,242,282 | 98.68% | 1,382,372 | 121.09% | 1,400,308 | 123.95% | 1,413,421 | 126.05%
Golfball | 579,062 | 64.37% | 633,105 | 79.71% | 643,223 | 82.59% | 649,481 | 84.36%
Hand | 1,643,661 | 89.30% | 1,835,045 | 111.34% | 1,849,972 | 113.06% | 1,854,574 | 113.59%
HappyBuddha | 2,863,070 | 92.89% | 3,124,863 | 110.53% | 3,155,207 | 112.58% | 3,133,523 | 111.12%
Hippo | 109,637 | 87.50% | 121,174 | 107.23% | 122,571 | 109.62% | 123,613 | 111.40%
Horse | 239,887 | 104.25% | 290,235 | 147.11% | 292,155 | 148.75% | 292,969 | 149.44%
Lion | 835,199 | 93.89% | 948,496 | 120.19% | 952,092 | 121.02% | 955,782 | 121.88%
Lucy | 1,246,947 | 91.10% | 1,382,862 | 111.93% | 1,400,097 | 114.57% | 1,414,295 | 116.75%
Maxplanck | 223,092 | 90.66% | 253,473 | 116.62% | 255,685 | 118.51% | 257,157 | 119.77%
Rabbit | 336,766 | 107.14% | 402,232 | 147.41% | 405,113 | 149.18% | 406,737 | 150.18%
RockerArm | 205,505 | 107.69% | 244,484 | 147.08% | 246,315 | 148.93% | 247,260 | 149.89%
Screw | 231,044 | 90.32% | 253,407 | 108.74% | 255,109 | 110.14% | 254,139 | 109.34%
Teeth | 593,427 | 112.71% | 707,872 | 153.73% | 713,451 | 155.73% | 716,165 | 156.71%
VenusBody | 97,584 | 113.75% | 107,681 | 135.86% | 109,314 | 139.44% | 110,436 | 141.90%
VenusHead | 667,585 | 112.06% | 814,909 | 158.86% | 818,767 | 160.08% | 821,202 | 160.86%
Average | | 95.35% | | 120.68% | | 122.82% | | 123.74%
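The "Average" row in each capacity table is the simple arithmetic mean of the per-model ratio percentages for one reference-vertex strategy (SUM, MAX, AVE, or MIN). A minimal sketch of that computation, using three sample MIN-strategy ratios taken from Table 9 (the helper name `column_average` is ours, not from the paper):

```python
# Average the per-model embedding-capacity ratios (in %) for one
# reference-vertex strategy, as in the "Average" row of Tables 8 and 9.
def column_average(ratios):
    """Arithmetic mean of per-model ratio percentages."""
    return sum(ratios) / len(ratios)

# MIN-strategy ratios for three sample models from Table 9 (p = 0.25).
min_ratios = {
    "Armadillo": 118.57,
    "Ateneam": 132.34,
    "Brain": 83.89,
}

avg = column_average(list(min_ratios.values()))
print(f"Average MIN ratio over sample: {avg:.2f}%")  # 111.60% for these three models
```

Averaging all 26 models in the MIN column this way reproduces the reported 123.74%.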
Table 10. Number of suspicious vertices detected for each model under different embedding parameters.

Model | p = 1.00 | p = 0.75 | p = 0.50 | p = 0.25
Armadillo | 617 | 396 | 277 | 272
Ateneam | 540 | 365 | 265 | 254
Brain | 588 | 417 | 281 | 269
Brain2 | 597 | 357 | 254 | 263
Bunny | 561 | 358 | 245 | 254
Cow | 636 | 367 | 257 | 272
Dinosaur | 565 | 414 | 353 | 353
Dragon | 1011 | 714 | 545 | 488
DragonFusion | 553 | 419 | 346 | 339
Elephant | 532 | 356 | 274 | 268
Gear | 592 | 373 | 284 | 293
Golfball | 706 | 411 | 247 | 249
Hand | 587 | 398 | 311 | 303
HappyBuddha | 685 | 488 | 418 | 429
Hippo | 613 | 426 | 274 | 251
Horse | 543 | 356 | 277 | 277
Lion | 578 | 418 | 282 | 276
Lucy | 570 | 375 | 271 | 277
Maxplanck | 553 | 385 | 267 | 263
Rabbit | 583 | 393 | 330 | 321
RockerArm | 586 | 369 | 279 | 283
Screw | 500 | 329 | 288 | 275
Teeth | 513 | 368 | 295 | 289
VenusBody | 498 | 356 | 290 | 290
VenusHead | 594 | 379 | 286 | 286
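Table 10 shows that lowering the embedding parameter p from 1.00 to 0.25 consistently shrinks the number of suspicious vertices flagged during detection, i.e., tamper localization becomes tighter. A minimal sketch of that per-model comparison, using three sample rows from Table 10 (the function name `reduction_percent` is ours, not from the paper):

```python
# Percentage reduction in suspicious vertices when the embedding
# parameter p drops from 1.00 to 0.25 (first vs. last column of Table 10).
def reduction_percent(count_at_p100, count_at_p025):
    return 100.0 * (1.0 - count_at_p025 / count_at_p100)

# Suspicious-vertex counts (p = 1.00, p = 0.25) for sample models from Table 10.
samples = {"Armadillo": (617, 272), "Dragon": (1011, 488), "Teeth": (513, 289)}

for model, (at_p100, at_p025) in samples.items():
    print(f"{model}: {reduction_percent(at_p100, at_p025):.1f}% fewer suspicious vertices")
```

For these three models the reduction is roughly 56%, 52%, and 44%, respectively.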

Share and Cite

MDPI and ACS Style

Tsai, Y.-Y.; Tsai, Y.-S.; Chang, C.-C. An Effective Authentication Algorithm for Polygonal Models with High Embedding Capacity. Symmetry 2018, 10, 290. https://doi.org/10.3390/sym10070290
