Article

Detection of Tampering by Image Resizing Using Local Tchebichef Moments

1 Hunan Provincial Key Laboratory of Intelligent Processing of Big Data on Transportation, School of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha 410114, China
2 School of Computing Science and Engineering, Vellore Institute of Technology (VIT), Vellore 632014, India
3 Department of Computer Science, Texas Tech University, Lubbock, TX 79409, USA
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Appl. Sci. 2019, 9(15), 3007; https://doi.org/10.3390/app9153007
Submission received: 30 June 2019 / Revised: 22 July 2019 / Accepted: 23 July 2019 / Published: 26 July 2019
(This article belongs to the Special Issue Texture and Colour in Image Analysis)

Abstract

There are many image resizing techniques, including scaling, scale-and-stretch, seam carving, and so on. Each has its own advantages and suits different application scenarios. Therefore, a universal detector of tampering by image resizing is more practical. Through preliminary experiments, we found that no matter which image resizing technique is adopted, it destroys local texture and the spatial correlations among adjacent pixels to some extent. Motivated by the excellent performance of local Tchebichef moments (LTM) in texture classification, we present a method for detecting tampering by image resizing using LTM in this paper. The tampered images are obtained by removing pixels from original images using image resizing (scaling, scale-and-stretch, and seam carving). Firstly, the residual is obtained by image pre-processing. Then, the histogram features of LTM are extracted from the residual. Finally, an error-correcting output code strategy is adopted with ensemble learning, which turns the multi-class classification problem into binary classification sub-problems. Experimental results show that the proposed approach obtains acceptable detection accuracies for the three content-aware image re-targeting techniques.

1. Introduction

As image editing tools and mobile devices become easily acquired and conveniently used, maximizing the viewing experience of end users on small devices becomes very important. Compared to traditional image re-targeting methods, such as linear scaling and cropping, many content-aware image resizing methods can preserve salient areas while avoiding serious distortion or loss of significant information [1,2,3]. Meanwhile, many content-aware resizing algorithms have been adopted in image editing tools, such as Photoshop and GIMP. An ordinary user can very easily create tampered images for malicious purposes using such tools. Moreover, it is impossible to distinguish these tampered images from authentic images with the naked eye. Therefore, detecting tampered images is a hot topic in the field of image content security.
In recent years, several approaches have been proposed for the detection of content-aware image re-targeting, and most of them target seam carved images. Lu et al. adopted a forensic hash to determine whether an image has been subjected to a seam carving operation [4]. However, it is an active forensics approach, and a falsifier might remove the forensic hash. For passive forensic detection, Sarkar et al. used 324-D Markov features to detect image seam carving [5]. Later, Fillion et al. exploited a series of intrinsic features to expose the traces of seam carved images [6]. Wei et al. [7] adopted an approach based on patch analysis to determine whether an image is original. Based on the noise and energy distributions in seam carved images, Ryu et al. [8] exploited noise- and energy-bias features to detect seam carved images. Local binary patterns (LBP) were adopted to detect seam carved images in our recent work [9]. Inspired by the ability of image entropy to capture the intrinsic information of an image, we exploited multi-scale spectral and spatial entropies to detect seam carved images with low resizing ratios [10]. The Weber Local Descriptor (WLD) and LBP were adopted to determine whether an image is original [11]. In [12], a large-scale feature mining approach was proposed to detect image seam carving under recompression in joint photographic experts group (JPEG) images.
However, most existing detectors of image resizing are designed for one specific content-aware resizing method. Much less has been done to distinguish the different content-aware resizing approaches used in image re-targeting. In practice, the best re-targeting method depends on the image itself. For example, scaling an image in the horizontal or vertical direction can be performed in real time using interpolation; it preserves the global visual effect and re-targets images with medium perceptual quality, but it introduces shape deformation into the re-targeted image. Seam carving [1] supports various visual saliency measures for defining the energy of an image; nevertheless, it can excessively carve less important parts of an image and produce unwanted visual distortions. Scale-and-stretch [3] can preserve the aspect ratios of local objects; however, if there are many quads in the image, the approach fails to preserve the aspect ratio of the whole image [2]. Thus, different image resizing methods are chosen depending on the image content to change the image size while preserving salient regions, and it is necessary to propose a universal detector of image resizing.
The rest of this paper is organized as follows: Section 2 summarizes several common methods of image re-targeting and analyzes their artifacts. Section 3 introduces the proposed detection approach. Our experimental results are described and analyzed in detail in Section 4, and conclusions are drawn in Section 5.

2. Image Resizing Methods and Their Possible Artifacts

2.1. Several Common Methods of Image Resizing

Among content-aware image resizing methods, scaling, seam carving [1], and scale-and-stretch [3] are three common approaches to re-target an image. Seam carving is driven by an energy function: the intensity gradient magnitude in the L1 metric is used as an importance map. Contiguous chains of pixels (seams) that pass through the least important regions of the image are deleted or duplicated to resize the image, and dynamic programming is used to compute the seams. Scale-and-stretch is based on warping: both image dimensions are processed at once, and an objective function is optimized so that important regions are scaled uniformly in order to preserve their shapes. A combination of saliency (defined in [13]) and L2 gradient magnitude is used as the importance map. Scaling implements image resizing simply by bi-cubic interpolation and non-uniform scaling.
From the above description of the three resizing methods, it can be seen that scale-and-stretch keeps the significant regions of the re-targeted image consistent with those of the original. In contrast, seam carving implements re-targeting by deleting or inserting pixels along minimal-energy seams; therefore, it may distort salient objects.
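The seam computation described above (an L1 gradient importance map plus a dynamic-programming search for the cheapest pixel chain) can be sketched as follows. This is a minimal illustration of the basic backward-energy formulation, not the authors' exact implementation:

```python
import numpy as np

def energy(img):
    """L1 gradient magnitude used as the importance map."""
    gy = np.abs(np.diff(img, axis=0, append=img[-1:, :]))
    gx = np.abs(np.diff(img, axis=1, append=img[:, -1:]))
    return gx + gy

def remove_vertical_seam(img):
    """Delete one minimum-energy vertical seam via dynamic programming."""
    e = energy(np.asarray(img, dtype=float))
    h, w = e.shape
    cost = e.copy()
    for i in range(1, h):
        left = np.r_[np.inf, cost[i - 1, :-1]]
        right = np.r_[cost[i - 1, 1:], np.inf]
        cost[i] += np.minimum(np.minimum(left, cost[i - 1]), right)
    # Backtrack the cheapest path from bottom to top.
    seam = np.empty(h, dtype=int)
    seam[-1] = int(cost[-1].argmin())
    for i in range(h - 2, -1, -1):
        j = seam[i + 1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        seam[i] = lo + int(cost[i, lo:hi].argmin())
    keep = np.ones((h, w), dtype=bool)
    keep[np.arange(h), seam] = False
    return np.asarray(img, dtype=float)[keep].reshape(h, w - 1)
```

Repeatedly calling `remove_vertical_seam` narrows the image one column at a time while avoiding high-gradient (salient) regions.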

2.2. Analysis of Image Resizing Artifacts

There exist three kinds of artifacts in images processed by content-aware resizing methods [13]: geometric deformation, information loss, and local texture distortion. Figure 1 shows these artifacts caused by content-aware re-targeting. Figure 1b shows line or edge distortion after re-targeting; however, salient areas of the image, such as the people and the building, do not change significantly. The shape distortion of an image is shown in Figure 1c. This further illustrates that removed pixels may lie in salient areas of the image when the pixels with minimum energy are deleted during seam carving; therefore, important objects of the image are easily deformed in the process of image resizing. Figure 1d shows the artifact of information loss. The scaling method uses bi-cubic interpolation and re-targets the entire image. To better show the distortion introduced by image resizing, we adopt an LTM histogram to identify the distortion caused by the different resizing methods in this paper. Figure 2 shows the residual LTM diagrams for the three resizing methods. It can be found from Figure 2b that the distortions of non-salient areas, which are not easily perceived in Figure 1b, are clearly visible in the residual LTM diagram. Likewise, the distortions introduced by re-targeting are clearly visible in Figure 2d.

3. Proposed Method

A passive detection method for image resizing forgery is presented in this paper. Figure 3 shows a block diagram of the proposed algorithm. Like most existing forensics methods, our method consists of two parts: a training part and a testing part. In the training process, tampered images and their corresponding original images are used as the data set. First, all training images are preprocessed. Second, LTM histogram features are extracted from the preprocessed images. Finally, a model is trained using ensemble learning on the extracted features. In the testing process, LTM histogram features are extracted following the same steps as in the training part. Finally, the extracted features are fed to the trained ensemble classifier to identify which resizing method was used to re-target the tested image.

3.1. Preprocessing

An image obtained by content-aware resizing methods usually has a good visual effect. Furthermore, it is difficult for users to distinguish it from an authentic photograph with the naked eye. However, the correlation between adjacent pixels inevitably changes after an image is resized. Therefore, it is necessary to preprocess re-targeted images. Image residuals can efficiently capture the changes among adjacent pixels caused by image re-targeting. In this paper, a one-dimensional low-pass filter is adopted to calculate residuals along the horizontal and vertical directions, as shown in Equation (1):
R ( x , y ) = I ( x , y ) − I ( x , y ) ∗ L ( u ) ,
where I ( x , y ) is the image, L ( u ) is the low-pass filter, and ∗ denotes one-dimensional convolution along the horizontal or vertical direction. Figure 4 shows the residuals of preprocessed content-aware resized images. From the residuals, the tampering traces left by the different content-aware image resizing methods can be clearly observed.
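A minimal sketch of the residual computation in Equation (1); the specific low-pass kernel below ([1, 2, 1]/4) is an assumption, as the paper does not state the filter coefficients:

```python
import numpy as np

def residual(image, kernel=(0.25, 0.5, 0.25)):
    """Residual R = I - (I low-pass filtered), Equation (1).
    The 1-D kernel is applied along rows and then columns."""
    img = np.asarray(image, dtype=float)
    k = np.asarray(kernel, dtype=float)
    smooth = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    smooth = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, smooth)
    return img - smooth
```

The horizontal and vertical residuals can also be kept separate, matching the paper's description of filtering along each direction.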

3.2. Features of LTM

After image preprocessing, orthogonal Tchebichef moments are computed on 5 × 5 pixel neighborhoods to construct feature vectors. The texture information is then encoded with a Lehmer code to represent the relative strength of the moments. The extracted feature vectors are called LTM. A byte value is produced for each pixel, and an LTM diagram is generated by this encoding scheme. The histogram features of the LTM diagram are then adopted to identify whether an image has been subjected to image resizing. Figure 5 shows the histogram features of LTM after preprocessing.
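The LTM construction can be sketched as follows. The orthonormal basis here is obtained by QR-orthonormalizing monomials (equivalent, up to sign, to the discrete Tchebichef polynomials), and the Lehmer encoding of moment ranks is our reading of the paper's scheme; treat both as assumptions rather than the authors' exact implementation:

```python
import numpy as np

def tchebichef_basis(n):
    """Orthonormal discrete Tchebichef polynomials of orders 0..n-1
    on the points {0, ..., n-1}, via QR on the monomial basis."""
    x = np.arange(n, dtype=float)
    V = np.vander(x, n, increasing=True)  # columns: 1, x, x^2, ...
    Q, _ = np.linalg.qr(V)                # orthonormal polynomial columns
    return Q.T                            # row k = polynomial of order k

def patch_moments(patch, order=2):
    """2-D Tchebichef moments T_pq of a square patch, p + q <= order."""
    T = tchebichef_basis(patch.shape[0])
    M = T @ np.asarray(patch, dtype=float) @ T.T
    return [M[p, q] for p in range(order + 1)
                    for q in range(order + 1 - p)]

def lehmer_code(moments):
    """Lehmer code of the rank permutation of the moment magnitudes,
    encoding their relative strength (an assumed reading of the paper)."""
    mags = np.abs(np.asarray(moments, dtype=float))
    ranks = np.argsort(np.argsort(-mags))
    code, n = 0, len(ranks)
    for i, r in enumerate(ranks):
        smaller = int(np.sum(ranks[i + 1:] < r))
        code = code * (n - i) + smaller
    return code
```

Sliding this over every 5 × 5 neighborhood yields one code per pixel, whose histogram forms the LTM feature vector.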

3.3. Ensemble Learning for Blind Forensics

In this paper, an error-correcting output codes (ECOC) strategy [14] based on ensemble learning is adopted, which transforms the multi-class classification problem into binary classification sub-problems. ECOC is an excellent multi-class classification tool, and ensemble learning performs well in terms of both computational complexity and detection accuracy. Tampering by three different resizing methods (scale-and-stretch, seam carving, and scaling) must be identified; for this multi-class problem, a pairwise coupling strategy [15] is adopted. Specifically, a discrete coding matrix is defined first. Then, the problem is decomposed into n = 3 binary classification sub-problems (dichotomies) according to the sequences of 0s and 1s in the coding matrix. After that, ensemble learning is used to train these dichotomies, and the extracted LTM histograms are tested to obtain binary vectors. Finally, the class is identified by the minimum Hamming distance between the codewords and the vector.
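A minimal sketch of the ECOC train/decode loop described above; the `NearestMean` base learner is a stand-in for illustration (the paper uses the ensemble classifier of [16]):

```python
import numpy as np

class NearestMean:
    """Trivial binary base learner, used only for illustration."""
    def fit(self, X, y):
        self.m0 = X[y == 0].mean(axis=0)
        self.m1 = X[y == 1].mean(axis=0)
        return self
    def predict(self, X):
        d0 = ((X - self.m0) ** 2).sum(axis=1)
        d1 = ((X - self.m1) ** 2).sum(axis=1)
        return (d1 < d0).astype(int)

class ECOC:
    """Error-correcting output codes wrapper (a sketch). `code` has one
    row per class and one {0,1} column per dichotomy; `make_clf` builds
    a fresh binary learner exposing fit(X, y) and predict(X)."""
    def __init__(self, code, make_clf):
        self.code = np.asarray(code)
        self.make_clf = make_clf

    def fit(self, X, y):
        y = np.asarray(y)
        self.clfs = []
        for j in range(self.code.shape[1]):
            clf = self.make_clf()
            clf.fit(X, self.code[y, j])  # relabel classes to 0/1
            self.clfs.append(clf)
        return self

    def predict(self, X):
        # One bit per dichotomy, then decode each sample to the class
        # whose codeword is at minimum Hamming distance.
        bits = np.column_stack([c.predict(X) for c in self.clfs])
        dist = (bits[:, None, :] != self.code[None, :, :]).sum(axis=2)
        return dist.argmin(axis=1)
```

With a one-vs-rest coding matrix such as `np.eye(3, dtype=int)`, this reduces to three dichotomies, matching the n = 3 decomposition above.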

4. Results

4.1. Experimental Environment

To verify the performance of the proposed algorithm, we conducted a number of experiments on a personal computer. The passive forensics approach is implemented in Matlab 2012b. The ensemble learning toolbox can be downloaded from [16]. The Uncompressed Colour Image Database (UCID) [17] is adopted as the source of original images. The database contains 1338 images of people, buildings, animals, and landscapes. Since there is no publicly available image database for image resizing, we construct an image library from UCID for resizing detection. Three resizing methods are used to produce tampered images at different resizing ratios, varying from 10% to 50% with a step size of 10%; that is, for every resizing method, the resizing ratios of the tampered images are 10%, 20%, 30%, 40%, and 50%. Therefore, we have 1338 original images and 3 × 5 × 1338 tampered images. To verify the proposed approach, we evaluate performance for the following cases: (1) tamper detection for a single resizing method; (2) tamper detection for multiple resizing methods; and (3) tamper detection without preprocessing. In all experiments, the ECOC strategy based on ensemble learning is adopted. The image data set is divided into two groups, 50% for training and 50% for testing. The training and testing are repeated ten times, and the average detection accuracy is reported.
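The evaluation protocol above (a random 50/50 train/test split repeated ten times, reporting the average accuracy) can be sketched as:

```python
import numpy as np

def repeated_holdout(X, y, train_and_score, repeats=10, seed=0):
    """Average accuracy over `repeats` random 50/50 splits, mirroring
    the paper's protocol. `train_and_score` is any callable taking
    (X_train, y_train, X_test, y_test) and returning test accuracy."""
    rng = np.random.default_rng(seed)
    accs = []
    for _ in range(repeats):
        idx = rng.permutation(len(y))
        half = len(y) // 2
        tr, te = idx[:half], idx[half:]
        accs.append(train_and_score(X[tr], y[tr], X[te], y[te]))
    return float(np.mean(accs))
```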

4.2. Experimental Discussions

4.2.1. Tamper Forensics on a Single Resizing Method

In this experiment, we test the detection performance of the proposed method on one resizing method at a time. Tampered images with scaling ratios from 10% to 50% are used, and Table 1 shows the detection results. From Table 1, we can see that the detection accuracy improves as the scaling ratio increases. When the scaling ratio is below 20%, our approach achieves a higher detection accuracy for the scale-and-stretch resizing method. This is because, in scale-and-stretch re-targeting, the optimal local scaling ratio of each local block is computed iteratively and the warped image is updated simultaneously to match the target ratios as closely as possible, so the entire image is resized. The seam carving method, by contrast, resizes an image by deleting only the seams with the lowest energy. Therefore, tampered images produced by seam carving are difficult to distinguish from authentic images when the scaling ratio is low. As the scaling ratio increases, the algorithm causes global structural distortion, which is also reflected in Table 1: our approach achieves a higher accuracy for images obtained by seam carving than for the other resizing methods at larger resizing ratios.

4.2.2. Identifying Images Obtained by Different Content-Aware Resizing Methods

In this experiment, the tampered images with scaling ratios from 10% to 50% are obtained by different content-aware resizing methods. They are adopted to test the performance of our proposed algorithm. The average detection accuracies of different content-aware re-targeting methods, where the average detection accuracy is the average value of diagonal elements in the confusion matrix, are summarized in Table 2. Note that “mixed” represents the mixed test set of tampered images with the scaling ratios from 10% to 50%. There are three content-aware resizing methods in this paper. Therefore, this is a four-class classification problem (the original images as a special class), according to the ECOC strategy.
Table 2 shows that the average accuracy improves as the scaling ratio increases. However, the detection accuracy clearly decreases for highly compressed images with a quality factor (QF) of 75. Careful analysis of our experimental results shows that the main reason for this decrease is that the traces of tampering are weakened when images are compressed. We also ran the experiment on the "mixed" sets of tampered uncompressed and compressed images and obtained the confusion matrices. Table 3 shows the results, where CMOMTUI denotes the confusion matrix of "mixed" tampered uncompressed images, CMOMTCI denotes the confusion matrix of "mixed" tampered compressed images, "*" denotes a classification rate of less than 1%, OR denotes original images, SNS the scale-and-stretch method, SC the seam carving method, and SL the scaling method. From Table 3, we can see that the proposed method achieves a high accuracy for the three content-aware resizing methods considered in this paper. However, it does not achieve a good detection accuracy on tampered images with JPEG compression, and its false positive rate is relatively high for the seam carving (SC) and scaling (SL) methods.

4.2.3. The Detection Accuracy without Preprocessing

Table 4 and Table 5 report the results for the uncompressed and compressed tampered images without preprocessing, respectively, using the same abbreviations as Table 3 (CMOMTUI, CMOMTCI, "*", OR, SNS, SC, SL). It can be seen from Table 4 and Table 5 that the proposed approach has a slightly higher detection accuracy on uncompressed images when the LTM features are extracted without preprocessing. However, when the images are compressed with QF = 75, the accuracy is significantly lower than that obtained with preprocessing. The main reason is that the residual may weaken the tampering traces of uncompressed images, whereas for compressed images preprocessing with residuals highlights the changes.

5. Conclusions

Content-aware image re-targeting methods are widely adopted to resize images for display on all kinds of terminals. However, they can also be used to make fake images that exhibit no perceptually annoying distortions. Through analysis of the principles of the three image resizing methods, we found that the correlation between adjacent pixels is destroyed in the process of image resizing. Tchebichef moments have been extensively applied in the image/video field, such as in information security [18], pattern recognition [19], and image quality assessment [20]. Inspired by this, our experiments show that LTM can effectively reflect the correlation changes between adjacent pixels. We proposed a passive forensics algorithm based on LTM to identify tampered images obtained by content-aware image resizing methods. Our experimental results showed that the proposed method achieves good accuracy and performs well for image resizing with high scaling ratios. In the future, we will evaluate the detection accuracy on tampered images obtained by image resizing with low scaling ratios. In addition, since the proposed method does not achieve satisfactory detection accuracy on re-targeted images with JPEG compression, we will further analyze the tampering traces of resized images under JPEG compression. Moreover, since there are other image resizing methods [21,22] besides the three considered in this paper, we will attempt to distinguish these techniques by applying other multi-class classifiers [23,24,25,26,27,28] and designing more general features from image/video processing methods [29,30,31,32,33,34,35,36,37,38]. In view of the importance of social media digital images in practical applications, research on their authenticity, integrity and traceability has been one of the hot and challenging research topics in the field of information security.
We will adopt network optimization methods [39,40,41,42,43,44,45,46,47,48] to improve the real-time and high efficiency performance of the feature extraction phases.

Author Contributions

Conceptualization: D.Z. and S.W.; investigation: J.W.; methodology: D.Z. and S.W.; software: D.Z. and S.W.; supervision: F.L.; validation: A.K.S. and V.S.S.; writing—original draft: D.Z. and S.W.; writing—review and editing: J.W. and A.K.S.

Funding

This research was funded by the National Natural Science Foundation of China (61772454, 61811530332, 61811540410, 61772087, 61232016), the Scientific Research Fund of Hunan Provincial Education Department of China (14C0029) and the “Double First-class” International Cooperation and Development Scientific Research Project of Changsha University of Science and Technology (No. 2018IC25).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Avidan, S.; Shamir, A. Seam carving for content-aware resizing. ACM Trans. Graphics 2007, 26, 10. [Google Scholar] [CrossRef]
  2. Vaquero, D.; Turk, M.; Pulli, K. A survey of image retargeting techniques. Int. Soc. Opt. Photonics 2010, 7898, 789814. [Google Scholar]
  3. Wang, Y.S.; Tai, C.L.; Sorkine, O. Optimized scale-and-stretch for image resizing. ACM Trans. Graphics (TOG) 2008, 27, 118. [Google Scholar] [CrossRef]
  4. Lu, W.; Wu, M. Seam carving estimation using forensic hash. In Proceedings of the Thirteenth ACM Multimedia Workshop on Multimedia and Security, Buffalo, NY, USA, 29–30 September 2011; pp. 9–14. [Google Scholar]
  5. Sarkar, A.; Nataraj, L.; Manjunath, B.S. Detection of seam carving and localization of seam insertions in digital images. In Proceedings of the 11th ACM Workshop on Multimedia and Security, Princeton, NJ, USA, 7–8 September 2009; pp. 107–116. [Google Scholar]
  6. Fillion, C.; Sharma, G. Detecting content adaptive scaling of images for forensic applications. SPIE Electron. Imaging Int. Soc. Opt. Photonics 2010, 7541, 75410Z. [Google Scholar]
  7. Wei, J.D.; Lin, Y.J.; Wu, Y.J. A patch analysis method to detect seam carved images. Pattern Recognit. Lett. 2014, 36, 100–106. [Google Scholar] [CrossRef]
  8. Ryu, S.-J.; Lee, H.-Y.; Lee, H.-K. Detecting trace of seam carving for forensic analysis. IEICE Trans. Inf. Syst. 2014, 97, 1304–1311. [Google Scholar]
  9. Yin, T.; Yang, G.; Li, L. Detecting seam carving based image resizing using local binary patterns. Comput. Secur. 2015, 55, 130–141. [Google Scholar] [CrossRef]
  10. Zhang, D.Y.; Yin, T.; Yang, G. Detecting image seam carving with low scaling ratio using multiscale spatial and spectral entropies. J. Vis. Commun. Image Represent. 2017, 48, 281–291. [Google Scholar] [CrossRef]
  11. Zhang, D.Y.; Li, Q.; Yang, G. Detection of image seam carving by using weber local descriptor and local binary patterns. J. Inf. Secur. Appl. 2017, 36, 135–144. [Google Scholar] [CrossRef]
  12. Liu, Q. An approach to detecting JPEG down-recompression and seam carving forgery under recompression anti-forensics. Pattern Recognit. 2017, 65, 35–46. [Google Scholar] [CrossRef] [Green Version]
  13. Itti, L.; Koch, C.; Niebur, E. A model of saliency-based visual attention for rapid scene analysis. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 1254–1259. [Google Scholar] [CrossRef] [Green Version]
  14. Dietterich, T.G.; Bakiri, G. Solving multiclass learning problems via error-correcting output codes. J. Artif. Intell. Res. 1995, 2, 263–286. [Google Scholar] [CrossRef]
  15. Hastie, T.; Tibshirani, R. Classification by pairwise coupling. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 1998; pp. 507–513. [Google Scholar]
  16. The Ensemble Learning. Available online: http://dde.binghamton.edu/download/ensemble (accessed on 25 May 2019).
  17. Schaefer, G.; Stich, M. UCID: An uncompressed color image database. Int. Soc. Opt. Photonics 2003, 5307, 472–480. [Google Scholar]
  18. Chen, B.; Coatrieux, G.; Wu, J. Fast computation of sliding discrete Tchebichef moments and its application in duplicated regions detection. IEEE Trans. Signal Process. 2015, 63, 5424–5436. [Google Scholar] [CrossRef]
  19. Zhang, H.; Dai, X.; Sun, P. Symmetric image recognition by Tchebichef moment invariants. In Proceedings of the 2010 IEEE International Conference on Image Processing, Hong Kong, China, 26–29 September 2010; pp. 2273–2276. [Google Scholar]
  20. Li, L.; Zhu, H.; Yang, G. Referenceless measure of blocking artifacts by Tchebichef kernel analysis. IEEE Signal Process. Lett. 2014, 21, 122–125. [Google Scholar] [CrossRef]
  21. Niu, Y.; Liu, F.; Li, X. Image resizing via non-homogeneous warping. Multimed. Tools Appl. 2012, 56, 485–508. [Google Scholar] [CrossRef]
  22. Lin, S.S.; Yeh, I.C.; Lin, C.H. Patch-based image warping for content-aware retargeting. IEEE Trans. Multimed. 2013, 15, 359–368. [Google Scholar] [CrossRef]
  23. Zhang, J.; Lu, C.; Li, X.; Kim, H.J.; Wang, J. A full convolutional network based on DenseNet for remote sensing scene classification. Math. Biosci. Eng. 2019, 16, 3345–3367. [Google Scholar] [CrossRef]
  24. Yu, J.; Zhang, B.; Kuang, Z.; Lin, D.; Fan, J. iPrivacy: Image Privacy Protection by Identifying Sensitive Objects via Deep Multi-Task Learning. IEEE Trans. Inf. Forensics Secur. 2017, 12, 1005–1016. [Google Scholar] [CrossRef]
  25. Tirkolaee, E.B.; Hosseinabadi, A.A.R.; Soltani, M. A Hybrid Genetic Algorithm for Multi-trip Green Capacitated Arc Routing Problem in the Scope of Urban Services. Sustainability 2018, 10, 1366. [Google Scholar] [CrossRef]
  26. Chen, Y.T.; Wang, J.; Chen, X.; Zhu, M.; Yang, K.; Wang, Z.; Xia, R. Single-Image Super-Resolution Algorithm Based on Structural Self-Similarity and Deformation Block Features. IEEE Access 2019, 7, 58791–58801. [Google Scholar] [CrossRef]
  27. Zhang, J.; Jin, X.; Sun, J.; Wang, J.; Sangaiah, A.K. Spatial and semantic convolutional features for robust visual object tracking. Multimed. Tools Appl. 2018. [Google Scholar] [CrossRef]
  28. Zhang, J.; Jin, X.; Sun, J.; Wang, J.; Li, K. Dual model learning combined with multiple feature selection for accurate visual tracking. IEEE Access 2019, 7, 43956–43969. [Google Scholar] [CrossRef]
  29. Yun, S.; Gaobo, Y.; Hongtao, X. Residual domain dictionary learning for compressed sensing video recovery. Multimed. Tools Appl. 2017, 76, 10083–10096. [Google Scholar]
  30. Xiang, L.; Shen, X.; Qin, J.; Hao, W. Discrete Multi-Graph Hashing for Large-scale Visual Search. Neural Process. Lett. 2019, 49, 1055–1069. [Google Scholar] [CrossRef]
  31. Yu, J.; Rui, Y.; Tang, Y.Y.; Tao, D. High-Order Distance-Based Multiview Stochastic Learning in Image Classification. IEEE Trans. Cybern. 2014, 44, 2431–2442. [Google Scholar] [CrossRef]
  32. Li, Y.; Yang, G.; Zhu, Y.; Ding, X.; Gong, R. Probability model-based early Merge mode decision for dependent views in 3D-HEVC. ACM Trans. Multimed. Comput. Commun. Appl. (TOMM) 2018, 14, 8501–8515. [Google Scholar] [CrossRef]
  33. Ding, X.; Yang, G.; Li, R.; Zhang, L.; Li, Y.; Sun, X. Identification of MC-FRUC based on spatial-temporal Markov features of residue signal. IEEE Trans. Circuits Syst. Video Technol. 2018, 28, 1497–1512. [Google Scholar] [CrossRef]
  34. Xia, M.; Yang, G.; Li, L.; Li, R.; Sun, X. Detecting video frame rate up-conversion based on frame-level analysis of average texture variation. Multimed. Tools Appl. 2017, 76, 8399–8421. [Google Scholar] [CrossRef]
  35. He, J.; Yang, G.; Song, J.; Ding, X.; Li, R. Hierarchical prediction-based motion vector refinement for video frame-rate up-conversion. J. Real-Time Image Process. 2018, 1–15. [Google Scholar] [CrossRef]
  36. Tang, D.; Zhou, S.; Yang, W. Random-filtering based sparse representation parallel face recognition. Multimed. Tools Appl. 2019, 78, 1419–1439. [Google Scholar] [CrossRef]
  37. Pan, J.S.; Kong, L.; Sung, T.W.; Tsai, P.W.; Snášel, V. A Clustering Scheme for Wireless Sensor Networks Based on Genetic Algorithm and Dominating Set. J. Internet Technol. 2018, 19, 1111–1118. [Google Scholar]
  38. Meng, Z.; Pan, J.-S.; Tseng, K.-K. PaDE: An enhanced Differential Evolution algorithm with novel control parameter adaptation schemes for numerical optimization. Knowl.-Based Syst. 2019, 168, 80–99. [Google Scholar] [CrossRef]
  39. Nguyen, T.-T.; Pan, J.-S.; Dao, T.-K. An Improved Flower Pollination Algorithm for Optimizing Layouts of Nodes in Wireless Sensor Network. IEEE Access 2019, 7. [Google Scholar] [CrossRef]
  40. Wang, J.; Gao, Y.; Liu, W.; Sangaiah, A.K.; Kim, H.J. An Intelligent Data Gathering Schema with Data Fusion Supported for Mobile Sink in WSNs. Int. J. Distrib. Sens. Netw. 2019, 15. [Google Scholar] [CrossRef]
  41. Wang, J.; Cao, J.; Sherratt, R.S.; Park, J.H. An improved ant colony optimization-based approach with mobile sink for wireless sensor networks. J. Supercomput. 2018, 74, 6633–6645. [Google Scholar] [CrossRef]
  42. Wang, J.; Cao, J.; Ji, S.; Park, J.H. Energy Efficient Cluster-based Dynamic Routes Adjustment Approach for Wireless Sensor Networks with Mobile Sinks. J. Supercomput. 2017, 73, 3277–3290. [Google Scholar] [CrossRef]
  43. Wang, J.; Gao, Y.; Yin, X.; Li, F.; Kim, H.J. An Enhanced PEGASIS Algorithm with Mobile Sink Support for Wireless Sensor Networks. Wirel. Commun. Mob. Comput. 2018, 2018. [Google Scholar] [CrossRef]
  44. Wang, J.; Gao, Y.; Liu, W.; Wu, W.; Lim, S.J. An Asynchronous Clustering and Mobile Data Gathering Schema based on Timer Mechanism in Wireless Sensor Networks. Comput. Mater. Contin. 2019, 58, 711–725. [Google Scholar] [CrossRef]
  45. Pan, J.S.; Lee, C.Y.; Sghaier, A.; Zeghid, M.; Xie, J. Novel Systolization of Subquadratic Space Complexity Multipliers Based on Toeplitz Matrix-Vector Product Approach. IEEE Trans. Very Large Scale Integr. Syst. 2019, 27, 1614–1622. [Google Scholar] [CrossRef]
  46. He, Y.; Xiang, S.; Li, K.; Liu, Y. Region-Based Compressive Networked Storage with Lazy Encoding. IEEE Trans. Parallel Distrib. Syst. 2019, 30, 1390–1402. [Google Scholar]
  47. Pan, J.S.; Kong, L.P.; Sung, T.W.; Tsai, P.W.; Snasel, V. Alpha-Fraction First, Strategy for Hierarchical Wireless Sensor Networks. J. Internet Technol. 2018, 19, 1717–1726. [Google Scholar]
  48. He, S.; Xie, K.; Chen, W.; Zhang, D.; Wen, J. Energy-aware Routing for SWIPT in Multi-hop Energy-constrained Wireless Network. IEEE Access 2018, 6, 17996–18008. [Google Scholar] [CrossRef]
Figure 1. Resized images obtained by different re-targeting methods.
Figure 2. Residual LTM (local Tchebichef moments) diagrams obtained by different re-targeting methods: (a–d) correspond to the residual LTM diagrams of Figure 1a–d, respectively.
Figure 3. A block diagram of our proposed approach.
Figure 4. Tampered images and corresponding residual images obtained by different re-targeting methods: (d–f) correspond to the residual images of (a–c), respectively.
Figure 5. The histogram features of LTM: (a–c) correspond to the LTM histograms of Figure 4d–f, respectively.
Table 1. Comparisons in terms of accuracy for tampered images with single re-targeting methods.

Scaling Ratio (%) | Scale-and-Stretch (%) | Seam Carving (%) | Scaling (%)
10 | 82.51 | 75.26 | 74.33
20 | 88.45 | 91.67 | 87.37
30 | 95.51 | 97.87 | 95.55
40 | 98.95 | 99.77 | 98.87
50 | 99.85 | 100 | 99.81
Table 2. The average detection accuracies of our proposed approach with preprocessing.

Scaling Ratio | Uncompressed (%) | Compressed (%)
10% | 82.04 | 76
20% | 90.00 | 81.13
30% | 93.30 | 84.08
40% | 94.61 | 87.98
50% | 95.94 | 88.34
mixed | 92.04 | 78.33
Table 3. CMOMTUI and CMOMTCI with preprocessing (detection accuracy, %).

Uncompressed | OR | SNS | SC | SL
OR | 95.93 | * | * | *
SNS | * | 90.14 | 11.12 | 3.61
SC | * | * | 83.71 | 6.01
SL | 3.26 | 8.90 | 5.17 | 90.38

Compressed | OR | SNS | SC | SL
OR | 93.45 | * | * | *
SNS | * | 80.69 | 11.75 | 10.49
SC | * | 1.49 | 68.43 | 18.77
SL | 5.56 | 17.82 | 19.82 | 70.73
Table 4. The average detection accuracies of our proposed approach without preprocessing.

Scaling Ratio | Uncompressed (%) | Compressed (%)
10% | 84.04 | 70.93
20% | 91.02 | 74.59
30% | 94.32 | 79.63
40% | 95.41 | 83.18
50% | 96.04 | 85.34
mixed | 93.04 | 76.09
Table 5. CMOMTUI and CMOMTCI without preprocessing (detection accuracy, %).

Uncompressed | OR | SNS | SC | SL
OR | 96.39 | * | * | *
SNS | * | 91.54 | 9.21 | 3.21
SC | * | * | 85.71 | 5.41
SL | 3.60 | 8.40 | 5.08 | 91.38

Compressed | OR | SNS | SC | SL
OR | 87.98 | * | * | *
SNS | * | 77.25 | 16.29 | 12.49
SC | 1.35 | 1.73 | 65.14 | 17.07
SL | 10.67 | 21.02 | 18.57 | 70.73

Zhang, D.; Wang, S.; Wang, J.; Sangaiah, A.K.; Li, F.; Sheng, V.S. Detection of Tampering by Image Resizing Using Local Tchebichef Moments. Appl. Sci. 2019, 9, 3007. https://doi.org/10.3390/app9153007
