Article

Metric for Estimating Congruity between Quantum Images †

1 Electrical Engineering Department, Prince Sattam Bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
2 School of Computer Science and Technology, Changchun University of Science and Technology, Changchun 130022, China
3 School of Automation, Beijing Institute of Technology, Haidian District, Beijing 100081, China
* Author to whom correspondence should be addressed.
† This paper is an extended version of our paper published in the IEEE Science and Information Conference, London, UK, 28–30 July 2015.
Entropy 2016, 18(10), 360; https://doi.org/10.3390/e18100360
Submission received: 20 July 2016 / Revised: 29 September 2016 / Accepted: 29 September 2016 / Published: 9 October 2016
(This article belongs to the Collection Quantum Information)

Abstract:
An enhanced quantum-based image fidelity metric, the QIFM metric, is proposed as a tool to assess the “congruity” between two or more quantum images. The often confounding contrariety that distinguishes between classical and quantum information processing makes the widely accepted peak signal-to-noise ratio (PSNR) ill-suited for use in the quantum computing framework, whereas the prohibitive cost of the probability-based similarity score makes it imprudent for use as an effective image quality metric. Unlike the aforementioned image quality measures, the proposed QIFM metric is calibrated as a pixel difference-based image quality measure that is sensitive to the intricacies inherent to quantum image processing (QIP). As proposed, the QIFM is configured with in-built non-destructive measurement units that preserve the coherence necessary for quantum computation. This design moderates the cost of executing the QIFM in order to estimate congruity between two or more quantum images. A statistical analysis also shows that our proposed QIFM metric has a better correlation with digital expectation of likeness between images than other available quantum image quality measures. Therefore, the QIFM offers a competent substitute for the PSNR as an image quality measure in the quantum computing framework, thereby providing a tool to effectively assess fidelity between images in quantum watermarking, quantum movie aggregation, and other applications in QIP.

1. Introduction

Quantum image processing (QIP), which is an emerging sub-discipline that seeks to extend traditional (i.e., classical or digital) image processing tasks to (or using resources from) the quantum computing realm [1], has attracted a lot of attention from practitioners with divergent backgrounds, notably those from the quantum computing, computer science, and engineering fields [2]. The first published material relating quantum computers to image processing can be traced to the work [3] by Vlasov in 1996, wherein the use of “analogue” quantum computing hardware to describe image analysis was mooted. This was followed by efforts to design quantum algorithms based on unstructured pictures (data sets) in [4] and [5]. Optics-based descriptions for quantum imaging then became very prominent [6], while the idea of a more general quantum signal processing framework was presented in [7]. However, it is the exploratory work by Beach and Lomont [8], in 2003, that is often credited as the bellwether of what has transmuted into today’s description of QIP, albeit with a slightly different acronym, quip. This was closely followed by Venegas-Andraca and Bose’s Qubit Lattice description for quantum images [9]. Years later, Latorre [10] proposed the Real Ket representation to encode quantum images. The third and final representation, completing the trio of representations that are collectively regarded as the pioneers of today’s version of the QIP sub-discipline, is Le et al.’s flexible representation for quantum images (FRQI), which was proposed in 2010 [11] and later revised in 2011 [12].
Since then, the impressive development of the QIP sub-area is widely attributed to the FRQI representation, because most of the QIP literature is focused either on its use in applications such as quantum watermarking [13], quantum movies [14], and image database search [15]; or on the extension and modification of the FRQI representation to encode different types of images [16]. Overall, the objective of QIP is to utilise quantum computing technologies to capture, manipulate, and recover quantum images in different formats and for different purposes. Detailed reviews on QIP can be found in [17,18,19].
Most of the papers cited above are, however, focused on the first two objectives of QIP, i.e., capturing and manipulating the images, while the third (i.e., recovering an image) is scantily covered [20]. Even so, to the best of our knowledge, only [16] provided readers with insights into likely technologies to visualise transformed images resulting from operations on quantum images [17]. Meanwhile, interest in image and video security applications such as watermarking [13,21] and encryption and decryption [22] has continued to increase.
To the best of our knowledge, all the aforementioned applications of QIP utilise the digital peak signal-to-noise ratio (PSNR) image quality measure to benchmark and validate their approaches. This is mainly attributed to the absence of a quantum-based metric equivalent to the PSNR. The closest attempt to quantify the extent to which two or more images are alike is Yan et al.’s probability-based image similarity score technique in [15,22].
Most researchers are content with adopting the classical PSNR image quality measure to assess likeness between two or more quantum images. The quantum mechanical nature of the quantum information carrier, the qubit, however, imposes a sometimes confounding procedure for processing quantum information. Considering this, we argue in this study that these available classical metrics are insufficient and/or ill-suited to effectively quantify the fidelity between two or more quantum images. Therefore, the absence of an effective metric for evaluating congruity between quantum images had remained an unexplored research area within the QIP sub-discipline until now.
In recent work [1], Iliyasu et al. proposed a wholly quantum-based image fidelity metric (or simply QIFM) to assess “likeness” between quantum images. Very much like the more mature classical PSNR image quality measure, the proposed QIFM metric was formulated as a pixel difference-based image quality tool, but with the attributes required for its execution on the quantum computing framework built into it. These two attributes differentiate the QIFM from both the digital PSNR measure and Yan’s probability-based image similarity score [23]. The only resources required to execute the QIFM metric are quantum computing resources (i.e., logic gates) and some useful data retrieved from the classical versions of the reference and test image pair.
In this study, we concentrate on this new metric and propose refinements tailored towards ensuring that only moderate computational requirements are needed for its execution. In addition, we present a more detailed validation of the effectiveness of this new quantum image quality metric by deploying it to estimate congruity between images in an expanded database (in terms of both the size of images and the scope of applications), covering images of varying size, complexity, and application [13].
Throughout our appraisal of the QIFM’s efficiency we will compare its results alongside those obtained using the classical PSNR (P) measure and Yan’s probability-based image similarity score (S); hence, allowing us to utilise an FPS (fidelity, PSNR, and Score) analysis as the basis for validation.
As originally conceived in [1] and validated in this study, the proposed QIFM metric has the potential to replace the PSNR as an image quality measure on the quantum computing framework.
In Section 2, we enunciate the properties of the strip representation used to encode $2^m$-ending FRQI quantum images and adopt it to encode our quantum register containing the reference and test images whose congruity is sought. To complete the background review required for our study, in the latter parts of Section 2, we review Yan et al.’s probability-based image similarity score and offer some motivation as to why it is inadequate for estimating congruity between two (or more) quantum images. Section 3 is devoted to discussions on the enhanced QIFM image quality metric, while Section 4 focuses on the enhanced validation of this new image fidelity metric.

2. Strip Representation and Its Use in the Probability-Based Comparison of Images

The strip representation $|S(m,n)\rangle$ was first formulated in [14] as an array comprising $2^m$ flexible representation for quantum images (FRQI, [12]) images, each of size $2^n \times 2^n$, in the form defined in Equation (1):

$$|S(m,n)\rangle = \frac{1}{2^{m/2}} \sum_{k=0}^{2^m-1} |I_k(n)\rangle \otimes |k\rangle \quad (1)$$

where:

$$|I_k(n)\rangle = \frac{1}{2^n} \sum_{i=0}^{2^{2n}-1} |c_{k,i}\rangle \otimes |i\rangle \quad (2)$$

with:

$$|c_{k,i}\rangle = \cos\theta_{k,i}|0\rangle + \sin\theta_{k,i}|1\rangle \quad (3)$$

$$\theta_{k,i} \in \left[0, \frac{\pi}{2}\right], \quad i = 0, 1, \ldots, 2^{2n}-1, \quad k = 0, 1, \ldots, 2^m-1 \quad (4)$$

where $|k\rangle$ is the position of each image in the strip, $m$ is the number of qubits required to index the $2^m$ images being compared, $|I_k(n)\rangle$ is an FRQI image as defined in Equation (2) at position $|k\rangle$, and $|c_{k,i}\rangle$ and $|i\rangle$ encode the information about the colours and their corresponding positions in the image $|I_k(n)\rangle$. It is trivial to see that the state $|S(m,n)\rangle$ is normalised and capable of encoding $2^m$ quantum images using only $m + 2n + 1$ qubits.
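As a sanity check on the qubit count and normalisation claims above, the strip state can be simulated classically for small images. The following is a minimal NumPy sketch; the function names and the ordering of the colour qubit against the position register are our own illustrative choices, not part of the FRQI/strip definitions:

```python
import numpy as np

def frqi_state(thetas):
    """FRQI state (1/2^n) * sum_i |c_i>|i> for a 2^n x 2^n image,
    given its 2^(2n) colour angles theta_i in [0, pi/2]."""
    num_pix = thetas.size                    # 2^(2n) pixels
    state = np.zeros(2 * num_pix)            # one colour qubit x positions
    state[:num_pix] = np.cos(thetas)         # |0> colour components
    state[num_pix:] = np.sin(thetas)         # |1> colour components
    return state / np.sqrt(num_pix)          # the 1/2^n normalisation

def strip_state(images):
    """|S(m,n)> = 2^(-m/2) * sum_k |I_k(n)> (x) |k> for 2^m FRQI images."""
    M = len(images)                          # 2^m images in the strip
    parts = [np.kron(img, np.eye(M)[k]) for k, img in enumerate(images)]
    return sum(parts) / np.sqrt(M)
```

For a strip of two 2 × 2 images (m = 1, n = 1), the resulting vector has 2^(m+2n+1) = 16 amplitudes and unit norm, matching the qubit count stated above.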
As mentioned earlier, the strip representation was first conceived in [14] where it was utilised as the backbone for the framework to represent and produce quantum movies. Later, it was used to compare similarity between images [23], in database search [15], and in quantum video encryption and decryption [22].
Inspired by the utility of image search and retrieval on digital computers, the studies in [15,19,23] explored the possibility of undertaking similar tasks on quantum computers.
Therein, similarity between two FRQI quantum images $|I_a(n)\rangle$ and $|I_b(n)\rangle$ (as a function of their pixel information $\sigma_{a,b}$) was obtained by traversing every position in the image. In this context, similarity ($Sim$) is defined in Equation (5):

$$Sim(|I_a\rangle, |I_b\rangle) = f\left(\sigma_{a,b}^0, \sigma_{a,b}^1, \ldots, \sigma_{a,b}^{2^{2n}-1}\right) \quad (5)$$

where $Sim(|I_a\rangle, |I_b\rangle) \in [0, 1]$.
Based on the definition in Equation (5), two special cases of the similarity between two quantum images can be deduced as follows:
  • if $\forall i$, $\sigma_{a,b}^i = \pi/2$, then $Sim(|I_a\rangle, |I_b\rangle) = 0$ and the two images are considered totally different (or dissimilar), where $i = 0, 1, \ldots, 2^{2n}-1$ and $\sigma_{a,b}^i$ is the joint pixel information retrieved from the reference and test image pair;
  • if $\forall i$, $\sigma_{a,b}^i = 0$, then $Sim(|I_a\rangle, |I_b\rangle) = 1$ and the two images are considered exact matches.
The circuit structure to compare these two FRQI quantum images (shown in Figure 1) requires a Hadamard gate $H$, which maps the basis states $|0\rangle$ to $(|0\rangle + |1\rangle)/\sqrt{2}$ and $|1\rangle$ to $(|0\rangle - |1\rangle)/\sqrt{2}$, applied on the strip wire, which holds a combination of the reference image $|I_0(n)\rangle$ and the test image $|I_1(n)\rangle$. The final step in the circuit is the measurement operation $M_0$ (always shown with two parallel lines exiting a small rectangular box), which is used to recover the final readout resulting from the operations.
Based on the foregoing, it is obvious that the result of the measurement operation $M_0$ depends on the disparities between $|I_0(n)\rangle$ and $|I_1(n)\rangle$. The states $|0\rangle$ and $|1\rangle$ each occur on the strip wire $S_0$ with a certain probability. In accordance with the measurement postulate used in [23], the probability of the state $|0\rangle$ on this strip wire is:
$$P_{S_0}(|0\rangle) = \frac{1}{(2^{n+1})^2} \sum_{i=0}^{2^{2n}-1} \left[(\cos\theta_{0,i} + \cos\theta_{1,i})^2 + (\sin\theta_{0,i} + \sin\theta_{1,i})^2\right] = \frac{1}{2^{2n+1}} \sum_{i=0}^{2^{2n}-1} \left[1 + \cos(\theta_{0,i} - \theta_{1,i})\right].$$
Therefore:
$$P_{S_0}(|0\rangle) = \frac{1}{2} + \frac{1}{2^{2n+1}} \sum_{i=0}^{2^{2n}-1} \cos\sigma_{0,1}^i \quad (6)$$
In the same manner, the probability of state | 1 on the same wire is:
$$P_{S_0}(|1\rangle) = \frac{1}{2} - \frac{1}{2^{2n+1}} \sum_{i=0}^{2^{2n}-1} \cos\sigma_{0,1}^i \quad (7)$$
The probabilities of these two states sum to 1, i.e., $P_{S_0}(|0\rangle) + P_{S_0}(|1\rangle) = 1$, as they should [22]. It is apparent from Equations (6) and (7) that the pixel difference $\sigma_{0,1}^i$ is related to the probability $P_{S_0}(|1\rangle)$ of obtaining a readout of 1 from the strip wire $S_0$ in the measurement, and that this probability $P_{S_0}(|1\rangle)$ increases when the pixel difference increases.
Furthermore, the similarity between two images, which is a function of the pixel differences at every position, depends on P S 0 ( | 1 ) as given in Equation (8) [23]:
$$Sim(|I_0\rangle, |I_1\rangle) = 1 - 2P_{S_0}(|1\rangle) = \frac{1}{2^{2n}} \sum_{i=0}^{2^{2n}-1} \cos\sigma_{0,1}^i \quad (8)$$
where $|I_0(n)\rangle$ and $|I_1(n)\rangle$ are the two images being compared, $P_{S_0}(|1\rangle)$ is in the form defined in Equation (7), and $Sim(|I_0\rangle, |I_1\rangle) \in [0, 1]$.
The similarity between the states | I 0 ( n ) and | I 1 ( n ) that are encoded in the strip agrees with the definition of similarity between two FRQI quantum images in Equation (5), therefore:
$$f\left(\sigma_{0,1}^0, \sigma_{0,1}^1, \ldots, \sigma_{0,1}^{2^{2n}-1}\right) = \frac{1}{2^{2n}} \sum_{i=0}^{2^{2n}-1} \cos\sigma_{0,1}^i. \quad (9)$$
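The derivation leading from the Hadamard on the strip wire to Equations (6)–(8) can be checked numerically for small images by simulating the two-image strip directly. The sketch below is our own illustration (the function and variable names are not from the source); it builds the state $(|I_0\rangle|0\rangle + |I_1\rangle|1\rangle)/\sqrt{2}$, applies the Hadamard to the strip qubit, and compares the resulting probabilities with the closed forms:

```python
import numpy as np

def frqi(thetas):
    """Flat FRQI vector: cos components on colour |0>, sin on |1>."""
    return np.concatenate([np.cos(thetas), np.sin(thetas)]) / np.sqrt(thetas.size)

def strip_probs(t0, t1):
    """Apply H to the strip qubit of (|I0>|0> + |I1>|1>)/sqrt(2) and
    return the probabilities of reading 0 and 1 on the strip wire."""
    I0, I1 = frqi(t0), frqi(t1)
    plus, minus = (I0 + I1) / 2.0, (I0 - I1) / 2.0   # |0> and |1> branches
    return float(plus @ plus), float(minus @ minus)

rng = np.random.default_rng(0)
t0 = rng.uniform(0, np.pi / 2, 4)            # a 2 x 2 reference image (n = 1)
t1 = rng.uniform(0, np.pi / 2, 4)            # a 2 x 2 test image
p0, p1 = strip_probs(t0, t1)
total = np.cos(t0 - t1).sum()                # sum_i cos(sigma_i)
N = 4                                        # 2^(2n) pixels
assert np.isclose(p0, 0.5 + total / (2 * N))   # Equation (6)
assert np.isclose(p1, 0.5 - total / (2 * N))   # Equation (7)
assert np.isclose(1 - 2 * p1, total / N)       # Equation (8)
```

Identical images make every $\sigma_{0,1}^i = 0$, so $P_{S_0}(|1\rangle) = 0$ and $Sim = 1$, reproducing the second special case above.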
Figure 2 shows the generalised circuit required to compare similarity between multiple FRQI quantum images in a strip.
The circuit shows that, in order to compare the similarity between multiple quantum images, the following are needed: $G_S$ operations on the $m$ strip wires; $m$ 0- or 1-controlled operations (depicted using the half-moon notation that is further explained in Section 3); $G_I$ operations that lead to geometric exchanges between the $2^m$ images encoded in the strip; and 0- or 1-controlled Hadamard operations with $2n$ control conditions, followed by $m$ measurement operations. It should be noted that the measurements $M_0$ to $M_{m-1}$ depicted in Figure 2 are standard measurement operations that lead to the collapse of the quantum state; as such, the images encoded on the strip are lost after each measurement cycle is executed [14].
As highlighted in [14], in the QIP context, complexity is analysed in terms of three parameters: firstly, the cost, which is expressed in terms of the basic quantum gates used to accomplish any transformation on an image; secondly, the width of the circuit, which depends on the size of the image and is usually fixed for any N × N image; and thirdly, the depth of the circuit, which represents the number of layers into which the entire circuit can be efficiently partitioned. In other words, complexity is influenced by the dimension and types of control operations required to execute an operation.
As seen in Figure 2, Yan et al.’s probability-based score method to compare similarity between quantum images could become prohibitively expensive because the depth of the circuit increases with the size of the strip (i.e., the number of images in the multi-image strip). This is more so when the comparison is being confined to a smaller proportion of images relative to the size of the strip or when the comparison is focused on predetermined regions of interest within two or more of the images in the strip.
In addition to the complexity of execution, all the measurement operations in [23] are performed on the strip wire, which, as observed earlier, disrupts the “quantumness” of the system; the resulting readouts lead to a complete collapse of the system and, with it, the loss of the hitherto quantum information encoded in the system. In many instances, such as quantum movie production or database search [15], these losses could be too expensive to bear. The interim solution suggested in Yan et al.’s probability-based approach to estimating similarity between quantum images [23], which requires that multiple copies of the images be prepared, could turn out to be even more prohibitively expensive. Therefore, in general, Yan’s approach [23] is deemed imprudent.
Most QIP literature assumes operability (or execution) based on the circuit-model of quantum computation. In such models, classical versions of the images are used to prepare a compact unit in a quantum register and, following this, in order to execute any type of computation, the qubits in this register are actively manipulated by a network of logical gates [14].
It has been noted, however, that the required control of these registers is very challenging to realise [14]. In addition, the inherent quantum properties, principally superposition and entanglement, make the circuit-model of computation unsuitable for recovering or measuring the content of certain transformed images, such as the movie reader in quantum movies [14,17]. This is because the “quantumness” of a quantum state is lost immediately after it is observed. However, oftentimes, it is also required that the content of the register be retained (or left undisturbed) in spite of the measurement.
Measurement-based quantum computation (MBQC) [14,24] was proposed as an alternative strategy to the circuit-model and it relies on the effects of measurement on an entangled multi-partite resource state to perform the required computation [14,17]. This strategy provides an astute pathway to overcome the perceived shortcomings of the circuit-model of quantum computation and aspects of it have been realised experimentally using single-qubit measurements that are considered “cheap” [14,17,24].
All measurement-based models of quantum computing share the common feature that measurements are not performed on the qubits storing the data, because this would destroy the coherence essential for quantum computation. Rather, ancillary qubits are prepared and made to interact with the information-carrying qubits, so that the measurement operation is performed on the ancillae instead. By carefully choosing appropriate measurements and initial states of the ancillary qubits, the much sought-after coherence can be preserved [24].
Figure 3A shows the notation for a single-qubit projective (destructive) measurement, whereby the quantum information is lost immediately after its content is accessed. Alongside it (i.e., in Figure 3B) is a non-destructive (or ancilla-driven) measurement as used in MBQC, in this particular case the ancilla-driven quantum computation (ADQC) model.
The ADQC [24], a newer type of MBQC that was utilised for the movie reader in [14], exploits the properties of single-qubit projective measurements and the entanglement-based interaction between the ancilla qubit and the register qubit (i.e., the strip state encoding the $2^m$-ending FRQI images). The measurements are performed subject to satisfying some predefined conditions [24].
In the ADQC, the ancilla A is prepared and then entangled with a register qubit (in our case, the single qubit encoding the colour information [14]) using a fixed entanglement operator E. A universal interaction between the ancilla and the register is accomplished using the controlled-Z (CZ) gate and a swap (W) gate, after which the ancilla is measured. An ADQC with such an interaction allows the implementation of any computation or universal state preparation [14,24]. This is then followed by single-qubit corrections on both the ancilla and register qubits. Similar to all the measurement-based models, the standard ADQC uses a fully controlled ancilla qubit, which is coupled sequentially to one or, at most, two qubits of a register via a fixed entanglement operator E. After each coupling, the ancilla is measured in a suitable basis, providing a back action onto the register [24]. This implements both single- and two-qubit operations on the register qubits. The ADQC is also a hybrid of the circuit model, since computation involves a sequence of single- and two-qubit gates implemented on a register.
The description of Figure 3B, as provided via [24], presents the ancilla-driven implementation of a single-qubit rotation, $J_R(\beta)$, on a register qubit R, where the initial state of the total register can be a pure state, $|\psi\rangle$, or, accordingly, a mixed state. The ancilla and register qubits are first coupled with $E_{AR} = H_A \cdot H_R \cdot CZ_{AR}$. The rotation $J(\beta)$ is then implemented on the fully controlled ancilla and transferred to the register qubit by measuring the ancilla in the z-basis. The result of the measurement, $j = 0, 1$, determines whether an X correction appears on the register qubit.
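This ancilla-driven step can be reproduced with a small state-vector simulation. The sketch below is our own illustration under assumed conventions ($CZ$ applied before the two Hadamards in $E_{AR}$, ancilla prepared in $|+\rangle$, and $J(\beta) = H\,R_z(\beta)$); [24] fixes the exact conventions. It checks that, for either measurement outcome $j$, the corrected register state equals $J(\beta)|\psi\rangle$:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
CZ = np.diag([1.0, 1.0, 1.0, -1.0])

def J(beta):
    """J(beta) = H @ Rz(beta), the rotation transferred to the register."""
    return H @ np.diag([1.0, np.exp(1j * beta)])

def adqc_j(psi, beta, outcome):
    """One ADQC step: ancilla |+> coupled to register |psi>, J(beta)
    applied to the ancilla, z-measurement post-selected on `outcome`,
    then the conditional Pauli-X correction on the register."""
    plus = np.array([1.0, 1.0]) / np.sqrt(2)
    state = np.kron(plus, psi)                    # ancilla (x) register
    state = np.kron(H, H) @ (CZ @ state)          # coupling: CZ, then H's
    state = np.kron(J(beta), np.eye(2)) @ state   # rotation on ancilla only
    reg = state.reshape(2, 2)[outcome]            # register branch for |j>
    reg = reg / np.linalg.norm(reg)               # renormalise after projection
    return X @ reg if outcome == 1 else reg       # X correction when j = 1
```

For any input $|\psi\rangle$ and either outcome, the returned state equals $J(\beta)|\psi\rangle$, illustrating how the destructive measurement falls only on the ancilla while the register content survives.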

3. A Measure for Fidelity between Quantum Images

In this section, we present the structure and implementation of the proposed QIFM image quality metric, which, like the movie reader in [14], utilises the ADQC paradigm for its hardware in order to preserve the $2^m$ reference and test images encoded on the strip, thereby eliminating the need to prepare multiple copies of the strip encoding the images, especially when the measurement operations are executed. In this manner, the hitherto unavoidable loss of the information in the quantum register is overcome, since all the destructive measurements are performed on the ancillary qubits, as explained in the description of the ancilla-driven measurement in Figure 3B. The proposed QIFM image quality metric is presented in the remainder of this section; first, however, we highlight, in the next subsection, a few of the motivations behind its conception.

3.1. Shortcomings of the PSNR Image Quality Metric

The peak signal-to-noise ratio (or simply PSNR) is an engineering term for the ratio between the maximum possible strength of a signal and the strength of the corrupting noise that affects the fidelity of its representation [25,26]; it has found widespread applicability as a traditional measure of image quality. Its acceptability is mainly attributed to its simplicity, which has seen it widely used in digital image processing as a pixel difference-based measure [26] to assess the quality of reconstruction by lossy compression codecs (for example, in image compression) and to evaluate, for example, the extent of likeness (or congruity) between a watermarked image and its original (unmarked) version [26]. It has, however, been shown that for some applications, such as when dealing with medical images, the PSNR correlates poorly with subjective quality expectations [26]. Similarly, objective image quality metrics have been shown to outperform the PSNR metric in predicting subjective image quality [26].
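For concreteness, the classical PSNR discussed above reduces to a few lines of code. This is the standard textbook formulation in decibels, not the QIFM:

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Classical peak signal-to-noise ratio in decibels:
    PSNR = 10 * log10(peak^2 / MSE); identical images give +inf."""
    diff = np.asarray(reference, float) - np.asarray(test, float)
    mse = np.mean(diff ** 2)                       # mean squared error
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

Because the score diverges for identical images and saturates for mildly distorted ones, it carries no notion of perceptual or quantum-mechanical structure, which is the gap the QIFM targets.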
Motivated by these shortcomings of the PSNR image quality measure, various modifications to it have been suggested. Among them is the weighted PSNR (WPSNR) metric, which incorporates some parameters of the human visual system (HVS) into the computation and was proposed in [26]. While it proves superior to the PSNR metric, the WPSNR method suffers from limited applicability because it fails to discriminate between the regions of interest and the background regions of the image [26].
To further atone for this shortcoming, Navas et al. [25] proposed the modified WPSNR (or MWPSNR) metric, which they assert outperforms both the PSNR and WPSNR image quality metrics. In a nutshell, from the foregoing, we can deduce that even on the digital computing framework the PSNR often fails to effectively quantify the congruity between two (or more) images.
Motivated by this argument and the established antinomy or contrariety that exists between data processing on the classical (digital) and quantum computing frameworks, we assert that the PSNR metric is unsuited for use as a quantum image quality metric. For lack of an alternative, however, all the available quantum image watermarking schemes [13,21,27] have relied on the classical interpretation of the PSNR to evaluate quantum image quality and/or fidelity [1,17].
In the sequel, we improve on our earlier attempt (in [1]) at formulating an efficient, wholly quantum-based approach to evaluate the fidelity between two quantum images on the quantum computing framework, such as the likeness between a watermarked quantum image and its unmarked version.

3.2. The Enhanced QIFM Metric

An image quality metric is essential in evaluating the results of image processing algorithms [25]. The need for such a metric for the quantum computing paradigm was first mooted in [13,20]. In [1], this suggestion was given impetus by exploring the rudimentary requirements to realise such a metric. However, that effort was limited by an incomplete mathematical formulation and a lack of extensive experimental analysis to establish its performance and utility.
In this study, mindful of the requirements imposed by the quantum mechanical nature of our information carrier, we combine this previous effort with a more rigorous mathematical treatment to realise a framework that facilitates assessment of congruity (or likeness) between two or more quantum images. Following this, in the next section, we present an extensive experimental analysis to validate the utility of our enhanced QIFM fidelity metric.
Among other requirements, a good objective image quality metric should: (1) reflect the distortion in the image very well; (2) exhibit high subjective fidelity (by utilising some HVS attributes); (3) manifest low complexity (in terms of time and computing resources); (4) have widespread applicability (irrespective of image size and quality), etc. [25].
In addition to these requirements, because of the established contrariety pervading the manner in which information is processed on digital and quantum computing devices, we impose an additional requirement that any quantum image fidelity metric should satisfy. This requirement specifies that the least number of unitary transformations (or simply, basic quantum gates), which implies lower computational costs, should be utilised in the computation. The generalised circuit to execute the proposed QIFM operation on a register comprising $2^m$ quantum images is presented in Figure 4.
As seen in this figure, the proposed QIFM consists of two blocks: the upper block (shown inside the red short-dashed rectangular box) is the strip encoding our register of $2^m$ FRQI quantum images, while the lower block (shown inside the blue short-dashed rectangular box) presents the ancillary information that is crucial for the preservation of the coherence needed for quantum computation. The brown layer (with icons that resemble an hour-glass) separating the two blocks depicts the interaction that is envisaged between the quantum register and the ancillary information, which is mostly retrieved from the classical versions of the images. Descriptions of how this interaction is facilitated were highlighted in Section 2, but detailed discussions can be found in [14,24] and the other literature cited therein. We should further add that this interaction provides the coupling needed for the ancilla-driven (non-destructive) measurement operations, as highlighted in Figure 3B and explained in detail in [24].
The half-moon control-condition (◐) shown in Figure 4 is used to depict that a control-condition could be either a 1-control (●) or 0-control (○). First used in [1], the half-moon notation is used, simply for brevity, in order to avoid duplicating the separate use of the ● and ○ control-conditions.
Meanwhile, the flowchart depicting the algorithmic structure of our proposed QIFM metric is presented in Figure 5, and a detailed explanation of its two main units, i.e., the binary check operation (BCO) and the bit error rate operation (BO), is presented in the remaining subsections of this section. Its implementation, in later parts of this study, will, however, be limited to estimating congruity between only two images, a reference image and a test image, as a pair.

3.3. Fidelity Check Operation (FCO)

The peculiar nature of quantum computation imposes certain constraints on how quantum information can be manipulated and its final state recovered [17,24]. To avoid violating these requirements, for example, on the circuit-model, which most QIP literature adopts, the quantum image state is prepared from a classical copy of the same image. After it is prepared and initialised, the quantum image can only be manipulated and recovered using quantum circuit elements and components, such as the basic unitary gates that were highlighted earlier in Section 2. Many copies of the same image are usually prepared, and different circuits are designed to manipulate and recover them as required for an intended application [14,15,16,17,18,19,20], although, as we shall demonstrate later, this is oftentimes an expensive undertaking.
Throughout the rest of this paper, we assume that all images (reference and test images) are encoded as FRQI quantum images using procedures that were highlighted earlier in Section 2 and discussed in detail in [11,12,17,19]. Consequently, the geometric (GTQI) and colour (CTQI) transformations and their restricted variants, which are widely used in FRQI quantum image processing (QIP) to manipulate the spatial and chromatic content of images [17,19], are the main resources employed to manipulate our images.
Therefore, the foregoing enunciates the requirements used to formulate our proposed QIFM fidelity metric as a pixel difference-based metric, whose formulation is tailored towards effective quantification of fidelity between a pair of quantum images in a manner that guarantees correlation with subjective evaluation of the notion of fidelity [1]. It is also designed to minimise the cost, in terms of basic quantum gates, required to execute the operations.
These two yardsticks (i.e., acceptable levels of fidelity and minimised cost of quantum resources) form the main benchmark on which the performance of our proposed metric will be evaluated. Based on this stipulation, we present, in the sequel, the two operations that make up the proposed quantum image fidelity operation to quantify “likeness” between two (or more) images (our QIFM protocol). For simplicity, we shall, henceforth, refer to this operation as the fidelity check operation, FCO.
Relying on the rudimentary requirements of quantum computation, which are already established in the literature (such as [2]), we highlight two algorithms, each producing its own sub-circuit; these combine to produce the FCO operation that facilitates the QIFM assessment, as presented in Figure 4 and Figure 5.

3.3.1. Binary Check Operation

The binary check operation is so named because it is implemented on the binary content of a pair of quantum images comprising the reference image ($I_r$) and the test image ($I_T$). The objective of this operation is to probe the extent of “likeness” between the two images in the pair. It quantifies the pixel-wise amount of exactness (i.e., concordance) between corresponding pixels in a reference and test image [1].
The BCO operation is executed using three simple steps that are enumerated below:
Binary check algorithm:
  • Step 1: Utilising a thresholding technique similar to the ones presented in [13,20], after the content of the quantum images has been prepared and initialised, the images can be transformed into their binary versions based on a pixel threshold that assigns a value of zero or one to a pixel with intensity p when 0 ≤ p ≤ 127 or 128 ≤ p ≤ 255, respectively. This is utilised to convert the content of both the reference and test image in the jth (reference and test image) pair into their binary versions.
  • Step 2: Following the transformation of the reference and test images into their binary versions, we extract some detail about the images, albeit separately, and then use them to obtain the binary detail of each jth pairing of the reference and test images.
To do this, first, we compute the number of white (0) and black (1) pixels in the reference and test images, which are denoted n_r^b and n_r^w for the reference image and n_T^b and n_T^w for the test image.
Finally, the binary detail in the jth pairing between a reference image I r and test image I T is evaluated using Equation (10):
$$\Gamma_j = \left| ID_r - ID_T \right|$$
where ID(r or T) for an N = n × n pixel image is defined in Equation (11):
I D ( r     o r     T ) = { ( n b ( r     o r   T ) n w ( r     o r   T ) ) N ;     if   n b ( r     o r     T ) n w ( r     o r   T ) ( n b ( r     o r     T ) n w ( r     o r     T ) ) N + 1 ;       otherwise .
  • Step 3: In the final step of the BCO algorithm, we count the number of pixel correspondences (Dj) in the jth reference and test image pair (i.e., the number of pixels xiyi in Ir that correspond with pixels xiyi in IT). By correspondence, we mean the concordance that arises when corresponding pixels have the same binary value (0 or 1, as the case may be).
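Since the QIFM is validated through classical simulation later in the paper, the three BCO steps above can be mirrored classically. The sketch below is illustrative only: the function names are ours, images are plain 2-D lists of 8-bit greyscale values, and we assume the reading Γj = |IDr − IDT| of Equation (10):

```python
# Classical sketch of the three BCO steps. The quantum circuit in Figure 6
# realises these counts via ancilla-driven measurements; here they are
# computed directly for illustration.

def binarize(img, threshold=127):
    """Step 1: map each greyscale pixel p to 0 (p <= threshold) or 1."""
    return [[0 if p <= threshold else 1 for p in row] for row in img]

def image_detail(binary_img):
    """Step 2: Equation (11) -- the binary detail ID of one binary image."""
    n_b = sum(p for row in binary_img for p in row)   # black pixels (1s)
    N = len(binary_img) * len(binary_img[0])
    n_w = N - n_b                                     # white pixels (0s)
    d = (n_b - n_w) / N
    return d if n_b >= n_w else d + 1

def bco(ref, test):
    """Steps 1-3: return (Gamma_j, D_j) for one reference/test pair."""
    br, bt = binarize(ref), binarize(test)
    gamma = abs(image_detail(br) - image_detail(bt))  # assumed Equation (10)
    # D_j: count of concordant (equal-valued) binary pixels
    d = sum(pr == pt for rr, rt in zip(br, bt) for pr, pt in zip(rr, rt))
    return gamma, d
```

For example, `bco([[0, 200], [50, 255]], [[0, 200], [200, 255]])` returns `(0.5, 3)` for this toy pair: a binary-detail difference Γj of 0.5 and three concordant pixels.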
On the FRQI QIP framework, the BCO can be executed using the circuit in Figure 6. In this sub-circuit, the strip, S, is used to encode multiple FRQI images, each of which requires 2n qubits to encode its spatial content and one additional qubit to capture its chromatic content.
The binary detail, Γj, and pair correspondence, Dj, between the reference and test image in the jth pair are computed using the readout from the (ancilla-driven) measurement layer of the BCO sub-circuit (shown in the short-dashed rectangles in Figure 6).
As is easily deducible from this BCO sub-circuit, the BCO operation requires m + 2n + 1 qubits to encode the jth pair of reference and test images, where m is the number of qubits required to index a strip of 2^m images, each of them an FRQI state.
Similarly, a total of m + 2^{2n} + 2s controlled (m + (n − s)) transformations are required to retrieve the information emanating from the BCO operation (i.e., the binary detail (Γj) and pair correspondence (Dj) in the jth pair) [1]. Here, s is the number of control-condition operations required to restrict the BCO operation to any ROI within the reference and test image in the jth pair whose fidelity measure is sought. This parameter also determines the extra cost required to confine the QIFM evaluation to an ROI within the reference and test image pair.
As mentioned earlier, the half-moon control-condition (◐) shown in Figure 6 is used to depict that a control-condition could be either a 1-control (●) or 0-control (○).

3.3.2. Bit Error Rate Operation (BO)

In addition to domiciling the assessment of fidelity between two images in the quantum computing domain, our proposed QIFM metric, being a pixel difference-based image quality measure, must be capable of computing the difference in the chromatic (or, strictly, greyscale) values of the content of the test and reference image pair whose fidelity is being measured [1]. To do this, we utilise the bit error rate (BER) measure that is widely used in other engineering and computer science domains. In order to execute this, we adduce the need for a BER transformation sub-circuit (which we shall refer to as the BER operation or simply the BO operation), which, as shown in Figure 4, is executed on ancillary information about the 2^n × 2^n pixels encoding the reference and test images.
Similar to other formulations in FRQI QIP [13,14,20], we assume that this ancillary information is obtained from the classical versions of the reference and test image pair. The process of interacting the ancillary content with a separately prepared quantum register without disrupting the "quantumness" of the overall setup was highlighted in Figure 3B and is well documented in the literature, notably in [14,17,24].
Relying on this formalism, we further assume that the ancillary information about the reference and test image pair can be safely interacted with the content of the strip in Figure 6. For brevity, however, only the ancillary information (i.e., without the interaction with the strip that was shown earlier in Figure 4) is presented in Figure 7.
The different colours used in the figure denote the different layers of the ancillary control-conditions and serve only to separate the ancillary information about the reference image from that of the test image.
Similar to its use earlier in this section, the half-moon control-condition (◐) depicts that a control–condition could be either a 1-control (●) or 0-control (○) as determined by the binary value of each pixel in the reference and test image pair, b.
The readout shown in the short-dashed rectangle in Figure 7 provides the (0 or 1) output required to compute the BERj for the corresponding ith pixel in the jth pair of the reference and test images as defined in Equation (12):
$$B_j = \frac{BER_j}{8N}$$
where B_j computes the total pixel-wise variation (i.e., discordance) in the jth pair of images, each comprising N = 2^n × 2^n pixels.
As seen from the BO sub-circuit in Figure 7, the BO operation requires a total of 9 qubits to encode, and m × 2^{n−s} controlled (8 + m + (n − s))-condition measurements are required to confine the operation to any predetermined 2^s × 2^s ROI in a strip consisting of 2^m FRQI images, each of size N = 2^n × 2^n [1].
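For intuition, the discordance recovered by the BO sub-circuit can be mimicked classically: BERj counts the differing bits between the 8-bit greyscale values of corresponding pixels, and Equation (12) normalises this count by 8N. A minimal sketch (our function names; integer pixel values in [0, 255] assumed):

```python
# Classical sketch of the BO step: bit-level discordance between two images.

def bit_error_rate(ref, test):
    """BER_j: total number of differing bits over all corresponding pixels."""
    return sum(bin(pr ^ pt).count("1")           # XOR exposes differing bits
               for rr, rt in zip(ref, test)
               for pr, pt in zip(rr, rt))

def b_j(ref, test):
    """Equation (12): B_j = BER_j / (8N), the normalised discordance in [0, 1]."""
    n_pixels = len(ref) * len(ref[0])
    return bit_error_rate(ref, test) / (8 * n_pixels)
```

For a single pixel flipped from 0 to 255 all 8 bits differ, so a one-pixel image pair of (0, 255) yields B_j = 1.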
By combining the two sub-circuits that were presented earlier (i.e., the BCO sub-circuit in Figure 6 and the BO sub-circuit in Figure 7) into the generalised circuit in Figure 4, we realise our QIFM circuit, which performs the fidelity check operation (FCO) needed to assess the fidelity between two images.
Utilising the different measurements in the operation, we quantify the fidelity between the reference and test images in the jth pair of images using Equation (13):
$$F_j = D_j + (1 - B_j) \times \Gamma_j$$
Equation (13) can be expressed in the form of a percentage as presented in Equation (14):
$$F_j\% = \frac{F_j}{N} \times 100$$
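Combining the readouts defined above, Equations (13) and (14) reduce to simple arithmetic. The sketch below assumes our reading of Equation (13), Fj = Dj + (1 − Bj) × Γj, with Dj, Bj and Γj obtained as in the preceding sub-circuits:

```python
def fidelity_percent(d_j, b_j, gamma_j, n_pixels):
    """Equations (13)-(14): fidelity of the jth pair as a percentage.

    d_j     -- pixel concordance count from the BCO (Step 3)
    b_j     -- normalised bit-level discordance from the BO (Equation (12))
    gamma_j -- binary detail difference from the BCO (Equation (10))
    """
    f_j = d_j + (1 - b_j) * gamma_j   # assumed reading of Equation (13)
    return 100.0 * f_j / n_pixels     # Equation (14)
```

For instance, with D_j = 3, B_j = 0.5 and Γ_j = 0.5 over N = 4 pixels, the pair scores 81.25% fidelity.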
In order to validate the utility of the proposed QIFM technique as an effective tool to assess the fidelity between FRQI quantum images, we applied the framework described in this study to a dataset of popularly used real images. The results and discussions are presented in the next section.

4. Experimental Validation of the QIFM Metric

In the absence of physical quantum computing hardware, the use of Matlab-based classical simulations of quantum images and of the circuitry to manipulate their content has become commonplace in QIP since its early use in [11,12,13,14]. The experiments reported in this section are based on a similar approach, whereby a classical computer with an Intel(R) Xeon(R) CPU E3-1225 (3.20 GHz, 4.00 GB RAM) running Matlab R2016a was used to perform the simulation-based experiments. Readers are referred to [11,12,13,14,15,16,17,18,19,20,21,22,23] for detailed descriptions of how FRQI-based QIP simulations are carried out.

4.1. FPS Analysis of Congruity between Quantum Images

In the introductory sections of this study we promised readers an analysis that compares congruity between two or more FRQI quantum images in terms of three metrics—our proposed QIFM fidelity metric (F), the well-established classical PSNR (P) image quality measure, and Yan’s probability-based image similarity score (S) [15]. The remainder of this section is devoted to this F, P and S or simply FPS analysis.
In presenting the FPS analysis, we adopt simulation-based techniques similar to those in [11,12,13,14,15,16,17,18,19,20,21,22,23] in order to simulate a dataset comprising 16 images, each of size 512 × 512 (presented in Figure 8 below), that are widely used in the image processing literature. Similar simulation-based approaches were used to simulate the circuitry that executes the BCO and BO sub-circuits, which coalesce into the FCO operation. Using this layout, the efficacy of our proposed QIFM metric as an acceptable measure to assess fidelity between two or more FRQI quantum images can be validated, as presented and discussed in the sequel.
Using the labels assigned to the images in our database (Figure 8), a notation r-T is used to indicate a jth pair consisting of a reference image, r, and its test image, T. It is assumed that a strip (defined in Section 2) consisting of the 16 images in our database has been captured, prepared and initialised as required in FRQI QIP [17,19]. For brevity, however, the images in our database are presented neither as a vertically nor as a horizontally oriented strip [9]. Instead, they are presented as a stack of images labelled from A to P (in Figure 8).
Next, the 16 images that make up the strip are randomly paired into twenty-three (23) pairs each consisting of a reference and test image. The “likeness” between these pairs was evaluated in terms of the fidelity, Fr,T; the PSNR, Pr,T; and the Similarity, Sr,T, between the pairing of each reference image (r) and a test image (T). These results are summarised in Table 1 while the detailed FPS analysis is presented in the sequel.
The peak-signal-to-noise-ratio, PSNR (P), is widely adopted as an image reconstruction and quality measure in digital image processing and, more recently, in QIP. It is most easily defined via the mean squared error (MSE) which, for two m × n monochrome images, an original (input) image I and its watermarked version K, is the average of the squared pixel-wise differences between I and K; the PSNR is then defined in Equation (15):
$$P = 20 \log_{10}\left[\frac{MAX_I}{\sqrt{MSE}}\right]$$
Here, MAX_I is the maximum possible pixel value of the image [13,17,25,28].
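For reference, the classical PSNR of Equation (15) follows directly from the MSE; a minimal sketch for 8-bit greyscale images stored as 2-D lists (MAX_I = 255):

```python
import math

def psnr(original, distorted, max_val=255.0):
    """Equation (15): P = 20*log10(MAX_I / sqrt(MSE)) for equal-sized images."""
    m, n = len(original), len(original[0])
    mse = sum((i - k) ** 2
              for ri, rk in zip(original, distorted)
              for i, k in zip(ri, rk)) / (m * n)
    if mse == 0:                       # identical images: PSNR diverges
        return float("inf")
    return 20 * math.log10(max_val / math.sqrt(mse))
```

Note that the PSNR is unbounded: it diverges to infinity for identical images, which is one source of the scale mismatch with the normalised quantum metrics discussed below.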

4.2. QIFM (F) versus Yan’s Similarity Score (S)

As seen in Table 1, Yan et al.’s probability-based image similarity score, which was defined in [15,24] and highlighted in Section 2, produces results that are at best exiguous because the similarity scores for the 23 pairs (reported in Table 1) vary within a narrow band in the range from 0.8 to 1.0.
This narrow range does not discernibly convey similarity, even less so when two or more pairs are assessed. Actually, as defined in [15,23], this image similarity score produces a minimum value of 0.5 for any random pairing of a reference and test image. In other words, using the definition in [15,23] and Equation (8), it is guaranteed that even the most dissimilar pairing of a reference and test image will have a 50% similarity score, which appears very lopsided.
In contrast, our proposed QIFM fidelity metric has a smoother (i.e., wider-range) interval that is well spaced out in a manner that realistically conveys congruity between each of the reference and test images that were paired.
In order to widen the interval and expand the range of coverage of Yan et al.'s image similarity score (S), we recalibrated the original formulation of the similarity score, which was defined in Equation (8) (and the discussions in [23]), so that a wider range, normalised between zero and one (i.e., the interval [0,1]), is obtained. We achieve this recalibration using Equation (16):
$$SIM = 1 - 10 \times P_{S_{0|1}}$$
where $P_{S_{0|1}}$, the probability of obtaining the state $|1\rangle$ after measurement is made, retains its form as defined in Equation (7). The coefficient of ten (10) is included to ensure that a normalised SIM value is obtained for two or more images without significantly altering the requirements needed to realise the original results using Equation (8).
Consequently, the values in the sixth column of Table 1 are the revised similarity values and, in order to distinguish it from the original notation for similarity (as defined in Equation (8)), we call this new measure the SIM score between the pair r and T.
Using this recalibrated formulation of similarity, a wider range of values that more graphically convey the notion of congruity is obtained. This adjustment in the similarity scores (S) better mimics the explicitness expected in descriptions of congruity between different pairings of images in Figure 8.
One example that clearly shows the importance of this revised formulation of Yan's probability-based image similarity score is the House-Bridge pairing in pair L-O. A simple visual inspection reveals that these two images are glaringly discrepant. However, Yan's similarity score (S) attributes a high similarity score of 0.918 to this pair, whereas on the recalibrated similarity (SIM) scale it scores only 0.59.
Similar adjustments can be seen in the other reference and test image pairs, notably the Trees-House pair (i.e., pair P-L), which reduces from 0.843 using Yan's image similarity score to only 0.21 on the revised SIM scale. All these adjustments suggest that the recalibrated SIM index is a better similarity scale than Yan's original similarity score (S).
However, notwithstanding the aforementioned calibration of Yan's original similarity score, the revised similarity score (i.e., the SIM value) attributes a similarity of 0.52 to the Lena-Inverted Lena reference and test image pair (i.e., pair A-B), which correlates poorly with explicit lexical descriptions of congruity (or likeness) between the two images. In contrast, manifesting itself as a better measure of congruity, the proposed QIFM metric attributes 62% fidelity to the pair, which is a better explicit lexical description of the likeness between that pair.
Similar shortcomings of both the original formulation of Yan’s similarity score (S) and its recalibrated wider range, normalised version—the SIM score—are clearly seen in the Couple–Trees pair (i.e., pair J–P), which has high similarity scores of SJ,P = 0.9 and SIMJ,P = 0.7. These values seem illogical since a mere visual inspection reveals that the two images are clearly dissimilar.
In contrast, a fidelity value of FJ,P = 52% is obtained using the QIFM metric, which clearly infers less congruity (likeness) between the two images. Additionally, a 52% fidelity closely matches the expectations of an explicit lexical description of the congruity (or likeness) between these two images (i.e., the Couple and Trees images in Figure 8).
As noted earlier in Section 2, the most pertinent shortcoming of Yan’s similarity score (S) is that it has high resource demands because its recovery (measurement) procedure is built entirely on the use of destructive measurements, which are carried out in each instance that the similarity between two images is assessed.
To clearly illustrate this high cost, let us assume that the measurements to quantify the similarity of the images in our strip are carried out in the order indicated by the serial numbers in Table 1 (i.e., S/No. in the first column)—starting with the first pair (i.e., pair A–A in S/No. 1) and terminating with the Bridge–Aeroplane pair (i.e., pair O–K in S/No. 23)—it is instructive to note that the entire content of the strip will be lost upon the first measurement, i.e., after recovering the similarity between the first pair (i.e., SA,A in S/No. 1). In other words, to execute the experiments narrated in this section (and quantify the similarity scores for all the 23 pairings of the reference and test images in Table 1) we need to prepare 23 versions of the same strip, i.e., one to recover the similarity of each reference and test image pair.
In cost terms, executing Yan's image similarity score requires a total of Cs = cn × 2^m H resources. Here, m is the number of qubits required to index the images that make up the strip; c is the number of pairs whose similarity is being measured; and H accounts for the cost of the resources needed to prepare and initialise the 2^m-image-long strip, each image of size 2^n × 2^n, and to execute the same number of measurements, i.e., 2^m measurement operations, to obtain the similarity between the reference and test images in each pair.
To illustrate this cost in terms of our experiments, quantifying Yan et al.'s probability-based similarity score (S) for the 16-image-long strip requires a total cost of Cs = 2^{38} × 23H. Here, we should further clarify that H represents the cost associated with the peculiar needs of the technology (i.e., whether ion-trap, photonic, or other hardware is) used to prepare and initialise the 16-image-long strip [1].
In contrast to this, the proposed QIFM fidelity metric requires only 362n basic quantum gates and 26m Toffoli gates to implement. Additionally, the entire framework requires m + 1 + 2n qubits and m × 2^{2(n−s)+4} ancillary measurement qubits in order to evaluate the fidelity of any predetermined 2^s × 2^s ROI in a strip consisting of 2^m FRQI images, each of size N = 2^n × 2^n (as discussed earlier in this section).
It also merits emphasis that, in the quantum sense, preparing and maintaining a state in a format that supports meaningful computation is a daunting task. Moreover, it has been suggested that the cost attributed to producing multiple versions of a strip far outweighs the cost of the quantum computing resources required to manipulate and transform a quantum image [14,16,17,29]. Assigning a notation e to this cost (of manipulating a quantum image), it then follows that Cs >> e.
Considering the QIFM metric, our experimental setup, which consists of a 16-image-long strip each of size 256 × 256 (i.e., m = 3 and n = 8), comes at a much cheaper cost of approximately 2974 × H, where e = 362n + 26m. This cost is far less than that attributed to the execution of Yan's similarity score (Cs), which suggests that the QIFM fidelity metric is a cheaper and more efficient tool to assess congruity between FRQI quantum images.
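The quoted figure can be checked directly from the stated gate-count model, e = 362n + 26m:

```python
def qifm_gate_cost(n, m):
    """Gate-count model quoted in the text: 362n basic gates plus 26m Toffoli gates."""
    return 362 * n + 26 * m
```

For n = 8 and m = 3 this gives 362 × 8 + 26 × 3 = 2896 + 78 = 2974, matching the approximate 2974 × H cost quoted above.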

4.3. PSNR versus Quantum Image Quality Metrics (QIFM (F), Similarity Score (S) and SIM Values)

The antinomy that pervades information processing in the digital and quantum realms imposes a different set of rules for quantum state preparation, transformation and recovery [2,29].
In QIP, once an image is prepared its transformation is only possible via a set of unitary operations as dictated by the algorithmic framework generating the required image application or task [14,16,17]. The final step in QIP is quantum state recovery, which is perhaps the most delicate undertaking since, as noted in earlier sections of this paper, the content encoded in the image register could be lost if it is disturbed during any uncontrolled measurement. In addition to the possible collapse of the quantum state, such uncontrolled measurements have inescapable error terms integrated into them.
Without the necessary refinement, using the digital PSNR image quality metric to assess congruity between two quantum images is both impracticable and non-executable because its present formulation is mismatched with the procedures of quantum information processing. This makes the PSNR image quality measure an ill-suited and irrational choice for use in QIP.
Notwithstanding this, since, in QIP, classical (digital) copies of images are used as references to prepare and initialise their quantum versions [11,12,16,17,18,19], the PSNR obtained from measuring the likeness between a classical cover and test image pair can serve as a reference or benchmark to assess fidelity between the quantum versions of the same pair. Thence, a bivariate statistical analysis could be utilised to establish the strength of association, or correlation, between the quantum metrics (i.e., the QIFM (F), Yan et al.'s probability-based score (S) and the recalibrated similarity (SIM) values) assessing the likeness between a cover and test image pair and their digital counterpart, i.e., the PSNR obtained for the same pair of images.
In trying to establish this relationship, however, we must be mindful of the conflict between the measurement scales used by the PSNR and by the trio of quantum image quality metrics in our FPS analysis. While the quantum image quality metrics are measured on a normalised linear scale bounded in the range 0 to 1, the PSNR is measured on an unbounded logarithmic scale. Therefore, we must find some intermediate yet impartial platform for our correlation analysis.
As defined in Equation (15), the PSNR varies inversely with the mean square error (MSE), which is defined as the average of the squared deviations of a prediction from its true values; the two are therefore equivalent measures once mapped onto appropriately normalised linear scales. Moreover, since the MSE can be normalised and reversed (rNMSE), as presented in Equation (17) [30], the rNMSE, as an equivalent of the PSNR, provides an impartial platform for comparison with the other quantum image quality metrics:
$$rNMSE = 1 - \frac{\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}\left(I_{i,j} - K_{i,j}\right)^{2}}{\sum_{i=0}^{m-1}\sum_{j=0}^{n-1} I_{i,j}^{2}}$$
where rNMSE is the reversed NMSE, which fits in the interval [0, 1]; in terms of quantitative inference, a measurement of rNMSE = 1 means that the two images being compared are exact matches. Using Equation (17), rNMSE measurements for the cover and test image pairs in the first column of Table 1 are presented in the rightmost (i.e., the last) column of the same table.
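The rNMSE of Equation (17) is straightforward to compute classically; a minimal sketch (our function name; equal-sized 2-D lists of greyscale values assumed):

```python
def rnmse(original, distorted):
    """Equation (17): reversed normalised MSE; 1 means an exact match."""
    num = sum((i - k) ** 2
              for ri, rk in zip(original, distorted)
              for i, k in zip(ri, rk))          # sum of squared deviations
    den = sum(i ** 2 for row in original for i in row)  # energy of original
    return 1 - num / den
```

The value is 1.0 for an exact match and decreases as the images diverge, which places it on the same bounded linear scale as the quantum metrics.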
Having provided a more practical equivalent for the PSNR values, we have an impartial platform for the analysis between the digital image quality measures and their quantum counterparts. We now present a statistical analysis of the degree of association between the rNMSE (r) on one side and the trio of the QIFM (F), Yan et al.'s probability-based score (S) and its recalibrated version (SIM) on the other. For this task, we employ the widely used statistical correlation tests, i.e., the Spearman, Pearson and Kendall rank correlation coefficients, or simply the SPK correlation test. Details of these tests can be found in standard statistics texts and other available literature such as [28]. Table 2 presents the results of our SPK correlation test to ascertain the degree of association between the rNMSE digital image quality measure and the three quantum image quality metrics (i.e., F, S and SIM).
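For transparency, the three SPK coefficients can be sketched in a few lines each. These are simplified implementations (no tie handling) that suffice for lists of distinct metric values; in practice a statistical package would be used:

```python
# Simplified SPK correlation coefficients for two equal-length lists of
# distinct values, e.g., a column of rNMSE values against a column of F values.

def pearson(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman rank correlation: Pearson on the rank-transformed data."""
    rank = lambda v: [sorted(v).index(a) for a in v]
    return pearson(rank(x), rank(y))

def kendall(x, y):
    """Kendall tau: (concordant - discordant pairs) / total pairs."""
    n = len(x)
    s = sum(1 if (x[i] - x[j]) * (y[i] - y[j]) > 0 else -1
            for i in range(n) for j in range(i + 1, n))
    return s / (n * (n - 1) / 2)
```

Each coefficient lies in [−1, 1], with values near 1 indicating that the quantum metric ranks the image pairs in the same order as the classical benchmark.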
In interpreting the results in Table 2, we see that the proposed QIFM metric (F) has a better correlation with the rNMSE than any of the other quantum image metrics (i.e., Yan et al.'s probability-based score (S) and the SIM values) for each of the correlation indices employed in the SPK test. In other words, using the classical versions of the quantum test and cover images in Figure 8 (as paired in Table 1) as the benchmark, the assessment of congruity based on the QIFM fidelity metric (F) deviates least from the classical assessment of congruity between the test and cover images in each pair, compared with both Yan's probability score (S) and its recalibrated version (SIM). Table 2 therefore allows us to deduce that, in spite of the quantum mechanical requirements for its execution, the QIFM metric yields the best correlation with expectations from classical (digital) image quality measures.
In concluding our assessment using the PSNR, we revisit the lack of explicit lexicality in descriptions of congruity (or likeness) between images based on the PSNR as measure of image quality. To demonstrate this, we considered descriptions of congruity between the watermarked versions of the Lena and Blonde Lady images in Figure 9D,E and their pristine versions in Figure 9B,C (which were originally presented in Figure 8A,C). Using the WaQI quantum image watermarking protocol [13,21,27], we paired each of the two images with the PSAU logo in Figure 9A as watermark signal in order to obtain the watermarked versions of Lena and Blonde Lady images (i.e., in Figure 9D,E). The PSNR values and fidelity scores conveying the notion of congruity (or likeness) between the original and watermarked versions of the two images are also presented in Figure 9.
As seen from the results, at a 97% fidelity score, the QIFM fidelity metric offers better explicitness in describing congruity (or likeness) between the original and watermarked versions of the images. From this description, it is trivial to deduce that the watermarked and original versions of the two images are largely the same, i.e., that they have high fidelity. On the contrary, the PSNR values of 31.59 and 32.33 dB (for the Lena and Blonde Lady images, respectively) are ambiguous as descriptions of congruity between the original and watermarked images. Therefore, the quantum image metrics offer better lexical explicitness in their descriptions of congruity between two or more images than the PSNR in its standard form.

5. Concluding Remarks

While remarkable progress has been made in the QIP sub-discipline over the last few years, it still relies on classical techniques, notations and formulations, notably the PSNR image quality measure, to evaluate fidelity between quantum images. Incidentally, recent studies have shown that the PSNR is an unreliable image quality metric even in some digital applications.
Contending that such a metric does not conform with the intricacies inherent to quantum computing and QIP technologies, in this study, we have appraised the need for, and formulated a wholly quantum-based metric, the quantum image fidelity metric (QIFM), which could be used to efficiently assess the congruity between two or more quantum images.
The proposed QIFM metric is designed as a pixel difference-based image quality measure that can be effectively implemented using available QIP representations for quantum images, while only basic quantum computing resources and some useful data collated from classical versions of the reference and test image pairs are required to manipulate and assess the fidelity between the images.
We simulated the implementation of our image quality metric (using classical computing resources) on a dataset of popularly used real images, and our results suggest that, in comparison with the traditional PSNR image quality measure, the QIFM metric is a better tool to convey the notion of congruity.
Since it is conceived for use on the quantum computing framework, the QIFM metric is equipped to withstand the intricacies of such hardware, whilst the digital PSNR metric is ill–suited for such hardware implementation. Similarly, in comparison with the only known quantum image similarity technique (i.e., Yan et al.’s probability-based score) [15,23], the ancilla-driven attribute of our proposed QIFM makes it a more efficient and prudent image quality measurement tool. Additionally, by using a statistical analysis we established that the proposed QIFM metric had a better correlation with digital image quality assessments of congruity than the other quantum image quality measures.
Besides being a wholly quantum-based measure, the proposed QIFM image quality metric requires only 362n basic quantum gates and 26m Toffoli gates to implement. Additionally, the entire framework requires m + 1 + 2n qubits and m × 2^{2(n−s)+4} ancillary measurement qubits in order to evaluate the fidelity of any predetermined 2^s × 2^s ROI in a strip consisting of 2^m FRQI images, each of size N = 2^n × 2^n. The formulation of the QIFM metric is, in itself, the first step in ensuring that applications sensitive to the peculiarities of quantum computation are formulated for effective quantum image processing (QIP).
In on-going work, we are assessing how to further reduce the overall cost of the proposed metric both in terms of its encoding (based on photonic quantum computing hardware implementation) and in terms of its overall execution.
In later work, we will utilise the resulting improved version of the QIFM metric to better assess the fidelity between different pairs of quantum images, such as those realised when a quantum image is watermarked, make-up frames required for quantum movie aggregation, etc. [15,22,23].
The present formulation of the QIFM metric and its improved version, when realised, have the potential to improve the accuracy of quantum image quality assessment and of QIP applications in general.

Acknowledgments

The authors are grateful to Abubakar M. Iliyasu for his effort in typing the initial draft of this paper and Yiming Guo for his contributions with some aspects of the experiments reported in the study. This work is sponsored, in full, by the Prince Sattam Bin Abdulaziz University, Saudi Arabia via the Deanship for Scientific Research Project Number 2015/01/3983.

Author Contributions

Abdullah M. Iliyasu conceived and designed the study; Abdullah M. Iliyasu and Fei Yan performed the experiments; Abdullah M. Iliyasu and Kaoru Hirota analysed the data; Abdullah M. Iliyasu wrote the paper, while Abdullah M. Iliyasu and Fei Yan revised it in its final format. All authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Iliyasu, A.M.; Yan, F.; Abuhasel, K.A. A quantum-based image fidelity metric. In Proceedings of the Science and Information Conference, London, UK, 28–30 July 2015; pp. 664–671.
  2. Nielsen, M.A.; Chuang, I.L. Quantum Computation and Quantum Information, 5th ed.; Cambridge University Press: New York, NY, USA, 2000. [Google Scholar]
  3. Vlasov, A.Y. Quantum Computations and Images Recognition. 1997; arXiv:quant-ph/9703010. [Google Scholar]
  4. Rastegari, M. Quantum Approach to Image Processing. Available online: http://www.umiacs.umd.edu/~mrastega/paper/final3.pdf (accessed on 30 September 2016).
  5. Schützhold, R. Pattern recognition on a quantum computer. Phys. Rev. A 2002, 67, 062311. [Google Scholar] [CrossRef]
  6. Eldar, Y.C.; Oppenheim, A.V. Quantum signal processing. IEEE Signal Process. Mag. 2002, 19, 12–32. [Google Scholar] [CrossRef]
  7. Lugiato, L.A.; Gatti, A.; Brambilla, E. Quantum Imaging. J. Opt. B 2002, 4, S176–S183. [Google Scholar] [CrossRef]
  8. Beach, G.; Lomont, C.; Cohen, C. Quantum image processing. In Proceedings of the 32nd IEEE Conference on Applied Imagery and Pattern Recognition, Bellingham, WA, USA, 15–17 July 2003; pp. 39–44.
  9. Venegas-Andraca, S.E.; Bose, S. Storing processing and retrieving an image using quantum mechanics. In Proceedings of the SPIE Conference Quantum Information and Computation, Bellingham, WA, USA, 2–4 August 2003; Volume 5105, pp. 137–147.
  10. Latorre, J.I. Image Compression and Entanglement. Available online: http://arxiv.org/abs/quantph/0510031 (accessed on 30 September 2016).
  11. Le, P.Q.; Dong, F.; Hirota, K. A flexible representation of quantum images for polynomial preparation, image compression and processing operations. Quantum Inf. Process. 2012, 11, 63–84. [Google Scholar] [CrossRef]
  12. Le, P.Q.; Iliyasu, A.M.; Dong, F.; Hirota, K. A Flexible Representation and Invertible Transformations for Images on Quantum Computers. In New Advances in Intelligent Signal Processing; Ruano, A.E., Varkonyi-Koczy, A.R., Eds.; Studies in Computational Intelligence Series 372; Springer: Heidelberg/Berlin, Germany, 2011; pp. 179–202. [Google Scholar]
  13. Iliyasu, A.M.; Le, P.Q.; Dong, F.; Hirota, K. Watermarking and Authentication of Quantum Images based on Restricted Geometric Transformations. Inf. Sci. 2012, 186, 126–149. [Google Scholar] [CrossRef]
  14. Iliyasu, A.M.; Le, P.Q.; Dong, F.; Hirota, K. A framework for representing and producing movies on quantum computers. Int. J. Quantum Inf. 2011, 9, 1459–1497. [Google Scholar] [CrossRef]
  15. Yan, F.; Le, P.Q.; Iliyasu, A.M.; Bo, S.; Garcia, J.A.S.; Dong, F.; Hirota, K. Quantum Image Searching Based on Probability Distributions. J. Quantum Inf. Sci. 2012, 2, 55–60. [Google Scholar] [CrossRef]
  16. Iliyasu, A.M.; Le, P.Q.; Yan, F.; Bo, S.; Al-Asmari, A.K.; Dong, F.; Hirota, K. Insights into the Viability of Using Available Photonic Quantum Technologies for Efficient Image and Video Processing Applications. Int. J. Unconv. Comput. 2013, 9, 125–151. [Google Scholar]
  17. Iliyasu, A.M. Towards the Realisation of Secure and Efficient Image and Video Processing Applications on Quantum Computers. Entropy 2013, 15, 2874–2974. [Google Scholar] [CrossRef]
  18. Yan, F.; Iliyasu, A.M.; Venegas-Andraca, S.E. A survey of quantum image representations. Quantum Inf. Process. 2016, 15, 1–35. [Google Scholar] [CrossRef]
  19. Yan, F.; Iliyasu, A.M.; Jiang, Z. Quantum Computation-Based Image Representation, Processing Operations and Their Applications. Entropy 2014, 16, 5290–5338. [Google Scholar] [CrossRef]
  20. Iliyasu, A.M.; Le, P.Q.; Yan, F.; Bo, S.; Garcia, J.A.S.; Dong, F.; Hirota, K. A two-tier scheme for Greyscale Quantum Image Watermarking and Recovery. Int. J. Innov. Comput. Appl. 2013, 5, 85–101. [Google Scholar] [CrossRef]
  21. Yan, F.; Iliyasu, A.M.; Sun, B.; Venegas-Andraca, S.E.; Dong, F.; Hirota, K. A duple watermarking strategy for multi-channel quantum images. Quantum Inf. Process. 2015, 14, 1675–1692. [Google Scholar] [CrossRef]
  22. Yan, F.; Iliyasu, A.M.; Venegas-Andraca, S.E.; Yang, H. Video encryption and decryption on quantum computers. Int. J. Theor. Phys. 2015, 54, 2893–2904. [Google Scholar] [CrossRef]
  23. Yan, F.; Le, P.Q.; Iliyasu, A.M.; Bo, S.; Garcia, J.A.S.; Dong, F.; Hirota, K. A Parallel Comparison of Multiple Pairs of Images on Quantum Computers. Int. J. Innov. Comput. Appl. 2013, 5, 199–101. [Google Scholar] [CrossRef]
  24. Anders, J.; Oi, D.K.L.; Kashefi, E.; Browne, D.E.; Andersson, E. Ancilla-driven universal quantum computation. Phys. Rev. A 2010, 82, 020301. [Google Scholar] [CrossRef]
  25. Navas, K.A.; Sasikumar, M. Image Fidelity Metrics: Future Directions. In Proceedings of the IEEE International Conference on Recent Advances in Intelligent Computational Systems (RAICS), Trivandrum, India, 22–24 September 2011; pp. 627–632.
  26. Huynh-Thu, Q.; Ghanbari, M. Scope of validity of PSNR in image/video quality assessment. Electron. Lett. 2008, 44, 800–801. [Google Scholar] [CrossRef]
  27. Iliyasu, A.M.; Le, P.Q.; Dong, F.; Hirota, K. Restricted geometric transformations and their applications for quantum image watermarking and authentication. In Proceedings of the 10th Asian Conference on Quantum Information Science, Tokyo, Japan, 18–19 August 2010; pp. 212–214.
  28. Cheung, M.W.L.; Chan, W. Testing dependent correlation coefficient via structural equation modeling. Organ. Res. Methods 2004, 7, 206–223. [Google Scholar] [CrossRef]
  29. Iliyasu, A.M.; Yan, F.; Venegas-Andraca, S.E.; Salama, A.S. Hybrid Quantum-Classical Protocol for Storage and Retrieval of Discrete-Valued Information. Entropy 2014, 16, 3537–3551. [Google Scholar] [CrossRef]
  30. Chaves, R.; Ramirez, J.; Gorriz, J.M.; Lopez, M.; Salas-Gonzalez, D.; Alvarez, I.; Segovia, F. SVM-based computer-aided diagnosis of the Alzheimer’s disease using t-test NMSE feature selection with feature correlation weighting. Neurosci. Lett. 2009, 461, 293–297. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Circuit structure for comparing similarity between two FRQI quantum images (figure adapted from [15,23]).
Figure 2. Generalised circuit structure for parallel comparison of similarity between FRQI quantum images (figure adapted from [15,23]).
Figure 3. (A) Notation for a single qubit projective measurement operation and (B) description of the ancilla-driven measurement operation (figures and explanations in the text are adapted from [2,14,24]).
Figure 4. Layout of proposed QIFM framework to assess fidelity between two (or more) quantum images.
Figure 5. Flowchart for executing the proposed QIFM framework to compare two (or more) quantum images.
Figure 6. QIFM sub-circuit to execute the Binary check operation (BCO) of the QIFM image metric.
Figure 7. QIFM sub-circuit to execute the Bit error rate operation (BO) of the QIFM image metric.
Figure 8. Dataset of images paired for the FPS analysis. (A) Lena; (B) Inverted Lena; (C) Blonde Lady; (D) Peppers; (E) Scarfed Lady; (F) Baboon; (G) Brunette Lady; (H) Cameraman; (I) Man; (J) Couple; (K) Aeroplane; (L) House; (M) Pentagon; (N) Fingerprint; (O) Bridge; (P) Trees.
Figure 9. Comparison between PSNR and QIFM for watermarked images. (A) PSAU watermark logo; (B) original Lena image; (C) original Blonde Lady image; (D) watermarked version of Lena image; (E) watermarked version of Blonde Lady image.
Table 1. Summary of FPS Analysis.
| S/No. | Pair | Fidelity, F | PSNR, P | Similarity Score, S | SIM ¹ | rNMSE ² |
|-------|------|-------------|---------|---------------------|-------|---------|
| 1     | A–A  | 1.00        | ∞       | 1.00                | 1.00  | 1.00    |
| 2     | A–B  | 0.62        | 10.96   | 0.91                | 0.52  | 0.53    |
| 3     | A–C  | 0.42        | 10.69   | 0.89                | 0.50  | 0.50    |
| 4     | A–D  | 0.45        | 10.16   | 0.89                | 0.43  | 0.44    |
| 5     | A–E  | 0.59        | 11.46   | 0.92                | 0.57  | 0.58    |
| 6     | A–F  | 0.62        | 11.76   | 0.92                | 0.60  | 0.61    |
| 7     | A–G  | 0.53        | 9.98    | 0.88                | 0.41  | 0.42    |
| 8     | A–H  | 0.46        | 10.70   | 0.90                | 0.50  | 0.50    |
| 9     | A–J  | 0.51        | 11.42   | 0.92                | 0.57  | 0.58    |
| 10    | C–G  | 0.48        | 10.04   | 0.88                | 0.42  | 0.68    |
| 11    | E–D  | 0.45        | 10.23   | 0.89                | 0.44  | 0.63    |
| 12    | F–C  | 0.42        | 11.24   | 0.94                | 0.67  | 0.61    |
| 13    | F–G  | 0.53        | 10.43   | 0.89                | 0.47  | 0.52    |
| 14    | I–C  | 0.54        | 12.20   | 0.93                | 0.64  | 0.73    |
| 15    | I–E  | 0.50        | 10.72   | 0.89                | 0.50  | 0.63    |
| 16    | I–G  | 0.49        | 9.95    | 0.88                | 0.40  | 0.55    |
| 17    | I–H  | 0.51        | 10.51   | 0.90                | 0.48  | 0.61    |
| 18    | J–P  | 0.56        | 10.77   | 0.90                | 0.50  | 0.67    |
| 19    | P–L  | 0.44        | 8.71    | 0.84                | 0.21  | 0.56    |
| 20    | L–K  | 0.46        | 9.51    | 0.87                | 0.34  | 0.67    |
| 21    | L–O  | 0.58        | 11.61   | 0.92                | 0.51  | 0.76    |
| 22    | N–M  | 0.53        | 12.97   | 0.94                | 0.70  | 0.86    |
| 23    | O–K  | 0.45        | 8.61    | 0.84                | 0.20  | 0.44    |
¹ SIM denotes recalibrated values of the similarity score (S) obtained using Equation (16); ² rNMSE is obtained using Equation (17).
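The classical baselines reported in Table 1 (PSNR and the NMSE underlying rNMSE) follow their standard definitions. A minimal pure-Python sketch, assuming 8-bit greyscale images flattened to equal-length lists of pixel intensities; the recalibration that maps NMSE to the rNMSE column via Equation (17) is not reproduced here:

```python
from math import log10

def mse(a, b):
    """Mean squared error between two equal-sized greyscale images (flat lists)."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, peak=255):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10 * log10(peak ** 2 / m)

def nmse(a, b):
    """Normalised MSE: squared error normalised by the reference image's energy."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / sum(x ** 2 for x in a)
```

The infinite PSNR for an image compared against itself is why the A–A row of Table 1 has no finite PSNR entry, while bounded measures such as F, S, SIM and rNMSE all saturate at 1.00.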
Table 2. SPK correlation test between digital and quantum image metrics (for image pairs in Table 1).
| Image Metric         | Spearman | Pearson | Kendall |
|----------------------|----------|---------|---------|
| rNMSE vs. QIFM       | 0.5939   | 0.7268  | 0.4172  |
| rNMSE vs. SIM        | 0.4866   | 0.7027  | 0.3747  |
| rNMSE vs. Similarity | 0.4819   | 0.7106  | 0.3701  |
| QIFM vs. Similarity  | 0.5596   | 0.7368  | 0.4426  |
| QIFM vs. SIM         | 0.3794   | 0.6467  | 0.2705  |
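The Spearman–Pearson–Kendall (SPK) test of Table 2 can be reproduced for any two lists of paired metric scores. A minimal pure-Python sketch (illustrative data only, not the paper's; Kendall is computed here as tau-a, without the tie correction a statistics package would typically apply, so results can differ slightly on tied data):

```python
from math import sqrt
from statistics import mean

def pearson(x, y):
    """Pearson product-moment correlation of two equal-length sequences."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def rankdata(v):
    """1-based ranks; tied values share the average of their positions."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    ranks = [0.0] * len(v)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    return pearson(rankdata(x), rankdata(y))

def kendall(x, y):
    """Kendall tau-a: (concordant - discordant) / number of pairs."""
    n = len(x)
    sign = lambda a: (a > 0) - (a < 0)
    s = sum(sign(x[i] - x[j]) * sign(y[i] - y[j])
            for i in range(n) for j in range(i + 1, n))
    return s / (n * (n - 1) / 2)
```

Feeding two columns of metric scores from Table 1 (for example, QIFM fidelity against rNMSE over the 23 image pairs) to these three functions produces a row of correlation coefficients in the same format as Table 2.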

Share and Cite

Iliyasu, A.M.; Yan, F.; Hirota, K. Metric for Estimating Congruity between Quantum Images. Entropy 2016, 18, 360. https://doi.org/10.3390/e18100360