The celebrated Gram–Schmidt algorithm allows the construction of a system of
orthonormal vectors from an (ordered) system of
linearly independent vectors. Let us mention that there exists a wide variety of proposals to “generalize” the Gram–Schmidt process [1], serving many different purposes. In contrast to these generalizations, we construct a system of
orthogonal vectors from an (ordered) system of
arbitrary vectors, which may be linearly dependent. (Even repeated vectors are allowed.) This task is accomplished by what will be called “dimensional lifting”.
Some quantum computation tasks require the orthogonalization of previously non-orthogonal vectors. This might be best understood in terms of mutually exclusive outcomes of generalized beam splitter experiments, where the entire array of output ports corresponds to an ensemble of mutually orthogonal subspaces, or, equivalently, mutually orthogonal perpendicular projection operators [
2].
Of course, by definition (we may define a unitary transformation in a complex Hilbert space by the requirement that it preserves the scalar product [
3] (§ 73)), any transformation or mapping of non-orthogonal vectors into mutually orthogonal ones will be non-unitary. However, we may resort to requiring that some sort of angles or distances (e.g., in the original Hilbert space) remain unchanged.
Suppose, for the sake of demonstration, two non-orthogonal vectors, and suppose further that somehow one could “orthogonalize” them while at the same time retaining structural elements, such as the angles between projections of the new, mutually orthogonal vectors onto the subspace spanned by the original vectors. For instance, the two non-orthogonal vectors could be transformed into vectors of some higher-dimensional Hilbert space satisfying the following properties with respect to the original vectors: (i) the new vectors are orthogonal, and (ii) the orthogonal projection along the new, extra dimension(s) of the two vectors renders the original vectors. A straightforward three-dimensional construction with the desired outcome can be given as follows: suppose the original vectors are unit vectors denoted by $\mathbf{x}_1$ and $\mathbf{x}_2$; $\lVert\mathbf{x}_1\rVert = \lVert\mathbf{x}_2\rVert = 1$. Suppose further a two-dimensional coordinate frame in which $\mathbf{x}_1$ and $\mathbf{x}_2$ are planar; thus, we can write $\mathbf{x}_1 = (x_{11}, x_{12})$ in terms of some orthonormal basis $\{\mathbf{e}_1, \mathbf{e}_2\}$, as well as $\mathbf{x}_2 = (x_{21}, x_{22})$. Suppose we “enlarge” the vector space to include an additional dimension, and suppose a Cartesian basis system in that greater space that includes the two vectors of the old basis (and an additional unit vector $\mathbf{e}_3$ that is orthogonal with respect to the original plane spanned by the original basis vectors).
Ad hoc, it is rather intuitive how two (not necessarily unit) vectors can be found that project onto the original vectors, and which are orthogonal: “create” a three-dimensional vector space with one extra dimension, assign a non-zero extra coordinate (such as 1) associated with this dimension for the first vector, and use the extra coordinate of the second vector to compensate for any nonzero value of the scalar product of the two original vectors; in particular, take the two vectors whose coordinates with respect to the new basis are
$$
\mathbf{y}_1 = \left(x_{11}, x_{12}, 1\right), \qquad
\mathbf{y}_2 = \left(x_{21}, x_{22}, -\langle \mathbf{x}_1 \vert \mathbf{x}_2 \rangle\right),
$$
which are orthogonal by construction.
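To make this concrete, here is a minimal numerical sketch of the three-dimensional construction; it is my own illustration (the particular vectors are arbitrary, and NumPy is assumed), not part of the original derivation:

```python
# A minimal sketch (illustration only): two non-orthogonal unit vectors in
# the plane are lifted by one extra coordinate that cancels their overlap.
import numpy as np

x1 = np.array([1.0, 0.0])
x2 = np.array([np.cos(np.pi / 6), np.sin(np.pi / 6)])  # 30 degrees apart

y1 = np.append(x1, 1.0)           # extra coordinate chosen as 1
y2 = np.append(x2, -(x1 @ x2))    # extra coordinate cancels <x1|x2>

print(y1 @ y2)                    # 0.0 (up to rounding): orthogonal
print(y1[:2], y2[:2])             # projections return x1 and x2
```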
It is not too difficult to find explicit constructions for the more general case of $k$ vectors $\mathbf{x}_1, \ldots, \mathbf{x}_k$ in $\mathbb{R}^n$ (cf. Ref. [2] for a rather inefficient method).
In the following, for the sake of construction, we shall embed $\mathbb{R}^n$ into $\mathbb{R}^{n+k}$, such that we fill all additional vector coordinates of $\mathbf{x}_i$ with zeroes. For the new, mutually orthogonal, vectors we make the following Ansatz by defining
$$
\begin{aligned}
\mathbf{y}_1 &= \left(\mathbf{x}_1, 1, 0, 0, \ldots, 0\right),\\
\mathbf{y}_2 &= \left(\mathbf{x}_2, z_{21}, 1, 0, \ldots, 0\right),\\
\mathbf{y}_3 &= \left(\mathbf{x}_3, z_{31}, z_{32}, 1, \ldots, 0\right),\\
&\;\;\vdots\\
\mathbf{y}_k &= \left(\mathbf{x}_k, z_{k1}, z_{k2}, \ldots, z_{k,k-1}, 1\right),
\end{aligned}
\tag{2}
$$
with yet to be determined coordinates $z_{ij}$, $1 \le j < i \le k$. (The symbols $\mathbf{x}_i$ stand for all the $n$ coordinates of $\mathbf{x}_i$.)
The unit coordinates 1 ensure that the new vectors are linearly independent. By construction, the orthogonal projection of $\mathbf{y}_i$ onto $\mathbb{R}^n$ renders $\mathbf{x}_i$ for all $i \in \{1, \ldots, k\}$.
What remains is the recursive determination of the unknown coordinates $z_{ij}$. Note that all $\mathbf{y}_i$ must satisfy the following relations: for $2 \le i \le k$, orthogonality demands that $\langle \mathbf{y}_1 \vert \mathbf{y}_i \rangle = 0$, and therefore $\langle \mathbf{x}_1 \vert \mathbf{x}_i \rangle + z_{i1} = 0$, and therefore
$$
z_{i1} = -\langle \mathbf{x}_1 \vert \mathbf{x}_i \rangle .
$$
In this way, all unknown coordinates $z_{i1}$, $2 \le i \le k$, can be determined.
Similar constructions yield the remaining unknown coordinates in $\mathbf{y}_3, \ldots, \mathbf{y}_k$. For $3 \le i \le k$, $\langle \mathbf{y}_2 \vert \mathbf{y}_i \rangle = 0$, and therefore $\langle \mathbf{x}_2 \vert \mathbf{x}_i \rangle + z_{21} z_{i1} + z_{i2} = 0$, yielding
$$
z_{i2} = -\langle \mathbf{x}_2 \vert \mathbf{x}_i \rangle - z_{21} z_{i1} .
$$
In this way, all unknown coordinates $z_{i2}$, $3 \le i \le k$, can be determined.
This procedure is repeated until one arrives at $\mathbf{y}_k$, and therefore at the orthogonality of $\mathbf{y}_{k-1}$ and $\mathbf{y}_k$, encoded by the condition $\langle \mathbf{y}_{k-1} \vert \mathbf{y}_k \rangle = 0$, and hence
$$
z_{k,k-1} = -\langle \mathbf{x}_{k-1} \vert \mathbf{x}_k \rangle - \sum_{l=1}^{k-2} z_{k-1,l}\, z_{kl} .
$$
The approach has the advantage that, at each stage of the recursive construction, there is only a single unknown coordinate per equation. This situation is well known from Gaussian elimination. The Ansatz also works if one of the original vectors is the zero vector, and if some of the original vectors are equal.
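The recursion can be summarized in a few lines of code. The sketch below is my own rendering of the construction in NumPy (the function name `lift` and the test vectors are mine, not from the text):

```python
# A sketch of the dimensional-lifting recursion: k arbitrary vectors in R^n
# (the rows of X) are lifted to k mutually orthogonal vectors in R^(n+k)
# whose projections onto R^n reproduce the originals.
import numpy as np

def lift(X):
    k, n = X.shape
    Y = np.hstack([X, np.eye(k)])  # Ansatz: unit diagonal, all z_ij still zero
    for i in range(1, k):
        for j in range(i):
            # each orthogonality condition <y_j|y_i> = 0 contains a single
            # unknown: z_ij = -<x_j|x_i> - sum_{l<j} z_jl * z_il
            Y[i, n + j] = -X[j] @ X[i] - Y[j, n:n + j] @ Y[i, n:n + j]
    return Y

# even a zero vector and repeated vectors are admissible:
X = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 0.0]])
Y = lift(X)
print(np.round(Y @ Y.T, 12))  # diagonal Gram matrix: mutual orthogonality
print(Y[:, :2])               # the first n columns reproduce X
```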
The resulting system of orthogonal vectors is not the only solution of the initial problem (to find orthogonal vectors that project onto the original ones), which can be explicitly demonstrated by multiplying all vectors $\mathbf{y}_i$ with the matrix
$$
\begin{pmatrix} \mathbb{I}_n & 0 \\ 0 & c A \end{pmatrix},
$$
whereby $\mathbb{I}_n$ stands for the $n$-dimensional unit matrix, $c$ is a real constant with $c = \pm 1$, and $A$ is a $k$-dimensional orthogonal matrix; such a multiplication leaves both the projections onto $\mathbb{R}^n$ and all mutual scalar products unchanged. (For complex Hilbert space, the orthogonal matrix needs to be substituted by a unitary matrix, and $c$ by a complex constant of unit modulus; a constant with $\vert c \vert \neq 1$ would rescale the scalar products of the extra coordinates and thereby spoil orthogonality whenever the original vectors are not already mutually orthogonal.)
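As a numerical check, this invariance can be verified directly; the snippet below is my own, reusing the `lift` sketch from above and mixing the extra coordinates by a random orthogonal matrix:

```python
# Mixing the k extra coordinates by an orthogonal matrix Q yields another
# solution: projections onto R^n and the Gram matrix are both unchanged.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))                   # k = 3 arbitrary vectors in R^4
Y = lift(X)                                   # lift() as sketched above
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random orthogonal matrix
M = np.block([[np.eye(4), np.zeros((4, 3))],
              [np.zeros((3, 4)), Q]])
Y2 = Y @ M.T                                  # transform every lifted vector
assert np.allclose(Y2[:, :4], X)              # projections untouched
assert np.allclose(Y2 @ Y2.T, Y @ Y.T)        # scalar products untouched
```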
On the other hand, we may reinterpret our procedure as follows: Let $\mathbf{x}_1, \ldots, \mathbf{x}_k$ be a system of vectors in $\mathbb{R}^n$, not necessarily spanning $\mathbb{R}^n$, and not necessarily linearly independent. (The ordering of the vectors in this system will be essential throughout.) We embed $\mathbb{R}^n$ in $\mathbb{R}^{n+k}$ as we did above and denote the orthogonal complement of $\mathbb{R}^n$ by $C$. Therefore, the first $n$ coordinates of all vectors in $C$ vanish, and $\mathbb{R}^{n+k}$ can be represented by a direct sum $\mathbb{R}^{n+k} = \mathbb{R}^n \oplus C$. Additionally, we choose some (ordered) orthonormal basis of $C$, say, $\mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_k$.
Then, there is a unique system of orthogonal vectors $\mathbf{y}_1, \ldots, \mathbf{y}_k$ in $\mathbb{R}^{n+k}$ such that the following conditions are satisfied:
1. For all $i \in \{1, 2, \ldots, k\}$, the orthogonal projection of $\mathbb{R}^{n+k}$ onto $\mathbb{R}^n$ sends $\mathbf{y}_i$ to $\mathbf{x}_i$.
2. The orthogonal projection of $\mathbb{R}^{n+k}$ onto $C$ sends $\mathbf{y}_1, \ldots, \mathbf{y}_k$ to some (ordered) basis of the subspace $C$. Applying the Gram–Schmidt process to this (ordered) basis gives the orthonormal basis $\mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_k$.
Indeed, in our previous Ansatz, we tacitly assumed the orthonormal basis $\mathbf{e}_1, \ldots, \mathbf{e}_k$ of $C$ to comprise the orthogonal projections of the last $k$ vectors of the standard basis of $\mathbb{R}^{n+k}$ onto $C$. Condition 2 enforces the presence of all the 1s and 0s in Formula (2), since the Gram–Schmidt process, applied to the vectors
$$
\left(z_{i1}, z_{i2}, \ldots, z_{i,i-1}, 1, 0, \ldots, 0\right), \quad 1 \le i \le k,
$$
has to result in $\mathbf{e}_1, \ldots, \mathbf{e}_k$. Notice that the usual Gram–Schmidt process gives merely an orthogonal basis, whose vectors can be normalized in a second step in order to obtain an orthonormal basis. In our setting, however, such a second step is not allowed. As we saw above, now Condition 1 guarantees that $\mathbf{y}_1, \ldots, \mathbf{y}_k$ are uniquely determined.
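The following short check (my own, again assuming the `lift` sketch from above) illustrates Condition 2: the projections of the lifted vectors onto $C$, run through Gram–Schmidt without any normalization step, already return the standard basis of $C$:

```python
# Unnormalized Gram-Schmidt applied to the C-parts of the lifted vectors
# returns the standard basis of C exactly; no rescaling is ever needed.
import numpy as np

def gram_schmidt(vectors):
    basis = []
    for v in vectors:
        w = v - sum((b @ v) / (b @ b) * b for b in basis)  # remove components
        basis.append(w)                                    # no normalization
    return np.array(basis)

X = np.array([[3.0, 1.0], [2.0, 2.0], [1.0, 3.0]])
Y = lift(X)               # lift() as sketched above
Z = Y[:, 2:]              # projections onto C: the last k coordinates
print(gram_schmidt(Z))    # prints the 3x3 identity matrix
```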
Besides uniqueness, this construction has the additional advantage that the dot product in $\mathbb{R}^{n+k}$ “decays” into the sum of dot products in $\mathbb{R}^n$ and in $C$: any basis vector $\mathbf{y}_i$ can be uniquely written as $\mathbf{y}_i = \mathbf{x}_i + \mathbf{c}_i$, where $\mathbf{x}_i$ and $\mathbf{c}_i$ represent the projection of $\mathbf{y}_i$ along $C$ onto the original subspace $\mathbb{R}^n$, and the projection of $\mathbf{y}_i$ along $\mathbb{R}^n$ onto $C$, respectively. Since $\mathbb{R}^n$ is orthogonal to $C$, for $i \neq j$,
$$
0 = \langle \mathbf{y}_i \vert \mathbf{y}_j \rangle = \langle \mathbf{x}_i \vert \mathbf{x}_j \rangle + \langle \mathbf{c}_i \vert \mathbf{c}_j \rangle ,
$$
and thus
$$
\langle \mathbf{x}_i \vert \mathbf{x}_j \rangle = -\langle \mathbf{c}_i \vert \mathbf{c}_j \rangle .
$$
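This decay is easy to observe numerically; the assertion below (my own illustration, reusing the `lift` sketch) compares the off-diagonal scalar products of the original parts and of the complement parts:

```python
# For i != j, <x_i|x_j> equals -<c_i|c_j>: the off-diagonal Gram entries of
# the original parts and of the C-parts cancel each other exactly.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(4, 5))     # k = 4 arbitrary vectors in R^5
Y = lift(X)                     # lift() as sketched above
C = Y[:, 5:]                    # c_i: the components lying in C
G_x, G_c = X @ X.T, C @ C.T
off = ~np.eye(4, dtype=bool)    # mask selecting the off-diagonal entries
assert np.allclose(G_x[off], -G_c[off])
```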
Let us, for the sake of a physical example, study configurations associated with decision problems that can be efficiently (that is, with some speedup with respect to purely classical means [2]) encoded quantum mechanically. The inverse problem is the projection of orthogonal systems of vectors onto lower dimensions. This method renders a system of non-orthogonal rays, also called eutactic stars [4,5,6,7,8], which can be effectively leveraged to obtain mutually exclusive outcomes in generalized beam splitter configurations [9,10] reflecting the higher-dimensional Hilbert space.
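As a toy illustration of this inverse direction (my own, not an example from the text), projecting an orthonormal basis of $\mathbb{R}^3$ onto the plane of its first two coordinates yields a eutactic star of three generally non-orthogonal vectors:

```python
# Orthogonally projecting an orthonormal basis of R^3 onto a plane yields a
# eutactic star: three vectors in R^2 that are in general non-orthogonal.
import numpy as np

Q, _ = np.linalg.qr(np.random.default_rng(2).normal(size=(3, 3)))
star = Q[:, :2]   # drop the third coordinate of each basis vector (rows of Q)
# The Gram matrix equals I_3 minus a rank-1 projection, so the star vectors
# have nonzero mutual scalar products in general.
print(star @ star.T)
```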
One instance of such a quantum computation involving the reduction to ensembles of orthogonal vectors (and their associated span or projection operators) is the Deutsch–Jozsa algorithm, as reviewed in Ref. [
2]. Another, somewhat contrived, problem can be constructed in three dimensions from a eutactic star that is the projection, onto the plane formed by the first two coordinates, of a three-dimensional orthonormal basis which, together with the Cartesian standard basis, forms a pair of unbiased bases [11].
Still another decision configuration is the eutactic star that is the projection, onto the subspace formed by the first three coordinates, of a four-dimensional orthonormal basis.
More concretely, suppose some, admittedly construed, function $f$, and some quantum encoding thereof, where $x$ and $y$ stand for (sequences of) auxiliary and input bits, respectively, would yield one of the basis systems mentioned above. By reducing the auxiliary bits $x$, one might end up with the eutactic stars introduced above. Alas, so far, no candidate of this kind has been proposed.
In summary, a new method of orthogonalizing ensembles of vectors has been introduced. Thereby, the original vectors are “lifted” to or “completed” in higher dimensions. This method could be utilized for solving quantum decision and computing problems if the original problem does not allow an orthogonal encoding, and if extra bits can be introduced that render the equivalent of the extra dimensions in which the original state vectors can be lifted and orthogonalized.
Compared with methods introduced previously [12,13,14] to optimally differentiate between two non-orthogonal states, the scheme suggested here is similar in the sense that, in order to obtain a better resolution, the effective dimensionality of the problem is increased. However, our scheme is not limited to the differentiation between two states, as it uses arbitrary dimensionality. More importantly, whereas our scheme separates different states precisely but is in general non-unitary (indeed, the original vectors are not mutually orthogonal while the lifted vectors are, so the angles among vectors change, resulting in transformations that cannot be unitary), the former methods are unitary and probabilistic.