Article

Tomographic Reconstruction: General Approach to Fast Back-Projection Algorithms

1 Federal Research Center Computer Science and Control RAS, 119333 Moscow, Russia
2 Smart Engines Service LLC, 117312 Moscow, Russia
3 Institute for Information Transmission Problems RAS, 127051 Moscow, Russia
4 Phystech School of Applied Mathematics and Informatics, Moscow Institute of Physics and Technology (NRU), 141701 Dolgoprudny, Russia
* Authors to whom correspondence should be addressed.
Mathematics 2023, 11(23), 4759; https://doi.org/10.3390/math11234759
Submission received: 27 October 2023 / Revised: 14 November 2023 / Accepted: 22 November 2023 / Published: 24 November 2023
(This article belongs to the Special Issue Inverse Problems and Imaging: Theory and Applications)

Abstract:
Addressing contemporary challenges in computed tomography (CT) demands precise and efficient reconstruction. This necessitates the optimization of CT methods, particularly by improving the algorithmic efficiency of the most computationally demanding operators—forward projection and backprojection. Every measurement setup requires a unique pair of these operators. While fast algorithms for calculating forward projection operators are adaptable across various setups, on their own they do not enable fast reconstruction in three-dimensional scanning scenarios. Hence, fast algorithms are imperative for backprojection, an integral aspect of all established reconstruction methods. This paper introduces a general method for the calculation of backprojection operators in any measurement setup. It presents a versatile method for transposing summation-based algorithms, which rely exclusively on addition operations. The proposed approach allows for the transformation of algorithms designed for forward projection calculation into those suitable for backprojection, with the latter retaining the asymptotic algorithmic complexity of the former. Employing this method, fast algorithms for both forward projection and backprojection have been developed for the 2D few-view parallel-beam CT as well as for the 3D cone-beam CT. The theoretically substantiated complexity values for the proposed algorithms align with their experimentally derived estimates.
MSC:
65R32; 65R10; 68W30; 68W40; 94A08

1. Introduction

1.1. Problem Relevance

Computed tomography (CT) is a non-destructive X-ray technique used to probe the internal morphological structure of objects [1,2]. It plays a pivotal role in diverse fields such as medicine [3,4,5], industry [6,7,8], safety protocols [9,10], and scientific exploration, particularly in scrutinizing the morphology of promising functional materials [11]. Despite over 80 years of development in computed tomography algorithms [12,13], the pursuit of innovation and optimization remains ongoing. This is driven by various factors, including the imperative to reduce radiation dose in medical applications [14,15], augment sensitivity in screening setups [16], and refine spatial resolution in scientific [17], industrial [18], and medical [19,20] applications. Notably, CT has recently been harnessed for real-time imaging of dynamic processes [21,22]. This breakthrough is made possible only by improving classical tomographic reconstruction methods.
Modern challenges addressed by CT impose strict constraints on both reconstruction time and accuracy. Increasing spatial resolution leads to an augmented dataset, demanding enhanced computational speed for reconstruction algorithms. This goal can be pursued through two routes: optimizing the reconstruction algorithms and/or amping up computational resources. Given the cost implications of bolstering computational power in tomographic systems, improving algorithmic efficiency stands out as the more attractive avenue.
The core of tomographic reconstruction algorithms lies in forward projection and backprojection. While both of them are linear operators, they depend on the CT setup. Various measurement setups, detailed in [23], are employed in tomograph designs. Each scheme corresponds to its specific pair of matched forward projection and backprojection operators. The scheme’s parameters encompass the scanning trajectory, probing beam shape, and the dimensions and layout of the position-sensitive detector. The scanning trajectory refers to the path followed, depending on whether the object or the emitter-detector pair is stationary. The probing beam can be conical [24], fan-shaped [25], or parallel [26]. Circular  [27] and helical [28] trajectories are the most commonly used for the emitter-detector pair. Some specialized tomographs utilize more intricate scanning methods [29], and non-planar detectors are utilized [30].
Calculating the forward projection and backprojection operators is the most computationally demanding aspect of tomographic reconstruction techniques. Streamlining these operators therefore holds great promise for speeding up the entire reconstruction process. The acceleration of classical CT reconstruction algorithms is an ongoing problem, with the main efforts directed towards effective GPU implementations [31,32], although new fast algorithms are being developed as well [33,34]. There are fast algorithms for calculating the forward projection operator for various tomographic setups. For example, a fast approximate computation algorithm with an asymptotic complexity of Θ(N³√N) addition operations, where N is the linear size of the reconstruction, has been proposed for three-dimensional computed tomography [35]. Notably, this algorithm can be adapted to any scanning scheme. Regrettably, these findings, on their own, do not facilitate the development of fast tomographic reconstruction methods for cone-beam and other three-dimensional scanning setups. This is because, while there are reconstruction algorithms that bypass the forward projection operator (such as the Feldkamp algorithm [36]), there are no algorithms that avoid the backprojection operator. To speed up reconstruction methods, fast algorithms for the forward projection operator, akin to those in [35], must be paired with equally fast algorithms for backprojection, and these algorithms should be applicable across all scanning schemes. To address this challenge, this paper introduces a general approach to the construction of fast algorithms for calculating the transpose of an operator. The proposed fast transposition method handles the class of linear operators represented by binary matrices, i.e., operators whose application relies solely on addition operations.

1.2. Related Works

Since the advent of CT, various approaches to speeding up tomographic reconstruction have emerged. They can be broadly categorized into three groups: those utilizing the Fast Fourier Transform (FFT), methods employing hierarchical decomposition of the image to be recovered, and approaches applying hierarchical decomposition of linear integrals. Many researchers have proposed incorporating the FFT in tomographic reconstruction algorithms. Such algorithms can be fast because projection and convolution become inexpensive in Fourier space, provided that the cost of transforming the data to and from Fourier space does not offset this advantage. Achieving precise reconstruction through the Fourier transform demands an infinite number of projections [37]. In practice, however, the available number of projections is finite, so the accuracy of the result depends on the approximation schemes embedded in the algorithms. As noted by Natterer [38], artifacts in the reconstructed images stemming from the application of the Fourier transform reduce the practical effectiveness of these methods; Natterer analyzes the origins of these distortions in [38].
Various approximation schemes are tailored to maximize the accuracy of the resulting reconstruction, depending on the specific instrument configuration. In the case of parallel beams, where the three-dimensional problem can be efficiently deconstructed into a series of independent two-dimensional ones, Mersereau and Oppenheim [37] advocated for reducing the approximation error by substituting the uniform polar coordinate grid in Fourier space with a grid of concentric squares. For cone beams, aiming to refine reconstruction precision, Dusaussoy [39] proposed employing a grid of concentric cubes. Averbuch et al. [40] later introduced the pseudo-polar Fourier transform for fast computation of both forward and inverse Radon transforms, highlighting Radon transform’s potential in addressing parallel beam tomography challenges. In parallel schemes, an optimized Kaiser-Bessel interpolation was suggested for an algorithm utilizing non-uniform fast Fourier transform [41]. Sullivan [42] proposed improving reconstruction precision through the application of gridding methods in algorithms tailored for scenarios involving both parallel and fan beams. Subsequently, Arcadu et al. [43] further refined this algorithm.
The widely-used FDK (Feldkamp, Davis and Kress) reconstruction algorithm [36] is the go-to fast method for cone beams. Its primary limitation lies in the need for a gradual change in morphological structure along the rotational axis [39]. However, incorporating iterative strategies alongside the FDK algorithm [44] offers some flexibility in this regard. In conclusion, two key points emerge. Firstly, there is currently no universal approach to speeding up algorithms with FFT. Secondly, all known methods in this category are approximate, and their precision is a subject of scrutiny.
The following two approaches to fast reconstruction rely on hierarchical decomposition. In the first, the recoverable volume is decomposed and then the sub-volumes are aggregated. The second approach employs hierarchical decomposition of linear integrals [45,46] along with repeated use of common subsums. For example, in a parallel beam scheme [47], the square recoverable image is split into four images by vertical and horizontal lines through the center. Each of these images is reconstructed individually. This halves the linear size of the image and reduces the necessary projections by half compared to reconstructing a full-sized image, thereby reducing required operations. This approach is applied recursively. The computation speed depends on the stopping point in the hierarchical decomposition of the volume to be recovered. Reconstruction accuracy is crucially affected by the stopping point, the accuracy of interpolation used in the projection aggregation steps for reconstructing the small image, and the method used to merge the reconstructed small images. The hierarchical volume decomposition approach is not universally applicable. Different algorithms are needed and have been proposed for various measurement schemes. For a circular fan beam CT system, the algorithm is detailed in [48]. For a circular cone beam CT system, refer to [49,50]. Additionally, for a helical cone beam CT system, the algorithm is outlined in [51].
The hierarchical decomposition of linear integrals [45,46] presents a fundamentally different approach. It speeds up the computation of linear operators for both forward projection and backprojection, crucial elements in reconstruction algorithms. The key concept is based on the observation that in discrete space, lines share common subsequences of pixels, referred to as patterns, rather than intersecting at a single point. Clearly, reusing partial sums from these pattern intersections can significantly reduce the computational complexity for linear operators. To achieve fast reconstruction algorithms, it suffices to compute partial sums for a subset of possible patterns. The choice of patterns depends on the specific measurement scheme, emphasizing the need for universal transposing methods. In 1992, M. Brady and W. Yong introduced a method for fast computation of an approximate discrete Radon transform (Hough transform) on a plane, based on repeated use of common sub-sums [45]. This method, credited to Brady and Yong, appears to have been independently developed by Walter Götz, who published it in his 1993 dissertation in German. Götz’s work is referenced in an English-language article by Götz and Druckmüller published in 1995 [46]. Although the Brady–Yong algorithm may not provide the most accurate approximation of straight lines by discrete patterns, it maintains a bounded approximation error. Brady and Yong’s original work [45] provided an initial estimate for the deviation of position coordinates from the ideal straight line during summation. Subsequently, refined hypotheses were proposed [52,53], leading to the precise upper bound of the approximation error [54]. A recent algorithm, ASD2, has emerged for fast and accurate computation of the discrete Radon transform on a plane [55]. This approach [55] shares similarities with the Brady–Yong method but incorporates the most accurate discretization of lines (from possible discretizations), resulting in improved accuracy compared to the Brady–Yong algorithm. In 1998, T.-K. Wu and M. Brady first extended the Brady–Yong algorithm to the three-dimensional case [56]. The work [57] marked the first exploration of the precision of dyadic approximations in three-dimensional space. In 2020, a significant advancement was made by applying the Method of Four Russians, which modified the Wu–Brady technique to calculate sums over Θ(N³) lines, rather than the excessive Θ(N⁴) lines in the original Wu–Brady algorithm designed for three-dimensional tomography tasks (where N denotes the linear size of the reconstruction) [35]. This resulted in a fast algorithm for computing the forward projection operator. A notable advantage of this algorithm [35] lies in its versatility across CT schemes.

1.3. Contributions

This work presents a novel general method for developing efficient algorithms to compute both forward projection and backprojection linear operators. Our method transforms fast algorithms designed for forward projection, solely reliant on addition operations, into efficient algorithms for backprojection. It is noteworthy that the computational complexities for both forward projection and backprojection operators are comparable. Furthermore, this method is versatile and applicable to computing the transpose of any operator represented by a binary matrix. The algorithm for computing the transpose of the operator exhibits an asymptotic computational complexity of the same order as that of the forward operator.
This work combines the Method of Four Russians with the Brady–Yong algorithm for fast computation of the forward projection operator in arbitrary measurement schemes in the 2D case. By integrating these approaches with a universal method for transposing summing algorithms, the study devises a fast algorithm for calculating backprojection operators for both parallel and fan beams within the circular CT scheme (2D case). The computational complexity of these algorithms depends on the chosen stopping point in calculating partial sums over patterns. Experimental results validate the theoretically grounded potential for fast reconstruction methods that involve computing the backprojection operator using the proposed approach of transposing algorithms.
In previous research [35], a combination of the Wu–Brady algorithm and the Method of Four Russians was utilized for the circular cone beam CT system (3D case). However, the construction of the backprojection operator was not addressed in that work. The proposed method, which transforms fast algorithms for computing forward projection operators into efficient ones for computing backprojection operators, was applied to the algorithm described in the same work [35]. An assessment of the complexity of the transposed algorithm from [35] was conducted. Implementing our method for transposing summation-based algorithms opens the door to obtaining efficient algorithms for computing backprojection operators from those for computing forward projection operators, applicable to any CT scheme in both two-dimensional and three-dimensional scenarios.

1.4. Paper Outline

The subsequent section defines the reconstruction problem and gives an overview of the employed methodologies. In the third section, we detail and justify a novel method for transposing summing algorithms. This method allows for deriving an algorithm to compute the transpose of an operator represented by a binary matrix from the original operator's computation algorithm. The evaluation of the algorithmic complexity of the forward and transposed operator computations is presented in Section 3.3. In the fourth section, we apply this transposition method to established fast algorithms for computing forward projection operators in both 2D and 3D scenarios. Section 4 introduces novel fast algorithms for computing both forward projection and backprojection operators based on the fast Hough transform (FHT), accompanied by a complexity assessment. The 2D case is dealt with in Section 4.2 and the 3D case is the subject of Section 4.4. These algorithms are derived through the application of our proposed method for transposing algorithms, and a thorough complexity analysis is performed. The fifth section delves into the experimental validation of our proposed method of algorithm transposition. Finally, the paper concludes with a discussion highlighting the advantages of the proposed approach.

2. CT Reconstruction: Problem Statement and Solutions Overview

CT reconstruction entails creating a digital representation of an object based on a collection of measured X-ray images or tomographic projections. This digital depiction represents a discrete spatial distribution of the linear attenuation coefficient of X-ray radiation.

2.1. Projection Image Model in Parallel-Beam Circular CT

The classical mathematical model for generating a CT projection image is based on the following assumptions:
-
Monochromatic radiation is used for scanning;
-
The imaging process is approximated with geometric optics;
-
There are no radiation sources (primary or secondary) within the examined sample;
-
Radiation attenuation depends on the absorptive properties of the material and follows the Bouguer–Beer–Lambert law;
-
The detector only records the attenuated probing radiation. The detector cell has an infinitesimal size, a linear response function, and unit sensitivity.
The basic scheme for measuring projections with a parallel beam on an object is illustrated in Figure 1.
When an object is probed with a parallel beam and the axis of rotation is perpendicular to the beam, we can analyze the CT problem on a plane without loss of generality. Consider a parallel set of planes perpendicular to the rotation axis. Measurements in each of these regions are independent. Let us take advantage of this and focus on a single plane.
The Cartesian coordinate system x0y in the plane aligns with the center of rotation of the source-detector system, and the spatial distribution of the linear attenuation coefficient μ(x, y) of the probing radiation is described by a finite function. In this coordinate system, we will use the standard parameterization of a line: (ρ, φ). All rays in the probing beam at one projection position share the same φ coordinate. The image is captured by a planar position-sensitive detector located perpendicular to the probing direction. The ρ coordinate precisely locates the detector cell. For an infinitely thin X-ray beam at an angle φ reaching the detector at ρ, the radiation attenuation law through the object is expressed as:
I(\rho, \varphi) = I_0 \exp\left( -\int \mu(\rho \cos\varphi - l \sin\varphi,\ \rho \sin\varphi + l \cos\varphi)\, dl \right),
where I_0 is the source's radiant exitance, and I is the irradiance of the detector cell. Normalizing by I_0 and taking the negative logarithm yields the function g(ρ, φ), referred to as the ray integral:
g(\rho, \varphi) = -\ln \frac{I(\rho, \varphi)}{I_0} = \int \mu(\rho \cos\varphi - l \sin\varphi,\ \rho \sin\varphi + l \cos\varphi)\, dl .
Through tomographic measurements, we acquire linear integrals of the function μ(x, y) along each straight line. This process, known as the Radon transform, maps a function from Euclidean space to its set of linear integrals. Radon introduced an explicit inversion formula for an infinite set of straight lines in 1917 [58]. Building on this, the problem of tomographic reconstruction involves recovering the function μ(x, y) when linear integrals are known for a finite set of straight lines. The arrangement of these lines is referred to as the measurement scheme, which is determined by the tomograph's design, including the shape of the probing beam and the scanning scheme. Consequently, the reconstruction process must align with the chosen measurement scheme.
In practice, synchrotron radiation sources primarily employ monochromatic probing. Medical and industrial tomographs, on the other hand, utilize polychromatic radiation. Different approaches have been suggested to linearize the relationship between measurement results and the function describing attenuation. These methods vary from those needing extra calibration measurements [59,60] to fully automated techniques [61,62]. Note that using parallel beams for scanning is primarily a feature of synchrotrons. In laboratory settings, achieving a parallel scheme necessitates the incorporation of extra optical components in the tomograph setup. Conversely, when employing an X-ray tube to generate the beam without extra optical elements, a cone beam is formed. In this setup, the source is situated at the apex of the cone, while the detector rests in the plane of its base. The object being probed is located within the cone. The collection of rays composing a tomographic projection under the current angle in a cone scheme differs from that in a parallel scheme for the same angle.
Consider a circular scanning path. In tomographic research, measurements are taken along a specified finite number of directions C, which determine the set of projection angles Φ = {φ_i}_{i=1}^{C}. For each projection angle, the linearized measurements are recorded as a vector ḡ(φ) = (g_1, …, g_N)^T, where N represents the number of cells in the detector array (in the two-dimensional scenario). The measurements for all angles can be consolidated into a single vector Ḡ = (ḡ(φ_1), …, ḡ(φ_C))^T = (g_i : i ∈ Z_{1,N·C})^T. The set of rays constituting the vector for the current projection angle in a cone beam differs from the set of rays forming the vector in a parallel beam for the same angle.
To perform reconstruction, we first need to discretize the measured data. For the discretization of μ, the reconstructed area is divided into a regular grid of square pixels. The number of pixels in the horizontal direction is the same as in the vertical direction, and it matches the number of detector cells N. This grid contains a total of N² elements, which can be expressed as a vector M̄ = (μ_{1,1}, …, μ_{1,N}, μ_{2,1}, …, μ_{N,N})^T = (μ_i : i ∈ Z_{1,N²})^T.
The vector M ¯ represents a 2D slice of the reconstructed 3D image. Ray integral values (2) are approximated using sums. The model for generating projection data is expressed in matrix form:
W \bar{M} = \bar{G},
where W is the projection matrix of size N·C × N². The elements 0 ≤ w_{i,j} ≤ 1 in this matrix specify each pixel's contribution to the sum. This matrix W defines the measurement setup.
The dimensions of the projection matrix depend on the number of detector cells and the quantity of projection angles. Its configuration is influenced by factors like scanning path, probe shape, the number of projection angles, and their absolute values.
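To make the structure of W concrete, the following minimal Python sketch builds a toy binary projection matrix for a parallel-beam scheme and applies it to a flattened slice. The nearest-detector-bin intersection rule and the function name binary_projection_matrix are illustrative assumptions of this sketch, not the weighting model discussed in the text.

import numpy as np

def binary_projection_matrix(N, angles):
    """Toy binary projection matrix W of size (N*C) x N^2 for a parallel-beam
    scheme: W[i, j] = 1 if ray i is assumed to pass through pixel j.
    A crude nearest-detector-bin model is used purely for illustration."""
    C = len(angles)
    W = np.zeros((N * C, N * N), dtype=np.uint8)
    # pixel centres in a frame centred on the rotation axis
    ys, xs = np.meshgrid(np.arange(N) - (N - 1) / 2,
                         np.arange(N) - (N - 1) / 2, indexing="ij")
    for c, phi in enumerate(angles):
        # rho coordinate of each pixel centre (signed distance along the detector)
        rho = xs * np.cos(phi) + ys * np.sin(phi)
        det_bin = np.clip(np.round(rho + (N - 1) / 2), 0, N - 1).astype(int)
        for j in range(N * N):
            W[c * N + det_bin.flat[j], j] = 1   # pixel j contributes to one ray per angle
    return W

if __name__ == "__main__":
    N = 8
    angles = np.linspace(0.0, np.pi, 4, endpoint=False)
    W = binary_projection_matrix(N, angles)
    M = np.random.default_rng(0).random(N * N)  # flattened slice, the vector M̄
    G = W @ M                                   # forward projection, Ḡ = W M̄
    print(W.shape, G.shape)                     # (32, 64) (32,)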

2.2. CT Reconstruction Algorithms

In the formulation (3), the CT problem involves reconstructing a digital image of the studied object M ¯ given the known projection matrix W and the vector of linearized measured values G ¯ .
The matrix W is a sparse non-negative matrix, with elements representing weights calculated based on the chosen ray-pixel intersection model [23]. We will treat the projection matrix W as a binary matrix for the forward projection operator in the natural basis of the introduced Cartesian coordinate system x0y. The value w_{i,j} of the projection matrix element at indices i, j equals 1 if and only if the i-th X-ray beam intersects the j-th pixel of the reconstructed object's cross-section.
Various approaches have been proposed for reconstructing images from measured projections. Integral methods rely on the numerical implementation of the Radon transform inversion formulas. The most widely used method, found in most medical tomographs, is the convolution and backprojection technique.

2.3. CT Reconstruction Algorithms for Parallel Scanning Scheme

2.3.1. Convolution and Back-Projection Algorithms (Filtered BackProjection, FBP)

In tomography, a widely used family of algorithms for reconstructing images is based on the convolution and backprojection method, known as Filtered Back-Projection (FBP) [63]. Buzug et al. [23] demonstrated that with the matrix representation from (3), the following relationship can be derived:
\bar{M} = (W^T \cdot W)^{-1} \cdot W^T \cdot \bar{G} = W^T \cdot (W \cdot W^T)^{-1} \cdot \bar{G},
where W^T represents the backprojection operator matrix. The matrices (W^T · W)^{-1} and (W · W^T)^{-1} are filtering operators that can be applied either after or before the backprojection, respectively.
The convolution and backprojection method is a fast two-step approach. Initially, measured projections are convolved with a filter [64]. Subsequently, a backprojection operation is conducted. Proposed optimizations targeted hardware solutions. Myagotin et al. presented an implementation of the method on multi-core processors [65]. The method operates with low memory requirements due to its sequential execution. However, it requires a substantial number of projections. To ensure accurate reconstruction, projection angles should be evenly distributed within the 0 to 180-degree range, with the number of angles meeting the Nyquist criterion. Otherwise, additional data interpolation may be necessary [66], potentially impacting reconstruction accuracy. In cases of limited projections, an algebraic reconstruction approach has demonstrated effectiveness.

2.3.2. Algebraic Reconstruction: SIRT (Simultaneous Iterative Reconstruction Technique)

The algebraic approach to CT reconstruction, used alongside FBP [67], treats (3) as a system of linear algebraic equations, seeking a solution for M ¯ . Due to the system’s high dimensionality, direct methods for solving linear systems of equations are not feasible. Specialized algorithms have been proposed for this purpose [68,69]. In contrast to the algebraic reconstruction techniques proposed by Gordon et al. [68], where solution vector values are updated sequentially, the SIRT method, introduced by Gilbert in 1972 [69], updates all coordinates of the solution vector at each step. For each coordinate, the current value is calculated, taking into account all registered contributions. The solution at each iteration in matrix form is updated as follows:
\bar{M}^{(p+1)} = \bar{M}^{(p)} + \lambda_p \cdot R \cdot W^T \cdot Q \cdot \left( \bar{G} - W \cdot \bar{M}^{(p)} \right),
where λ_p is the relaxation parameter [70] and W^T represents the matrix of the backprojection operator. The diagonal matrices R and Q are of dimensions N² × N² and N·C × N·C, respectively. Their elements are computed using the formulas r_{ii} = 1/\sum_j w_{ij} and q_{jj} = 1/\sum_i w_{ij}. When the number of projection angles and/or the angle range are restricted, employing the Total Variation (TV) method [71] for regularization yields a highly precise result. In such cases, the reconstruction is acquired by solving the convex optimization problem:
\left\| W \bar{M} - \bar{G} \right\| + \alpha \left\| \bar{M} \right\|_{TV} \to \min_{\bar{M}},
where α is the regularization parameter [71]. The optimized expression, and thus the iteratively obtained solution, includes both forward projection and backprojection operations. While the algebraic approach to CT reconstruction demands more computational resources than the convolution and backprojection method, its significant advantage lies in the absence of strict limitations on the number and quality of tomographic projections, as well as its independence from a uniform distribution of the observed projection angles. Fast reconstruction using iterative algorithms can be achieved through two approaches. The first involves augmenting computational resources and devising methods for parallel processing [72]. The second focuses on designing and utilizing algorithms with reduced computational complexity. However, while efficient algorithms for computing the forward projection operator exist for specific measurement schemes [52], fast computation of the backprojection operator is not guaranteed.
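As a concrete illustration of the SIRT update above, the following minimal sketch implements one possible iteration loop with dense NumPy arrays; R and Q are taken here as the inverse column and row sums of W (the usual SIRT normalization), and the function name sirt is a hypothetical choice of this sketch.

import numpy as np

def sirt(W, G, n_iter=100, lam=1.0):
    """Minimal SIRT sketch: M <- M + lam * R * W^T * Q * (G - W M).
    Dense arrays are used only for clarity; a practical implementation
    would rely on a sparse W or on matrix-free projection operators."""
    W = np.asarray(W, dtype=float)
    R = 1.0 / np.maximum(W.sum(axis=0), 1e-12)    # per-pixel weights (diagonal of R)
    Q = 1.0 / np.maximum(W.sum(axis=1), 1e-12)    # per-ray weights (diagonal of Q)
    M = np.zeros(W.shape[1])
    for _ in range(n_iter):
        residual = G - W @ M                      # forward projection of the current estimate
        M = M + lam * R * (W.T @ (Q * residual))  # backprojection of the weighted residual
    return M

Both the forward projection W @ M and the backprojection W.T @ (...) appear in every iteration, which is why fast algorithms for both operators determine the overall cost of the method.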

2.3.3. Neural Network-Based Reconstruction Techniques

The development of neural network-based approaches to tomographic reconstruction began in 1995 with the landmark work by Kerr et al. [73]. They demonstrated that a neural network could produce tomographic images of comparable accuracy to those used in its training. This was based on the neural network's capacity to discern and evaluate intricate functional relationships. The neural network was trained using reconstructions obtained through the FBP method. Nowadays, the integration of neural network models into reconstruction algorithms facilitates high-quality reconstructions in few-view CT [74], especially in cases when the angular range is limited [75,76], or when the measured projections are noisy [77]. Groundbreaking applications of the method, such as tomographic measurements with nanometer resolution, real-time CT, and the study of dynamic processes, pose challenges for traditional methods. In neural network approaches, the operations of forward projection and backprojection remain the mathematical cornerstone.

2.4. Fast Approximation Schemes

Efficient CT reconstruction algorithms can be built with fast approximate computational schemes that utilize the Fast Hough Transform (FHT), also known as the Fast Discrete Radon Transform [45,78,79,80,81]. The Brady–Yong algorithm [45] approximates straight lines using discrete dyadic patterns. While these patterns enable optimized summation calculations, they exhibit some degree of approximation inaccuracy. For comprehensive insights into the general properties and practical applications of FHT in 2D cases, refer to [82,83].
In [84], FHT was proposed for speeding up the Simultaneous Algebraic Reconstruction Technique (SART) [85] in 2D reconstruction. The authors applied FHT both for the forward projection operator in the calculation of residuals and for the transposed operator in the calculation of corrections in the iterative scheme. This led to a reduction in the asymptotic complexity of a single iteration from Θ(N³) to Θ(N² log N). Additionally, in [86], a modification of the FBP algorithm, which employs FHT to speed up the most computationally intensive step (backprojection), is introduced. This modification allows for image reconstruction with Θ(N² log N) addition operations and Θ(N²) multiplication operations.
In 3D CT reconstruction schemes, ref. [35] demonstrated that summing over Θ(N³) patterns can be computed in Θ(N^{7/2}) addition operations by utilizing a generalization of FHT for 3D introduced by Ershov et al. [87]. However, ref. [35] provides only a theoretical estimate and a general outline of the algorithm without a specific implementation, and the algorithm for the fast computation of the transposed operator was not discussed.
Thus, reducing the constant in the computation complexity of the forward projection algorithm and devising fast schemes for the transposed operator will substantially improve the efficiency of various CT reconstruction algorithms.

3. Transposing the Summation Algorithm for Computing the Summation Operator, Represented as a Product of Boolean Matrices

3.1. Notations and Definitions

Let B represent a Boolean matrix defined as
B \overset{\mathrm{def}}{=} \left( b_{y,x} \in \{0,1\} : y \in \mathbb{Z}_{0,H-1},\ x \in \mathbb{Z}_{0,W-1} \right),
where y denotes the row index in matrix B, x denotes the column index in matrix B, H ≡ H(B) signifies the number of matrix rows, and W ≡ W(B) indicates the number of matrix columns, H(B), W(B) ∈ Z_{1,∞}.
The number of ones in matrix B is denoted as C_1(B) and can be computed using the formula:
C_1(B) \overset{\mathrm{def}}{=} \sum_{y,x} b_{y,x}.
The count of summations for a Boolean matrix B is denoted as C_{Summ}(B) and is calculated as follows:
C_{Summ}(B) \overset{\mathrm{def}}{=} C_1(B) - H(B).
The number of duplications for a Boolean matrix B is denoted as C_{Copy}(B) and is determined as
C_{Copy}(B) \overset{\mathrm{def}}{=} C_1(B) - W(B).
Let dec^p(·) represent a predefined decomposition of a Boolean matrix into the product of p Boolean matrices:
dec^p(B) \overset{\mathrm{def}}{=} \prod_{i=p}^{1} dec_i^p(B) = \prod_{i=p}^{1} B_i = B_p B_{p-1} \cdots B_1 = B,
where dec_i^p(B) \overset{\mathrm{def}}{=} B_i is the i-th component of the decomposition. In general, we assume dec^1(B) ≡ B.
Similar related functions C_1(dec^p(·)), H(dec^p(·)), W(dec^p(·)), C_{Summ}(dec^p(·)), and C_{Copy}(dec^p(·)) for the decomposition of a Boolean matrix into the product of p Boolean matrices are computed by summing the corresponding function over all matrices in the decomposition:
C_1(dec^p(B)) \overset{\mathrm{def}}{=} \sum_{i=1}^{p} C_1(dec_i^p(B)),
H(dec^p(B)) \overset{\mathrm{def}}{=} \sum_{i=1}^{p} H(dec_i^p(B)),
W(dec^p(B)) \overset{\mathrm{def}}{=} \sum_{i=1}^{p} W(dec_i^p(B)),
C_{Summ}(dec^p(B)) \overset{\mathrm{def}}{=} \sum_{i=1}^{p} C_{Summ}(dec_i^p(B)) = \sum_{i=1}^{p} \left[ C_1(dec_i^p(B)) - H(dec_i^p(B)) \right] = C_1(dec^p(B)) - H(dec^p(B)),
C_{Copy}(dec^p(B)) \overset{\mathrm{def}}{=} \sum_{i=1}^{p} C_{Copy}(dec_i^p(B)) = C_1(dec^p(B)) - W(dec^p(B)).
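The counting functions above translate directly into code. The small Python helpers below (the names c1, c_summ, c_copy, and c_summ_dec are our own) compute C_1, C_Summ, and C_Copy for a single NumPy Boolean matrix and for a decomposition given as a list of factors.

import numpy as np

def c1(B):
    """Number of ones C_1(B) in a Boolean matrix B."""
    return int(B.sum())

def c_summ(B):
    """Summation count C_1(B) - H(B), i.e., additions needed to apply B row-wise."""
    return c1(B) - B.shape[0]

def c_copy(B):
    """Duplication count C_1(B) - W(B)."""
    return c1(B) - B.shape[1]

def c_summ_dec(factors):
    """Summation count of a decomposition B = B_p ... B_1 given as a list of factors."""
    return sum(c_summ(Bi) for Bi in factors)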

3.2. Correspondence of Decompositions for Forward and Transposed Operators in Boolean Matrix Products

Let us consider the approximation of ray sum values in matrix form as shown in Equation (3). We use a linear operator:
B : \mathbb{R}^n \to \mathbb{R}^m.
The matrix B of this operator is Boolean within the previously introduced Cartesian coordinate system x0y (n, m ∈ Z_{1,∞}). Without loss of generality, we further assume that the matrix B of the operator B and all matrices dec_i^p(B) in the decomposition do not contain zero rows and columns. This is because, if they did, the algorithmic complexity of computing B M̄ could be reduced by excluding a portion of the input data M̄ from consideration, where M̄ = (μ_i ∈ ℝ : i ∈ Z_{1,n})^T.
The transpose of the operator (17), denoted as B^T, is defined by the matrix B^T:
B^T : \mathbb{R}^m \to \mathbb{R}^n.
For every decomposition d e c p ( B ) of the matrix B representing the forward operator B , there is a corresponding decomposition d e c p ( B T ) of the matrix B T for the transposed operator B T . This is computed by considering the transposition of matrix products:
dec^p(B^T) = B^T = (B_p B_{p-1} \cdots B_1)^T = B_1^T \cdots B_{p-1}^T B_p^T = \prod_{i=p}^{1} B_{p-i+1}^T .

3.3. Summation-Based Algorithms and Matrix Decompositions for Forward and Transposed Operators as the Product of Boolean Matrices—Algorithmic Complexity in Computing Forward and Transposed Operators

Let us label an algorithm as summation-based if it relies solely on addition operations for data manipulation.
The algorithm for computing the result B M̄ of applying the operator B defined in (17) to the input vector M̄, expressed as
B \bar{M} = dec^p(B)\, \bar{M},
employs a sequential (row-wise) multiplication of the matrix components dec_i^p(B) ≡ B_i, i ∈ Z_{1,p}, in the decomposition dec^p(B) of the operator's matrix B with the input vector M̄. This process employs only summation; hence, its algorithmic complexity is C_{Summ}(dec^p(B)). In this algorithm, the j-th component of the vector V̄^(i) = B_i V̄^(i−1), i ∈ Z_{1,p} (V̄^(0) = M̄, V̄^(p) = B M̄), is computed as the product of the j-th row of the matrix B_i and the column vector V̄^(i−1). This computation requires one addition operation less than the number of non-zero elements in the j-th row of the matrix B_i. Consequently, computing all components of the vector V̄^(i) = B_i V̄^(i−1) requires C_1(B_i) − H(B_i) ≡ C_{Summ}(B_i), i ∈ Z_{1,p}, addition operations. In total, the algorithm requires ∑_{i=1}^{p} C_{Summ}(B_i) ≡ C_{Summ}(dec^p(B)) addition operations.
In a similar manner, the algorithm for computing the result of applying the transposed operator B^T to the input vector, achieved through sequential (row-wise) multiplication of the matrix components dec_i^p(B^T) ≡ B_{p−i+1}^T, i ∈ Z_{1,p}, in the decomposition dec^p(B^T) of the matrix B^T of the operator B^T, is also summation-based, with an algorithmic complexity of C_{Summ}(dec^p(B^T)).
Now, let us determine the complexity relationship between the algorithms for computing the results of applying the forward and transposed operators when utilizing a pair of corresponding decompositions:
C_{Summ}(dec^p(B)) - C_{Summ}(dec^p(B^T)) = \sum_{i=1}^{p} \left( C_1(dec_i^p(B)) - H(dec_i^p(B)) \right) - \sum_{i=1}^{p} \left( C_1(dec_i^p(B^T)) - H(dec_i^p(B^T)) \right) = \sum_{i=1}^{p} \left( C_1(dec_i^p(B)) - C_1(dec_i^p(B^T)) \right) + \sum_{i=1}^{p} \left( H(dec_i^p(B^T)) - H(dec_i^p(B)) \right).
Since the number of non-zero elements in the corresponding matrix components of the decompositions dec^p(B) and dec^p(B^T) for the matrices B and B^T of the forward and transposed operators B and B^T is the same (i.e., C_1(dec^p(B)) = C_1(dec^p(B^T))), and for the decomposition dec^p(B) the relationships H(dec_i^p(B)) = W(dec_{i+1}^p(B)), i ∈ Z_{1,p−1}, hold due to the compatibility of the dimensions of the multiplied matrix components B_j, j ∈ Z_{1,p}, in the decomposition dec^p(B), equality (21) can be extended:
\sum_{i=1}^{p} \left( C_1(dec_i^p(B)) - C_1(dec_i^p(B^T)) \right) + \sum_{i=1}^{p} \left( H(dec_i^p(B^T)) - H(dec_i^p(B)) \right) = \sum_{i=1}^{p} \left( H(dec_i^p(B^T)) - H(dec_i^p(B)) \right) = H(dec_p^p(B^T)) - H(dec_p^p(B)) = W(dec_1^p(B)) - H(dec_p^p(B)) = W(B) - H(B).
Combining (21) and (22), we derive the following:
C_{Summ}(dec^p(B)) - C_{Summ}(dec^p(B^T)) = W(B) - H(B) = n - m.
Hence, the difference in algorithmic complexities when computing the results of applying forward and transposed operators using corresponding Boolean decompositions is a constant determined by the dimensions of the input and output vectors. This result is illustrated in Figure 2.
Given that the precise computation of the forward or transposed operator requires no less than n addition operations, formula (23) implies that if n − m(n) = Θ(n) as n → ∞, then C_{Summ}(dec^p(B^T)) = Θ(C_{Summ}(dec^p(B))) and C_{Summ}(dec^p(B)) = Θ(C_{Summ}(dec^p(B^T))) as n → ∞. In other words, the asymptotic algorithmic complexities for the precise computation of the forward and transposed operators are of the same order.
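The constant-difference relation above is easy to verify numerically. The following self-contained sketch (helper names are ours) draws random Boolean factors without zero rows or columns and checks that the summation counts of the forward and transposed decompositions differ exactly by W(B) − H(B) = n − m.

import numpy as np

def random_boolean_factor(rows, cols, rng):
    """Random Boolean matrix with no zero rows or columns."""
    while True:
        B = (rng.random((rows, cols)) < 0.3).astype(int)
        if B.sum(axis=0).all() and B.sum(axis=1).all():
            return B

def c_summ(B):
    """Summation count C_1(B) - H(B)."""
    return int(B.sum()) - B.shape[0]

rng = np.random.default_rng(1)
dims = [12, 20, 15, 9]      # n = 12 inputs, m = 9 outputs, p = 3 factors
# factors[i] maps R^dims[i] -> R^dims[i+1]; the product is B = B_3 B_2 B_1
factors = [random_boolean_factor(dims[i + 1], dims[i], rng) for i in range(3)]

forward_cost = sum(c_summ(Bi) for Bi in factors)        # C_Summ(dec^p(B))
transposed_cost = sum(c_summ(Bi.T) for Bi in factors)   # C_Summ(dec^p(B^T))

B = factors[2] @ factors[1] @ factors[0]                # the full matrix B
assert forward_cost - transposed_cost == B.shape[1] - B.shape[0]   # W(B) - H(B) = n - m
print(forward_cost, transposed_cost, B.shape)           # difference is 12 - 9 = 3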

3.4. Universal Method for Deriving Efficient Transposed Operator Calculation Algorithms

Consider a Boolean matrix B representing the linear operator B, which can be decomposed into the product of Boolean matrices as dec^p(B) = B_p B_{p−1} ⋯ B_1, a decomposition that corresponds to an established fast algorithm for computing the operator B. Then the decomposition derived from dec^p(B), namely dec^p(B^T) = B_1^T B_2^T ⋯ B_p^T of the matrix B^T of the transposed operator B^T, leads to an efficient algorithm for computing B^T. Its algorithmic complexity is asymptotically of the same order as that of computing the original operator B.
The described method is a versatile approach for transposing the algorithm used to compute the forward operator. This can be implemented in the form of sequential multiplications of Boolean matrices, which are derived from the known computationally efficient forward operator matrix decomposition, with an input data vector. It enables the derivation of an algorithm for computing the transposed operator with a well-defined asymptotic complexity, which is on par with that of computing the forward operator.
Next, we will explore the practical implementations of this generalized transposition method in the context of summation-based algorithms for computing backprojection operators in CT reconstruction.
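In code, the method amounts to reusing the factor list of the forward algorithm: the forward operator applies the factors in order, while the transposed operator applies the transposed factors in reverse order. The sketch below (with illustrative helper names apply_forward and apply_transposed) checks both against dense matrix products.

import numpy as np

def apply_forward(factors, x):
    """Apply B = B_p ... B_1 given the factor list [B_1, ..., B_p]."""
    for Bi in factors:
        x = Bi @ x
    return x

def apply_transposed(factors, y):
    """Apply B^T = B_1^T ... B_p^T using the same factors in reverse order."""
    for Bi in reversed(factors):
        y = Bi.T @ y
    return y

rng = np.random.default_rng(0)
factors = [(rng.random((7, 5)) < 0.4).astype(float),    # B_1: R^5 -> R^7
           (rng.random((4, 7)) < 0.4).astype(float)]    # B_2: R^7 -> R^4
B = factors[1] @ factors[0]
x, y = rng.random(5), rng.random(4)
assert np.allclose(apply_forward(factors, x), B @ x)
assert np.allclose(apply_transposed(factors, y), B.T @ y)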

4. Inverting the Projection Operator Calculation Algorithm (FHT Approximation)

4.1. Fundamental 2D FHT Concepts

Consider a standard rectangular Cartesian coordinate system denoted as X0Y associated with an image. This image can be represented as a 2^n × 2^n, n ∈ Z_{1,∞}, matrix I_{2^n}. The bottom-left corner of the image corresponds to the point 0, and each element of the image, known as a pixel, is depicted as a square with a side length of 1 (see Figure 3).
Let us consider the parameterization (s, t) of straight lines on the plane X0Y [81]. This defines a continuous straight line passing through points on the lines that bound the image (refer to Figure 3):
-
A predominantly vertical straight line (PVL) passes through points (s_C, 0) and (s_C + t_C, 2^n), situated on the straight lines that bound the image from above and below;
-
A predominantly horizontal straight line (PHL) passes through points (0, s_C) and (2^n, s_C + t_C), situated on the straight lines that bound the image from the left and right.
For simplicity (similar to [45]), we focus exclusively on predominantly vertical straight lines with a negative slope (t ≤ 0).
The discrete set
\Omega = \mathbb{Z}_{0,2^n-1} \times \mathbb{Z}_{0,2^n-1}
defines the coordinates of the pixels (x, y) ∈ Ω. A discrete straight line (straight-line pattern) is a subset of Ω that approximates a continuous straight line. It is defined by a pair of pixels with coordinates (s_D, 0) ∈ Ω and (s_D − t_D, 2^n − 1) ∈ Ω along the image boundaries. Here, s_D ∈ Z_{0,2^n−1} represents the integer shift at the pattern's beginning, and t_D ∈ Z_{0,2^n−1} denotes the integer shift of the pattern's end relative to its beginning (see Figure 3). Continuous straight lines extending beyond the image are approximated by the straight-line pattern defined as the pattern for the original continuous straight line within the region of the extended, zero-padded image.
The creators of the FHT algorithm [45] proposed using a dyadic pattern as a discrete straight line, which can be defined both recursively [45] and analytically [53]. The properties of this pattern, including estimates of its maximum deviation from a continuous straight line, have been explored in [46,53,54].
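For intuition, a dyadic pattern can be generated with a short recursion that splits a pattern of length 2^i into two sub-patterns of length 2^{i−1}; the floor/ceil division of the shift in the sketch below is our assumption about the exact convention, and the function name dyadic_pattern is illustrative.

def dyadic_pattern(s, t, length):
    """Pixels (x, y) of a dyadic straight-line pattern of the given length
    (a power of two), starting at x = s with total end shift t, so that the
    pattern runs from (s, 0) to (s - t, length - 1)."""
    if length == 1:
        return [(s, 0)]
    half = length // 2
    lower = dyadic_pattern(s, t // 2, half)
    upper = dyadic_pattern(s - (t - t // 2), t // 2, half)
    return lower + [(x, y + half) for (x, y) in upper]

# An 8-pixel pattern from (5, 0) to (2, 7), i.e. s_D = 5, t_D = 3:
print(dyadic_pattern(5, 3, 8))
# [(5, 0), (5, 1), (4, 2), (4, 3), (3, 4), (3, 5), (2, 6), (2, 7)]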

4.2. Two-Dimensional Operator Computation Algorithms

In the development of CT reconstruction algorithms, particularly in cases with a small number of angles (few-view CT) [88] and/or monitored reconstruction (monitored CT) [89], the process of summing over all directions may prove to be excessive (refer to the analysis in Section 4.2.5). Hence, we will now explore a more adaptable approach. This entails computing sums for patterns of length 2^k, k < n, followed by aggregating these partial sums to derive totals along all discrete straight lines in the specified set of directions.

4.2.1. Two-Dimensional Projection Algorithm with FHT Termination and Aggregation (for Arbitrary Set of Straight Lines)

The original Brady–Yong algorithm [45] calculates sums for all discrete straight lines passing through the image. The algorithm dirPFHT (Directed Partial Fast Hough Transform) is a modification of the Brady–Yong algorithm. For an input image I_{2^n} with dimensions 2^n × 2^n, it computes the output vector of sums s̄ = (s_j)_{j=0}^{q−1} for q discrete straight lines parameterized in (s, t) and defined as a set L = {(x_j, a_j)}_{j=0}^{q−1}.
Main Computation Stages:
  • (Right) zero padding of the original image I_{2^n} to R_0;
  • Computing partial sums along patterns of length 2^k (using dirPFHT_k);
  • Aggregating sums for each discrete straight line from L (using dirSUM_k).
Algorithm 1 provides pseudocode for the two-dimensional projection algorithm with termination at the k-th iteration and aggregation.
Algorithm 1 Algorithm for forward FHT with termination and aggregation
1: procedure dirPFHT(L, I_{2^n}, n, k)
2:     R_0(x, y, 0) ← I_{2^n}(x, y)  ∀x ∈ Z_{0,2^n−1}, y ∈ Z_{0,2^n−1}    ▹ copy the original data
3:     R_0(x, y, 0) ← 0  ∀x ∈ Z_{2^n,2^{n+1}−1}, y ∈ Z_{0,2^n−1}    ▹ (right) zero padding
4:     R_k ← dirPFHT_k(R_0, n, k)    ▹ compute sums along patterns with length 2^k
5:     s_j ← dirSUM_k(x_j, a_j, R_k, n, k)  ∀j ∈ Z_{0,q−1}, (x_j, a_j) ∈ L    ▹ calculate sums along discrete straight lines from L
6:     return s̄
Algorithm 2 dirPFHT_k computes and stores the sums for the input tensor R_0 in the form of a tensor R_k, summing over sub-patterns of length 2^k. When k = n, the result conforms to the output of the Brady–Yong algorithm [45].
Algorithm 2 Forward FHT with termination
1: procedure dirPFHT_k(R_0, n, k)
2:     for i = 1 to k do    ▹ along the exponent range of the pattern length
3:         for a = 0 to 2^i − 1 do    ▹ along the absolute 0X shift of the pattern end
4:             for y = 0 to 2^n − 2^i step 2^i do    ▹ along the 0Y shift of the pattern beginning
5:                 for x = 0 to 2^{n+1} − 1 do    ▹ along the 0X shift of the pattern beginning
6:                     x_2 ← (x − ⌈a/2⌉) mod 2^{n+1}    ▹ circular shift
7:                     y_2 ← y + 2^{i−1}
8:                     R_i(x, y, a) ← R_{i−1}(x, y, ⌊a/2⌋) + R_{i−1}(x_2, y_2, ⌊a/2⌋)    ▹ saving the sum of two sub-patterns
9:     return R_k
Algorithm 3 dirSUM_k aggregates the sum along a pattern with specified parameters (x, a), where x ∈ Z_{0,2^n−1}, a ∈ Z_{0,2^n−1}, for the tensor R_k with sums along sub-patterns of length 2^k. It does so through a recursive call to recSUM_k.
Algorithm 3 Aggregation for the forward projection operator
1: procedure dirSUM_k(x, a, R_k, n, k)
2:     return recSUM_k(x, 0, a, n, R_k, n, k)
Algorithm 4 recSUM_k recursively aggregates the precomputed partial sums over patterns of length 2^i, i ∈ Z_{k,n}, in R_k along a discrete straight line given by (x, a).
Algorithm 4 Aggregation
1: procedure recSUM_k(x, y, a, i, R_k, n, k)
2:     if i = k then
3:         s ← R_k(x, y, a)    ▹ precomputed sum from R_k
4:     else
5:         x_2 ← (x − ⌈a/2⌉) mod 2^{n+1}    ▹ circular shift
6:         y_2 ← y + 2^{i−1}
7:         s ← recSUM_k(x, y, ⌊a/2⌋, i−1, R_k, n, k) + recSUM_k(x_2, y_2, ⌊a/2⌋, i−1, R_k, n, k)
8:     return s
We will now introduce a matrix representation for the algorithm dirPFHT with specified values of n, k, and L.

4.2.2. Matrix Representation of the 2D Projection Algorithm with FHT Termination and Aggregation

For i ∈ Z_{1,k}, the element of the tensor R_i(x, y, a) stores the sum value along a pattern of length 2^i along the 0Y axis. Here, the pattern initiates at point (x, y), and the pattern's end is shifted by a along the 0X axis. The tensor R_0 is obtained through zero-padding of the original image:
R_0(x, y, 0) = \begin{cases} I_{2^n}(x, y), & x \in \mathbb{Z}_{0,2^n-1},\ y \in \mathbb{Z}_{0,2^n-1}, \\ 0, & x \in \mathbb{Z}_{2^n,2^{n+1}-1},\ y \in \mathbb{Z}_{0,2^n-1}. \end{cases}
Let us denote the set of indices used in the tensor R_i by XYA^(i),
(x, y, a) \in XYA^{(i)} = X^{(i)} \times Y^{(i)} \times A^{(i)},
where
x \in X^{(i)} = \{ j \}_{j=0}^{2^{n+1}-1}, \quad y \in Y^{(i)} = \{ j \cdot 2^i \}_{j=0}^{2^{n-i}-1}, \quad a \in A^{(i)} = \{ j \}_{j=0}^{2^i-1}.
It is worth noting that
\dim(XYA^{(i)}) = 2^{2n+1}, \quad i \in \mathbb{Z}_{0,n}.
For all i ∈ Z_{0,n}, each element of R_i can be represented in terms of V̄^(i). More precisely, let J^(i) be the bijective function mapping the index (x, y, a) ∈ XYA^(i) of an element of the tensor R_i to the index j of the corresponding element of the vector V̄^(i):
J^{(i)} : XYA^{(i)} \to \mathbb{Z}_{0,2^{2n+1}-1}
according to the formula
J^{(i)}(x, y, a) = x + y \cdot 2^{n+1-i} + a \cdot 2^{2n+1-i}.
Then, the inverse mapping H^(i) = (J^(i))^{−1}, H^(i) : Z_{0,2^{2n+1}−1} → XYA^(i), is computed using
x = j \bmod 2^{n+1}, \quad y = \left( \left\lfloor j / 2^{n+1} \right\rfloor \bmod 2^{n-i} \right) \cdot 2^i, \quad a = \left\lfloor j / 2^{2n+1-i} \right\rfloor.
A 2^{2n+1} × 2^{2n+1} Boolean matrix B_i allows for the calculation of
\bar{V}^{(i)} = B_i \bar{V}^{(i-1)}.
The value of the element B_i(p, t) for (p, t) ∈ Z_{0,2^{2n+1}−1} × Z_{0,2^{2n+1}−1} is determined by the following formula
B_i(p, t) = \begin{cases} 1, & \text{if } t = J^{(i-1)}(x_p, y_p, \lfloor a_p/2 \rfloor)\ \vee\ t = J^{(i-1)}\big( (x_p - \lceil a_p/2 \rceil) \bmod 2^{n+1},\ y_p + 2^{i-1},\ \lfloor a_p/2 \rfloor \big), \\ 0, & \text{otherwise}, \end{cases}
where (x_p, y_p, a_p) = H^(i)(p).
In each row and each column of the matrix B_i, there are exactly two ones; therefore,
C_1(B_i) = 2 H(B_i) = 2 \cdot 2^{2n+1} = 2^{2n+2}.
According to the definition (9), the number of summations for the matrix B_i is
C_{Summ}(B_i) = C_1(B_i) - H(B_i) = 2^{2n+2} - 2^{2n+1} = 2^{2n+1}.
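The index maps J^(i) and H^(i) make B_i straightforward to materialize. The sketch below builds it explicitly and checks the two-ones-per-row property; the floor/ceil split of the shift a and the helper names J, H, and build_Bi are assumptions of this sketch.

import numpy as np

def J(i, n, x, y, a):
    """Index map J^(i): (x, y, a) -> position in the vector V^(i)."""
    return x + y * 2 ** (n + 1 - i) + a * 2 ** (2 * n + 1 - i)

def H(i, n, p):
    """Inverse map H^(i) = (J^(i))^-1."""
    x = p % 2 ** (n + 1)
    y = ((p // 2 ** (n + 1)) % 2 ** (n - i)) * 2 ** i
    a = p // 2 ** (2 * n + 1 - i)
    return x, y, a

def build_Bi(i, n):
    """Boolean matrix B_i: each pattern of length 2^i is the sum of two
    sub-patterns of length 2^(i-1) (floor/ceil split of a assumed)."""
    size = 2 ** (2 * n + 1)
    B = np.zeros((size, size), dtype=np.uint8)
    for p in range(size):
        x, y, a = H(i, n, p)
        lo, hi = a // 2, a - a // 2
        B[p, J(i - 1, n, x, y, lo)] = 1
        B[p, J(i - 1, n, (x - hi) % 2 ** (n + 1), y + 2 ** (i - 1), lo)] = 1
    return B

n, i = 3, 2
Bi = build_Bi(i, n)
assert (Bi.sum(axis=1) == 2).all()                     # exactly two ones per row
assert Bi.sum() == 2 ** (2 * n + 2)                    # C_1(B_i) = 2^(2n+2)
print("C_Summ(B_i) =", int(Bi.sum()) - Bi.shape[0])    # 2^(2n+1) = 128 for n = 3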
For some n, k, the discrete straight line given by (x, a) defines a set U_n^k(x, a) of indices for elements in R_k, the values of which are employed to aggregate sums during the recursive call recSUM_k:
U_n^k(x, a) = \{ (x_j, y_j, a_j) \}_{j=0}^{2^{n-k}-1} \subset XYA^{(k)}.
The aggregating matrix S is of size q × 2^{2n+1} and allows for the computation of the vector of sums
\bar{s} = S \bar{V}^{(k)}.
The values of the elements in S are determined by the set of discrete straight lines L:
S(p, t) = \begin{cases} 1, & (x_p, a_p) \in L\ \wedge\ H^{(k)}(t) \in U_n^k(x_p, a_p), \\ 0, & \text{otherwise}. \end{cases}
According to the algorithm, each row of matrix S contains precisely 2^{n−k} ones; hence,
C_1(S) = q \cdot 2^{n-k}
and following the definition in (9), the count of summations for matrix S is
C_{Summ}(S) = C_1(S) - H(S) = q \cdot 2^{n-k} - q = q (2^{n-k} - 1).
The sequential application of (32) and (37) leads to the matrix representation of dirPFHT, which takes the form
\bar{s} = S B_k \cdots B_1 \bar{V}^{(0)}.
The algorithm's computational complexity, considering both (35) and (40), is determined as follows:
C_{Summ}(S B_k \cdots B_1) = C_{Summ}(S) + \sum_{i=1}^{k} C_{Summ}(B_i) = q (2^{n-k} - 1) + k \cdot 2^{2n+1}.

4.2.3. Complexity Analysis of 2D Projection Algorithm with FHT Termination and Aggregation

Given that projection directions are uniformly distributed, let the parameter α ∈ (0, 1] ⊂ ℝ determine the proportion of directions compared to the maximum possible number (2^n for each type of line). This results in the number of distinct discrete straight lines for each type, as expressed below
q = \dim(\bar{s}) = 2^n \cdot \alpha 2^n = \alpha 2^{2n}.
Considering the four types of lines in the 2D scenario, the total number of summations, denoted as T_B(n, k, α), in the implementation (42), is as follows
T_B(n, k, \alpha) = 4\, C_{Summ}(S B_k B_{k-1} \cdots B_1) = 4 \alpha 2^{2n} (2^{n-k} - 1) + 4 k\, 2^{2n+1} = \alpha 2^{3n-k+2} - \alpha 2^{2n+2} + k\, 2^{2n+3} = 2^{2n+2} \left( \alpha (2^{n-k} - 1) + 2k \right).
Note that
T_B(n, 0, 1) = 2^{2n+2} (2^n - 1)
for naive summation along all discrete lines, considering image padding, and for the full FHT in the Brady–Yong method (without aggregation):
T_B(n, n, 1) = n\, 2^{2n+3}.
Next, let us explore how k and α influence the number of operations, introducing the following notation
\delta = n - k, \quad \delta \in \mathbb{Z}_{0,n}.
We will analyze the normalized difference Δ_n^B(δ, α) between the complexity of the proposed algorithm, T_B(n, n−δ, α), and that of the original Brady–Yong algorithm, T_B(n, n, 1), normalized by the factor 2^{2n+2} (representing the number of discrete straight lines for all types and shifts),
\Delta_n^B(\delta, \alpha) \overset{\mathrm{def}}{=} \frac{1}{2^{2n+2}} \left( T_B(n, n-\delta, \alpha) - T_B(n, n, 1) \right) = \alpha (2^\delta - 1) - 2\delta.
Without aggregation (δ = 0), the complexities coincide: Δ_n^B(0, α) = 0. To analyze the function's properties, let us take the first and second derivatives of Δ_n^B(δ, α) with respect to δ. This leads to
\frac{\partial \Delta_n^B}{\partial \delta} = \alpha\, 2^\delta \ln 2 - 2,
\frac{\partial^2 \Delta_n^B}{\partial \delta^2} = \alpha\, 2^\delta \ln^2 2.
With ∂²Δ_n^B/∂δ² > 0, the function Δ_n^B(δ, α) is convex with respect to δ. Let us determine the minimum value of the expression with respect to δ, yielding the following
\frac{\partial \Delta_n^B}{\partial \delta} = \alpha\, 2^\delta \ln 2 - 2 = 0,
2^\delta = \frac{2}{\alpha \ln 2},
\delta = \log_2 \frac{2}{\alpha \ln 2} = 1 - \log_2(\alpha \ln 2).
Therefore, the function Δ_n^B(δ, α) attains its minimum at
\delta_{min}(\alpha) = 1 - \log_2 \ln 2 - \log_2 \alpha.
The optimal value, δ_opt, can be approximated by rounding δ to the nearest integer:
\delta_{opt}(\alpha) \overset{\mathrm{def}}{=} \operatorname*{argmin}_{\delta \in \mathbb{Z}_{0,n}} \Delta_n^B(\delta, \alpha).
The optimal iteration for terminating the summation is
k_{opt}(\alpha) = n - \delta_{opt}(\alpha).
Table 1 provides the values of δ_opt for various α.
Figure 4 illustrates the dependency of Δ_n^B(δ, α) on δ.
Our analysis demonstrates the advantage of the proposed 2D projection algorithm with FHT termination and aggregation compared to the classical Brady–Yong algorithm. Moreover, when α , the proportion of projections used in reconstruction, decreases, the potential computational gain grows.
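A few lines of code reproduce this trade-off directly from T_B(n, k, α); the snippet below evaluates the optimal termination depth by brute force (ties are resolved arbitrarily, so the printed values may differ from Table 1 by the rounding convention; the helper names are ours).

def T_B(n, k, alpha):
    """Total summations of the 2D projection algorithm with termination at k."""
    return 2 ** (2 * n + 2) * (alpha * (2 ** (n - k) - 1) + 2 * k)

def delta_opt(n, alpha):
    """argmin over delta in {0, ..., n} of alpha*(2^delta - 1) - 2*delta."""
    return min(range(n + 1), key=lambda d: alpha * (2 ** d - 1) - 2 * d)

n = 10
for alpha in (1.0, 0.5, 0.25, 0.1):
    d = delta_opt(n, alpha)
    gain = T_B(n, n - d, alpha) / T_B(n, n, 1)
    print(f"alpha={alpha:5}: delta_opt={d}, k_opt={n - d}, T_B/T_B(Brady-Yong)={gain:.3f}")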

4.2.4. Transposition of the 2D Projection Algorithm with FHT Termination and Aggregation

To develop a fast algorithm for calculating the transposed operator, we analyze the 2D projection algorithm with FHT termination and aggregation (see Section 4.2.1) as a sequential application of partial-sum computations for sub-patterns followed by aggregation. In line with the general transposition scheme for summation-based algorithms, the transposition of this algorithm involves executing the transposed stages in reverse order:
(1)
“Spreading” the value of each ray-sum along each pattern across all sub-patterns of length 2^k (adding the value to all sub-patterns of length 2^k that constitute it);
(2)
Transposing the algorithm for computing partial sums.
The implementation of this algorithm, denoted as revPFHT (Reversed Partial Fast Hough Transform), computes the result of applying the transposed operator to the vector of ray-sums s̄ = (s_i)_{i=0}^{q−1} for the set of discrete straight lines L = {(x_i, a_i)}_{i=0}^{q−1} within the specified range of line directions. Key stages:
  • Preparation of the tensor R_k;
  • Applying revSUM_k (the transpose of the aggregation algorithm dirSUM_k);
  • Applying revPFHT_k (the transpose of the algorithm dirPFHT_k for computing sums over patterns of length 2^k).
Algorithm 5 provides pseudocode for the transposition of the two-dimensional projection Algorithm 1.
Algorithm 5 Transpose of the algorithm dirPFHT for calculating sums along patterns of length 2^k
1: procedure revPFHT(L, s̄, n, k)
2:     R_k(x, y) ← 0  ∀x ∈ Z_{0,2^{n+1}−1}, y ∈ Z_{0,2^n−1}
3:     R_k ← revSUM_k(s_j, x_j, a_j, R_k, n, k)  ∀j ∈ Z_{0,q−1}    ▹ transpose of the aggregation algorithm
4:     R_0 ← revPFHT_k(R_k, n, k)    ▹ transpose of the summation algorithm
5:     I_{2^n}(x, y) ← R_0(x, y)  ∀x ∈ Z_{0,2^n−1}, y ∈ Z_{0,2^n−1}    ▹ copying the result
6:     return I_{2^n}
Given n and k, the algorithm revSUM_k (Algorithm 6), for a discrete straight line specified by (x, a), adds the value s to the elements of the tensor R_k corresponding to the sums over sub-patterns of length 2^k, using a recursive call to the function recSPR_k.
Algorithm 6 Transpose of the aggregation algorithm dirSUM_k
1: procedure revSUM_k(s, x, a, R_k, n, k)
2:     R_k ← recSPR_k(s, x, 0, a, n, R_k, n, k)
3:     return R_k
Algorithm 7 recSPR_k recursively “spreads” (adds) the value s across (to) all elements of the tensor R_k that are used to calculate the sum along the discrete straight line specified by (x, a).
Algorithm 7 Algorithm for recursive spreading for computing the transposed operator
1: procedure recSPR_k(s, x, y, a, i, R_k, n, k)
2:     if i = k then
3:         R_k(x, y + a) ← R_k(x, y + a) + s
4:     else
5:         R_k ← recSPR_k(s, x, y, ⌊a/2⌋, i−1, R_k, n, k)
6:         R_k ← recSPR_k(s, (x − ⌈a/2⌉) mod 2^{n+1}, y + 2^{i−1}, ⌊a/2⌋, i−1, R_k, n, k)
7:     return R_k
To transpose the algorithm for calculating partial sums over sub-patterns of length 2^k, we “reverse” the outer loop over the pattern-length exponent and replace the sum accumulation with “spreading”. Algorithm 8 revPFHT_k completes the process of computing the fully transposed operator for the matrix R_k, which contains the results of transposing the aggregation algorithm (the results of calculating sums over sub-patterns of length 2^k).
Algorithm 8 Completing the process of computing the fully transposed operator for the matrix R_k
1: procedure revPFHT_k(R_k, n, k)
2:     for i = k to 1 step −1 do
3:         R_{i−1}(x, y, 0) ← 0  ∀ x ∈ Z_{0, 2^{n+1}−1}, y ∈ Z_{0, 2^n−1}
4:         for a = 0 to 2^i − 1 do
5:             for y = 0 to 2^n − 1 step 2^i do
6:                 for x = 0 to 2^{n+1} − 1 do
7:                     v ← R_i(x, y, a) + R_{i−1}(x, y, a/2)
8:                     R_{i−1}(x, y, a/2) ← v
9:                     x2 ← (x − a/2) mod 2^{n+1}    ▹ circular shift
10:                    v ← R_i(x, y, a) + R_{i−1}(x2, y + 2^{i−1}, a/2)
11:                    R_{i−1}(x2, y + 2^{i−1}, a/2) ← v
12:    return R_0

4.2.5. Analyzing the Complexity of the Transposed 2D Projection Algorithm with FHT Termination and Aggregation

When transposing the 2D projection algorithm with FHT termination and aggregation from (19) and (41), the matrix representation of the transposed operator is as follows
B^T = (S B_k ⋯ B_1)^T = B_1^T ⋯ B_k^T S^T.
The estimation of the complexity for computing the transposed 2D projection algorithm with FHT termination and aggregation, denoted as T B T ( n , k , α ) , is determined (see Section 4.2.3) as
T_{B^T}(n, k, α) = 4 C_{Summ}(B_1^T ⋯ B_k^T S^T) = 4 C_{Summ}(B_1^T ⋯ B_k^T) + 4 C_{Summ}(S^T).
In general, the matrix S may have zero columns; consequently, S^T may have zero rows. The number of summations in the transposed aggregation algorithm is defined as
C_{Summ}(S^T) = C_1(S^T) − C_{NZR}(S^T),
where C_{NZR}(S^T) > 0 represents the number of non-zero rows of the matrix S^T.
Let us introduce
T̂_{B^T}(n, k, α) := T_{B^T}(n, k, α) + 4 C_{NZR}(S^T) = 4 C_{Summ}(B_1^T ⋯ B_k^T) + 4 C_1(S^T)
as an upper bound estimate for the number of summations
T_{B^T}(n, k, α) ≤ T̂_{B^T}(n, k, α).
Considering (21) and (35), we can calculate the number of summations in the transposed algorithm for partial sum calculation over sub-patterns:
C_{Summ}(B_1^T ⋯ B_k^T) = C_{Summ}(B_k ⋯ B_1) = Σ_{i=1}^{k} C_{Summ}(B_i) = k 2^{2n+1}.
By taking into account (39) and (43), we can compute
C_1(S^T) = C_1(S) = q 2^{n−k} = α 2^{2n} · 2^{n−k} = α 2^{3n−k}.
Substituting (62) and (63) into (60), we get
T̂_{B^T}(n, k, α) = k 2^{2n+3} + α 2^{3n−k+2} = 2^{2n+2} (2k + α 2^{n−k}).
When transposing the original Brady algorithm, aggregation is not performed ( k = n , α = 0 ). Thus, the complexity estimate is as follows:
T_{B^T}(n, n, 0) = T̂_{B^T}(n, n, 0) = T_B(n, n, 0) = 2^{2n+3} n.
Similarly to the analysis of the forward operator calculation algorithm (Section 4.2.3), let us examine the normalized difference Δ_n^{B^T}(δ, α) between the numbers of required operations:
Δ_n^{B^T}(δ, α) := (T̂_{B^T}(n, n − δ, α) − T_{B^T}(n, n, 0)) / 2^{2n+2} = 2k + α 2^{n−k} − 2n = α 2^δ − 2δ.
Analytically (similarly to Section 4.2.3), we can demonstrate that
∀ n ∈ Z_{2,∞}, ∀ α ∈ (0, 1)  ∃ δ ∈ Z_{1, n−1}:  Δ_n^{B^T}(δ, α) < 0.
Hence, when α < 1, the proposed algorithm (revPFHT) for computing the transposed 2D projection operator is computationally more efficient than the transposed Brady–Yong algorithm.
Figure 5 illustrates the dependency of Δ_n^{B^T}(δ, α) on δ.
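As a quick numerical illustration of the last expression (values chosen purely for illustration):
Δ_n^{B^T}(3, 0.1) = 0.1 · 2^3 − 2 · 3 = −5.2,   Δ_n^{B^T}(5, 0.1) = 0.1 · 2^5 − 2 · 5 = −6.8,
so for α = 0.1 the transposed algorithm with aggregation depth δ = 5 needs about 6.8 · 2^{2n+2} ≈ 27 N^2 fewer additions than the transposed Brady–Yong algorithm, whereas for α = 1.0 the smallest value of the bound, attained at δ = 1 or δ = 2, is zero.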

4.3. Fundamental 3D FHT Concepts

For detailed definitions of discrete patterns and extensions of the Brady–Yong algorithm to 3D straight lines, please refer to [87]. In 3D, the image is represented by a cubic matrix, where each element, called a voxel, is a unit cube with a side length of 1. The (s, t) parameterization classifies 3D lines into three types based on their alignment with the respective coordinate axes, as outlined in [87]. A straight line is deemed predominantly aligned with a specific axis if the acute angle between it and that axis is less than or equal to the angles between the line and the other axes. In the following discussion, we focus on lines predominantly aligned with the 0Z axis. Each such line is defined by the two points, (s_X, s_Y, 0) and (s_X + t_X, s_Y + t_Y, 2^n − 1), through which it passes (see Figure 6).
Moreover, lines within each type are further classified into four subtypes based on the signs of parameters t X and t Y , determined by their orientation relative to the coordinate axes 0 X and 0 Y . In this study, we consider cases where there is a negative slope along both axes, i.e., t X < 0 , t Y < 0 .
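To make this classification concrete, here is a small self-contained C++ illustration of our own (the function name and the sample slope values are hypothetical); it uses the fact that, for a given direction, the smallest acute angle is attained at the axis with the largest absolute direction component.

#include <cmath>
#include <cstdio>

enum class Axis { X, Y, Z };

// The axis forming the smallest acute angle with direction (dx, dy, dz)
// is the one with the largest absolute component.
Axis predominantAxis(double dx, double dy, double dz) {
    const double ax = std::fabs(dx), ay = std::fabs(dy), az = std::fabs(dz);
    if (az >= ax && az >= ay) return Axis::Z;
    return (ay >= ax) ? Axis::Y : Axis::X;
}

int main() {
    // A 0Z-dominant line from (sX, sY, 0) to (sX + tX, sY + tY, 2^n - 1), here n = 4.
    const double tX = -3.0, tY = -5.0, tZ = 15.0;
    const Axis a = predominantAxis(tX, tY, tZ);
    std::printf("predominant axis: %c\n", a == Axis::Z ? 'Z' : (a == Axis::Y ? 'Y' : 'X'));
    std::printf("subtype: tX %s 0, tY %s 0\n", tX < 0 ? "<" : ">=", tY < 0 ? "<" : ">=");
    return 0;
}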

4.4. Fast 3D Forward Projection and Back-Projection Algorithm with FHT and Aggregation

4.4.1. A Fast 3D Forward Projection Operator Algorithm for an Arbitrary Set of Straight Lines with Termination at the k-th Iteration and Aggregation

In [35,87], a technique based on FHT termination and aggregation was introduced for the efficient computation of the projection operator in 3D, akin to the dirPFHT algorithm detailed in Section 4.2.1. Now, let us discuss the implementation of the dirPFHT3D Algorithm 9, introduced in [35]. Its primary stages involve padding the original image I with zeros, computing partial sums, and aggregating the sum for each straight line from L.
Algorithm 9 Forward projection operator computation in 3D
1: procedure dirPFHT3D(L, I, n, k)
2:     R_0(x, y, z, 0, 0) ← I_{2^n}(x, y, z)  ∀ x ∈ Z_{0, 2^n−1}, y ∈ Z_{0, 2^n−1}, z ∈ Z_{0, 2^n−1}    ▹ copy the original data
3:     R_0(x, y, z, 0, 0) ← 0  ∀ x ∈ Z_{2^n, 2·2^n−1}, y ∈ Z_{2^n, 2·2^n−1}, z ∈ Z_{0, 2^n−1}    ▹ zero padding
4:     R_k ← dirPFHT_k3D(R_0, n, k)    ▹ compute partial sums (along patterns of length 2^k)
5:     s_j ← dirSUM_k3D(x_j, y_j, a_j, b_j, R_k, n, k)  ∀ j ∈ Z_{0, q−1}: (x_j, y_j, a_j, b_j) ∈ L    ▹ aggregation of sums along discrete straight lines from L
6:     return s
The tensor element R_i(x, y, z, a, b) encapsulates the summation over a pattern of length 2^i. The pattern commences at the point (x, y, z), and its terminus is shifted along the 0X axis by a and along the 0Y axis by b. The tensor R_0 represents the original image padded with zeros. The set of voxel indices employed at the i-th step of the algorithm in the tensor R_i,
(x, y, z, a, b) ∈ XYZAB^{(i)} = X^{(i)} × Y^{(i)} × Z^{(i)} × A^{(i)} × B^{(i)},
is defined as follows:
x ∈ X^{(i)} = {j}_{j=0}^{2^{n+1}−1},  y ∈ Y^{(i)} = {j}_{j=0}^{2^{n+1}−1},  z ∈ Z^{(i)} = {j · 2^i}_{j=0}^{2^{n−i}−1},  a ∈ A^{(i)} = {j}_{j=0}^{2^i−1},  b ∈ B^{(i)} = {j}_{j=0}^{2^i−1}.
Algorithm 10 for computing partial sums, denoted as dirPFHT_k3D, mirrors the previously described 2D version.
Algorithm 10 Partial Sums Computation Algorithm in 3D
1: procedure dirPFHT_k3D(R_0, n, k)
2:     for i = 1 to k do    ▹ along the pattern length 2^i
3:         for a = 0 to 2^i − 1 do    ▹ along the absolute 0X shift of the pattern end
4:             for b = 0 to 2^i − 1 do    ▹ along the absolute 0Y shift of the pattern end
5:                 for x = 0 to 2^{n+1} − 1 do    ▹ along the 0X shift of the pattern beginning
6:                     for y = 0 to 2^{n+1} − 1 do    ▹ along the 0Y shift of the pattern beginning
7:                         for z = 0 to 2^n − 2^i step 2^i do    ▹ along the 0Z shift of the pattern beginning
8:                             x2 ← (x − a/2) mod 2^{n+1}    ▹ circular shift along 0X
9:                             y2 ← (y − b/2) mod 2^{n+1}    ▹ circular shift along 0Y
10:                            z2 ← z + 2^{i−1}
11:                            R_i(x, y, z, a, b) ← R_{i−1}(x, y, z, a/2, b/2) + R_{i−1}(x2, y2, z2, a/2, b/2)
12:    return R_k
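Counting iterations of the nested loops above (with the loop bounds as reconstructed) gives the per-level cost that the matrix analysis in Section 4.4.2 assigns to B_i:
2^i · 2^i · 2^{n+1} · 2^{n+1} · 2^{n−i} = 2^{3n+2+i} additions at level i,
since the indices a, b, x, y, and z take 2^i, 2^i, 2^{n+1}, 2^{n+1}, and 2^{n−i} values, respectively, and line 11 performs one addition per iteration.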
Algorithm 11 dirSUM_k3D operates on the tensor R_k, containing sums over sub-patterns of length 2^k. It conducts aggregation of sums along a discrete 3D straight line with parameters (x, y, a, b), using a recursive call to recSUM_k3D.
Algorithm 11 Algorithm for 3D sums aggregation
1: procedure dirSUM_k3D(x, y, a, b, R_k, n, k)
2:     return recSUM_k3D(x, y, 0, a, b, n, R_k, n, k)
Algorithm 12 recSUM_k3D recursively accumulates sums along the discrete straight line given by (x, y, a, b) from the precomputed partial sums over patterns of length 2^i, i ∈ Z_{k,n}, stored in R_k.
Algorithm 12 Recursive call for sums aggregation over patterns in the 3D case
1: procedure recSUM_k3D(x, y, z, a, b, i, R_k, n, k)
2:     if i = k then
3:         s ← R_k(x, y, z, a, b)    ▹ precomputed sum from R_k
4:     else
5:         x2 ← (x − a/2) mod 2^{n+1}    ▹ circular shift along 0X
6:         y2 ← (y − b/2) mod 2^{n+1}    ▹ circular shift along 0Y
7:         z2 ← z + 2^{i−1}
8:         s ← recSUM_k3D(x, y, z, a/2, b/2, i − 1, R_k, n, k) + recSUM_k3D(x2, y2, z2, a/2, b/2, i − 1, R_k, n, k)
9:     return s
Next, we will establish the matrix representation of the algorithm dirPFHT3D for given n, k, L.

4.4.2. Matrix Representation of 3D Forward Projection Operator Computation Algorithm

It is worth noting that
dim(XYZAB^{(i)}) = 2^{3n+2+i},  i ∈ Z_{0,n}.
For any i ∈ Z_{0,n}, we can represent R_i as the vector V̄^{(i)}. Let the index (x, y, z, a, b) ∈ XYZAB^{(i)} of a tensor element of R_i and the index j of the corresponding element of the vector V̄^{(i)} be bijectively mapped: J^{(i)}: XYZAB^{(i)} → Z_{0, 2^{3n+2+i}−1}. The inverse mapping is denoted as
H^{(i)} = (J^{(i)})^{−1}.
The Boolean matrix B_i of size 2^{3n+2+i} × 2^{3n+1+i} allows us to compute
V̄^{(i)} = B_i V̄^{(i−1)}.
The value of the element B_i(p, t) for (p, t) ∈ Z_{0, 2^{3n+2+i}−1} × Z_{0, 2^{3n+1+i}−1} is determined by the following formula:
B_i(p, t) = 1, if t = J^{(i−1)}(x_p, y_p, z_p, a_p/2, b_p/2)
                or t = J^{(i−1)}((x_p − a_p/2) mod 2^{n+1}, (y_p − b_p/2) mod 2^{n+1}, z_p + 2^{i−1}, a_p/2, b_p/2);
B_i(p, t) = 0, otherwise,
where (x_p, y_p, z_p, a_p, b_p) = H^{(i)}(p). Each row of the matrix B_i contains exactly 2 ones; hence
C_1(B_i) = 2 H(B_i) = 2 · 2^{3n+2+i} = 2^{3n+3+i}.
By substituting into (9), we arrive at
C_{Summ}(B_i) = C_1(B_i) − H(B_i) = H(B_i) = 2^{3n+2+i}.
The aggregation matrix S has dimensions q × 2^{3n+2+k} and is employed to calculate the vector of sums:
s̄ = S V̄^{(k)}.
In each row of the matrix S, there are exactly 2^{n−k} ones; thus, based on (9), we find
C_{Summ}(S) = C_1(S) − H(S) = H(S)(2^{n−k} − 1) = q(2^{n−k} − 1).
The successive application of (72) and (76) yields the following matrix representation of dirPFHT3D:
s̄ = S B_k ⋯ B_1 V̄^{(0)}.

4.4.3. Number of Addition Operations in the 3D Forward Projection Operator Calculation Algorithm with Aggregation

For one type of straight lines, the number of unique discrete straight lines is determined by the range of distinct origin and end shifts, which in 3D amounts to N^2 · N^2 = N^4.
Let the proportion of considered straight lines be α. Since there are 12 types of straight lines, and taking into account (75) and (77), the number of addition operations T_B(n, k, α) when computing the forward operator (Section 4.4.1) can be calculated as follows:
T_B(n, k, α) = 12 C_{Summ}(S B_k B_{k−1} ⋯ B_1) = 12 (C_{Summ}(S) + Σ_{i=1}^{k} C_{Summ}(B_i)) = 12 (α 2^{4n} (2^{n−k} − 1) + Σ_{i=1}^{k} 2^{3n+2+i}) = 12 (α 2^{4n} (2^{n−k} − 1) + 2^{3n+2} Σ_{i=1}^{k} 2^i) = 3 · 2^{3n+2} (α 2^n (2^{n−k} − 1) + 2^3 (2^k − 1)).
Fast summation for all directions without aggregation has a complexity of
T_B(n, n, 1) = 3 · 2^{3n+2} · 2^3 (2^n − 1).
In 3D, forward projection requires summation along Θ(N^3) straight lines, corresponding to
α = 1/N = 1/2^n,
which, when substituted, yields
T_B(n, k, 1/2^n) = 3 · 2^{3n+2} ((2^{n−k} − 1) + 2^3 (2^k − 1)).
Naive summation corresponds to k = 0 , and the total number of summations during the operator calculation is
T_B(n, 0, 1/2^n) = 3 · 2^{3n+2} (2^n − 1).
In [35], a theoretical estimate of the asymptotically optimal k = n/2 is provided for the summation algorithm along Θ(2^{3n}) lines; substituting it, we get
T_B(n, n/2, 1/2^n) = 3 · 2^{3n+2} ((2^{n/2} − 1) + 2^3 (2^{n/2} − 1)) = 3^3 · 2^{3n+2} (2^{n/2} − 1) = Θ(2^{7n/2}).
The complexity estimate derived from the analysis of the matrix decomposition of the algorithm proposed in [35], Θ(2^{7n/2}), aligns with the theoretical estimate provided by the authors of [35].
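For a concrete sense of scale (an illustrative evaluation of the two expressions above, not a measured result), take n = 10, i.e., N = 1024:
T_B(10, 0, 1/2^{10}) = 3 · 2^{32} (2^{10} − 1) ≈ 1.3 · 10^{13},   T_B(10, 5, 1/2^{10}) = 3^3 · 2^{32} (2^5 − 1) ≈ 3.6 · 10^{12},
i.e., the asymptotically optimal termination depth reduces the number of additions in the forward operator roughly 3.7-fold.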
Next, we will construct the algorithm for calculating the transposed operator, achieved by transposing the algorithm introduced in [35] for computing the forward operator in 3D (FHT with aggregation).

4.4.4. Transposing 3D Forward Projection Algorithm with FHT Termination at k-th Iteration and Aggregation

The transposition of the algorithm for computing the forward projection operator in 3D with aggregation closely follows the process in the 2D case (see Section 4.2.4). It entails the sequential application of the transposed aggregation algorithm, referred to as revSUM_k3D, and the transposed algorithm for computing sums over patterns of length 2^k, denoted as revPFHT_k3D. Although the line parameterization becomes more complex and the aggregating tensor's dimensionality increases, the fundamental structure of the algorithm remains unchanged.
Implemented as the function revPFHT3D (Reversed Partial Fast Hough Transform 3D), Algorithm 13 computes the transposed operator. Given the parameters n, k, it takes a vector of ray sums s̄ = (s_i)_{i=0}^{q−1} for q directions defined as a set of discrete straight lines L = {(x_i, y_i, a_i, b_i)}_{i=0}^{q−1}, and returns a 2^n × 2^n × 2^n image I.
Algorithm 13 Algorithm for computing the transposed operator in 3D
1: procedure revPFHT3D(L, s, n, k)
2:     R_k(x, y, z, a, b) ← 0  ∀ x, y ∈ Z_{0, 2^{n+1}−1}, z = j · 2^k: j ∈ Z_{0, 2^{n−k}−1}, a, b ∈ Z_{0, 2^k−1}
3:     R_k ← revSUM_k3D(s_j, x_j, y_j, a_j, b_j, R_k, n, k)  ∀ j ∈ Z_{1, q}    ▹ transposed algorithm for aggregation in 3D
4:     R_0 ← revPFHT_k3D(R_k, n, k)    ▹ transposed algorithm for summation in 3D
5:     I(x, y, z) ← R_0(x, y, z, 0, 0)  ∀ x ∈ Z_{0, 2^n−1}, y ∈ Z_{0, 2^n−1}, z ∈ Z_{0, 2^n−1}    ▹ copying the result
6:     return I
Algorithm 14 revSUM_k3D is the transposed aggregation algorithm. Given the parameters n and k, it adds the value s to the elements of the tensor R_k corresponding to the partial sums over sub-patterns of length 2^k along the discrete straight line defined by the starting point (x, y) and the end shift (a, b).
Algorithm 14 Transposed algorithm for aggregation when computing the transposed operator in 3D
1: procedure revSUM_k3D(s, x, y, a, b, R_k, n, k)
2:     R_k ← recSPR_k3D(s, x, y, 0, a, b, n, R_k, n, k)
3:     return R_k
The accumulation of partial summation results is accomplished through a recursive call to Algorithm 15 recSPR_k3D.
Algorithm 15 Algorithm for recursive call when accumulating partial summation results
1: procedure recSPR_k3D(s, x, y, z, a, b, i, R_k, n, k)
2:     if i = k then
3:         R_k(x, y, z, a, b) ← R_k(x, y, z, a, b) + s
4:     else
5:         x2 ← (x − a/2) mod 2^{n+1}
6:         y2 ← (y − b/2) mod 2^{n+1}
7:         R_k ← recSPR_k3D(s, x, y, z, a/2, b/2, i − 1, R_k, n, k)
8:         R_k ← recSPR_k3D(s, x2, y2, z + 2^{i−1}, a/2, b/2, i − 1, R_k, n, k)
9:     return R_k
Algorithm 16 revPFHT_k3D implements the transposed algorithm for computing partial sums along patterns of length 2^k.
Algorithm 16 Transposed algorithm for computing partial sums along patterns of length 2^k
1: procedure revPFHT_k3D(R_k, n, k)
2:     for i = k to 1 step −1 do
3:         R_{i−1}(x, y, z, a, b) ← 0  ∀ x, y ∈ Z_{0, 2^{n+1}−1}, z = j · 2^i: j ∈ Z_{0, 2^{n−i}−1}, a, b ∈ Z_{0, 2^i}
4:         for a = 0 to 2^i − 1 do
5:             for b = 0 to 2^i − 1 do
6:                 for x = 0 to 2^{n+1} − 1 do
7:                     for y = 0 to 2^{n+1} − 1 do
8:                         for z = 0 to 2^n − 2^i step 2^i do
9:                             v ← R_i(x, y, z, a, b) + R_{i−1}(x, y, z, a/2, b/2)
10:                            R_{i−1}(x, y, z, a/2, b/2) ← v
11:                            x2 ← (x − a/2) mod 2^{n+1}
12:                            y2 ← (y − b/2) mod 2^{n+1}
13:                            v ← R_i(x, y, z, a, b) + R_{i−1}(x2, y2, z + 2^{i−1}, a/2, b/2)
14:                            R_{i−1}(x2, y2, z + 2^{i−1}, a/2, b/2) ← v
15:    return R_0
Next, let us evaluate the algorithmic complexity of the proposed approach.

4.4.5. Number of Addition Operations in the 3D Transposed Operator Algorithm with Aggregation

Just as in the 2D case (Section 4.2.5), considering (59), the complexity estimate T_{B^T} for the algorithm revPFHT3D computing the transposed operator in 3D can be expressed as
T_{B^T}(n, k, α) = 12 C_{Summ}(B_1^T ⋯ B_k^T S^T) = 12 (Σ_{i=1}^{k} C_{Summ}(B_i^T) + C_{Summ}(S^T)) = 12 (Σ_{i=1}^{k} C_{Summ}(B_i^T) + C_1(S^T) − C_{NZR}(S^T)),
where C_{NZR}(S^T) > 0 denotes the number of non-zero rows of the matrix S^T.
Taking into account the matrix structure with the substitution (74), we derive
Σ_{i=1}^{k} C_{Summ}(B_i^T) = Σ_{i=1}^{k} (C_1(B_i^T) − H(B_i^T)) = Σ_{i=1}^{k} (2^{3n+i+3} − 2^{3n+i+1}) = 2^{3n+1} · 6 (2^k − 1).
The transposed algorithm for fast summation across all directions, without aggregation, has a complexity of
T_{B^T}(n, n, 1) = 12 Σ_{i=1}^{n} C_{Summ}(B_i^T) = 12 · 2^{3n+1} · 6 (2^n − 1) = 9 · 2^{3n+4} (2^n − 1).
To provide an upper-bound estimate T̂_{B^T} for the complexity of computing the transposed projection operator,
T_{B^T} ≤ T̂_{B^T},
let us define
T̂_{B^T} := T_{B^T} + 12 C_{NZR}(S^T) = 12 (Σ_{i=1}^{k} C_{Summ}(B_i^T) + C_1(S^T)).
By substituting (75), (77), and (86), we find
T̂_{B^T}(n, k, α) = 12 (2^{3n+1} · 6 (2^k − 1) + α 2^{4n} · 2^{n−k}) = 3 · 2^{3n+2} (12 (2^k − 1) + α 2^{2n−k}).
In 3D, CT reconstruction methods necessitate the calculation of the operator for Θ(N^3) straight lines, corresponding to
α = 1/N = 1/2^n.
Substituting, we obtain
T̂_{B^T}(n, k, 1/2^n) = 3 · 2^{3n+2} (12 (2^k − 1) + 2^{n−k}).
The transposed algorithm of [35] does not employ aggregation, which means k = 0; hence, the number of summations is
T̂_{B^T}(n, 0, 1/2^n) = 3 · 2^{3n+2} · 2^n = 3 · 2^{4n+2}.
When employing aggregation (similarly to the estimation for the forward operator, see Section 4.4.3), we adopt the asymptotically optimal value k_opt = n/2, and by substituting it, we obtain
T̂_{B^T}(n, n/2, 1/2^n) = 3 · 2^{3n+2} (12 (2^{n/2} − 1) + 2^{n/2}) = 3 · 2^{3n+2} (13 · 2^{n/2} − 12).
Hence, the proposed and analyzed algorithm for fast computation of the transposed projection operator in 3D, obtained by transposing the algorithm proposed in [35], exhibits, in the practical case of summing over Θ(N^3) straight lines, an asymptotic complexity no less favorable than that of the fast forward projection algorithm of [35], namely Θ(N^3 √N) (with N denoting the linear size of the reconstruction).
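Analogously to the forward case, an illustrative evaluation of the upper bound for n = 10 (N = 1024) reads:
T̂_{B^T}(10, 0, 1/2^{10}) = 3 · 2^{42} ≈ 1.3 · 10^{13},   T̂_{B^T}(10, 5, 1/2^{10}) = 3 · 2^{32} (13 · 2^5 − 12) ≈ 5.2 · 10^{12},
about a 2.5-fold reduction in additions for the backprojection operator at k = n/2 (again, a formula evaluation rather than a measured result).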

5. Experiments and Discussion

We tested the proposed general transposition method for summation-based algorithms by implementing efficient forward projection and backprojection operators for the two-dimensional scenario. The implementation, written in C++, followed the computational framework outlined in Section 4.2: FHT computation with termination and aggregation. The optimized implementation of the algorithms enabled numerical experiments gauging the speed and quality of their performance. The method was implemented using our closed-source libraries for fast image processing in order to achieve realistic execution times and an adequate comparison with classical methods. For this reason, unfortunately, the implementation cannot be provided in open access; we hope, however, that the detailed pseudocode given above is sufficient to reproduce the results.
We compared the quality of reconstruction in the case of parallel-beam circular CT. As reference methods for CT reconstruction, we employed the classical FBP method [63] and the accelerated FBP for computed tomography image reconstruction (referred to as HFBP) from [86]. The modified HFBP method, utilizing fast forward and backprojection operators obtained through the proposed transposition method, is denoted as HFBP-L. The reconstructions were acquired using the Smart Tomo Engine v.2.0.0 software [90].
For our experiments, we selected the Shepp–Logan phantom (see Figure 7) with dimensions N × N, N = 1024. The set of projections simulated measurements with a detector of size N for 4N/3 source rotation angles around the object's center, with a uniform angular step from 0 to π. The projections were simulated in the parallel geometry of the experiment as a simple forward projection, which can be reproduced with any open-source software for CT simulation.
The reconstruction results, obtained using the listed methods, are displayed in Figure 7. The entire set of projection data was utilized for the reconstruction. It is important to note that in the case of the HFBP-L method, we applied optimal aggregation depth determined by corresponding formulas for δ o p t outlined in Section 4.2.3 and Section 4.2.5 for both forward projection and backprojection operators (with α = 1.0 ).
To evaluate the performance of the reconstruction methods (refer to Table 2), we employed the numerical metrics NRMSE, SSIM [91], and STRESS [92]. The NRMSE metric for the reconstructed image (r̂_i : i ∈ Z_{1,N})^T is calculated as sqrt( Σ_{i=1}^{N} |r̂_i − r_i|^2 / Σ_{i=1}^{N} |r_i|^2 ), where (r_i : i ∈ Z_{1,N})^T represents the vector of pixel values of the ideal phantom image.
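As an illustration, a minimal C++ sketch of the NRMSE computation is given below (the function name and the toy vectors are ours; the paper's actual evaluation code is not public).

#include <cmath>
#include <cstdio>
#include <vector>

// NRMSE between a reconstruction and the ideal phantom, as defined above:
// sqrt( sum |r_hat_i - r_i|^2 / sum |r_i|^2 ).
double nrmse(const std::vector<double>& recon, const std::vector<double>& ideal) {
    double num = 0.0, den = 0.0;
    for (std::size_t i = 0; i < ideal.size(); ++i) {
        const double d = recon[i] - ideal[i];
        num += d * d;                  // accumulated squared error
        den += ideal[i] * ideal[i];    // squared norm of the ideal image
    }
    return std::sqrt(num / den);
}

int main() {
    const std::vector<double> ideal = {1.0, 2.0, 3.0, 4.0};
    const std::vector<double> recon = {1.1, 1.9, 3.2, 3.8};
    std::printf("NRMSE = %.4f\n", nrmse(recon, ideal));
    return 0;
}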
The consistency in accuracy metrics between the HFBP and HFBP-L reconstruction methods is not coincidental. It confirms the validity of our algorithm implementation and of the proposed aggregation approach. The algorithm's accuracy should not be influenced by the aggregation level δ, and the reconstruction outcome should be indistinguishable from the result of the same algorithm without aggregation, as evidenced by the matching accuracy metrics. At the same time, according to the accuracy measurements, both HFBP and HFBP-L fall slightly behind the classical FBP algorithm, as they employ a less precise straight-line approximation for the sake of speed. Nonetheless, the visual quality of reconstruction using HFBP and HFBP-L is quite satisfactory, with no significant distortions observed in the images (refer to Figure 7).
Comparing algorithm speed, it is crucial to highlight that the distinction between the HFBP and HFBP-L methods lies solely in the computation of the backprojection operator. HFBP employs the transposed Brady–Yong algorithm without aggregation, whereas HFBP-L utilizes the revPFHT algorithm (refer to Section 4.2.5). When discussing execution time, the number of operations, or the algorithms' asymptotic complexity, we specifically focus on the projection operators, excluding additional costs that are identical for both methods.
Both HFBP and HFBP-L share identical asymptotic algorithmic complexities. However, HFBP-L is faster by a constant factor. For instance, at α = 1.0, HFBP-L requires at least 0.1 N^2 fewer addition operations than HFBP, and at α = 0.1 this difference is at least 22.5 N^2 (assuming the optimal aggregation depth as per formula (55)).
Experimental measurements of algorithm execution time were conducted on a Windows 10 system with x64 architecture, driven by an AMD Ryzen 7 2700X processor clocked at 3.70 GHz, featuring a 16 MB L3 cache, and 64 GB of RAM. All measurements were executed in a single thread, and each measurement was repeated 1000 times. The recorded time values were averaged with an assessment of the standard deviation. The recorded time corresponds to the computation duration of the projection operation. Additionally, although computing the forward projection operator is not essential for HFBP and HFBP-L algorithms, corresponding values were measured for completeness and for comparison with theoretical dependencies. These values are also annotated in reference to their respective algorithm.
The computation time of both the forward projection and backprojection operators for the HFBP method is independent of α and can be represented by a single value. Specifically, the computation time for the forward projection operator is t_dir^HFBP = 44.0 ± 1.1 ms, while for the backprojection operator it is t_rev^HFBP = 79.8 ± 1.8 ms.
For the HFBP-L method, the computation time is influenced by both the number of projections, denoted by the parameter α , and the aggregation depth δ . The computation time for the forward projection and backprojection operators is detailed in Table 3 and Table 4 respectively. Case δ = 0 corresponds to computing the operator fully using the Brady–Yong method without termination.
As the number of projection directions increases, the overall computation time for the forward projection operator also rises. In the range of 0 < α 0.3 , aggregation leads to time savings (refer to Table 3). The most significant reduction in computation time for the forward projection operator, by 15.8 % , is achieved at α = 0.01 , δ = 3 .
In the computation of the backprojection operator within the HFBP-L method, optimizing the aggregation depth speeds up the process for α values within the range of 0 < α 0.4 for the considered phantom (see Table 4). The highest acceleration, 21.5%, is attainable at α = 0.01 , δ = 3 for computing the backprojection operator.
It is worth noting that even with a zero aggregation level ( δ = 0 ), the computation time of the operators is influenced by the number of projections, despite the absence of aggregation. This arises from additional overhead costs related to the explicit formation of the straight line list.
The time values for computing the forward projection and backprojection operators, provided in Table 3 and Table 4, are represented as plots in Figure 8a,b. The plot illustrates the difference in time compared to δ = 0 for the corresponding α value. This facilitates comparison with the theoretical plots in Section 4.2. The key distinction from theoretical calculations lies in the fact that the optimal aggregation depth, δ o p t , deviates from the theoretical value (refer to Section 4.2). Moreover, for α 0.5 , there is no optimal integer aggregation level that would lead to a time improvement in computing the forward projection and backprojection operators. Essentially, in these scenarios, δ = 0 proves to be optimal.
In summary, the conducted experiments validate the theoretically grounded potential for accelerating reconstruction methods. This is achieved through a fast algorithm for computing the forward projection operator using FHT with termination and aggregation, and a fast algorithm for computing the backprojection operator, achieved by applying a common transposition technique for summation-based algorithms to the forward projection calculation. Time is saved by reducing the number of projection frames used in the reconstruction process. The proposed methods enable accelerations of up to 15 % for the forward projection operator and 21 % for the backprojection operator when compared to fast calculation methods based on the Brady–Yong algorithm. This acceleration is most pronounced when dealing with a small number of angles and utilizing the optimal aggregation depth δ = δ o p t . This outcome holds particular significance in the realm of few-view CT.

6. Conclusions

This work introduces a versatile method for transposing summation-based algorithms, which rely exclusively on addition operations. This method facilitates the efficient computation of the transpose of linear operators represented as Boolean matrices, assuming a known fast computation algorithm is available. Importantly, this transposition technique maintains the asymptotic complexity of the algorithms, ensuring consistent asymptotic algorithmic complexity for both the original and derived algorithms.
The summation-based transposition method holds significant promise in computed tomography, particularly in algorithms for computing forward projection and backprojection operators, where matrices are transposed in pairs. By applying this method to a known algorithm for computing the forward projection operator, we can derive an algorithm for computing the backprojection operator. The number of addition operations in the resulting algorithm aligns with the number of copy operations in the original, and vice versa.
In this study, for the first time, we present fast summation-based algorithms for computing forward projection and backprojection operators in 2D tomographic reconstruction, tailored for parallel-beam CT and especially suitable for few-view CT (FVCT). Although the asymptotic complexity remains at Θ(n^2 log n) (where n represents the linear size of the reconstruction), consistent with the Brady–Yong algorithm, our proposed algorithms are superior in terms of constant factors.
We also extend the method to cone-beam 3D CT. Previously, a forward projection algorithm with a complexity of Θ(n^{7/2}) was introduced for this setup [35]. While the number and linear dimensions of the projections scale proportionally with n, a fast algorithm for backprojection was lacking. In this study, we devised such an algorithm, also with a complexity of Θ(n^{7/2}).
Therefore, the summation-based transposition method serves as a versatile approach for developing fast algorithms in CT reconstruction, applicable across various measurement schemes, both in two and three-dimensional scenarios. Implementing this method with existing forward projection algorithms, such as [35], adaptable to diverse measurement schemes, facilitates the creation of fast algorithms for computing the backprojection operator in any measurement setup.
We implemented the computation of the backprojection operator by transposing the algorithm for computing the forward projection operator based on FHT with termination and aggregation. This allowed us to assess the accuracy and speed of the new algorithm, comparing it with theoretical dependencies and experimental baselines. The results confirm the theoretically justified potential for accelerating classical reconstruction methods using this transposition method. They also suggest a minimum time depending on the chosen aggregation level. Notably, applying the transposed algorithm, which employs FHT with termination and aggregation, to a fraction of 0.01 of the considered projection directions during reconstruction led to a 15% increase in speed for computing the forward projection operator and a 21% increase for the backprojection operator, with visually insignificant degradation of the result.
Further research into the dynamics of the parameters, particularly the fraction of projections used for reconstruction, where the application of aggregation allows for time savings, is considered worthwhile. This should be explored for various linear sizes of reconstruction. Moreover, the intriguing potential of applying the proposed summation-based transposition method to algorithms computing the forward projection operator, particularly those based on cutting-edge, accurate, fast algorithms for Hough transforms, warrants attention. Employing this method in tandem with precise, fast algorithms such as those in [55] could offer a promising avenue for achieving both speed and accuracy in reconstruction simultaneously.

Author Contributions

Conceptualization, D.N. and D.P.; methodology, D.N., D.P., M.G., P.K. and D.K.; software, P.K. and M.G.; validation, D.K., A.I., M.G. and P.K.; formal analysis, D.K., M.C. and P.K.; investigation, P.K., M.C. and D.P.; resources, A.I. and M.G.; data curation, D.P. and A.I.; writing—original draft preparation, D.N., M.G., D.K., M.C., A.I., P.K. and D.P.; writing—review and editing, D.N., M.G., D.K. and M.C.; visualization, P.K., D.P., A.I. and M.G.; supervision, D.N., M.C. and D.P.; project administration, D.N., M.C. and D.P.; funding acquisition, M.C. All authors have read and agreed to the published version of the manuscript.

Funding

The work was supported by the Russian Science Foundation, project no. 23-21-00524.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

Authors Dmitry Polevoy, Marat Gilmanov, Danil Kazimirov, Marina Chukalina, Anastasia Ingacheva, Petr Kulagin, Dmitry Nikolaev were partly employed by the company Smart Engines Service LLC. The research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CT      computed tomography
FVCT    few-view computed tomography

Nomenclature

⌊·⌋    rounding down to the nearest integer (integer part or floor function)
⌈·⌉    rounding up to the nearest integer (ceiling function)
{ }    set
( )    vector
dim(·) = |·|_0    number of elements of a set or number of vector components
Z    set of integer numbers
Z_{a,b} = {a, a + 1, …, b − 1, b}    set of integer numbers from a to b, a ≤ b
Z_{0,∞} = {0, 1, …}    set of non-negative integer numbers
Z_{1,∞} = N_1 = {1, 2, …}    set of positive integer numbers
ā = (a_1, …, a_n)^T = (a_i : i ∈ Z_{1,n})^T    column vector of length n
ā^T = (a_1, …, a_n) = (a_i : i ∈ Z_{1,n})    row vector of length n
W    matrix
W(y, x)    matrix element, where y denotes a row index and x a column index
W̄(y, ·)    row of matrix W with index y
W̄(·, x)    column of matrix W with index x
B    Boolean matrix
H(B)    number of rows in matrix B
W(B)    number of columns in matrix B
a^{(i)}    i-th iteration step (state of variable a at the i-th iteration step)
N = 2^n    number of linear detector cells
C    number of projection images (observed distinct projection angles)
n    exponent of the image linear size (input image is of size 2^n × 2^n)
k    exponent of the pattern length (pattern is of length 2^k ≤ 2^n)
I_{2^n}    image of size 2^n × 2^n
R_k    tensor of sums over patterns of length 2^k
q    number of ray sums
(x, y)    point or pixel coordinates in the coordinate system X0Y
a    shift of the pattern's end (coordinate)
α    number of projection images expressed as a proportion of C_max = 2^{n+2}
Θ(·)    asymptotic computational complexity (O-symbology)

References

  1. Withers, P.J.; Bouman, C.; Carmignato, S.; Cnudde, V.; Grimaldi, D.; Hagen, C.K.; Maire, E.; Manley, M.; Du Plessis, A.; Stock, S.R. X-ray computed tomography. Nat. Rev. Methods Primers 2021, 1, 18. [Google Scholar] [CrossRef]
  2. Arlazarov, V.; Nikolaev, D.; Arlazarov, V.; Chukalina, M. X-ray Tomography: The Way from Layer-by-layer Radiography to Computed Tomography. Comput. Opt. 2021, 45, 897–906. [Google Scholar] [CrossRef]
  3. Gonzalez, S.M. Interpretation Basics of Cone Beam Computed Tomography; John Wiley & Sons: Hoboken, NJ, USA, 2021. [Google Scholar]
  4. Kravchenko, D.; Karakostas, P.; Kuetting, D.; Meyer, C.; Brossart, P.; Behning, C.; Schäfer, V.S. The role of dual energy computed tomography in the differentiation of acute gout flares and acute calcium pyrophosphate crystal arthritis. Clin. Rheumatol. 2022, 41, 223–233. [Google Scholar] [CrossRef]
  5. Sibolt, P.; Andersson, L.M.; Calmels, L.; Sjöström, D.; Bjelkengren, U.; Geertsen, P.; Behrens, C.F. Clinical implementation of artificial intelligence-driven cone-beam computed tomography-guided online adaptive radiotherapy in the pelvic region. Phys. Imaging Radiat. Oncol. 2021, 17, 1–7. [Google Scholar] [CrossRef]
  6. Carmignato, S.; Dewulf, W.; Leach, R. Industrial X-ray Computed Tomography; Springer: Berlin/Heidelberg, Germany, 2018. [Google Scholar]
  7. Silomon, J.; Gluch, J.; Clausner, A.; Paul, J.; Zschech, E. Crack identification and evaluation in BEoL stacks of two different samples utilizing acoustic emission testing and nano X-ray computed tomography. Microelectron. Reliab. 2021, 121, 114137. [Google Scholar] [CrossRef]
  8. Potrahov, N.; Anoshkin, A.; Zuiko, V.; Osokin, B.; Pisarev, P.; Pelenev, K. Numerical and experimental study of composite bulkhead partition strength with in-situ X-ray monitoring. PNRPU Mech. Bull. 2017, 1, 118–133. [Google Scholar] [CrossRef]
  9. Kagan, A. Chapter 6—Counterterrorist Detection Techniques of Explosives by Vapor Sensors (Handheld). In Counterterrorist Detection Techniques of Explosives, 2nd ed.; Elsevier: Amsterdam, The Netherlands, 2022; pp. 235–251. [Google Scholar]
  10. Champley, K.M.; Willey, T.M.; Kim, H.; Bond, K.; Glenn, S.M.; Smith, J.A.; Kallman, J.S.; Brown, W.D.; Seetho, I.M.; Keene, L.; et al. Livermore tomography tools: Accurate, fast, and flexible software for tomographic science. NDT E Int. 2022, 126, 102595. [Google Scholar] [CrossRef]
  11. Hao, S.; Bailey, J.; Iacoviello, F.; Bu, J.; Grant, P.; Brett, D.; Shearing, P. 3D Imaging of Lithium Protrusions in Solid-State Lithium Batteries using X-ray Computed Tomography. Adv. Funct. Mater. 2021, 31, 2007564. [Google Scholar] [CrossRef]
  12. Grossmann, G. Tomographie. Fortschr. Röntgenstr. 1935, 51, 61–80. [Google Scholar]
  13. Chaoul, H. Über die Tomographie und insbesondere ihre Anwendung in der Lungendiagnostik. Fortschr. Röntgenstr. 1935, 51, 342. [Google Scholar]
  14. Inoue, Y.; Itoh, H.; Waga, A.; Sasa, R.; Mitsui, K. Radiation Dose Management in Pediatric Brain CT According to Age and Weight as Continuous Variables. Tomography 2022, 8, 985–998. [Google Scholar] [CrossRef]
  15. Göppel, S.; Frikel, J.; Haltmeier, M. Feature Reconstruction from Incomplete Tomographic Data without Detour. Mathematics 2022, 10, 1318. [Google Scholar] [CrossRef]
  16. Mamchur, D.; Peksa, J.; Le Clainche, S.; Vinuesa, R. Application and Advances in Radiographic and Novel Technologies Used for Non-Intrusive Object Inspection. Sensors 2022, 22, 2121. [Google Scholar] [CrossRef]
  17. Cnudde, V.; Boone, M. High-resolution X-ray computed tomography in geosciences: A review of the current technology and applications. Earth-Sci. Rev. 2013, 123, 1–17. [Google Scholar] [CrossRef]
  18. Loterie, D.; Delrot, P.; Moser, C. High-resolution tomographic volumetric additive manufacturing. Nat. Commun. 2021, 11, 852. [Google Scholar] [CrossRef]
  19. Tanaka, R.; Yoshioka, K.; Takagi, H.; Schuijf, J.; Arakita, K. Novel developments in non-invasive imaging of peripheral arterial disease with CT: Experience with state-of-the-art, ultra-high-resolution CT and subtraction imaging. Clin. Radiol. 2019, 74, 51–58. [Google Scholar] [CrossRef]
  20. Hata, A.; Yanagawa, M.; Honda, O.; Kikuchi, N.; Miyata, T.; Tsukagoshi, S.; Uranishi, A.; Tomiyama, N. Effect of Matrix Size on the Image Quality of Ultra-high-resolution CT of the Lung: Comparison of 512 × 512, 1024 × 1024, and 2048 × 2048. Acad. Radiol. 2018, 25, 869–876. [Google Scholar] [CrossRef]
  21. Sun, L.; Yao, J.; Hao, P.; Yang, Y.; Liu, Z.; Peng, R. Diagnostic Role of Four-Dimensional Computed Tomography for Preoperative Parathyroid Localization in Patients with Primary Hyperparathyroidism: A Systematic Review and Meta-Analysis. Diagnostics 2021, 11, 664. [Google Scholar] [CrossRef]
  22. Wu, D.; Engqvist, J.; Barbier, C.; Karlsson, C.; Hall, S. Unravelling the deformation process of a compacted paper: In-situ tensile loading, 4D X-ray tomography and image-based analysis. Int. J. Solids Struct. 2022, 242, 111539. [Google Scholar] [CrossRef]
  23. Buzug, T.M. Computed tomography. In Springer Handbook of Medical Technology; Springer: Berlin/Heidelberg, Germany, 2011; pp. 311–342. [Google Scholar]
  24. Hatcher, D.C. Operational Principles for Cone-Beam Computed Tomography. J. Am. Dent. Assoc. 2010, 141, 3S–6S. [Google Scholar] [CrossRef]
  25. Gong, H.; Tao, S.; Gagneur, D.; Liu, W.; Shen, J.; McCollogh, C.; Hu, Y.; Leng, S. Implementation and experimental evaluation of Mega-voltage fan-beam CT using a linear accelerator. Radiat. Oncol. 2021, 16, 139. [Google Scholar] [CrossRef]
  26. Yoneyama, A.; Baba, R.; Hyodo, K.; Takeda, T.; Nakano, H.; Maki, K.; Sumitani, K.; Hirai, Y. Development of high-resolution X-ray CT system using parallel beam geometry. AIP Conf. Proc. 2016, 1696, 020007. [Google Scholar] [CrossRef]
  27. Grass, M.; Kohler, T.; Proksa, R. 3D cone-beam CT reconstruction for circular trajectories. Phys. Med. Biol. 2000, 45, 329–348. [Google Scholar] [CrossRef]
  28. Heiken, J.; Brink, A.; Vannier, M. Spiral (helical) CT. Radiology 1993, 189, 647–656. [Google Scholar] [CrossRef]
  29. Valton, S.; Peyrin, F.; Sappey-Marinier, D. A FDK-Based Reconstruction Method for Off-Centered Circular Trajectory Cone Beam Tomography. IEEE Trans. Nucl. Sci. 2006, 53, 2736–2745. [Google Scholar] [CrossRef]
  30. Kulkarni, S.; Rumberger, J.; Jha, S. Electron Beam CT: A Historical Review. Am. J. Roentgenol. 2021, 216, 1222–1228. [Google Scholar] [CrossRef]
  31. Nikitin, V. TomocuPy—Efficient GPU-based tomographic reconstruction with asynchronous data processing. J. Synchrotron Radiat. 2023, 30, 179–191. [Google Scholar] [CrossRef]
  32. Nourazar, M.; Goossens, B. Accelerating iterative CT reconstruction algorithms using Tensor Cores. J. Real-Time Image Process. 2021, 18, 1979–1991. [Google Scholar] [CrossRef]
  33. Wang, X.; Zhang, Y.; Hong, X.; Wang, H. Fast Backprojection Filtration Algorithm in Circular Cone-Beam Computed Tomography. Int. J. Opt. 2023, 2023, 1749624. [Google Scholar] [CrossRef]
  34. Zhang, S.; Zhang, Y.; Tuo, M.; Zhang, H. Fast algorithm for Joseph’s forward projection in iterative computed tomography reconstruction. J. Ambient. Intell. Humaniz. Comput. 2023, 14, 12535–12548. [Google Scholar] [CrossRef]
  35. Bulatov, K.B.; Chukalina, M.V.; Nikolaev, D.P. Fast X-ray Sum Calculation Algorithm for Computed Tomography Problem; Series of Mathematical Modelling, Programming & Computer Software; Bulletin of the South Ural State University: Chelyabinsk, Russia, 2020; Volume 13. [Google Scholar]
  36. Feldkamp, L.A.; Davis, L.C.; Kress, J.W. Practical cone-beam algorithm. J. Opt. Soc. Am. A 1984, 1, 612–619. [Google Scholar] [CrossRef]
  37. Mersereau, R.; Oppenheim, A. Digital reconstruction of multidimensional signals from their projections. Proc. IEEE 1974, 62, 1319–1338. [Google Scholar] [CrossRef]
  38. Natterer, F. The Mathematics of Computerized Tomography; Mir: Moscow, Russia, 1990; p. 288. [Google Scholar]
  39. Dusaussoy, N. VOIR: A volumetric image reconstruction algorithm based on Fourier techniques for inversion of the 3-D Radon transform. IEEE Trans. Med. Process. 1996, 5, 121–131. [Google Scholar] [CrossRef]
  40. Averbuch, A.; Coifman, R.; Donoho, D.; Israeli, M.; Waldén, J. Fast slant stack: A notion of Radon transform for data in a Cartesian grid which is rapidly computible, algebraically exact, geometrically faithful and invertible. SIAM J. Sci. Comput 2001, 37, 192–206. [Google Scholar]
  41. Matej, S.; Fessler, J.; Kazantsev, I. Iterative tomographic image reconstruction using Fourier-based forward and back-projectors. IEEE Trans. Med. Imaging 2004, 23, 401–412. [Google Scholar] [CrossRef]
  42. Sullivan, J. A fast sinc function gridding algorithm for fourier inversion in computer tomography. IEEE Trans. Med. Imaging 1985, 4, 200–207. [Google Scholar] [CrossRef]
  43. Arcadu, F.; Nilchian, M.; Studer, A.; Stampanoni, M.; Marone, F. A Forward Regridding Method with Minimal Oversampling for Accurate and Efficient Iterative Tomographic Algorithms. IEEE Trans. Med. Process. 2016, 23, 1207–1218. [Google Scholar] [CrossRef]
  44. Zhang, Z.; Han, X.; Pearson, E.; Pelizzari, C.; Sidky, E.; Pan, X. Artifact reduction in short-scan CBCT by use of optimization-based reconstruction. Phys. Med. Biol. 2016, 61, 3387–3406. [Google Scholar] [CrossRef]
  45. Brady, M.L.; Yong, W. Fast Parallel Discrete Approximation Algorithms for the Radon Transform. In Proceedings of the SPAA ’92, Fourth Annual ACM Symposium on Parallel Algorithms and Architectures, San Diego, CA, USA, 29 June–1 July 1992; pp. 91–99. [Google Scholar] [CrossRef]
  46. Götz, W.A.; Druckmüller, H.J. A fast digital Radon transform—An efficient means for evaluating the Hough transform. Pattern Recognit. 1995, 28, 1985–1992. [Google Scholar] [CrossRef]
  47. Basu, S.; Bresler, Y. O(N/sup 2/log/sub 2/N) filtered backprojection reconstruction algorithm for tomography. IEEE Trans. Image Process. 2000, 9, 1760–1773. [Google Scholar] [CrossRef]
  48. Xiao, S.; Bresler, Y.; Munson, D. O(N/sup 2/ log N) native fan-beam tomographic reconstruction. In Proceedings of the IEEE International Symposium on Biomedical Imaging, Washington, DC, USA, 7–10 July 2002; pp. 824–827. [Google Scholar] [CrossRef]
  49. Xiao, S.; Bresler, Y.; Munson, D. Fast Feldkamp algorithm for cone-beam computer tomography. In Proceedings of the 2003 International Conference on Image Processing (Cat. No.03CH37429), Barcelona, Spain, 14–17 September 2003; p. II-819. [Google Scholar] [CrossRef]
  50. Brokish, J.; Bresler, Y. Ultra-fast hierarchical backprojection for Micro-CT reconstruction. In Proceedings of the IEEE Nuclear Science Symposium Conference Record, Honolulu, HI, USA, 26 October–3 November 2007; pp. 4460–4463. [Google Scholar] [CrossRef]
  51. Bresler, Y.; Brokish, J. A hierarchical algorithm for fast backprojection in helical cone-beam tomography. In Proceedings of the 2nd IEEE International Symposium on Biomedical Imaging: Nano to Macro (IEEE Cat No. 04EX821), Arlington, VA, USA, 18 April 2004; pp. 1420–1423. [Google Scholar] [CrossRef]
  52. Brady, M.L. A fast discrete approximation algorithm for the Radon transform. SIAM J. Comput. 1998, 27, 91–99. [Google Scholar] [CrossRef]
  53. Ershov, E.; Terekhin, A.; Nikolaev, D.; Postnikov, V.; Karpenko, S. Fast Hough transform analysis: Pattern deviation from line segment. In Proceedings of the SPIE 9875, Eighth International Conference on Machine Vision (ICMV 2015), Barcelona, Spain, 19–21 November 2015; Volume 9875, pp. 42–46. [Google Scholar] [CrossRef]
  54. Karpenko, S.M.; Ershov, E.I. Analysis of Properties of Dyadic Patterns for the Fast Hough Transform. Probl. Inf. Transm. 2021, 57, 292–300. [Google Scholar] [CrossRef]
  55. Nikolaev, D.; Ershov, E.; Kroshnin, A.; Limonova, E.; Mukovozov, A.; Faradzhev, I. On a Fast Hough/Radon Transform as a Compact Summation Scheme over Digital Straight Line Segments. Mathematics 2023, 11, 3336. [Google Scholar] [CrossRef]
  56. Wu, T.K.; Brady, M.L. A fast approximation algorithm for 3D image reconstruction. In Proceedings of the 1998 International Computer Symposium. Workshop in Image Processing and Character Recognition, Bombay, India, 4–7 January 1998; pp. 213–220. [Google Scholar]
  57. Ershov, E.I.; Terekhin, A.P.; Nikolaev, D.P. Fast Hough transform generalization for three-dimensional images. Inf. Process. 2017, 17, 294–308. (In Russian) [Google Scholar] [CrossRef]
  58. Radon, J. On the determination of functions from their integral values along certain manifolds. IEEE Trans. Med. Imaging 1986, 5, 170–176. [Google Scholar] [CrossRef]
  59. Ingacheva, A.S.; Chukalina, M.V. Polychromatic CT data improvement with one-parameter power correction. Math. Probl. Eng. 2019, 2019, 1405365. [Google Scholar] [CrossRef]
  60. Chukalina, M.V.; Buzmakov, A.V.; Ingacheva, A.S.; Shabelnikova, Y.L.; Vsadchikov, V.E.; Bukreeva, I.N.; Nikolaev, D.P. Analysis of the tomographic reconstruction from polychromatic projections for objects with highly absorbing inclusions. Inf. Technol. Comput. Syst. 2020, 3, 49–61. [Google Scholar] [CrossRef]
  61. Hammersberg, P.; Mångård, M. Correction for beam hardening artefacts in computerized tomography. J. X-ray Sci. Technol. 1998, 8, 75–93. [Google Scholar]
  62. Reiter, M.; de Oliveira, F.B.; Bartscher, M.; Gusenbauer, C.; Kastner, J. Case study of empirical beam hardening correction methods for dimensional X-ray computed tomography using a dedicated multi-material reference standard. J. Nondestruct. Eval. 2019, 38, 1–15. [Google Scholar] [CrossRef]
  63. Lewitt, R.M. Reconstruction algorithms: Transform methods. Proc. IEEE 1983, 71, 390–408. [Google Scholar] [CrossRef]
  64. Horbelt, S.; Liebling, M.; Unser, M.A. Filter design for filtered backprojection guided by the interpolation model. In Medical Imaging 2002: Image Processing, Proceedings of the Medical Imaging 2002, San Diego, CA, USA, 23–28 February 2002; SPIE: Bellingham, WA, USA, 2002; Volume 4684, pp. 806–813. [Google Scholar]
  65. Myagotin, A.; Voropaev, A.; Helfen, L.; Hänschke, D.; Baumbach, T. Efficient volume reconstruction for parallel-beam computed laminography by filtered backprojection on multi-core clusters. IEEE Trans. Image Process. 2013, 22, 5348–5361. [Google Scholar] [CrossRef]
  66. La Riviere, P.J.; Pan, X. Comparison of angular interpolation approaches in few-view tomography using statistical hypothesis testing. In Medical Imaging 1999: Image Processing, Proceedings of the Medical Imaging 1999, San Diego, CA, USA, 20–26 February 1999; SPIE: Bellingham, WA, USA, 1999; Volume 3661, pp. 398–407. [Google Scholar]
  67. Mileto, A.; Guimaraes, L.S.; McCollough, C.H.; Fletcher, J.G.; Yu, L. State of the art in abdominal CT: The limits of iterative reconstruction algorithms. Radiology 2019, 293, 491–503. [Google Scholar] [CrossRef]
  68. Gordon, R.; Bender, R.; Herman, G.T. Algebraic reconstruction techniques (ART) for three-dimensional electron microscopy and X-ray photography. J. Theor. Biol. 1970, 29, 471–481. [Google Scholar] [CrossRef]
  69. Gilbert, P. Iterative methods for the three-dimensional reconstruction of an object from projections. J. Theor. Biol. 1972, 36, 105–117. [Google Scholar] [CrossRef]
  70. Herman, G.T.; Meyer, L.B. Algebraic reconstruction techniques can be made computationally efficient (positron emission tomography application). IEEE Trans. Med. Imaging 1993, 12, 600–609. [Google Scholar] [CrossRef]
  71. Sidky, E.Y.; Pan, X. Image reconstruction in circular cone-beam computed tomography by constrained, total-variation minimization. Phys. Med. Biol. 2008, 53, 4777. [Google Scholar] [CrossRef]
  72. Satzoda, R.K.; Suchitra, S.; Srikanthan, T. Parallelizing the Hough transform computation. IEEE Signal Process. Lett. 2008, 15, 297–300. [Google Scholar] [CrossRef]
  73. Kerr, J.P.; Bartlett, E.B. Neural network reconstruction of single-photon emission computed tomography images. J. Digit. Imaging 1995, 8, 116–126. [Google Scholar] [CrossRef]
  74. Xie, H.; Shan, H.; Wang, G. Deep encoder-decoder adversarial reconstruction (DEAR) network for 3D CT from few-view data. Bioengineering 2019, 6, 111. [Google Scholar] [CrossRef]
  75. Ma, G.; Zhang, Y.; Zhao, X.; Wang, T.; Li, H. A neural network with encoded visible edge prior for limited-angle computed tomography reconstruction. Med. Phys. 2021, 48, 6464–6481. [Google Scholar] [CrossRef]
  76. Wang, W.; Xia, X.G.; He, C.; Ren, Z.; Lu, J. A model-based deep network for limited-angle computed tomography image reconstruction. Displays 2022, 73, 102166. [Google Scholar] [CrossRef]
  77. Yamaev, A.; Chukalina, M.; Nikolaev, D.; Sheshkus, A.; Chulichkov, A. Lightweight denoising filtering neural network for FBP algorithm. In Proceedings of the Thirteenth International Conference on Machine Vision, Rome, Italy, 2–6 November 2020; SPIE: Bellingham, WA, USA, 2021; Volume 1165, p. 116050L. [Google Scholar] [CrossRef]
  78. Götz, W. Eine Schnelle Diskrete Radon-Transformation Basierend auf Rekursiv Definierten Digitalen Geraden. Ph.D. Thesis, University of Innsbruck, Innsbruck, Austria, 1993. [Google Scholar]
  79. Vuillemin, J.E. Fast linear Hough transform. In Proceedings of the IEEE International Conference on Application Specific Array Processors (ASSAP’94), San Francisco, CA, USA, 22–24 August 1994; pp. 1–9. [Google Scholar]
  80. Frederick, M.T.; VanderHorn, N.A.; Somani, A.K. Real-time H/W implementation of the approximate discrete Radon transform. In Proceedings of the 2005 IEEE International Conference on Application-Specific Systems, Architecture Processors (ASAP’05), Samos, Greece, 23–25 July 2005; pp. 399–404. [Google Scholar]
  81. Nikolaev, D.P.; Karpenko, S.M.; Nikolaev, I.P.; Nikolayev, P.P. Hough transform: Underestimated tool in the computer vision field. In Proceedings of the ECMS 2008, Nicosia, Cyprus, 3–6 June 2008; pp. 238–243. [Google Scholar] [CrossRef]
  82. Ershov, E.I.; Karpenko, S. Fast Hough Transform and approximation properties of dyadic patterns. arXiv 2013, arXiv:1712.05615. [Google Scholar]
  83. Aliev, M.; Ershov, E.I.; Nikolaev, D.P. On the use of FHT, its modification for practical applications and the structure of Hough image. In Proceedings of the ICMV 2018, Munich, Germany, 1–3 November 2018; Volume 11041. [Google Scholar] [CrossRef]
  84. Prun, V.; Nikolaev, D.; Buzmakov, A.; Chukalina, M.; Asadchikov, V. Effective regularized algebraic reconstruction technique for computed tomography. Crystallogr. Rep. 2013, 58, 1063–1066. [Google Scholar] [CrossRef]
  85. Andersen, A.H.; Kak, A.C. Simultaneous algebraic reconstruction technique (SART): A superior implementation of the ART algorithm. Ultrason. Imaging 1984, 6, 81–94. [Google Scholar] [CrossRef]
  86. Dolmatova, A.; Chukalina, M.; Nikolaev, D. Accelerated FBP for computed tomography image reconstruction. In Proceedings of the IEEE ICIP 2020, Washington, DC, USA, 26–27 October 2020; Number CIS-02.2. pp. 3030–3034. [Google Scholar] [CrossRef]
  87. Ershov, E.; Terekhin, A.; Nikolaev, D. Generalization of the fast hough transform for three-dimensional images. J. Commun. Technol. Electron. 2018, 63, 626–636. [Google Scholar] [CrossRef]
  88. Sidky, E.Y.; Kao, C.M.; Pan, X. Accurate image reconstruction from few-views and limited-angle data in divergent-beam CT. J. X-ray Sci. Technol. 2006, 14, 119–139. [Google Scholar]
  89. Bulatov, K.; Chukalina, M.; Buzmakov, A.; Nikolaev, D.; Arlazarov, V.V. Monitored Reconstruction: Computed Tomography as an Anytime Algorithm. IEEE Access 2020, 8, 110759–110774. [Google Scholar] [CrossRef]
  90. CT Software Smart Tomo Engine. Available online: https://smartengines.com/ocr-engines/tomo-engine/ (accessed on 22 November 2023).
  91. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef]
  92. Kruskal, J.B. Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis. Psychometrika 1964, 29, 1–27. [Google Scholar] [CrossRef]
Figure 1. A schematic illustration of parallel-beam CT. Red point in the center illustrates the rotation axis, dashed lines are the X-ray propagation directions, ϕ is the rotation angle, ρ is detector cell coordinate and the blue line is the effective signal registered by detector.
Figure 1. A schematic illustration of parallel-beam CT. Red point in the center illustrates the rotation axis, dashed lines are the X-ray propagation directions, ϕ is the rotation angle, ρ is detector cell coordinate and the blue line is the effective signal registered by detector.
Mathematics 11 04759 g001
Figure 2. Relations between dimensions of Boolean decomposition factors and the count of summations of forward and transposed decompositions.
Figure 2. Relations between dimensions of Boolean decomposition factors and the count of summations of forward and transposed decompositions.
Mathematics 11 04759 g002
Figure 3. Illustration of X 0 Y axes, I 2 4 , and the structure of the dyadic pattern approximating a straight line ( s = 10 , t = 5 ) . The black dots indicate the centers of the dyadic pattern pixels.
Figure 3. Illustration of X 0 Y axes, I 2 4 , and the structure of the dyadic pattern approximating a straight line ( s = 10 , t = 5 ) . The black dots indicate the centers of the dyadic pattern pixels.
Mathematics 11 04759 g003
Figure 4. Δ n B ( δ , α ) as a continuous function of δ for selected α = 0.1 , 0.2 , 0.3 , 0.4 , 0.5 , 0.6 , 0.7 , 0.8 , 0.9 , 1.0 . Various colours correspond to different values of parameter α . Values of α decrease from left to right.
Figure 4. Δ n B ( δ , α ) as a continuous function of δ for selected α = 0.1 , 0.2 , 0.3 , 0.4 , 0.5 , 0.6 , 0.7 , 0.8 , 0.9 , 1.0 . Various colours correspond to different values of parameter α . Values of α decrease from left to right.
Mathematics 11 04759 g004
Figure 5. Δn_{B^T}(δ, α) as a continuous function of δ for selected values of the parameter α = 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0. Different colours correspond to different values of α; the values of α decrease from left to right.
Figure 6. Illustration of the parametrization of a predominantly 0Z-axis-oriented line in 3D, with t_X < 0 and t_Y < 0. Dashed lines of different colours indicate the parameter values used in the parametrization.
Figure 7. The Shepp–Logan phantom with N = 1024 and the reconstruction results obtained with the considered methods, together with the difference maps relative to the ideal phantom image. The entire set of projection data was used for the reconstruction (α = 1.0). The HFBP-L method employed the optimal aggregation depth δ, determined by the formulas for δ_opt from Section 4.2.3 and Section 4.2.5 for both the forward projection and backprojection operators.
Figure 8. The computation time of the forward projection and backprojection operators with FHT and aggregation at a specified depth level (the HFBP-L method), shown as the difference from the time at δ = 0 for the corresponding α value. (a) Dependence for the forward projection operator; (b) dependence for the backprojection operator.
Table 1. Dependence of the values δ_opt on α.

α        0.1   0.2   0.3   0.4   0.5      0.6   0.7   0.8   0.9   1.0
δ_opt    5     4     3     3     3 or 2   2     2     2     2     2 or 1
Table 2. Reconstruction accuracy assessment for N = 1024. NRMSE: smaller values indicate higher accuracy; SSIM ∈ [0, 1]: larger values indicate higher accuracy; STRESS: smaller values indicate higher accuracy. In the HFBP-L method, the aggregation depth is optimized and determined by formulas similar to Equation (55).

Metric    FBP (α = 1.0)   HFBP (α = 1.0)   HFBP-L (α = 1.0)   FBP (α = 0.1)   HFBP (α = 0.1)   HFBP-L (α = 0.1)
NRMSE     0.16            0.16             0.16               0.19            0.25             0.25
SSIM      0.93            0.77             0.77               0.56            0.33             0.33
STRESS    0.07            0.16             0.16               0.11            0.24             0.24
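For reference, the three metrics reported in Table 2 can be computed as in the sketch below. This is a minimal illustrative sketch rather than the authors' evaluation code: it assumes the reconstruction and the phantom are NumPy arrays of identical shape, uses scikit-image's structural_similarity for SSIM [91], and implements NRMSE and a Kruskal-type STRESS [92] in a common normalized-residual form; the exact normalizations used in the paper may differ.

```python
# Minimal sketch of the accuracy metrics from Table 2 (assumed formulations).
import numpy as np
from skimage.metrics import structural_similarity  # SSIM, Wang et al. [91]


def nrmse(recon: np.ndarray, phantom: np.ndarray) -> float:
    """Root-mean-square error normalized by the phantom's value range."""
    rmse = np.sqrt(np.mean((recon - phantom) ** 2))
    return float(rmse / (phantom.max() - phantom.min()))


def stress(recon: np.ndarray, phantom: np.ndarray) -> float:
    """Kruskal-type stress [92]: residual norm after optimal least-squares scaling."""
    x, y = recon.ravel(), phantom.ravel()
    f = np.dot(x, y) / np.dot(x, x)  # scale factor fitting the reconstruction to the phantom
    return float(np.sqrt(np.sum((f * x - y) ** 2) / np.sum(y ** 2)))


def ssim(recon: np.ndarray, phantom: np.ndarray) -> float:
    """Structural similarity index over the phantom's dynamic range."""
    return float(structural_similarity(
        recon, phantom, data_range=float(phantom.max() - phantom.min())))
```

Evaluating these three functions for each reconstruction against the ideal phantom reproduces the layout of Table 2; absolute values may differ slightly depending on the normalization applied before measurement.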
Table 3. The average time t_dir^HFBP-L of computing the forward projection operator for a phantom with N = 1024. Here, α signifies the proportion of the total ( 4 N 3 ) projections. Time is measured in milliseconds. The minimum computation time for the forward projection operator, which corresponds to the experimental optimal aggregation depth, is emphasized in bold.

α          δ = 0   δ = 1   δ = 2   δ = 3   δ = 4   δ = 5
α = 0.01 42.66 ± 1.9 38.24 ± 1.9 36.90 ± 1.8 35.92 ± 1.8 36.27 ± 1.8 38.29 ± 1.7
α = 0.02 43.07 ± 2.0 39.03 ± 1.9 37.66 ± 1.8 37.33 ± 1.8 38.91 ± 1.7 43.43 ± 1.7
α = 0.03 42.91 ± 1.9 38.97 ± 1.8 38.13 ± 1.8 38.22 ± 1.7 41.17 ± 1.7 48.99 ± 1.7
α = 0.05 43.87 ± 2.0 39.71 ± 2.0 39.64 ± 1.8 40.85 ± 1.7 45.72 ± 1.7 59.82 ± 1.7
α = 0.1 44.56 ± 1.9 41.80 ± 1.9 42.85 ± 1.8 46.93 ± 1.8 57.74 ± 1.7 86.49 ± 1.8
α = 0.2 46.66 ± 1.9 45.17 ± 1.9 49.10 ± 1.7 58.77 ± 1.7 82.56 ± 1.3 139.10 ± 1.5
α = 0.3 48.88 ± 1.9 48.75 ± 1.8 55.32 ± 1.7 70.68 ± 1.3 105.61 ± 1.4 184.75 ± 1.6
α = 0.4 50.55 ± 1.9 51.64 ± 1.8 61.73 ± 1.8 82.56 ± 1.3 130.43 ± 1.4 243.39 ± 1.5
α = 0.5 52.21 ± 1.9 54.97 ± 1.7 66.88 ± 1.6 93.94 ± 1.4 152.30 ± 1.3 296.08 ± 1.6
α = 0.6 54.68 ± 1.8 58.22 ± 1.7 74.69 ± 1.6 106.06 ± 1.4 176.91 ± 1.3 344.08 ± 1.4
α = 0.7 55.98 ± 1.8 61.04 ± 1.8 79.14 ± 1.5 117.01 ± 1.3 200.84 ± 1.1 399.64 ± 1.5
α = 0.8 58.02 ± 1.9 65.02 ± 1.7 85.91 ± 1.5 128.92 ± 1.4 225.00 ± 1.3 446.75 ± 1.5
α = 0.9 60.51 ± 1.8 68.22 ± 1.7 92.12 ± 1.4 141.92 ± 1.3 247.06 ± 1.4 498.56 ± 1.4
α = 1.0 62.45 ± 1.8 70.93 ± 1.5 98.55 ± 1.5 152.94 ± 1.4 268.12 ± 1.4 536.28 ± 2.2
Table 4. The average computation time t_rev^HFBP-L of the backprojection operator during reconstruction using the HFBP-L algorithm for a phantom with N = 1024. The parameter α corresponds to a fraction of the total ( 4 N 3 ) number of projections. Time is measured in milliseconds. The minimum computation time for the backprojection operator, corresponding to the experimental optimal aggregation depth, is shown in bold.

α          δ = 0   δ = 1   δ = 2   δ = 3   δ = 4   δ = 5
α = 0.01 64.43 ± 2.2 57.69 ± 2.1 53.86 ± 2.1 50.58 ± 2.0 51.82 ± 2.1 50.95 ± 2.0
α = 0.02 64.85 ± 2.2 58.87 ± 2.1 55.24 ± 2.1 52.48 ± 2.1 55.25 ± 2.19 58.81 ± 2.1
α = 0.03 65.38 ± 2.2 58.80 ± 2.1 55.60 ± 2.0 53.80 ± 2.0 57.94 ± 2.0 65.90 ± 1.9
α = 0.05 65.81 ± 2.3 59.83 ± 2.2 57.11 ± 2.1 56.92 ± 2.0 65.30 ± 1.9 81.89 ± 1.8
α = 0.1 66.82 ± 2.2 61.47 ± 2.1 61.28 ± 2.1 65.17 ± 1.9 83.76 ± 1.9 120.80 ± 1.8
α = 0.2 69.33 ± 2.2 66.10 ± 2.1 68.76 ± 2.0 83.08 ± 1.9 122.06 ± 2.0 198.38 ± 2.0
α = 0.3 72.35 ± 2.1 70.51 ± 2.0 77.19 ± 2.0 101.70 ± 1.9 158.65 ± 1.9 274.02 ± 2.3
α = 0.4 75.02 ± 2.1 74.34 ± 2.0 86.06 ± 2.0 120.17 ± 2.0 196.88 ± 2.1 349.87 ± 2.5
α = 0.5 77.70 ± 2.1 79.16 ± 2.0 94.90 ± 2.1 138.64 ± 2.0 232.26 ± 2.1 430.20 ± 2.4
α = 0.6 81.57 ± 2.1 83.62 ± 2.0 105.85 ± 2.3 160.70 ± 2.2 271.90 ± 2.2 505.73 ± 2.4
α = 0.7 84.13 ± 2.1 88.03 ± 2.1 114.66 ± 2.1 176.09 ± 2.2 309.91 ± 2.3 587.20 ± 2.5
α = 0.8 87.12 ± 2.1 93.55 ± 2.1 124.95 ± 2.2 194.32 ± 2.4 346.99 ± 2.6 658.04 ± 2.7
α = 0.9 91.21 ± 2.2 99.01 ± 2.2 135.23 ± 2.2 214.12 ± 2.4 382.09 ± 2.7 733.19 ± 2.9
α = 1.0 94.37 ± 2.2 104.53 ± 2.2 145.02 ± 2.2 232.94 ± 2.5 424.16 ± 2.8 811.04 ± 3.1
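The captions of Tables 3 and 4 state that the per-row minimum (the experimentally optimal aggregation depth) is emphasized in bold; that emphasis is not visible in this plain-text rendering, but it can be recovered directly from the data, since for each α the optimal depth is simply the δ with the smallest mean time. The sketch below illustrates this; the hard-coded values are copied from the α = 0.1 and α = 1.0 rows of Tables 3 and 4, and the helper function is a generic illustration, not part of the paper's code.

```python
# Sketch: recover the experimentally optimal aggregation depth (per-row minimum)
# from the mean timings in Tables 3 and 4. Only two rows per table are included
# here for brevity; extending to all rows is mechanical.
from typing import Sequence


def optimal_depth(mean_times_ms: Sequence[float]) -> int:
    """Return the aggregation depth delta (column index) with the smallest mean time."""
    return min(range(len(mean_times_ms)), key=lambda d: mean_times_ms[d])


forward = {                      # Table 3, columns delta = 0..5 (ms)
    0.1: [44.56, 41.80, 42.85, 46.93, 57.74, 86.49],
    1.0: [62.45, 70.93, 98.55, 152.94, 268.12, 536.28],
}
backward = {                     # Table 4, columns delta = 0..5 (ms)
    0.1: [66.82, 61.47, 61.28, 65.17, 83.76, 120.80],
    1.0: [94.37, 104.53, 145.02, 232.94, 424.16, 811.04],
}

for alpha, row in forward.items():
    print(f"forward,  alpha={alpha}: delta_opt = {optimal_depth(row)}")
for alpha, row in backward.items():
    print(f"backward, alpha={alpha}: delta_opt = {optimal_depth(row)}")
# Expected: delta_opt = 1 (forward) and 2 (backward) at alpha = 0.1,
# and delta_opt = 0 for both operators at alpha = 1.0.
```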