1. Introduction
X-ray tomography has become a standard experimental technique for revealing the internal 3D structure of objects. For static samples, 3D volume reconstruction from a set of 2D projections is a well-researched area, and algorithms such as FBP, ART, SART and SIRT, to name a few, have gained widespread use, with optimised open-source implementations readily available, cf. [1,2]. Over the past decade, attention has been drawn towards dynamic tomography, with rapidly deforming or mutating sample volumes, cf. [3]. The measured quantity from a tomography experiment usually comes in a pixelated format, as a photon count/intensity readout from a detector at discrete moments in time. The analytical form of these measurements with respect to time is not known and, commonly, the sought volume is reconstructed at individual instances in time. This imposes two competing requirements on the experiment, namely that (I) the studied object remains approximately constant over a measurement series and (II) each measurement series is sufficiently sampled spatially to uniquely recover the object. For single-beam geometries studying highly dynamic systems, characterised by continuous motion, these requirements may be difficult to fulfil simultaneously. Limitations include detector readout speed, low beam flux (resulting in a poor signal-to-noise ratio) and centrifugal and shear forces due to fast sample rotations. Although these issues can, to some extent, be resolved by hardware solutions [4], well-regularised reconstruction algorithms have played an important role in recent advances, e.g., [5,6,7,8].
Attention towards multi-beam imaging systems has been increasing in recent years, e.g., [9,10,11,12,13,14,15]. These systems can simultaneously acquire a sparse set of 2D projections (∼10) of the studied sample, reducing the required sample rotation speed. For several specialised cases, such as steady flow in granular media [12] and weakly deforming solids [7], solutions in terms of hardware and algorithms have been found. Nevertheless, rotation is still required for samples undergoing large deformations through unsteady flow, and the required rotation speed and the associated centrifugal forces induced on the sample volume will ultimately limit which physical processes can and cannot be studied. In cases where a sparse number of static projection angles is enough for the desired type of reconstruction, temporal resolution is instead limited only by the rate at which projections can be acquired. Compared to the synchrotron case of [3], this could push temporal resolution in 4D imaging by at least an order of magnitude (from hHz to kHz). Likewise, if used in conjunction with Free Electron Lasers, such as the European XFEL, temporal resolution in the MHz range could be within reach. In addition, reconstruction schemes targeting static projection data have an impact on a wider scene, as they equip multi-source lab tomography devices with high-speed imaging capability.
We investigate a tomographic reconstruction scheme that operates on a sparse, angularly static set of 2D projections, attempting to remove the need for rotations during studies of dynamic processes by providing a solution strategy that is less specialised than existing methods. Since such acquisitions are still performed only as pilot experiments, with limited experimental data available, we use numerical simulations to verify our method. By making use of the LIGGGHTS Discrete Element Method (DEM) software [16], together with an independent analytical ray model, we establish a routine for generating physically realistic and challenging 4D phantoms of granular systems. Although the presented reconstruction scheme is not limited to granular systems, these phantoms serve to validate and illustrate our reconstruction method. As a side effect, the established phantom-generation procedure could be used to evaluate the feasibility of granular flow experiments, as well as to evaluate the accuracy of other reconstruction algorithms specifically geared towards granular flow.
The proposed reconstruction method uses a single, initial, well-sampled reconstruction of the sample as the basis for tracking the time evolution of the sample. By enforcing the sample evolution to obey the continuity equation with a prescribed, reduced velocity basis, the problem can be solved despite the low angular sampling. Temporal propagation of the sought volume is carried out by numerical time integration using the Finite Volume Method, finally resulting in a time series of 3D volumes of the density distribution of the studied object, together with an associated series of 3D velocity fields. Similar to other tomographic reconstruction schemes, our method relies on modelling X-ray attenuation by line integrals and is thus applicable to multi-beam setups that can establish such an approximation, either directly or by some pre-processing of the projections, for instance, to make wavelength corrections, cf. [14], or to address phase effects [17]. Additionally, as our method enforces mass conservation via the continuity equation, we must require that no mass leaves the field of view of the imaging system or, alternatively, that the boundary flow conditions of the system are well defined.
The paper is divided into two parts: the first half consists of algorithmic details and the second part describes the numerical investigation for different phantom scenarios. The first part starts in Section 2 by describing the mathematical formulation of the inverse problem. In Section 3 and Section 4 we introduce the Finite Volume discretisation and the velocity basis used to solve the problem. The numerical time integration scheme is laid out in Section 5 and, finally, the temporal interpolation scheme is described in Section 6. The first part of the paper is closed in Section 7 by an algorithm summary. The second part of the paper starts in Section 8 with a general description of how dynamic phantoms can be generated. Section 9 describes the phantoms used in the validation examples and the resulting reconstructions. In Section 10, we provide a discussion of our findings. Finally, the paper is closed with conclusions in Section 11.
2. Problem Formulation
Consider an X-ray attenuation experiment in which transmitted photon intensities are measured by a pixelated area detector. Let $\bar{n}_k$ denote the photon count of pixel number $k$ when no attenuating sample is placed between source and detector. Similarly, let $n_k$ denote the photon count once the sample has been put in place. We define the absorbance along ray number $k$ as

$$A_k = -\ln\!\left(\frac{n_k}{\bar{n}_k}\right). \qquad (1)$$

Let $f(\boldsymbol{x}, t)$ be an attenuation function of space, $\boldsymbol{x} \in \mathbb{R}^3$, and time $t$. Assuming that Beer's law holds, we may establish an approximate line integral model of the measured absorbance at pixel $k$ as

$$A_k(t) = \int_0^{s_1} f(\boldsymbol{x}_0 + s\,\hat{\boldsymbol{r}},\, t)\, \mathrm{d}s, \qquad (2)$$

where $\boldsymbol{x}_0$ is the X-ray entry point at the sample boundary, $s$ and $s_1$ are scalars, $\hat{\boldsymbol{r}}$ is the X-ray propagation direction and $\boldsymbol{x}_0 + s_1\hat{\boldsymbol{r}}$ is the X-ray exit point at the sample boundary.
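For concreteness, the two definitions above can be sketched in a few lines of NumPy. This is a minimal illustration only, not the numerical ray model used later in the paper; the midpoint-rule quadrature and the function names are our own.

```python
import numpy as np

def absorbance(n, n_bar):
    # A_k = -ln(n_k / n_bar_k); n: counts with sample, n_bar: flat-field counts
    return -np.log(n / n_bar)

def line_integral(f, x0, r_hat, s1, n_steps=256):
    # Midpoint-rule quadrature of (2) along x(s) = x0 + s * r_hat, s in [0, s1].
    # `f` is assumed to evaluate the attenuation field at an (n, 3) array.
    ds = s1 / n_steps
    s = (np.arange(n_steps) + 0.5) * ds
    points = x0[None, :] + s[:, None] * r_hat[None, :]
    return np.sum(f(points)) * ds
```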
For acquisitions featuring sample rotation, both $\boldsymbol{x}_0$ and $\hat{\boldsymbol{r}}$ will be functions of time. In a multi-beam setup, where the sample is kept fixed in position, $\boldsymbol{x}_0$ and $\hat{\boldsymbol{r}}$ are static and the temporal evolution of $A_k$ is driven solely by the internal dynamics of the studied sample. We seek to reconstruct $f$ from $A$ for the latter of these cases. However, as the number of measurements required to recover $f$ uniquely is typically much larger than the available data, we must make some additional model assumptions. If the sample forms a closed system, the underlying mass density distribution of the sample, $\rho(\boldsymbol{x}, t)$, will follow the partial differential equation of continuity,

$$\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho\, \boldsymbol{v}) = 0, \qquad (3)$$

where $\boldsymbol{v} = \boldsymbol{v}(\boldsymbol{x}, t)$ is a velocity field and $\nabla \cdot$ is the divergence operator. The attenuation of X-rays is approximately proportional to the mass density of the sample, $\rho = c f$, and thus, by dividing (3) by the unknown scalar $c$, we expect

$$\frac{\partial f}{\partial t} + \nabla \cdot (f\, \boldsymbol{v}) = 0, \qquad (4)$$

which provides us with an additional constraint, partly bridging the gap of missing data.
Typically, when studying dynamic processes over short time spans, the experimental setup will involve a triggering system to initiate the desired internal sample processes. This also means that the initial state of the sample will be static prior to any multi-beam measurements taking place. Thus, we will further assume that it is possible to sample $f$ fully for at least a single time point, $t = 0$, by performing an initial, classic, single-beam, full-rotation tomography of the sample.
By time differentiation of (2) we have

$$\dot{A}_k(t) = \int_{L_k} \frac{\partial f}{\partial t}\, \mathrm{d}s, \qquad (5)$$

where we have introduced the right-hand-side shorthand integral notation, with $L_k$ the line segment defining ray number $k$. Inserting (4) into (5) and collecting our results, we may now identify an initial value problem of the form

$$\begin{cases} \dot{A}_k(t) = -\displaystyle\int_{L_k} \nabla \cdot (f\,\boldsymbol{v})\, \mathrm{d}s, \\[2mm] \dfrac{\partial f}{\partial t} = -\nabla \cdot (f\,\boldsymbol{v}), \\[2mm] f(\boldsymbol{x}, 0) = f_0(\boldsymbol{x}). \end{cases} \qquad (6)$$

We propose to reconstruct the sample attenuation, $f$, by finding solutions in time to (6).
3. Finite Volume Discretization
Analytical solutions to the continuity equation (3) for arbitrary density and velocity fields are not known. Numerically sophisticated Finite Volume Methods for propagating these equations in time do, however, exist. Below we derive a discretised format of the continuity equations needed for our algorithm. These results are standard and are included here to clarify our notation and to aid the reader not familiar with the Finite Volume Method; for a general introduction to computational fluid dynamics, cf. [18].
To make use of Finite Volume methods we must split space into identical cubic domains, $\Omega_i$, each with volume $h^3$ and cell faces aligned with a Cartesian coordinate system, as exemplified in Figure 1. To easily refer to quantities associated with cell centroids or cell face centroids, we denote the cell centroid coordinate of cell number $i$ as $\boldsymbol{x}_i = (x_i, y_i, z_i)$ and define the corresponding forward and backward cell face centroid coordinates in dimension $d \in \{x, y, z\}$ as

$$\boldsymbol{x}_i^{d\pm} = \boldsymbol{x}_i \pm \frac{h}{2}\hat{\boldsymbol{e}}_d, \qquad (7)$$

where $\hat{\boldsymbol{e}}_d$ is the unit vector of dimension $d$. Furthermore, we denote the surface domain of any one cell as $S_i$, which is defined as the union of the six cell face surface domains,

$$S_i = \bigcup_{d \in \{x,y,z\}} S_i^{d+} \cup S_i^{d-}, \qquad (8)$$

where $S_i^{d+}$ and $S_i^{d-}$ denote the forward and backward cell face surface domains in a dimension $d$, corresponding to the faces centred at $\boldsymbol{x}_i^{d+}$ and $\boldsymbol{x}_i^{d-}$.
Integrating (3) over a single cell yields

$$\frac{\partial \bar{\rho}_i}{\partial t} = -\frac{1}{h^3} \int_{\Omega_i} \nabla \cdot (\rho\,\boldsymbol{v})\, \mathrm{d}V, \qquad (9)$$

where $\bar{(\,\cdot\,)}_i$ represents a cell average. Using the divergence theorem we may transform (9) into a surface integral of the form

$$\frac{\partial \bar{\rho}_i}{\partial t} = -\frac{1}{h^3} \oint_{S_i} \rho\,\boldsymbol{v} \cdot \hat{\boldsymbol{n}}\, \mathrm{d}S, \qquad (10)$$

where $S_i$ is the surface of the element region $\Omega_i$ and $\hat{\boldsymbol{n}}$ is the unit normal vector of this surface. We may now once again divide through by the unknown constant scaling factor $c$ to represent (10) in terms of attenuation,

$$\frac{\partial \bar{f}_i}{\partial t} = -\frac{1}{h^3} \oint_{S_i} f\,\boldsymbol{v} \cdot \hat{\boldsymbol{n}}\, \mathrm{d}S. \qquad (11)$$

Using the fact that the cell is cubic and aligned with the coordinate axes, we may rewrite (11) as

$$\frac{\partial \bar{f}_i}{\partial t} = -\frac{1}{h^3} \sum_{d \in \{x, y, z\}} \left( \int_{S_i^{d+}} f v_d\, \mathrm{d}S - \int_{S_i^{d-}} f v_d\, \mathrm{d}S \right). \qquad (12)$$
The average density change in time is now easily identified as the sum of net fluxes measured at the cell interfaces. Many Finite Volume schemes for approximating the right-hand-side fluxes of (12) exist. Such schemes can produce excellent approximations to the per-cell average temporal derivative $\partial\bar{f}_i/\partial t$. However, we seek to solve an initial value problem (6) involving not $\bar{f}$ but $f$. This distinction is not a problem for our purposes. In fact, due to the discrete nature of the measurements, $A_k$, any reconstruction of $f$ will always have limited resolution. Typically, in tomographic reconstruction, $f$ is assumed to be represented on a voxelated grid, and numerical ray models, representing the integral operator of (2), will already assume that $f$ is piecewise constant over a voxel. In state-of-the-art projection algorithms, this assumption is a crucial component in maintaining a computationally fast operator [2]. If we select the Finite Volume mesh to have cells that coincide with the voxels of our initial reconstructed function $f_0$, we may proceed by simply stating that the approximation $f \approx \bar{f}$ is valid whenever the ray model is to be executed upon $f$. When executing the Finite Volume scheme and approximating $\partial\bar{f}_i/\partial t$, it will be important to distinguish $f$ from $\bar{f}$, in the sense that the value of $f$ at some necessary evaluation coordinate in space must be extrapolated from $\bar{f}$.
To approximate the per-cell temporal average attenuation evolution, $\partial\bar{f}_i/\partial t$, we have opted to use the central scheme by Kurganov and Tadmor [19]. This is a so-called Monotonic Upstream-centered Scheme for Conservation Laws (MUSCL), which can provide accurate solutions in the presence of sharp discontinuities in the solution field $f$. The selected scheme can also be made total variation diminishing (TVD) if a suitable accompanying temporal integration scheme is selected. These two properties are especially desirable for dynamic systems where the constitutive particles exist in a solid state and undergo translations, rotations and deformations. In such a system, at the interface between constitutives, $f$ may feature sharp gradients which must not be smoothed as a result of temporal integration. It is possible that simpler schemes, of lower order accuracy, may yield good results for many systems, but such a study is out of scope for this paper.
Explicitly, using the central scheme by Kurganov and Tadmor [19], the approximation scheme described above renders (12) in discrete form as

$$\frac{\partial \bar{f}_i}{\partial t} \approx -\frac{1}{h} \sum_{d \in \{x,y,z\}} \left( F_i^{d+} - F_i^{d-} \right). \qquad (13)$$

Here the numerical fluxes at the cell faces are approximated as

$$F_i^{d\pm} = \frac{v_i^{d\pm}}{2}\left( f_R^{d\pm} + f_L^{d\pm} \right) - \frac{\lvert v_i^{d\pm} \rvert}{2}\left( f_R^{d\pm} - f_L^{d\pm} \right), \qquad (14)$$

where $\lvert\,\cdot\,\rvert$ denotes absolute value and $v_i^{d-}$ and $v_i^{d+}$ are the scalar velocity field component values, $v_d$, at the $i$-th backward and forward cell face centroid coordinates in dimension $d$, i.e.,

$$v_i^{d\pm} = v_d\big(\boldsymbol{x}_i^{d\pm}\big). \qquad (15)$$

The right ($f_R$) and left ($f_L$) extrapolated cell values at the forward face are taken as

$$f_L^{d+} = \bar{f}_i + \frac{1}{2}\phi(r_i)\big(\bar{f}_{i+1} - \bar{f}_i\big), \qquad f_R^{d+} = \bar{f}_{i+1} - \frac{1}{2}\phi(r_{i+1})\big(\bar{f}_{i+2} - \bar{f}_{i+1}\big), \qquad (16)$$

and analogously at the backward face, with

$$r_i = \frac{\bar{f}_i - \bar{f}_{i-1}}{\bar{f}_{i+1} - \bar{f}_i}, \qquad (17)$$

where the index increments refer to the neighbouring cells along dimension $d$. The superbee limiter function $\phi$ [20] introduced in (16) is defined as

$$\phi(r) = \max\big(0,\ \min(2r, 1),\ \min(r, 2)\big) \qquad (18)$$

and serves to limit the gradient of $f$ at strong discontinuities. Limiter functions are commonly used in higher-order Finite Volume schemes to avoid spurious oscillations.
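To make the flux construction concrete, the following is a minimal 1D NumPy sketch of (13)–(18): a MUSCL reconstruction with the superbee limiter combined with the Kurganov–Tadmor central flux. The zero-gradient ghost cells and the array layout are our own assumptions; in the full method the update is applied dimension by dimension over the 3D mesh.

```python
import numpy as np

def superbee(r):
    # phi(r) = max(0, min(2r, 1), min(r, 2)), Equation (18)
    return np.maximum(0.0, np.maximum(np.minimum(2.0 * r, 1.0),
                                      np.minimum(r, 2.0)))

def kt_rhs_1d(fbar, v_face, h):
    """Right-hand side of d(fbar)/dt = -(F_{i+1/2} - F_{i-1/2}) / h for
    df/dt + d(f v)/dx = 0. fbar: (n,) cell averages; v_face: (n+1,)
    velocities at the cell faces; zero-gradient boundary cells."""
    f = np.pad(fbar, 2, mode="edge")            # two ghost cells per side
    d = np.diff(f)                              # d[j] = f[j+1] - f[j]
    r = d[:-1] / (d[1:] + 1e-12)                # slope ratio, Equation (17)
    phi = superbee(r)
    # left/right extrapolated face states, Equation (16)
    fL = f[1:-2] + 0.5 * phi[:-1] * d[1:-1]
    fR = f[2:-1] - 0.5 * phi[1:] * d[2:]
    # Kurganov-Tadmor central flux, Equation (14)
    F = 0.5 * v_face * (fL + fR) - 0.5 * np.abs(v_face) * (fR - fL)
    return -(F[1:] - F[:-1]) / h
```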
Before proceeding any further, to simplify notation, we introduce an operator $W$ representing the action of the Finite Volume scheme on $\bar{f}$ and $\boldsymbol{v}$,

$$W(\bar{f}, \boldsymbol{v}) \approx \frac{\partial \bar{f}}{\partial t}, \qquad (19)$$

together with a linear operator $P$ representing the action of a numerical ray model on $\bar{f}$,

$$P\bar{f} \approx A. \qquad (20)$$

We use the ASTRA-toolbox [2] 3D-vector implementation of $P$; this implementation features GPU acceleration via the Nvidia CUDA platform, which is necessary for computational tractability of the method.
With this notation we may write the modified initial value problem to solve as

$$\dot{A}(t) = P\,W(\bar{f}, \boldsymbol{v}), \qquad (21a)$$

$$\frac{\partial \bar{f}}{\partial t} = W(\bar{f}, \boldsymbol{v}), \qquad \bar{f}\,\big|_{t=0} = \bar{f}_0, \qquad (21b)$$

where the reader may note that we have now reformulated the problem in terms of cell-average absorptivities, $\bar{f}$.
4. Velocity Representation
We must now define the velocity field to be evaluated at the cell face centroid coordinates over the Finite Volume mesh. Importantly, we need a parametrisation of $\boldsymbol{v}$ that can accurately represent the true velocity field of the system and, at the same time, be fully recoverable through solving the intermediate tomographic Equation (21a) for a fixed time and fixed $\bar{f}$. We introduce a finite basis approximation of $\boldsymbol{v}$ as

$$\boldsymbol{v}(\boldsymbol{x}) \approx \sum_j N_j(\boldsymbol{x})\,\boldsymbol{a}_j, \qquad (22)$$

with $\boldsymbol{a}_j$ featuring three independent velocity coefficient components. In this work, $N_j$ is defined on a tetrahedral mesh in a Finite Element style. We consider element number $e$, connected to node number $j$, with an interior domain $\Omega_e$, and define the linear interpolation as

$$N_j(\boldsymbol{x}) = c_{ej0} + c_{ej1}\,x + c_{ej2}\,y + c_{ej3}\,z, \qquad \boldsymbol{x} \in \Omega_e, \qquad (23)$$

where the coefficients $c_{ejl}$ are selected uniquely by requiring that $N_j = 1$ at node $j$ and $N_j = 0$ for all other nodes in the mesh.
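As an illustration of (23), the shape-function coefficients of a single tetrahedron can be obtained by inverting a 4 × 4 node matrix. This is a minimal sketch under our notation; a production code would precompute and cache the coefficients per element.

```python
import numpy as np

def tet_shape_coeffs(nodes):
    """Coefficients of the linear shape functions of one tetrahedron.
    nodes: (4, 3) vertex coordinates. Returns C, (4, 4), such that
    N_j(x) = C[j] @ [1, x, y, z], with N_j = 1 at vertex j, 0 elsewhere."""
    M = np.hstack([np.ones((4, 1)), nodes])  # row k: [1, x_k, y_k, z_k]
    return np.linalg.inv(M).T                # M @ C.T = I at the vertices

def velocity_at(x, nodes, a):
    # v(x) = sum_j N_j(x) a_j inside the element; a: (4, 3) nodal coefficients
    N = tet_shape_coeffs(nodes) @ np.append(1.0, x)
    return N @ a
```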
We stress again that the basis should be selected to fulfil two competing requirements: being recoverable from (21a) for fixed $t$ and $\bar{f}$, while, at the same time, being generic enough to represent the true velocity field well throughout the duration of the experiment. If no prior information on the velocity representation is known, this fundamentally limits the scope of the method to phenomena where a generic basis selection, such as the above Finite Element basis, can capture the kinematics of the sample. Depending on the amount of available data, it could be difficult to reconstruct processes with high total variation in the velocity field. On the contrary, we note that the method puts no restriction on the attenuation function, $f$, itself, which could have extremely rich spatial patterns with sharp gradients and a high total variation.
The reader may now be tempted to think that we have only transferred the problem of reconstructing the scalar field $f$ from too few data to the problem of reconstructing the vector field $\boldsymbol{v}$ from the same data and equations. If this were the case, we would be at a loss, since we would have but multiplied the number of unknowns by a factor of three, leaving us worse off than we started. However, we must recall that Equation (21a) is fundamentally constrained by (I) $\bar{f}$ being known and (II) the operator $W$ representing continuous flow. In other words, the action of $\boldsymbol{v}$ upon $f$ is severely restricted. For instance, a velocity acting on an evacuated cell in the Finite Volume mesh will have no impact on the updated attenuation field $f$. Similarly, it is not possible for $\boldsymbol{v}$ to increase the attenuation value of a single cell in the mesh without also affecting the attenuation of the neighbours of that cell. We raise this point to stress that the presented initial value problem is fundamentally different from solving the tomography equations themselves.
5. Time Integration and CFL Number
To propagate $\bar{f}$ in time through (21) we need to select a temporal integration scheme. To achieve TVD properties we have selected a third-order Strong Stability Preserving Runge–Kutta (SSPRK3) scheme [21]. Let $L$ be an operator yielding the right-hand side of the partial differential equation in (21b),

$$L(\bar{f}, t) = W\big(\bar{f}, \boldsymbol{v}(t)\big). \qquad (24)$$

The SSPRK3 scheme is, thus, described by

$$\begin{aligned} \bar{f}^{(1)} &= \bar{f}^{\,n} + \Delta t\, L\big(\bar{f}^{\,n}, t_n\big),\\ \bar{f}^{(2)} &= \tfrac{3}{4}\bar{f}^{\,n} + \tfrac{1}{4}\Big(\bar{f}^{(1)} + \Delta t\, L\big(\bar{f}^{(1)}, t_n + \Delta t\big)\Big),\\ \bar{f}^{\,n+1} &= \tfrac{1}{3}\bar{f}^{\,n} + \tfrac{2}{3}\Big(\bar{f}^{(2)} + \Delta t\, L\big(\bar{f}^{(2)}, t_n + \tfrac{1}{2}\Delta t\big)\Big). \end{aligned} \qquad (25)$$
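A sketch of one SSPRK3 step of (25) then reads as follows, where `L` is any callable implementing (24); the convex combination of forward-Euler stages is what preserves the TVD property of the spatial scheme.

```python
def ssprk3_step(f, t, dt, L):
    # Shu-Osher form of third-order SSP Runge-Kutta for df/dt = L(f, t)
    f1 = f + dt * L(f, t)                                  # stage 1
    f2 = 0.75 * f + 0.25 * (f1 + dt * L(f1, t + dt))       # stage 2
    return f / 3.0 + (2.0 / 3.0) * (f2 + dt * L(f2, t + 0.5 * dt))
```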
Importantly, the selection of $\Delta t$ is ultimately limited by the Courant–Friedrichs–Lewy (CFL) condition, which states that the CFL number, $C$, must fulfil the inequality

$$C = \frac{\Delta t}{h}\, \max_{\boldsymbol{x}} \lVert \boldsymbol{v}(\boldsymbol{x}) \rVert \leq 1. \qquad (26)$$

In particular, this means that, when the cells of the Finite Volume discretisation are selected to match the lengths of the detector pixels, the attenuation field, $f$, must feature sub-pixel motion per timestep for meaningful results to be obtained.
When executing $L$, the intermediate step of solving (21a) for $\boldsymbol{v}$ must yield a solution that does not violate (26), such that, when $\boldsymbol{v}$ is used in (21b) by the flow model $W$, stability is preserved. Note that (26) is an upper bound on the flow field; in practice, to maintain TVD for the scheme, $C$ will have to be smaller than unity.

We have implemented $L$ by first solving (21a) in a least squares sense for the velocity basis coefficients $\boldsymbol{a}$, after which $W$ can be executed using the resultant velocity field. Specifically, we solve

$$\boldsymbol{a}^* = \operatorname*{argmin}_{\boldsymbol{a}}\; \big\lVert P\,W\big(\bar{f}, \boldsymbol{v}(\boldsymbol{a})\big) - \dot{A} \big\rVert_2^2 \qquad (27)$$

using the L-BFGS-B minimisation scheme [22]. To do this, we must compute the partial derivatives of the flow model with respect to the free variables; analytical expressions for these necessary gradients are presented in Appendix B. The resulting solution, $\boldsymbol{a}^*$, is then brought back into the feasible domain by rescaling the velocity coefficients such that the resulting CFL number does not exceed a prescribed maximum, $C_{\max}$.

We can also mention that, when solving (27) repeatedly for a sequence of time points, it can be useful to start the minimisation at the previously located minimum to speed up convergence. If the velocity varies slowly between time steps, optimal solutions may thus be found in a few iterations.
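In practice, the velocity solve of (27) maps directly onto SciPy's L-BFGS-B interface. The sketch below assumes a hypothetical helper `misfit_and_grad` returning both the scalar residual and its gradient with respect to the coefficients (cf. Appendix B); the warm start is simply the reuse of the previous solution as `a0`.

```python
from scipy.optimize import minimize

def solve_velocity(a0, fbar, A_dot, misfit_and_grad):
    """Minimise ||P W(fbar, v(a)) - A_dot||^2 over the velocity basis
    coefficients a, warm-started at the previous solution a0."""
    result = minimize(misfit_and_grad, a0, args=(fbar, A_dot),
                      method="L-BFGS-B", jac=True)
    return result.x  # rescale afterwards to respect the CFL bound
```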
6. Temporal Measurement Interpolation Scheme
To execute the Runge–Kutta time integration scheme, the projected rates, $\dot{A}(t)$, must be available. We must now decide how to interpolate $\dot{A}$ from the available discrete data series $(A_m, t_m)$. We have selected a quadratic interpolation scheme represented in a local basis, such that

$$A(t) \approx p_m(t) = \alpha_{m,0} + \alpha_{m,1}(t - t_m) + \alpha_{m,2}(t - t_m)^2, \qquad t \in [t_m, t_{m+1}], \qquad (30)$$

where $t_m$ is any time at which $A$ was measured. The polynomial coefficients, $\alpha_{m,j}$, are determined by the data of the interval, $(A_m, A_{m+1})$, and a continuity requirement on $\dot{A}$, such that

$$p_m(t_m) = A_m, \qquad p_m(t_{m+1}) = A_{m+1}, \qquad \dot{p}_m(t_m) = \dot{p}_{m-1}(t_m). \qquad (31)$$

To close the system of equations, we require initially that

$$\dot{p}_0(t_0) = 0, \qquad (32)$$

i.e., we require that the sample starts from a static state. The interpolation scheme of (30) allows for evaluation of both $A$ and $\dot{A}$ at arbitrary time points, $t$, and is the lowest-order polynomial interpolation scheme that gives a continuous $\dot{A}$. Since the projection operator $P$ is linear, (30) implies that we expect $\bar{f}$ to evolve quadratically over a single timestep. If this approximation held exactly, the third-order RK scheme could theoretically produce an exact integration using a timestep $\Delta t = t_{m+1} - t_m$.
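The coefficients of (30)–(32) can be computed by a single forward sweep over the measurement series, since each interval inherits its start slope from the previous one. A minimal sketch, assuming `A` is an array of stacked sinograms:

```python
import numpy as np

def quadratic_coeffs(t, A):
    """Per-interval coefficients (a0, a1, a2) of p_m(t) in (30), with
    data matched at both interval ends, derivative continuity (31) and
    a static start, p'_0(t_0) = 0, Equation (32)."""
    coeffs = []
    slope = np.zeros_like(A[0])            # boundary condition (32)
    for m in range(len(t) - 1):
        dt = t[m + 1] - t[m]
        a0, a1 = A[m], slope
        a2 = (A[m + 1] - a0 - a1 * dt) / dt**2
        coeffs.append((a0, a1, a2))
        slope = a1 + 2.0 * a2 * dt         # p'_m(t_{m+1}), carried forward
    return coeffs
```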
In practice, considering model errors from both the ray and flow models, as well as noise in the data, the integration will not be exact. Thus, the updated state $\bar{f}$ will not match the acquired projections perfectly. Nonetheless, when integration is performed, the left-hand side of (21a) is requested as a function of the current state $\bar{f}$. We must therefore make an interpretation of $\dot{A}$ for attenuation functions that do not represent the true states of the sample and thus do not lie on the interpolated path defined by the measurements. Naively, we could set $\dot{A}$ to the original derivative found in the quadratic interpolation scheme. This will, however, be sensitive to errors in $\bar{f}$, as any offset from the original data path will be preserved and amplified throughout time integration. To combat this, we may perform, before each integration step, a re-interpolation, introducing the current projected state of $\bar{f}$, $P\bar{f}$, into the data series, replacing the current measurement. This ensures an interpolation path that connects the current erroneous state with the true data points. As a result, the projection derivatives, $\dot{A}$, on the interval $t_m$ to $t_{m+1}$ will be modified such that the projection of the updated state $\bar{f}$ has a tendency to return to the data path. Note that the re-interpolation is introduced to modify $\dot{A}$ and that the data points described by the re-interpolation are meaningless in themselves.
To illustrate these concepts we have considered three different re-interpolation strategies, as illustrated in Figure 2. Two of these strategies make use of the current projected state of $\bar{f}$, while the third naively neglects this current state. Considering a situation where $\bar{f}$ is reconstructed at a current time point $t_m$ and we seek to integrate forward in time to $t_{m+1}$, the three strategies may be summarised as follows:

1. Interpolation using only the original data, i.e., neglecting the current projected state of $\bar{f}$ during interpolation. The boundary condition for interpolation is applied statically (regardless of $t_m$) at $t_0$, as in Equation (32). This interpolation scheme naively disregards any information related to the current state $\bar{f}$.

2. Re-interpolation using the original data, with the exception of replacing the current data point at $t_m$ with the current projected state of $\bar{f}$, $P\bar{f}$. The boundary condition for interpolation is again applied statically (regardless of $t_m$) at $t_0$, as in Equation (32).

3. Re-interpolation using the original data, with the exception of replacing the current data point at $t_m$ with the current projected state of $\bar{f}$, $P\bar{f}$. The boundary condition for interpolation is now applied locally at $t_m$ as

$$\dot{p}_m(t_m) = \dot{p}_m^{(1)}(t_m), \qquad (33)$$

where $p_m^{(1)}$ represents an interpolation using strategy (1). This means that, starting from $t_m$, the re-interpolation is forced to join the original, strategy (1), interpolation (see the coefficient-level sketch below).

To visualise the meaning of the three above interpolation strategies, without yet evaluating which is more accurate, we provide the illustration of Figure 2.
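In terms of the coefficients of (30), strategy (3) amounts to replacing the data point at $t_m$ by the current projected state and pinning the start slope to that of the original interpolation. A minimal sketch, reusing `quadratic_coeffs` from above:

```python
def reinterpolate_local(t, A, m, PA_now, orig_coeffs):
    """Strategy (3): rebuild the quadratic on [t_m, t_{m+1}] with the
    measured A_m replaced by PA_now = P fbar(t_m). The start slope is
    taken from the original interpolation, Equation (33), so that the
    path is forced to rejoin the data at t_{m+1}."""
    dt = t[m + 1] - t[m]
    a0 = PA_now                              # replaced data point
    a1 = orig_coeffs[m][1]                   # original slope at t_m
    a2 = (A[m + 1] - a0 - a1 * dt) / dt**2   # still hits A_{m+1}
    return a0, a1, a2
```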
To illustrate the impact of the integration scheme upon the reconstructed time series, we consider a single spherical particle moving along a straight line in 3D space. We assign the particle a prescribed velocity function, starting from the initial position of the sphere, $\boldsymbol{x}_0$, with $t$ going from 0 to $t_{\max}$. The CFL number consequently becomes variable in time, reaching a prescribed maximum, $C_{\max}$, during the motion. Using simulated projections from 5 separate angles (described in detail in Section 8 and Section 9) we may define a simulated data time series. Reconstruction from these projection data is illustrated in Figure 3, where the root mean squared errors of both the volume and the projections are shown as functions of time. By acquiring full simulated reference scans at multiple selected time points and performing, from these, classical tomographic reconstructions, the volume residuals can be analysed. To provide some comparative value to the RMSE we also present, in Figure 3, histograms over the volume and sinogram values, clipped at 1% of their maximum values.

Figure 3 indicates that it is desirable to include the current projected model state in the interpolation, as this creates resilience against error build-up in the reconstruction, since the time integration will then be forced to return to the data path. In the following we select the local boundary condition scheme (bottom row of Figure 2 and orange markings in Figure 3). Although the selected re-interpolation scheme features some desirable properties, we do not claim to have an optimal selection. A deeper exploration of re-interpolation schemes is, however, out of scope for this paper.
8. Framework for Phantom Data Generation
To provide a challenging and realistic phantom test case for the proposed reconstruction algorithm, featuring physically realistic motion, we have simulated ensembles of spherical particles by the Discrete Element Method, as implemented in LIGGGHTS [16]. The LIGGGHTS software package enables time series characterisation of a wide range of kinematic data for the individual particles included in a simulation run. For this work, DEM simulations featuring mixtures of uniform-density spherical particles with varying radii have been performed. To create projections from the ensemble state data available from the DEM, we have computed the integral (2) analytically, utilising the fact that the constitutive particles are spherical. Detailed derivations of the resulting integral expression can be found in Appendix A; here we only present the final projection equations.

Let us consider a set of particles centred around the origin in a Cartesian coordinate system. Let further a particle be indexed by $p$ and illuminated by a point-like X-ray source at $\boldsymbol{x}_s$, as schematically illustrated in Figure 4.
The projected attenuation for a given point in the image plane, $(x_d, y_d)$, and sample rotation, $\omega$, is then associated with an X-ray propagating along the unit vector

$$\hat{\boldsymbol{r}} = \frac{\boldsymbol{x}_d - \boldsymbol{x}_s}{\lVert \boldsymbol{x}_d - \boldsymbol{x}_s \rVert} \qquad (35)$$

as

$$A(x_d, y_d, \omega) = 2\sum_p \mu_p\, \mathrm{Re}\!\left( \sqrt{ R_p^2 - \big\lVert \boldsymbol{q}_p - (\boldsymbol{q}_p \cdot \hat{\boldsymbol{r}})\,\hat{\boldsymbol{r}} \big\rVert^2 } \right), \qquad (36)$$

where

$$\boldsymbol{q}_p = \boldsymbol{R}_z(-\omega)\,\boldsymbol{c}_p - \boldsymbol{x}_s \qquad (37)$$

and

$$\boldsymbol{x}_s = -S\,\hat{\boldsymbol{x}}, \qquad \boldsymbol{x}_d = D\,\hat{\boldsymbol{x}} + x_d\,\hat{\boldsymbol{y}} + y_d\,\hat{\boldsymbol{z}}, \qquad (38)$$

where $S$ and $D$ are the source-to-rotation-axis and detector-to-rotation-axis distances, respectively, $x_d$ and $y_d$ are the detector-local coordinates of the ray-detector intersection point and $\boldsymbol{R}_z(-\omega)$ is a 3 × 3 rotation matrix rotating a vector negatively around the $z$-axis. In this setting, the line connecting the X-ray source, $\boldsymbol{x}_s$, with the detector origin, $D\hat{\boldsymbol{x}}$, is parallel to $\hat{\boldsymbol{x}}$, i.e., the optical axis of propagation is along $\hat{\boldsymbol{x}}$. The angle $\omega$ describes a positive sample rotation around $\hat{\boldsymbol{z}}$. Further, $\mu_p$ is the sphere attenuation, $R_p$ the sphere radius, $\boldsymbol{c}_p$ the sphere centroid coordinate and $\mathrm{Re}(\,\cdot\,)$ denotes the real part, which ensures that rays not intersecting a sphere contribute zero. Here we have assumed that the detector is perfectly mounted, i.e., the detector plane is orthogonal to the optical axis.
A single detector pixel value, $A_k$, can be assigned by (36) using the pixel centroid coordinate. However, to better mimic real measurement conditions, we may consider that the measured detector values are in fact integral measures over the surface domains of the pixels. In this paper, we account for this effect by, for each pixel, evaluating (36) at 9 separate locations on an equidistant grid within the pixel square and assigning the value of pixel number $k$ as the average of these 9 samples.
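A minimal NumPy sketch of (35)–(38) with 3 × 3 pixel supersampling follows; the lab-frame conventions (source at $-S\hat{\boldsymbol{x}}$, detector plane at $D\hat{\boldsymbol{x}}$, rotation about $\hat{\boldsymbol{z}}$) match those stated above, while the function names are our own.

```python
import numpy as np

def project_spheres(xd, yd, omega, centers, radii, mu, S, D):
    """Equation (36): projected attenuation of all spheres for the ray
    hitting the detector at local coordinates (xd, yd), rotation omega."""
    c, s = np.cos(-omega), np.sin(-omega)
    Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    x_src = np.array([-S, 0.0, 0.0])
    x_det = np.array([D, xd, yd])
    r = (x_det - x_src) / np.linalg.norm(x_det - x_src)     # ray direction
    q = centers @ Rz.T - x_src                              # source -> centres
    b2 = np.sum((q - np.outer(q @ r, r))**2, axis=1)        # impact param.^2
    chords = 2.0 * np.sqrt(np.maximum(radii**2 - b2, 0.0))  # Re(sqrt(.))
    return np.sum(mu * chords)

def pixel_value(xc, yc, pitch, *ray_args):
    # Average of 9 samples on an equidistant 3x3 grid inside the pixel
    offsets = np.array([-1.0, 0.0, 1.0]) * pitch / 3.0
    return np.mean([project_spheres(xc + ox, yc + oy, *ray_args)
                    for ox in offsets for oy in offsets])
```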
10. Discussion
The projection residuals in the top row of Figure 5 display an initial, close-to-zero error followed by a rapid evolution and (possibly) stabilisation, with typical values of about two times the sphere attenuation. As the constitutive spheres have a diameter of several voxels/cells (>10), this represents a relative projected error much less than unity. The error magnitudes can be assessed further in Figure 7, where the left histogram shows that the final MAE and RMSE in the real space reconstructions are far lower than the interior sphere values (the peak in the histogram). This is also reflected in the 3D renderings of Figure 5, where the reconstructed, thresholded volume fits well to the overlaid true particle positions and is visually very close to the reference reconstructions. The initial close-to-zero projected residual (as opposed to a zero residual) present at $t = 0$ is due to the discrepancy between the analytical and numerical ray models. In a best-case scenario, Algorithm 1 would propagate $\bar{f}$ in time without introducing any errors other than those inherent in the ray model. In Figure 6, however, an error build-up to a few times this inherent error is observed. Interestingly, this error build-up seems to stabilise after about 100 timesteps. One might be tempted to think that this error cannot be attributed to a poor spatial velocity basis representation, as the same phenomenon can be observed in Figure 3, where the motion is uniform in space and, thus, perfectly represented by the finite basis expansion. One should, however, remember that, although the true underlying attenuation function may feature a velocity field that is uniform, the velocities needed to optimally update the discretised version of $f$ may very well not be uniform. It is, therefore, likely that the discrepancy between the true and discretised attenuation fields, together with the finite basis expansion of the velocities, is driving the observed error. This would also explain why the error plateaus after some initial iterations, as the best possible dynamic sphere representation allowed by the velocity expansion and continuity requirement has then been reached. This inherent error would indeed be expected to be higher than that of the best possible static sphere representation allowed by a classical tomographic reconstruction, where each individual cell value may be changed arbitrarily and independently of neighbouring cells.
Examination of Figure 5, Figure 6 and Figure 7, in comparison to Figure 11, Figure 12 and Figure 13 and Figure 15, Figure 16 and Figure 17, indicates better attenuation reconstructions by Algorithm 1 for the three-body system than for the multi-grain cases. These three examples serve to illustrate the impact of the velocity basis coarseness in relation to the true motion of the system. The three-body system features spheres that are both large and well separated in space compared to the velocity basis element size, and it is possible for the basis expansion to take on fields that are close to those of the true underlying system. As the particle density grows in the DEM simulations, the velocity basis discretisation must be made finer to capture rapid spatial velocity variations. Since the number of basis functions is kept fixed, however, it becomes increasingly challenging for the algorithm to find velocity fields that yield good updates to the attenuation field. This explains why the largest RMSE and MAE are found for the small-grain DEM phantom (Figure 15 and Figure 16), while the larger grain distribution DEM phantom (Figure 11 and Figure 12) has a reduced error and the three-body system (Figure 5 and Figure 6) has a close-to-zero RMSE and MAE. However, even in the most challenging test case, which features more particles than velocity basis functions (see Table 1), the attenuation field reconstruction depicted in Figure 15 still displays a reasonable distribution of the bulk mass of the system and is even able to capture the ejection of a single constitutive grain from the ensemble. We stress here that, apart from the initial conditions ($\bar{f}_0$), the algorithm has no access to sample-specific prior information, which is a result of the attempt to formulate a generic solution strategy. Additional prior information on the kinematics of the sample could further improve the reconstruction quality and reduce the number of velocity basis functions and projections needed. The most obvious of such constraints lies in the selection of the velocity basis itself, limiting what type of motion is permitted.
We note that, in the DEM impact scenarios (Figure 12 and Figure 16), the MAE and RMSE increase more rapidly from the moment the impactor makes contact with the granular bed (as marked in the figures). Before the moment of impact, the errors associated with the reconstruction of the static bed would be expected to be attributed only to the previously discussed inherent errors, which is reflected in the MAE and RMSE plateauing prior to impact.
Examining the velocity fields produced by the sub-iteration of Algorithm 1, we see in Figure 8 that the motion of the three-particle system is preserved throughout the full time series, with an error of some small fraction of a sphere radius. We stress that the velocities produced by Algorithm 1 are optimised not to capture the kinematics of the system, but to propagate the current reconstructed grid of attenuation cell values, through a Finite Volume model, to match the measured rate of the sinogram. From an algorithmic perspective, the velocities are but a vehicle to find allowable updates to the attenuation volume, which is the sought output of the algorithm. This means, in particular, that historical errors introduced into the attenuation volume, due to, for instance, finite discretisations or model approximations, may be corrected for later by the introduction of spurious velocities. Nevertheless, the results observed for the three-body system show that these artificial velocities can capture the true kinematics of the system. Turning to the same metric for the DEM simulations, we see that the velocities produced for the DEM with larger grains (Figure 14) better preserve the sphere positions in time compared to the small-grain distribution (Figure 18). It is interesting to note that some correlation between high CFL numbers for the spheres and high position errors seems to be present. In particular, for the small-grain DEM, the kinematics of the system are poorly captured, which may be partly explained by high CFL numbers and partly by the discretisation of the attenuation and velocity fields being coarse in comparison to the constitutives of the system. However, the exact conditions under which Algorithm 1 reliably reproduces the kinematics of the studied system remain a question for further study.
11. Conclusions
An algorithm for the reconstruction of spatiotemporal attenuation fields, geared towards rotation-free, fast, multi-beam X-ray imaging, has been developed and numerically tested using an independent analytical ray model for phantom generation. The algorithm was found to produce reconstructions from a sparse set of projections (unknowns-to-equations ratio > 10) with errors of the order of magnitude expected from classical full-angular-range tomography. Furthermore, it was found that the velocity fields produced by the algorithm sub-iterations can, under the right conditions, approximate the kinematics of the imaged system well. The proposed algorithm opens the door to unveiling the 3D evolution of physical processes unfolding at timescales currently inaccessible with existing techniques. In summary, the algorithm constrains the possible solutions by assuming conservation of the total linear X-ray attenuation within the field of view. For systems where no phase transformations are present, this is identical to enforcing mass conservation.
The presented algorithm opens up a rich survey and optimisation of parameter selections, including, but not limited to, the velocity basis, the Finite Volume scheme, the time integration scheme and the temporal interpolation scheme for the measurements. Improvements to the computational aspects of the algorithm, providing fast parallel implementations, would also be desirable. To further evaluate the robustness of the algorithm for real data applications, the impacts of X-ray phase propagation and detector noise in the measurement series are deemed to be key. Beyond these considerations, application to real multi-beam data could reveal further avenues of improvement. The proposed algorithm aims towards an as-generic-as-possible reconstruction scheme for multi-beam data. With this in mind, we suggest that the algorithm may serve as a basis for solving more advanced problems via the introduction of sample-specific constraints.