RTK/Examples/4DROOSTERReconstruction

RTK provides a tool to reconstruct a 3D + time image which does not require explicit motion estimation. Instead, it uses total variation regularization along both space and time. The implementation is based on a paper that we have published ([http://www.creatis.insa-lyon.fr/site/fr/publications/MORY-14 article]). You should read the article to understand the basics of the algorithm before trying to use the software.

The algorithm requires a set of projection images with the associated RTK geometry, the respiratory phase of each projection image, and a motion mask in which the region of space where movement is expected is set to 1 and the rest is set to 0. Each piece of data is described in more detail below and can be downloaded from [https://data.kitware.com/#collection/5a7706878d777f0649e04776 Girder]. It is assumed that the breathing motion is periodic, which implies that the mechanical state of the chest depends only on the respiratory phase.

= Projection images =

This example is illustrated with a set of projection images of the [http://www.creatis.insa-lyon.fr/rio/popi-model_original_page POPI patient]. You can [https://data.kitware.com/api/v1/item/5be99af88d777f2179a2e144/download download the projections] and the required tables of the Elekta database, [https://data.kitware.com/api/v1/item/5be99a068d777f2179a2cf4f/download FRAME.DBF] and [https://data.kitware.com/api/v1/item/5be99a078d777f2179a2cf65/download IMAGE.DBF]. The dataset is first used to reconstruct a blurry image:

<source lang="bash">
# Convert Elekta database to RTK geometry
rtkelektasynergygeometry \
  -o geometry.rtk \
  -f FRAME.DBF \
  -i IMAGE.DBF \
  -u 1.3.46.423632.141000.1169042526.68

# Reconstruct a 3D volume from all projection images.
# We implicitly assume that the patient's chest is static, which is wrong.
# Therefore the reconstruction will be blurry. This blurry reconstruction is
# not required for 4D ROOSTER, but there is usually no other way to generate
# a motion mask, which is required. Additionally, the blurry reconstruction
# can be used to initialize the 4D ROOSTER reconstruction.
rtkfdk \
  -p . \
  -r .*.his \
  -o fdk.mha \
  -g geometry.rtk \
  --hann 0.5 \
  --pad 1.0 \
  --dimension 160 \
  --spacing 2
</source>

You should obtain something like that with VV:

[[File:Blurred.jpg]]

= Motion mask =

The next piece of data is a 3D motion mask: a volume that contains zeros where no movement is expected and ones where movement is expected. Typically, during breathing, the whole chest moves, so it is safe to use the [https://data.kitware.com/api/v1/item/5be99a418d777f2179a2ddf4/download patient mask] (red+green). However, restricting the movement to a smaller region, e.g. the rib cage, can help reconstruct this region more accurately and mitigate artifacts, so we might want to use the [https://data.kitware.com/api/v1/item/5be99a2c8d777f2179a2dc4f/download rib cage mask] (red):

[[File:mm.jpg]]
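
If you build or resample a motion mask yourself, it is worth checking, before launching a long reconstruction, that the mask is strictly binary and that its grid is what you expect. Below is a minimal sketch with SimpleITK (an illustration, not an RTK tool; it assumes SimpleITK is installed and that the mask is stored as MotionMask.mha, the file name used in the commands further down):

<source lang="python">
# Minimal sanity check of a 3D motion mask (illustration only, not an RTK tool).
# Assumptions: SimpleITK is installed and the mask is stored as MotionMask.mha.
import numpy as np
import SimpleITK as sitk

mask = sitk.ReadImage("MotionMask.mha")
voxels = sitk.GetArrayFromImage(mask)

print("size   :", mask.GetSize())
print("spacing:", mask.GetSpacing())
print("origin :", mask.GetOrigin())

# The mask must contain only 0 (static region) and 1 (moving region).
assert set(np.unique(voxels)).issubset({0, 1}), "motion mask is not binary"
</source>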

= Respiratory signal =

The 4D ROOSTER algorithm requires that we associate each projection image with the instant of the respiratory cycle at which it was acquired. We use the Amsterdam shroud solution of Lambert Zijp (described [http://www.creatis.insa-lyon.fr/site/fr/publications/RIT-12a here]), which is implemented in RTK

<source lang="bash">
rtkamsterdamshroud --path . \
                   --regexp '.*.his' \
                   --output shroud.mha \
                   --unsharp 650
rtkextractshroudsignal --input shroud.mha \
                       --output signal.txt \
                       --phase sphase.txt
</source>

to get the phase signal. Note that the phase must go from 0 to 1, where 0.3 corresponds to 30% in the respiratory cycle, i.e., frame 3 if you have a 10-frame 4D reconstruction or frame 6 if you have a 20-frame 4D reconstruction. The [https://data.kitware.com/api/v1/item/5be99af98d777f2179a2e160/download resulting phase] is in green on top of the blue respiratory signal and the detected end-exhale peaks:

[[File:signal.gif]]
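
To make the phase-to-frame mapping explicit, here is a small Python sketch (an illustration, not an RTK tool) that converts the values of sphase.txt into frame indices, assuming the file contains one phase value per projection, one per line:

<source lang="python">
# Convert phase values in [0, 1) to frame indices (illustration only).
# Assumption: sphase.txt contains one phase value per projection, one per line.
nframes = 10  # number of frames of the 4D reconstruction (e.g. --frames 10)

with open("sphase.txt") as f:
    phases = [float(line) for line in f if line.strip()]

# Phase 0.3 falls in frame 3 of a 10-frame reconstruction (frame 6 of a 20-frame one).
frames = [int(p * nframes) % nframes for p in phases]
print(frames[:10])
</source>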

= 4D ROOSTER for cone-beam CT reconstruction =

We now have all the pieces to perform a 3D + time reconstruction. The algorithm will perform "niter" iterations of the main loop, and run three inner loops at each iteration (a structural sketch of this loop follows the list below):

* conjugate gradient reconstruction with "cgiter" iterations
* total-variation regularization in space, with parameter "gamma_space" (the higher, the more regularized) and "tviter" iterations
* total-variation regularization in time, with parameter "gamma_time" (the higher, the more regularized) and "tviter" iterations
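
The following purely structural sketch (Python pseudocode, not RTK code; the actual implementation is the C++ filter behind rtkfourdrooster) only shows how the command-line parameters map onto these nested loops, with the three inner steps left as stubs:

<source lang="python">
# Structural sketch of the 4D ROOSTER main loop (pseudocode, not RTK code).
# The three functions below are stubs standing in for the actual operators.

def conjugate_gradient_step(f, projections, geometry, cgiter):
    # Stub: "cgiter" conjugate gradient iterations on the data-fidelity term.
    return f

def spatial_tv_step(f, gamma_space, tviter):
    # Stub: "tviter" iterations of spatial total-variation regularization.
    return f

def temporal_tv_step(f, gamma_time, tviter):
    # Stub: "tviter" iterations of temporal total-variation regularization.
    return f

def rooster_4d(f, projections, geometry,
               niter=30, cgiter=4, tviter=10,
               gamma_space=1e-4, gamma_time=1e-4):
    for _ in range(niter):                                              # --niter
        f = conjugate_gradient_step(f, projections, geometry, cgiter)  # --cgiter
        f = spatial_tv_step(f, gamma_space, tviter)                     # --gamma_space
        f = temporal_tv_step(f, gamma_time, tviter)                     # --gamma_time
    return f
</source>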

The number of iterations suggested here should work in most situations. The parameters gamma_space and gamma_time, on the other hand, must be adjusted carefully for each type of dataset (and, unfortunately, for each resolution). Unlike analytical reconstruction methods such as FDK, 4D ROOSTER also requires that the reconstruction volume contain the whole patient's chest. You should therefore adapt the "dimension", "spacing" and "origin" parameters carefully, based on what you have observed on the blurry FDK reconstruction.
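
As a quick sanity check of these parameters, you can compute the physical extent of the reconstruction grid from the dimension and spacing you plan to use. The sketch below assumes RTK's default behaviour of centering the volume on the origin when --origin is not specified; if in doubt, pass --origin explicitly:

<source lang="python">
# Physical extent and default (centered) origin of the reconstruction grid.
# Assumption: when --origin is omitted, RTK centers the volume, i.e. the origin
# is -spacing * (dimension - 1) / 2 along each axis.
dimension = 160   # --dimension
spacing = 2.0     # --spacing, in mm

extent = dimension * spacing               # 320 mm along each axis
origin = -spacing * (dimension - 1) / 2.0  # -159 mm, center of the first voxel

print(f"extent: {extent} mm, default origin: {origin} mm")
# Make sure this extent covers the whole chest seen in the blurry fdk.mha.
</source>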

<source lang="bash">
# Reconstruct from all projection images with 4D ROOSTER
rtkfourdrooster \
  -p . \
  -r .*.his \
  -o rooster.mha \
  -g geometry.rtk \
  --signal sphase.txt \
  --motionmask MotionMask.mha \
  --gamma_time 0.0001 \
  --gamma_space 0.0001 \
  --niter 30 \
  --cgiter 4 \
  --tviter 10 \
  --spacing 2 \
  --dimension 160 \
  --frames 5
</source>

Depending on the resolution you choose, and even on powerful computers, the reconstruction time can range from a few minutes (very low resolution, typically 32³ * 5, only for tests) to many hours (standard resolution, typically 256³ * 10). To speed things up, it is recommended to use the CUDA version of the forward and back projectors by running this command instead:

<source lang="bash">
# Reconstruct from all projection images with 4D ROOSTER, using CUDA forward and back projectors
rtkfourdrooster \
  -p . \
  -r .*.his \
  -o rooster.mha \
  -g geometry.rtk \
  --signal sphase.txt \
  --motionmask MotionMask.mha \
  --fp CudaRayCast \
  --bp CudaVoxelBased \
  --gamma_time 0.0001 \
  --gamma_space 0.0001 \
  --niter 30 \
  --cgiter 4 \
  --tviter 10 \
  --spacing 2 \
  --dimension 160 \
  --frames 5
</source>

With a recent GPU, this should allow you to perform a standard resolution reconstruction in less than one hour.

Note that the reconstructed volume in this example does not fully contain the attenuating object, causing hyper-attenuation artifacts on the borders of the result. To avoid these artifacts, reconstruct a larger volume (--dimension 256 should be fine). Note that you will have to resize your motion mask as well, as the 3D motion mask is expected to have the same size, spacing and origin as the first three dimensions of the 4D output.
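
For instance, if your mask was produced on the 160-voxel, 2 mm grid used above and you now reconstruct with --dimension 256, the mask must first be resampled onto the new grid. Below is a minimal sketch with SimpleITK (an illustration, not an RTK tool; the size, spacing and centered origin are assumptions to be replaced by the actual --dimension, --spacing and --origin values you pass to rtkfourdrooster):

<source lang="python">
# Resample the 3D motion mask onto the grid of the 4D reconstruction
# (illustration with SimpleITK; adjust size/spacing/origin to the values
# actually passed to rtkfourdrooster).
import SimpleITK as sitk

mask = sitk.ReadImage("MotionMask.mha")

size = [256, 256, 256]
spacing = [2.0, 2.0, 2.0]
origin = [-(s - 1) * sp / 2.0 for s, sp in zip(size, spacing)]  # centered grid

reference = sitk.Image(size, sitk.sitkUInt8)
reference.SetSpacing(spacing)
reference.SetOrigin(origin)

# Nearest-neighbour interpolation keeps the mask binary.
resampled = sitk.Resample(mask, reference, sitk.Transform(),
                          sitk.sitkNearestNeighbor, 0, mask.GetPixelID())
sitk.WriteImage(resampled, "MotionMask_resampled.mha")
</source>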

= Motion-Aware 4D ROOSTER (MA-ROOSTER) =

4D ROOSTER does not require explicit motion information, but it can take advantage of such information, when available, to guide the TV regularization. Please refer to the tutorial on Motion-Compensated FDK to learn how to obtain a valid 4D Displacement Vector Field (DVF). Once you have it, simply add the option --dvf "4D_DVF_Filename" to the rtkfourdrooster arguments to run MA-ROOSTER instead of 4D ROOSTER.

<source lang="bash">
# Reconstruct from all projection images with MA ROOSTER
rtkfourdrooster \
  -p . \
  -r .*.his \
  -o rooster.mha \
  -g geometry.rtk \
  --signal sphase.txt \
  --motionmask MotionMask.mha \
  --gamma_time 0.0001 \
  --gamma_space 0.0001 \
  --niter 30 \
  --cgiter 4 \
  --tviter 10 \
  --spacing 2 \
  --dimension 160 \
  --frames 5 \
  --dvf deformationField_4D.mhd
</source>

Making use of the motion information increases computation time. If you have compiled RTK with RTK_USE_CUDA = ON and have a working, CUDA-enabled NVIDIA GPU, it will automatically be used to speed up that part of the process.

The article in which the theoretical foundations of MA-ROOSTER are presented (link to be added) contains results obtained on two patients. If you wish to reproduce these results, you can download all the necessary data here:

* Original projections, log-transformed projections with the table removed, motion mask, respiratory signal, and transform matrices to change from CT to CBCT coordinates and back
** [https://data.kitware.com/api/v1/item/5be97d688d777f2179a28e39/download Patient 1]
** [https://data.kitware.com/api/v1/item/5be99df68d777f2179a2e904/download Patient 2]
* Inverse-consistent 4D Displacement Vector Fields, to the end-exhale phase and from the end-exhale phase
** [https://data.kitware.com/api/v1/item/5be989e68d777f2179a29e95/download Patient 1]
** [https://data.kitware.com/api/v1/item/5be9a1388d777f2179a2f44d/download Patient 2]

Extract the data of each patient in a separate folder. From the folder containing the data of patient 1, run the following command line:

<source lang="bash">
# Reconstruct patient 1 with MA ROOSTER
rtkfourdrooster \
  -p . \
  -r correctedProjs.mha \
  -o marooster.mha \
  -g geom.xml \
  --signal cphase.txt \
  --motionmask dilated_resampled_mm.mhd \
  --gamma_time 0.0002 \
  --gamma_space 0.00005 \
  --niter 10 \
  --cgiter 4 \
  --tviter 10 \
  --spacing "1, 1, 1, 1" \
  --dimension "220, 280, 370, 10" \
  --origin "-140, -140, -75, 0" \
  --frames 10 \
  --dvf toPhase50_4D.mhd \
  --idvf fromPhase50_4D.mhd
</source>

From the folder containing the data of patient 2, run the following command line (only the dimension and origin parameters are different):

<source lang="bash">
# Reconstruct patient 2 with MA ROOSTER
rtkfourdrooster \
  -p . \
  -r correctedProjs.mha \
  -o marooster.mha \
  -g geom.xml \
  --signal cphase.txt \
  --motionmask dilated_resampled_mm.mhd \
  --gamma_time 0.0002 \
  --gamma_space 0.00005 \
  --niter 10 \
  --cgiter 4 \
  --tviter 10 \
  --spacing "1, 1, 1, 1" \
  --dimension "285, 270, 307, 10" \
  --origin "-167.5, -135, -205, 0" \
  --frames 10 \
  --dvf toPhase50_4D.mhd \
  --idvf fromPhase50_4D.mhd
</source>

Note the option "--idvf", which allows providing the inverse DVF. It is used to inverse-warp the 4D reconstruction after the temporal regularization. MA-ROOSTER will work with and without the inverse DVF, and yield almost the same results in both cases. Not using the inverse DVF is approximately two times slower, as it requires MA-ROOSTER to perform the inverse warping with an iterative method.

Again, if you have a CUDA-enabled GPU (in this case with at least 3 GB of VRAM), and have compiled RTK with RTK_USE_CUDA = ON, you can add the "--bp CudaVoxelBased" and "--fp CudaRayCast" options to speed up the computation by performing the forward and back projections on the GPU.

You do not need the 4D planning CT data to perform the MA-ROOSTER reconstructions. It is only required to compute the DVFs, which can be downloaded above. We do provide it anyway, in case you want to use your own method, or the one described in Motion-Compensated FDK, to extract a DVF from it:
* 4D planning CT
** [https://data.kitware.com/api/v1/item/5be98bd28d777f2179a2a279/download Patient 1]
** [https://data.kitware.com/api/v1/item/5be9a1918d777f2179a2f568/download Patient 2]