Description
In synchrotron radiation tomography experiments, sparse-view sampling can reduce the severe radiation damage that X-rays inflict on samples, accelerate data acquisition, and decrease the total volume of the experimental dataset. Consequently, sparse-view CT reconstruction has become a topic of intense interest. Traditional CT reconstruction algorithms fall into two categories: analytic and iterative. However, the widely used analytic algorithms usually produce severe streak artifacts in sparse-view reconstructed images because the Nyquist criterion is not satisfied, while the more accurate iterative algorithms often incur prohibitively high computational costs and require difficult-to-select reconstruction parameters.
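To make the sampling trade-off concrete, the following minimal Python sketch (using scikit-image; it is illustrative and not part of the paper, and the phantom and view counts are our assumptions) reconstructs a test image with filtered back-projection from dense and sparse view sets. The sparse case violates the Nyquist criterion and exhibits the streak artifacts discussed above.

    import numpy as np
    from skimage.data import shepp_logan_phantom
    from skimage.transform import radon, iradon

    phantom = shepp_logan_phantom()  # 400 x 400 test image

    # Dense sampling: enough views to approximately satisfy Nyquist.
    theta_dense = np.linspace(0.0, 180.0, 360, endpoint=False)
    recon_dense = iradon(radon(phantom, theta=theta_dense),
                         theta=theta_dense, filter_name='ramp')

    # Sparse-view sampling: far fewer projections -> severe streak artifacts.
    theta_sparse = np.linspace(0.0, 180.0, 30, endpoint=False)
    sino_sparse = radon(phantom, theta=theta_sparse)
    recon_sparse = iradon(sino_sparse, theta=theta_sparse, filter_name='ramp')

    # The RMSE gap quantifies the degradation caused by undersampling.
    def rmse(a, b):
        return np.sqrt(np.mean((a - b) ** 2))

    print('dense-view RMSE :', rmse(recon_dense, phantom))
    print('sparse-view RMSE:', rmse(recon_sparse, phantom))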
Recently, machine learning has been proposed as an alternative for improving the image quality of analytic algorithms, and multiple promising results have been reported. In general, machine learning approaches to sparse-view CT reconstruction operate in two domains: the image domain and the sinogram domain. Image domain methods treat the reconstruction mapping problem from the perspective of computer vision, while sinogram domain methods treat it from the perspective of statistics and physics. Image domain methods excel at suppressing streak artifacts, but their generalization ability is relatively poor because of the image-to-image mapping procedure. They mostly employ convolutional neural networks to extract features, which fail to account for the global correlation among the extracted features. Sinogram domain methods generalize rather well because they directly estimate the unmeasured views by interpolation on the sinogram; nevertheless, imperfect interpolation can introduce extra artifacts. Recently, some attempts at hybrid methods, which combine image and sinogram domain methods, have been reported. So far, the reported hybrid methods have merely applied the two methods in series, which can introduce unpredictable interference into the reconstruction because the asymmetry of information processing during the mapping process is neglected.
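As a point of reference for the sinogram domain idea, here is a minimal sketch of the classical baseline (simple linear interpolation, not the paper's learned network; the function and variable names are hypothetical) that estimates unmeasured views along the angular axis:

    import numpy as np
    from scipy.interpolate import interp1d

    def upsample_sinogram(sino_sparse, theta_sparse, theta_dense):
        # sino_sparse: (n_detectors, n_sparse_views) measured sinogram;
        # the theta arrays hold projection angles in degrees.
        f = interp1d(theta_sparse, sino_sparse, axis=1, kind='linear',
                     bounds_error=False, fill_value='extrapolate')
        return f(theta_dense)

    # Reusing the arrays from the previous sketch:
    # sino_est = upsample_sinogram(sino_sparse, theta_sparse, theta_dense)
    # recon_interp = iradon(sino_est, theta=theta_dense, filter_name='ramp')

Imperfect interpolation of exactly this kind is what introduces the extra artifacts noted above; a learned interpolator, such as the Laplacian Pyramid network described next, aims to do better.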
In this paper, we propose a new hybrid domain method based on fusion learning. In the image domain, we employ a UNet-like network containing Transformer modules to capture the global correlation of the extracted features. In the sinogram domain, we employ a Laplacian Pyramid network that recovers the unmeasured data in the sinogram by progressively reconstructing the sub-band residuals, which reduces the number of network parameters. Subsequently, we employ a deep fusion network to fuse the two reconstruction results at the feature level, merging the useful information from the two reconstructed images. We also compare the performance of the single-domain methods against that of the hybrid domain method. Experimental results indicate that the proposed method is practical and effective for reducing artifacts while preserving the quality of the reconstructed image.
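For illustration, the PyTorch sketch below shows one plausible form of feature-level fusion (a minimal sketch under our own assumptions: the layer widths, kernel sizes, and module names are ours, not the paper's architecture). It extracts shallow features from the image domain and sinogram domain reconstructions, concatenates them, and maps the result back to a single image.

    import torch
    import torch.nn as nn

    class FeatureFusion(nn.Module):
        # Fuses two same-sized single-channel reconstructions at the
        # feature level rather than by pixel-wise averaging.
        def __init__(self, channels=32):
            super().__init__()
            self.enc_img = nn.Sequential(
                nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True))
            self.enc_sino = nn.Sequential(
                nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True))
            self.fuse = nn.Sequential(
                nn.Conv2d(2 * channels, channels, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, 1, 3, padding=1))

        def forward(self, recon_image_domain, recon_sinogram_domain):
            f_img = self.enc_img(recon_image_domain)
            f_sino = self.enc_sino(recon_sinogram_domain)
            # Concatenating along the channel axis lets the fusion layers
            # weigh complementary information from the two branches.
            return self.fuse(torch.cat([f_img, f_sino], dim=1))

    # fused = FeatureFusion()(img_out, sino_out)  # each of shape (N, 1, H, W)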