Published online Aug 28, 2021. doi: 10.35711/aimi.v2.i4.86
Peer-review started: May 24, 2021
First decision: June 16, 2021
Revised: June 24, 2021
Accepted: August 17, 2021
Article in press: August 17, 2021
Processing time: 97 Days and 17.4 Hours
Abdominal magnetic resonance imaging (MRI) and computed tomography (CT) are commonly used for disease screening, diagnosis, and treatment guidance. However, abdominal MRI has disadvantages, including slow acquisition speed and susceptibility to motion artifacts, while CT involves exposure to ionizing radiation. It has been reported that deep learning reconstruction can solve these problems while maintaining good image quality. Recently, deep learning-based image reconstruction has become a research hotspot in medical imaging.
Core Tip: We summarized the current deep learning-based abdominal image reconstruction methods in this review. The deep learning reconstruction methods can solve the issues of slow imaging speed in magnetic resonance imaging and high-dose radiation in computed tomography while maintaining high image quality. Deep learning has a wide range of clinical applications in current abdominal imaging.
- Citation: Li GY, Wang CY, Lv J. Current status of deep learning in abdominal image reconstruction. Artif Intell Med Imaging 2021; 2(4): 86-94
- URL: https://www.wjgnet.com/2644-3260/full/v2/i4/86.htm
- DOI: https://dx.doi.org/10.35711/aimi.v2.i4.86
The emergence of deep learning has made intelligent image reconstruction a hot topic in the field of medical imaging. Applying deep learning to image reconstruction offers the advantages of reduced scan time and improved image quality. Magnetic resonance imaging (MRI) is a critical medical imaging technology characterized by non-invasiveness, absence of ionizing radiation, and high contrast. However, prolonged scanning time is the main obstacle that restricts the development of MRI technology[1]. A long acquisition time causes discomfort to patients and severe artifacts due to patient motion. To address this issue, under-sampled k-space data can be acquired by reducing the measurement time during scans, and an artifact-free image can then be obtained through advanced reconstruction. Deep learning reconstruction (DLR) produces high-quality images while reducing scan time and patient discomfort. In contrast, traditional MRI reconstruction suffers from a low acceleration factor, long computation time, and variability in parameter selection[2]. Deep learning automatically captures high-level features from a large amount of data and builds a non-linear mapping between the input and output. Wang et al[3] introduced deep learning into fast MRI reconstruction. Deep learning-based MRI reconstruction avoids the difficult parameter tuning of traditional model-based reconstruction algorithms and therefore has the potential for a wide range of clinical applications. In addition, deep learning has also been used to address abdominal motion. Presently, abdominal MRI reconstruction based on deep learning mainly adopts end-to-end modeling. The network structures currently used for MRI reconstruction include the convolutional neural network (CNN)[4], U-net[5], generative adversarial network (GAN)[6], recurrent neural network (RNN)[7], and cascade-net[8].
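The under-sampling strategy described above can be illustrated with a short sketch. This is a generic, hypothetical example (a random test image and a random Cartesian mask, not any specific published sampling scheme): discarding k-space lines shortens the scan but introduces aliasing, and it is exactly this residual that a reconstruction network learns to remove.

```python
import numpy as np

# Hypothetical illustration of retrospective Cartesian undersampling of k-space.
rng = np.random.default_rng(0)
image = rng.standard_normal((64, 64))          # stand-in for a fully sampled image

kspace = np.fft.fftshift(np.fft.fft2(image))   # fully sampled k-space

# Keep the 8 central phase-encode lines plus a random 25% of the rest
# (a crude variable-density mask, chosen only for illustration).
mask = np.zeros(64, dtype=bool)
mask[28:36] = True
mask |= rng.random(64) < 0.25
undersampled = kspace * mask[:, None]          # zero out unmeasured lines

# Zero-filled inverse FFT: the naive reconstruction with aliasing artifacts.
zero_filled = np.fft.ifft2(np.fft.ifftshift(undersampled)).real

# The deviation from the ground truth is what a deep network is trained to map away.
error = np.linalg.norm(zero_filled - image) / np.linalg.norm(image)
print(f"sampled fraction: {mask.mean():.2f}, zero-filled NRMSE: {error:.2f}")
```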
On the other hand, CT imaging exposes patients to ionizing radiation. Low-dose CT (LDCT) is achieved by reducing the radiation dose. However, a reduced radiation dose degrades image quality, which can bias the diagnosis. Therefore, improved reconstruction algorithms are required for LDCT images. Traditional methods for reconstructing CT images include total variation[9], model-based iterative reconstruction[10], and dictionary learning[11].
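The dose-noise trade-off that motivates LDCT reconstruction can be sketched numerically: X-ray detection is a photon-counting (Poisson) process, so fewer incident photons mean proportionally larger relative noise in the log-transformed sinogram. The photon counts and attenuation values below are illustrative assumptions, not data from any cited study.

```python
import numpy as np

rng = np.random.default_rng(0)
mu_path = np.full(1000, 2.0)                  # line integrals of attenuation (arbitrary units)

def noisy_sinogram(incident_photons):
    expected = incident_photons * np.exp(-mu_path)   # Beer-Lambert law
    counts = rng.poisson(expected)                   # photon-counting noise
    counts = np.maximum(counts, 1)                   # avoid log(0)
    return -np.log(counts / incident_photons)        # measured line integrals

std_full = noisy_sinogram(1e5).std()          # full-dose acquisition
std_low = noisy_sinogram(1e3).std()           # ~100x dose reduction
print(f"noise std, full dose: {std_full:.4f}; low dose: {std_low:.4f}")
```

Lowering the dose raises sinogram noise, which propagates into the reconstructed image and is what the LDCT networks discussed below aim to suppress.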
In this review, we assessed the current status of deep learning in abdominal image reconstruction. Specifically, we reviewed the latest research on deep learning methods in abdominal image reconstruction and discussed the open problems and remaining challenges in this field.
A deep learning model is built by composing simple non-linear layers. Each module transforms the initial low-level features into a higher-level representation. The core of deep learning is feature representation: information at various levels is obtained through the layering of the network. Compared to traditional machine learning algorithms, deep learning improves the accuracy of learning from large amounts of data. Another advantage of deep learning is that it does not require feature engineering. Classic machine learning algorithms typically require complex feature engineering, whereas deep learning algorithms only need data fed into the network to learn the representation. Finally, deep learning networks are highly adaptable and easily transferred to different applications; transfer learning makes pre-trained deep networks suitable for similar tasks.
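As a minimal illustration of this layered composition (a toy two-layer network with random weights, not any specific published model), stacking affine maps with non-linear activations is what lets each module turn low-level features into a higher-level representation:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    return np.maximum(0.0, x @ w + b)          # affine map + ReLU non-linearity

x = rng.standard_normal((5, 8))                # 5 samples, 8 input features
w1, b1 = rng.standard_normal((8, 16)), np.zeros(16)
w2, b2 = rng.standard_normal((16, 4)), np.zeros(4)

hidden = layer(x, w1, b1)                      # low-level -> intermediate features
output = layer(hidden, w2, b2)                 # intermediate -> high-level representation
print(output.shape)                            # (5, 4)
```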
At present, several studies have applied deep learning to different aspects of medical imaging, such as image detection[16,17], image segmentation[18,19], image denoising[20,21], super-resolution[22,23], and image reconstruction[3,24,25]. As described above, traditional model-based reconstruction algorithms require manual adjustment of the reconstruction parameters, which results in low reconstruction speed and unstable performance, and image quality worsens as the acceleration factor increases. Reconstruction methods based on deep learning avoid the difficulty of manual parameter adjustment and still perform well at high acceleration. Once the network model is trained, an image can be reconstructed within seconds.
CNNs perform excellently in image reconstruction[4], and in recent years a large number of CNN-based abdominal image reconstruction methods have been proposed[26-36]. A major problem in abdominal imaging is patient motion, which blurs the image and produces severe artifacts. Breath-holding during scanning can minimize these artifacts, but residual artifacts persist[37]. Self-gating techniques[38,39] can overcome this problem, but reconstruction at a low sampling rate causes additional streaking artifacts. To address free-breathing abdominal imaging at a high under-sampling rate, Lv et al[26] proposed a reconstruction algorithm based on a stacked convolutional autoencoder (SCAE); experimental results showed that the SCAE method eliminates the streak artifacts caused by insufficient sampling. To achieve high-resolution image reconstruction from radially under-sampled k-space data, Han et al[27] proposed a deep learning method with domain adaptation: the network was pre-trained with CT images and then fine-tuned for radially sampled MRI. This method can be applied with limited real training data and supports multichannel reconstruction, in line with the clinical situation in which multiple coils acquire the signal. Zhou et al[28] proposed a network combining parallel imaging (PI) and a CNN for reconstruction; real-time abdominal imaging data were used to train and test the network, and the expected results were obtained.
In addition, CNN can also be applied to improve the quality of dynamic contrast-enhanced MRI. Tamada et al[29] proposed a multichannel CNN to reduce the artifacts and blur caused by the patient's motion. The detailed information on the MRI reconstruction methods mentioned above is described in Table 1.
Table 1 Convolutional neural network-based reconstruction methods for abdominal MRI and CT

| Ref. | Task | Method | Images | Metric |
| --- | --- | --- | --- | --- |
| Kang et al[30], 2017 | Low-dose CT reconstruction | CNN | Abdominal CT images | PSNR: 34.55 |
| Chen et al[31], 2017 | Low-dose CT reconstruction | RED-CNN | Low-dose abdominal CT images | PSNR: 43.79 ± 2.01; SSIM: 0.98 ± 0.01; RMSE: 0.69 ± 0.07 |
| Han et al[27], 2018 | Accelerated projection-reconstruction MRI | U-net + CNN | Low-dose abdominal CT images; synthetic radial abdominal MR images | PSNR: 31.55 |
| Lv et al[26], 2018 | Undersampled radial free-breathing 3D abdominal MRI | Auto-encoder + CNN | 3D golden-angle radial SOS liver MR images | P < 0.001 |
| Ge et al[32], 2020 | CT image reconstruction directly from a sinogram | Residual encoder-decoder + CNN | Low-dose abdominal CT images | PSNR: 43.15 ± 1.93; SSIM: 0.97 ± 0.01; NRMSE: 0.71 ± 0.16 |
| MacDougall et al[33], 2019 | Improving low-dose pediatric abdominal CT | CNN | Liver CT images; spleen CT images | P < 0.001 |
| Tamada et al[29], 2020 | DCE MR imaging of the liver | CNN | T1-weighted liver MR images | SSIM: 0.91 |
| Zhou et al[28], 2019 | Applications in low-latency accelerated real-time imaging | PI + CNN | bSSFP cardiac MR images; bSSFP abdominal MR images | Abdominal: NRMSE: 0.08 ± 0.02; SSIM: 0.90 ± 0.02 |
| Zhang et al[34], 2020 | Reconstructing 3D liver vessel morphology from contrasted CT images | GNN + CNN | Multi-phase contrasted liver CT images | F1 score: 0.8762 ± 0.0549 |
| Zhou et al[35], 2020 | Limited-view tomographic reconstruction | Residual dense spatial-channel attention + CNN | Whole-body CT images | LAR: PSNR: 35.82, SSIM: 0.97; SVR: PSNR: 41.98, SSIM: 0.97 |
| Kazuo et al[36], 2021 | Image reconstruction in low-dose and sparse-view CT | CS + CNN | Low-dose abdominal CT images; sparse-view abdominal CT images | Low-dose CT: PSNR: 33.2, SSIM: 0.91; sparse-view CT: PSNR: 29.2, SSIM: 0.91 |
In addition to the above applications in abdominal MRI, CNN-based reconstruction methods show satisfactory results on CT images. Kang et al[30] used a deep CNN with residual learning for LDCT imaging; the experimental results showed that this method reduces the noise level in the reconstructed image. Chen et al[31] proposed a residual encoder-decoder CNN (RED-CNN) for LDCT imaging, which combines an autoencoder, deconvolution layers, and shortcut connections in a residual encoder-decoder structure. This method had clear advantages over conventional methods in noise suppression, structure preservation, and lesion detection. Ge et al[32] proposed ADAPTIVE-NET, which reconstructs CT images directly from sinograms. CNNs can also be applied to pediatric LDCT images[33]. Zhang et al[34] proposed a graph attention neural network combined with a CNN to reconstruct liver vessels.
Limited-view tomographic reconstruction aims to reconstruct images from a limited number of sinogram views, which can lead to high noise and artifacts. Zhou et al[35] proposed a novel residual dense reconstruction network with spatial and channel attention to address this problem. The network interleaves sinogram-consistency layers to ensure that the output of each intermediate recurrent block remains consistent with the sampled sinogram input. This method was validated on the AAPM LDCT dataset[40] and achieved the desired performance in both limited-angle and sparse-view reconstruction. To further improve the quality of sparse-view and low-dose CT reconstruction, Kazuo et al[36] proposed a framework combining CS and a CNN: a degraded filtered back-projection image, together with multiple CS-reconstructed images obtained using various regularization terms, is fed into a CNN. The detailed information on the abdominal CT reconstruction methods mentioned above is listed in Table 1.
A GAN is optimized through a game between a generator G and a discriminator D, and this framework is also suitable for abdominal image reconstruction. Mardani et al[41] used a GAN for abdominal MRI reconstruction. This method addresses the shortcomings of traditional CS-MRI[42,43], namely its slow iterative process and the artifacts caused by noise. It used a least-squares GAN[44] loss combined with a pixel-wise L1 term as the cost function during training. The results showed that the reconstructed abdominal MR images were superior to those obtained with the traditional CS method in both image quality and reconstruction speed. Lv et al[45] compared the performance of four GAN-based reconstruction methods: DAGAN[46], ReconGAN[25], RefineGAN[25], and KIGAN[47]; among these, RefineGAN performed slightly better than DAGAN and KIGAN. In addition, Lv et al[48] combined PI and GAN for end-to-end reconstruction, adding data-fidelity and regularization terms to the generator to exploit the information from multiple coils.
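A schematic of the kind of generator objective described above, combining a least-squares adversarial term with a pixel-wise L1 term. The arrays, the discriminator scores, and the weighting `lam` are illustrative stand-ins, not the published implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
recon = rng.standard_normal((32, 32))     # generator output for an under-sampled input
target = rng.standard_normal((32, 32))    # fully sampled reference image
d_of_recon = rng.random((4, 4))           # discriminator scores on the reconstruction

def generator_loss(recon, target, d_scores, lam=0.9):
    adv = np.mean((d_scores - 1.0) ** 2)           # LSGAN term: push D(G(x)) toward 1
    l1 = np.mean(np.abs(recon - target))           # pixel-wise fidelity term
    return (1.0 - lam) * adv + lam * l1

loss = generator_loss(recon, target, d_of_recon)
print(f"generator loss: {loss:.3f}")
```

The L1 term keeps the reconstruction anchored to the reference anatomy, while the least-squares adversarial term penalizes images the discriminator can tell apart from real ones.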
Most supervised learning methods require a large amount of fully sampled data for training. However, it is difficult or even impossible to obtain fully sampled data in some settings, and hence unsupervised learning is necessary under these circumstances. Cole et al[49] proposed an unsupervised reconstruction method based on GAN. The detailed information on the reconstruction methods is described in Table 2.
Table 2 Generative adversarial network- and recurrent neural network-based abdominal image reconstruction methods

| Ref. | Task | Method | Images | Metric |
| --- | --- | --- | --- | --- |
| Mardani et al[41], 2017 | Compressed sensing automates MRI reconstruction | GANCS | Abdominal MR images | SNR: 20.48; SSIM: 0.87 |
| Yang et al[50], 2018 | Low-dose CT image denoising | WGAN | Abdominal CT images | PSNR: 23.39; SSIM: 0.79 |
| Kuanar et al[52], 2019 | Low-dose abdominal CT image reconstruction | Auto-encoder + WGAN | Abdominal CT images | PSNR: 37.76; SSIM: 0.94; RMSE: 0.92 |
| Lv et al[45], 2021 | A comparative study of GAN-based fast MRI reconstruction | DAGAN; KIGAN; ReconGAN; RefineGAN | T2-weighted liver images; 3D FSE CUBE knee images; T1-weighted brain images | Liver: PSNR: 36.25 ± 3.39; SSIM: 0.95 ± 0.02; RMSE: 2.12 ± 1.54; VIF: 0.93 ± 0.05; FID: 31.94 |
| Zhang et al[53], 2020 | 3D reconstruction for super-resolution CT images | Conditional GAN | 3D-IRCADb-01 database liver CT images | Male: PSNR: 34.51, SSIM: 0.90; female: PSNR: 34.75, SSIM: 0.90 |
| Cole et al[49], 2020 | Unsupervised MRI reconstruction | Unsupervised GAN | 3D FSE CUBE knee images; DCE abdominal MR images | PSNR: 31.55; NRMSE: 0.23; SSIM: 0.83 |
| Lv et al[48], 2021 | Accelerated multichannel MRI reconstruction | PI + GAN | 3D FSE CUBE knee MR images; abdominal MR images | Abdominal: PSNR: 31.76 ± 3.04; SSIM: 0.86 ± 0.02; NMSE: 1.22 ± 0.97 |
| Zhang et al[54], 2019 | 4D abdominal and in utero MR imaging | Self-supervised RNN | bSSFP uterus MR images; bSSFP kidney MR images | PSNR: 36.08 ± 1.13; SSIM: 0.96 ± 0.01 |
GANs can also improve the quality of abdominal LDCT images. Yang et al[50] used a GAN combined with the Wasserstein distance and a perceptual loss for LDCT abdominal image denoising. Based on the Wasserstein GAN (WGAN)[51], Kuanar et al[52] proposed an end-to-end RegNet-based autoencoder network model in which a GAN was used in the autoencoder; the loss function of this network was composed of the RegNet perceptual loss[52] and the WGAN adversarial loss[51]. The experimental results showed that this method improves the quality of the reconstructed image while reducing noise.
Zhang et al[53] proposed a conditional GAN (CGAN) to reconstruct super-resolution CT images. An edge-detection loss function was introduced in the CGAN to minimize the loss of image edges. In addition, this study used appropriate bounding boxes to reduce the number of rays required for 3D reconstruction. These reconstruction methods are described in Table 2.
RNNs are suitable for processing data with sequential information: in dynamic abdominal imaging, the currently acquired frame is similar to the preceding and following frames. Unlike in other networks, the nodes between the hidden layers of an RNN are connected across time steps. Zhang et al[54] proposed a self-supervised RNN for 4D abdominal and in utero MRI: the RNN estimates the breathing motion, and a 3D deconvolution network then performs super-resolution reconstruction. Compared to slice-to-volume registration, this method accurately predicted respiratory motion and reconstructed high-quality images. The detailed information on this reconstruction method is shown in Table 2.
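The recurrence that connects hidden states across frames can be sketched as follows. This is a generic RNN cell with random weights, not the specific architecture of the cited work; the hidden state carries information forward, which is why adjacent dynamic-imaging frames can inform the reconstruction of the current one:

```python
import numpy as np

rng = np.random.default_rng(0)
frames = rng.standard_normal((10, 6))      # 10 time frames, 6 features each
w_in = rng.standard_normal((6, 8)) * 0.1   # input-to-hidden weights
w_h = rng.standard_normal((8, 8)) * 0.1    # hidden-to-hidden (recurrent) weights
h = np.zeros(8)

states = []
for x_t in frames:                         # h_t = tanh(W_in x_t + W_h h_{t-1})
    h = np.tanh(x_t @ w_in + h @ w_h)
    states.append(h)

print(np.stack(states).shape)              # (10, 8)
```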
Deep learning can also be applied to abdominal motion correction. Lv et al[55] proposed a CNN-based image registration algorithm to align images acquired across the respiratory cycle. In addition, methods based on U-net and GAN can be applied to abdominal motion correction: Jiang et al[56] proposed a densely connected U-net with a GAN for respiratory motion correction in abdominal MRI, and Küstner et al[57] combined non-rigid registration with a 4D reconstruction network for motion correction. The detailed information on these methods is summarized in Table 3.
Table 3 Deep learning-based abdominal motion correction and commercial deep learning reconstruction methods

| Ref. | Task | Method | Images | Metric |
| --- | --- | --- | --- | --- |
| Lv et al[55], 2018 | Respiratory motion correction for free-breathing 3D abdominal MRI | CNN | 3D golden-angle radial SOS abdominal images | SNR: 207.42 ± 96.73 |
| Jiang et al[56], 2019 | Respiratory motion correction in abdominal MRI | U-net + GAN | T1-weighted abdominal images | FSE: 0.920; GRE: 0.910; simulated motion: 0.928 |
| Küstner et al[57], 2020 | Motion-corrected image reconstruction in 4D MRI | U-net + CNN | T1-weighted in vivo 4D MR images | EPE: 0.17 ± 0.26; EAE: 7.9 ± 9.9; SSIM: 0.94 ± 0.04; NRMSE: 0.5 ± 0.1 |
| Akagi et al[58], 2019 | Improving image quality of abdominal U-HRCT using a DLR method | DLR | U-HRCT abdominal CT images | P < 0.01 |
| Nakamura et al[59], 2019 | Evaluating the effect of a DLR method | DLR | Abdominal CT images | P < 0.001 |
The Advanced Intelligent Clear-IQ Engine developed by Canon Medical Systems is a commercial deep learning tool for image reconstruction, and several studies have confirmed its feasibility and effectiveness for abdominal image reconstruction. Akagi et al[58] used DLR for abdominal ultra-high-resolution computed tomography (U-HRCT) image reconstruction; their study demonstrated that DLR has clinical applicability in U-HRCT. Compared to hybrid IR and MBIR[10], DLR reduces the noise of abdominal U-HRCT and improves image quality. In addition, the DLR method is applicable to widely used CT systems: Nakamura et al[59] evaluated its effectiveness for hypovascular hepatic metastases on abdominal CT images. The detailed information on these methods is summarized in Table 3.
In summary, deep learning provides a powerful tool for abdominal image reconstruction, but several challenges remain. First, collecting a large amount of data for training neural networks is challenging: supervised learning requires a large amount of fully sampled data, which is time-consuming to acquire in clinical practice, and in some applications fully sampled data are difficult or even impossible to obtain[49]. Therefore, semi-supervised learning is necessary, and some researchers have proposed self-supervised learning methods[54,60,61]. Self-supervised learning does not require training labels and is suitable for reconstruction problems in which fully sampled data cannot be obtained easily; it therefore has great development potential and is a major future research direction. Second, deep learning models are difficult to interpret, even when they achieve satisfactory reconstructions.
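One common self-supervised strategy for this setting is to split the acquired k-space locations into an input subset shown to the network and a held-out subset used only in the loss. The sketch below illustrates the splitting step only; the sampling density and split ratio are illustrative assumptions, not the exact pipeline of the cited methods:

```python
import numpy as np

rng = np.random.default_rng(0)
acquired = np.flatnonzero(rng.random(256) < 0.4)   # k-space indices actually measured

split = rng.random(acquired.size) < 0.6
input_idx = acquired[split]        # fed to the network as its (undersampled) input
loss_idx = acquired[~split]        # held out: compared against the network's k-space output

# The two subsets are disjoint, so the loss is computed on data the
# network never saw, without requiring any fully sampled reference.
assert np.intersect1d(input_idx, loss_idx).size == 0
print(len(input_idx), len(loss_idx), len(acquired))
```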
The current workflow of abdominal imaging proceeds from data acquisition to image reconstruction and then to diagnosis, making it possible to perform multiple tasks at the same time. For example, SegNetMRI[62] performs image segmentation and image reconstruction simultaneously, and Joint-FR-Net[63] can use k-space data directly for image segmentation. Thus, future studies could use k-space data directly for lesion detection, classification, and other clinical applications.
We summarized the current deep learning-based abdominal image reconstruction methods in this review. The DLR methods can solve the issues of slow imaging speed in MRI and high-dose radiation in CT while maintaining high image quality. Deep learning has a wide range of clinical applications in current abdominal imaging. More advanced techniques are expected to be utilized in future studies.
Manuscript source: Invited manuscript
Specialty type: Engineering, biomedical
Country/Territory of origin: China
Peer-review report’s scientific quality classification
Grade A (Excellent): 0
Grade B (Very good): B
Grade C (Good): C
Grade D (Fair): 0
Grade E (Poor): 0
P-Reviewer: Maheshwarappa RP, Vernuccio F S-Editor: Liu M L-Editor: Webster JR P-Editor: Guo X
1. Lustig M, Donoho DL, Santos JM, Pauly JM. Compressed sensing MRI. IEEE Signal Process Mag. 2008;25:72-82.
2. Ravishankar S, Bresler Y. MR image reconstruction from highly undersampled k-space data by dictionary learning. IEEE Trans Med Imaging. 2011;30:1028-1041.
3. Wang S, Su Z, Ying L, Peng X, Zhu S, Liang F, Feng D, Liang D. Accelerating magnetic resonance imaging via deep learning. Proc IEEE Int Symp Biomed Imaging. 2016;2016:514-517.
4. Wang SS, Xiao TH, Liu QG, Zheng HR. Deep learning for fast MR imaging: a review for learning reconstruction from incomplete k-space data. Biomed Signal Process Control. 2021;68:102579.
5. Zbontar J, Knoll F, Sriram A, Murrel T, Huang ZN, Muckley MJ, Defazio A, Stern R, Johnson P, Bruno M, Parente M, Geras KJ, Katsnelson J, Chandarana H, Zhang ZZ, Drozdzal M, Romero A, Rabbat M, Vincent P, Yakubova N, Pinkerton J, Wang D, Owens E, Zitnick CL, Recht MP, Sodickson DK, Lui YW. fastMRI: an open dataset and benchmarks for accelerated MRI. 2018 Preprint. Available from: arXiv: 1811.08839.
6. Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville A, Bengio Y. Generative adversarial nets. Adv Neural Inf Process Syst. 2014;27:2672-2680.
7. Qin C, Schlemper J, Caballero J, Price AN, Hajnal JV, Rueckert D. Convolutional recurrent neural networks for dynamic MR image reconstruction. IEEE Trans Med Imaging. 2019;38:280-290.
8. Schlemper J, Caballero J, Hajnal JV, Price AN, Rueckert D. A deep cascade of convolutional neural networks for dynamic MR image reconstruction. IEEE Trans Med Imaging. 2018;37:491-503.
9. Sidky EY, Pan X. Image reconstruction in circular cone-beam computed tomography by constrained, total-variation minimization. Phys Med Biol. 2008;53:4777-4807.
10. | Li K, Tang J, Chen GH. Statistical model based iterative reconstruction (MBIR) in clinical CT systems: experimental assessment of noise performance. Med Phys. 2014;41:041906. [PubMed] [DOI] [Cited in This Article: ] [Cited by in Crossref: 84] [Cited by in F6Publishing: 90] [Article Influence: 9.0] [Reference Citation Analysis (0)] |
11. | Xu Q, Yu H, Mou X, Zhang L, Hsieh J, Wang G. Low-dose X-ray CT reconstruction via dictionary learning. IEEE Trans Med Imaging. 2012;31:1682-1697. [PubMed] [DOI] [Cited in This Article: ] [Cited by in Crossref: 420] [Cited by in F6Publishing: 267] [Article Influence: 22.3] [Reference Citation Analysis (0)] |
12. | Zhu JY, Krähenbühl P, Shechtman E, Efros AA. Generative visual manipulation on the natural image manifold. In: European conference on computer vision. Springer, 2016: 597-613. [DOI] [Cited in This Article: ] [Cited by in Crossref: 356] [Cited by in F6Publishing: 343] [Article Influence: 42.9] [Reference Citation Analysis (0)] |
13. | Johnson J, Alahi A, Li FF. Perceptual losses for real-time style transfer and super-resolution. In: European conference on computer vision. Springer, 2016: 694-711. [DOI] [Cited in This Article: ] [Cited by in Crossref: 2332] [Cited by in F6Publishing: 2271] [Article Influence: 283.9] [Reference Citation Analysis (0)] |
14. | Gatys LA, Ecker AS, Bethge M. A neural algorithm of artistic style. 2015 Preprint. Available from: arXiv: 1508.06576. [Cited in This Article: ] |
15. | Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. 2014 Preprint. Available from: arXiv: 1409.1556. [Cited in This Article: ] |
16. | Xu J, Xiang L, Liu Q, Gilmore H, Wu J, Tang J, Madabhushi A. Stacked Sparse Autoencoder (SSAE) for Nuclei Detection on Breast Cancer Histopathology Images. IEEE Trans Med Imaging. 2016;35:119-130. [PubMed] [DOI] [Cited in This Article: ] [Cited by in Crossref: 566] [Cited by in F6Publishing: 328] [Article Influence: 41.0] [Reference Citation Analysis (0)] |
17. | Qi Dou, Hao Chen, Lequan Yu, Lei Zhao, Jing Qin, Defeng Wang, Mok VC, Lin Shi, Pheng-Ann Heng. Automatic Detection of Cerebral Microbleeds From MR Images via 3D Convolutional Neural Networks. IEEE Trans Med Imaging. 2016;35:1182-1195. [PubMed] [DOI] [Cited in This Article: ] [Cited by in Crossref: 436] [Cited by in F6Publishing: 292] [Article Influence: 36.5] [Reference Citation Analysis (0)] |
18. | Guo Y, Gao Y, Shen D. Deformable MR Prostate Segmentation via Deep Feature Learning and Sparse Patch Matching. IEEE Trans Med Imaging. 2016;35:1077-1089. [PubMed] [DOI] [Cited in This Article: ] [Cited by in Crossref: 171] [Cited by in F6Publishing: 123] [Article Influence: 15.4] [Reference Citation Analysis (0)] |
19. | Zhang W, Li R, Deng H, Wang L, Lin W, Ji S, Shen D. Deep convolutional neural networks for multi-modality isointense infant brain image segmentation. Neuroimage. 2015;108:214-224. [PubMed] [DOI] [Cited in This Article: ] [Cited by in Crossref: 546] [Cited by in F6Publishing: 371] [Article Influence: 41.2] [Reference Citation Analysis (0)] |
20. | Burger HC, Schuler CJ, Harmeling S. Image denoising: Can plain neural networks compete with BM3D? IEEE conference on computer vision and pattern recognition. 2012 Jun 16-21; Providence, RI, United States: IEEE, 2012: 2392-2399. [DOI] [Cited in This Article: ] |
21. | Solomon J, Samei E. A generic framework to simulate realistic lung, liver and renal pathologies in CT imaging. Phys Med Biol. 2014;59:6637-6657. [PubMed] [DOI] [Cited in This Article: ] [Cited by in Crossref: 46] [Cited by in F6Publishing: 53] [Article Influence: 5.3] [Reference Citation Analysis (0)] |
22. | Bahrami K, Shi F, Rekik I, Shen DG. Convolutional neural network for reconstruction of 7T-like images from 3T MRI using appearance and anatomical features. In: Deep Learning and Data Labeling for Medical Applications. Springer, 2016: 39-47. [DOI] [Cited in This Article: ] [Cited by in Crossref: 62] [Cited by in F6Publishing: 61] [Article Influence: 7.6] [Reference Citation Analysis (0)] |
23. | Dong C, Loy CC, He KM, Tang X. Learning a deep convolutional network for image super-resolution. In: European conference on computer vision. Springer, 2014: 184-199. [DOI] [Cited in This Article: ] [Cited by in Crossref: 1388] [Cited by in F6Publishing: 1377] [Article Influence: 137.7] [Reference Citation Analysis (0)] |
24. | Zhu JY, Park T, Isola P, Efros AA. Unpaired image-to-image translation using cycle-consistent adversarial networks. Proceedings of the IEEE international conference on computer vision; 2017 Oct 22-29; Venice, Italy. IEEE, 2017: 2223-2232. [DOI] [Cited in This Article: ] |
25. | Quan TM, Nguyen-Duc T, Jeong WK. Compressed Sensing MRI Reconstruction Using a Generative Adversarial Network With a Cyclic Loss. IEEE Trans Med Imaging. 2018;37:1488-1497. [PubMed] [DOI] [Cited in This Article: ] [Cited by in Crossref: 339] [Cited by in F6Publishing: 249] [Article Influence: 41.5] [Reference Citation Analysis (0)] |
26. | Lv J, Chen K, Yang M, Zhang J, Wang X. Reconstruction of undersampled radial free-breathing 3D abdominal MRI using stacked convolutional auto-encoders. Med Phys. 2018;45:2023-2032. [PubMed] [DOI] [Cited in This Article: ] [Cited by in Crossref: 7] [Cited by in F6Publishing: 7] [Article Influence: 1.2] [Reference Citation Analysis (0)] |
27. | Han Y, Yoo J, Kim HH, Shin HJ, Sung K, Ye JC. Deep learning with domain adaptation for accelerated projection-reconstruction MR. Magn Reson Med. 2018;80:1189-1205. [PubMed] [DOI] [Cited in This Article: ] [Cited by in Crossref: 163] [Cited by in F6Publishing: 154] [Article Influence: 25.7] [Reference Citation Analysis (0)] |
28. | Zhou Z, Han F, Ghodrati V, Gao Y, Yin W, Yang Y, Hu P. Parallel imaging and convolutional neural network combined fast MR image reconstruction: Applications in low-latency accelerated real-time imaging. Med Phys. 2019;46:3399-3413. [PubMed] [DOI] [Cited in This Article: ] [Cited by in Crossref: 17] [Cited by in F6Publishing: 23] [Article Influence: 4.6] [Reference Citation Analysis (0)] |
29. | Tamada D, Kromrey ML, Ichikawa S, Onishi H, Motosugi U. Motion Artifact Reduction Using a Convolutional Neural Network for Dynamic Contrast Enhanced MR Imaging of the Liver. Magn Reson Med Sci. 2020;19:64-76. [PubMed] [DOI] [Cited in This Article: ] [Cited by in Crossref: 50] [Cited by in F6Publishing: 55] [Article Influence: 11.0] [Reference Citation Analysis (0)] |
30. Kang E, Min J, Ye JC. A deep convolutional neural network using directional wavelets for low-dose X-ray CT reconstruction. Med Phys. 2017;44:e360-e375.
31. Chen H, Zhang Y, Kalra MK, Lin F, Chen Y, Liao P, Zhou J, Wang G. Low-Dose CT With a Residual Encoder-Decoder Convolutional Neural Network. IEEE Trans Med Imaging. 2017;36:2524-2535.
32. Ge Y, Su T, Zhu J, Deng X, Zhang Q, Chen J, Hu Z, Zheng H, Liang D. ADAPTIVE-NET: deep computed tomography reconstruction network with analytical domain transformation knowledge. Quant Imaging Med Surg. 2020;10:415-427.
33. MacDougall RD, Zhang Y, Callahan MJ, Perez-Rossello J, Breen MA, Johnston PR, Yu H. Improving Low-Dose Pediatric Abdominal CT by Using Convolutional Neural Networks. Radiol Artif Intell. 2019;1:e180087.
34. Zhang DH, Liu SQ, Chaganti S, Gibson E, Xu ZB, Grbic S, Cai WD, Comaniciu D. Graph Attention Network based Pruning for Reconstructing 3D Liver Vessel Morphology from Contrasted CT Images. 2020 Preprint. Available from: arXiv:2003.07999.
35. Zhou B, Zhou SK, Duncan JS, Liu C. Limited View Tomographic Reconstruction Using a Deep Recurrent Framework with Residual Dense Spatial-Channel Attention Network and Sinogram Consistency. 2020 Preprint. Available from: arXiv:2009.01782.
36. Kazuo S, Kawamata K, Kudo H. Combining compressed sensing and deep learning using multi-channel CNN for image reconstruction in low-dose and sparse-view CT. International Forum on Medical Imaging in Asia 2021; 2021 Apr 20; Taipei, Taiwan. Proc SPIE, 2021: 117920M.
37. Bernstein MA, King KF, Zhou XJ. Handbook of MRI Pulse Sequences. Burlington, MA: Elsevier, 2004.
38. Cruz G, Atkinson D, Buerger C, Schaeffter T, Prieto C. Accelerated motion corrected three-dimensional abdominal MRI using total variation regularized SENSE reconstruction. Magn Reson Med. 2016;75:1484-1498.
39. Stehning C, Börnert P, Nehrke K, Eggers H, Stuber M. Free-breathing whole-heart coronary MRA with 3D radial SSFP and self-navigated image reconstruction. Magn Reson Med. 2005;54:476-480.
40. McCollough C. TU-FG-207A-04: Overview of the Low Dose CT Grand Challenge. Fifty-Eighth Annual Meeting of the American Association of Physicists in Medicine; 2016. Med Phys, 2016: 3759-3760.
41. Mardani M, Gong E, Cheng JY, Vasanawala S, Zaharchuk G, Alley M, Thakur N, Han S, Dally W, Pauly JM, Xing L. Deep generative adversarial networks for compressed sensing automates MRI. 2017 Preprint. Available from: arXiv:1706.00051.
42. Donoho DL. Compressed sensing. IEEE Trans Inf Theory. 2006;52:1289-1306.
43. Jaspan ON, Fleysher R, Lipton ML. Compressed sensing MRI: a review of the clinical literature. Br J Radiol. 2015;88:20150487.
44. Mao X, Li Q, Xie HR, Lau RY, Wang Z, Smolley SP. Least squares generative adversarial networks. Proceedings of the IEEE International Conference on Computer Vision; 2017 Oct 22-29; Venice, Italy. IEEE, 2017: 2794-2802.
45. Lv J, Zhu J, Yang G. Which GAN? A comparative study of generative adversarial network-based fast MRI reconstruction. Philos Trans A Math Phys Eng Sci. 2021;379:20200203.
46. Yang G, Yu S, Dong H, Slabaugh G, Dragotti PL, Ye X, Liu F, Arridge S, Keegan J, Guo Y, Firmin D, Yang G. DAGAN: Deep De-Aliasing Generative Adversarial Networks for Fast Compressed Sensing MRI Reconstruction. IEEE Trans Med Imaging. 2018;37:1310-1321.
47. Shaul R, David I, Shitrit O, Riklin Raviv T. Subsampled brain MRI reconstruction by generative adversarial neural networks. Med Image Anal. 2020;65:101747.
48. Lv J, Wang C, Yang G. PIC-GAN: A Parallel Imaging Coupled Generative Adversarial Network for Accelerated Multi-Channel MRI Reconstruction. Diagnostics (Basel). 2021;11:61.
49. Cole EK, Pauly JM, Vasanawala SS, Ong F. Unsupervised MRI Reconstruction with Generative Adversarial Networks. 2020 Preprint. Available from: arXiv:2008.13065.
50. Yang Q, Yan P, Zhang Y, Yu H, Shi Y, Mou X, Kalra MK, Sun L, Wang G. Low-Dose CT Image Denoising Using a Generative Adversarial Network With Wasserstein Distance and Perceptual Loss. IEEE Trans Med Imaging. 2018;37:1348-1357.
51. Arjovsky M, Chintala S, Bottou L. Wasserstein GAN. 2017 Preprint. Available from: arXiv:1701.07875.
52. Kuanar S, Athitsos V, Mahapatra D, Rao KR, Akhtar Z, Dasgupta D. Low dose abdominal CT image reconstruction: An unsupervised learning based approach. IEEE International Conference on Image Processing (ICIP); 2019 Sep 22-25; Taipei, Taiwan. IEEE, 2019: 1351-1355.
53. Zhang J, Gong LR, Yu K, Qi X, Wen Z, Hua QZ, Myint SH. 3D reconstruction for super-resolution CT images in the Internet of health things using deep learning. IEEE Access. 2020;8:121513-121525.
54. Zhang T, Jackson LH, Uus A, Clough JR, Story L, Rutherford MA, Hajnal JV, Deprez M. Self-supervised Recurrent Neural Network for 4D Abdominal and In-utero MR Imaging. In: International Workshop on Machine Learning for Medical Image Reconstruction. Springer, 2019: 16-24.
55. Lv J, Yang M, Zhang J, Wang X. Respiratory motion correction for free-breathing 3D abdominal MRI using CNN-based image registration: a feasibility study. Br J Radiol. 2018;91:20170788.
56. Jiang WH, Liu ZY, Lee KH, Chen SH, Ng YL, Dou Q, Chang HC, Kwok KW. Respiratory motion correction in abdominal MRI using a densely connected U-Net with GAN-guided training. 2019 Preprint. Available from: arXiv:1906.09745.
57. Küstner T, Pan JZ, Gilliam C, Qi HK, Cruz G, Hammernik K, Yang B, Blu T, Rueckert D, Botnar R, Prieto C, Gatidis S. Deep-learning based motion-corrected image reconstruction in 4D magnetic resonance imaging of the body trunk. 2020 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC); 2020 Dec 7-10; Auckland, New Zealand. IEEE, 2020: 976-985.
58. Akagi M, Nakamura Y, Higaki T, Narita K, Honda Y, Zhou J, Yu Z, Akino N, Awai K. Deep learning reconstruction improves image quality of abdominal ultra-high-resolution CT. Eur Radiol. 2019;29:6163-6171.
59. Nakamura Y, Higaki T, Tatsugami F, Zhou J, Yu Z, Akino N, Ito Y, Iida M, Awai K. Deep Learning-based CT Image Reconstruction: Initial Evaluation Targeting Hypovascular Hepatic Metastases. Radiol Artif Intell. 2019;1:e180011.
60. Yaman B, Hosseini SH, Moeller S, Ellermann J, Uğurbil K, Akcakaya M. Self-supervised physics-based deep learning MRI reconstruction without fully-sampled data. 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI); 2020 Apr 3-7; Iowa City, IA, United States. IEEE, 2020: 921-925.
61. Chen T, Kornblith S, Norouzi M, Hinton G. A simple framework for contrastive learning of visual representations. Proceedings of the 37th International Conference on Machine Learning. 2020: 1597-1607.
62. Sun LY, Fan ZW, Ding XH, Huang Y, Paisley J. Joint CS-MRI reconstruction and segmentation with a unified deep network. In: Chung A, Gee J, Yushkevich P, Bao S. Information Processing in Medical Imaging. Springer, 2019: 492-504.
63. Huang QY, Yang D, Yi JR, Axel L, Metaxas D. FR-Net: Joint reconstruction and segmentation in compressed sensing cardiac MRI. In: Coudière Y, Ozenne V, Vigmond E, Zemzemi N. Functional Imaging and Modeling of the Heart. Springer, 2019: 352-360.