Page 98 of 131 · 1301 results

CT-SDM: A Sampling Diffusion Model for Sparse-View CT Reconstruction Across Various Sampling Rates.

Yang L, Huang J, Yang G, Zhang D

PubMed · Jun 1 2025
Sparse-view X-ray computed tomography has emerged as a contemporary technique to mitigate radiation dose. Because of the reduced number of projection views, traditional reconstruction methods can produce severe artifacts. Recently, research utilizing deep learning methods has made promising progress in removing artifacts for sparse-view computed tomography (SVCT). However, given the limited generalization capability of deep learning models, current methods are usually trained at fixed sampling rates, which restricts the usability and flexibility of model deployment in real clinical settings. To address this issue, our study proposes an adaptive reconstruction method to achieve high-performance SVCT reconstruction at various sampling rates. Specifically, we design a novel imaging degradation operator in the proposed sampling diffusion model for SVCT (CT-SDM) to simulate the projection process in the sinogram domain. The CT-SDM can thus gradually add projection views to highly undersampled measurements to generate full-view sinograms. By choosing an appropriate starting point in diffusion inference, the proposed model can recover full-view sinograms from various sampling rates with only one trained model. Experiments on several datasets have verified the effectiveness and robustness of our approach, demonstrating its superiority in reconstructing high-quality images from sparse-view CT scans across various sampling rates.
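The "appropriate starting point" idea can be sketched in a few lines: if the model degrades a full-view sinogram over T steps by dropping views, inference on a scan acquired at a given sampling rate can begin at the step whose retained-view fraction matches that rate. This is a minimal illustration of the concept only; the step count, the linear degradation schedule, and all names are assumptions, not the paper's formulation.

```python
import numpy as np

T = 1000                                   # total diffusion steps (assumed)
view_fraction = np.linspace(1.0, 0.05, T)  # fraction of views retained at each step

def starting_step(rate: float) -> int:
    """Pick the diffusion step whose degradation level matches the measured rate."""
    return int(np.argmin(np.abs(view_fraction - rate)))

# A 50%-view scan starts mid-way through reverse diffusion; a 5%-view scan
# starts at the deepest degradation level, so one trained model serves both.
assert starting_step(1.0) == 0
assert starting_step(0.05) == T - 1
```

One model then handles any sampling rate simply by entering the reverse chain at a different depth.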

A Foundation Model for Lesion Segmentation on Brain MRI With Mixture of Modality Experts.

Zhang X, Ou N, Doga Basaran B, Visentin M, Qiao M, Gu R, Matthews PM, Liu Y, Ye C, Bai W

PubMed · Jun 1 2025
Brain lesion segmentation is crucial for neurological disease research and diagnosis. As different types of lesions exhibit distinct characteristics on different imaging modalities, segmentation methods are typically developed in a task-specific manner, where each segmentation model is tailored to a specific lesion type and modality. However, the use of task-specific models requires predetermination of the lesion type and imaging modality, which complicates their deployment in real-world scenarios. In this work, we propose a universal foundation model for brain lesion segmentation on magnetic resonance imaging (MRI), which can automatically segment different types of brain lesions given input from various MRI modalities. We develop a novel Mixture of Modality Experts (MoME) framework with multiple expert networks attending to different imaging modalities. A hierarchical gating network is proposed to combine the expert predictions and foster expertise collaboration. Moreover, to avoid the degeneration of each expert network, we introduce a curriculum learning strategy during training to preserve the specialisation of each expert. In addition to MoME, to handle the combination of multiple input modalities, we propose MoME+, which uses a soft dispatch network for input modality routing. We evaluated the proposed method on nine brain lesion datasets, encompassing five imaging modalities and eight lesion types. The results show that our model outperforms state-of-the-art universal models for brain lesion segmentation and achieves promising generalisation performance on unseen datasets.
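The gated expert combination can be sketched as a per-pixel weighted blend: each expert emits a segmentation probability map, and a gating network produces normalized weights over experts. This toy sketch uses random stand-ins for both the expert outputs and the gate logits; MoME's hierarchical gate and the expert architectures themselves are not reproduced.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
n_experts, H, W = 3, 4, 4
expert_probs = rng.random((n_experts, H, W))      # per-expert lesion probabilities
gate_logits = rng.normal(size=(n_experts, H, W))  # gating-network outputs (stand-in)

gate = softmax(gate_logits, axis=0)               # weights sum to 1 over experts
fused = (gate * expert_probs).sum(axis=0)         # weighted expert combination
assert fused.shape == (H, W)
```

Because the gate weights are a convex combination, the fused map stays a valid probability wherever the experts emit valid probabilities.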

Score-Based Diffusion Models With Self-Supervised Learning for Accelerated 3D Multi-Contrast Cardiac MR Imaging.

Liu Y, Cui ZX, Qin S, Liu C, Zheng H, Wang H, Zhou Y, Liang D, Zhu Y

PubMed · Jun 1 2025
Long scan time significantly hinders the widespread application of three-dimensional multi-contrast cardiac magnetic resonance (3D-MC-CMR) imaging. This study aims to accelerate 3D-MC-CMR acquisition with a novel method based on score-based diffusion models with self-supervised learning. Specifically, we first establish a mapping between the undersampled k-space measurements and the MR images, utilizing a self-supervised Bayesian reconstruction network. Secondly, we develop a joint score-based diffusion model on 3D-MC-CMR images to capture their inherent distribution. The 3D-MC-CMR images are finally reconstructed using conditioned Langevin Markov chain Monte Carlo sampling. This approach enables accurate reconstruction without fully sampled training data. Its performance was tested on a dataset acquired by a 3D joint myocardial $T_1$ and $T_{1\rho}$ mapping sequence. The $T_1$ and $T_{1\rho}$ maps were estimated via a dictionary matching method from the reconstructed images. Experimental results show that the proposed method outperforms traditional compressed sensing and existing self-supervised deep learning MRI reconstruction methods. It also achieves high-quality $T_1$ and $T_{1\rho}$ parametric maps close to the reference maps, even at a high acceleration rate of 14.
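The sampler family the paper builds on can be illustrated with a toy 1-D annealed Langevin update. The quadratic score below stands in for the learned score network, and the sketch is unconditioned (the paper's sampler is conditioned on k-space measurements); step sizes and the annealing schedule are arbitrary assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def score(x, sigma):
    # Score of N(0, 1 + sigma^2): d/dx log p(x) = -x / (1 + sigma^2)
    return -x / (1.0 + sigma**2)

x = rng.normal(scale=5.0)           # start far from the target mode
for sigma in [3.0, 1.0, 0.3]:       # annealing schedule (assumed)
    eps = 0.1 * sigma**2            # step size shrinks with the noise level
    for _ in range(200):
        # Langevin dynamics: drift along the score plus injected Gaussian noise.
        x = x + eps * score(x, sigma) + np.sqrt(2 * eps) * rng.normal()
```

After annealing, the chain fluctuates near the target distribution's mode at zero; conditioning on measurements replaces the score with a measurement-consistent one.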

Ultra-Sparse-View Cone-Beam CT Reconstruction-Based Strictly Structure-Preserved Deep Neural Network in Image-Guided Radiation Therapy.

Song Y, Zhang W, Wu T, Luo Y, Shi J, Yang X, Deng Z, Qi X, Li G, Bai S, Zhao J, Zhong R

PubMed · Jun 1 2025
Radiation therapy is regarded as the mainstay treatment for cancer in the clinic. Kilovoltage cone-beam CT (CBCT) images are acquired for most treatment sites as clinical routine for image-guided radiation therapy (IGRT). However, repeated CBCT scanning delivers extra irradiation dose to patients and decreases clinical efficiency. Sparse CBCT scanning is a possible solution to these problems, but at the cost of inferior image quality. To decrease the extra dose while maintaining CBCT quality, deep learning (DL) methods are widely adopted. In this study, the planning CT was used as prior information, and the corresponding strictly structure-preserved CBCT was simulated based on the attenuation information from the planning CT. We developed a hyper-resolution ultra-sparse-view CBCT reconstruction model, the planning CT-based strictly-structure-preserved neural network (PSSP-NET), using a generative adversarial network (GAN). This model utilized clinical CBCT projections with extremely low sampling rates for the rapid reconstruction of high-quality CBCT images, and its clinical performance was evaluated in head-and-neck cancer patients. Our experiments demonstrated enhanced performance and improved reconstruction speed.

Adaptive Weighting Based Metal Artifact Reduction in CT Images.

Wang H, Wu Y, Wang Y, Wei D, Wu X, Ma J, Zheng Y

PubMed · Jun 1 2025
For the metal artifact reduction (MAR) task in computed tomography (CT) imaging, most existing deep-learning-based approaches select a single Hounsfield unit (HU) window followed by a normalization operation to preprocess CT images. However, in practical clinical scenarios, different body tissues and organs are often inspected under varying window settings for good contrast. Methods trained on a fixed single window remove metal artifacts insufficiently when transferred to other windows. To alleviate this problem, a few works have proposed to reconstruct CT images under multiple-window configurations. Although they achieve good reconstruction performance across windows, they directly supervise the learning of each window with equal weighting over the training set. To improve learning flexibility and model generalizability, in this paper we propose an adaptive weighting algorithm, called AdaW, for multiple-window metal artifact reduction, which can be applied to different deep MAR network backbones. Specifically, we first formulate the multiple-window learning task as a bi-level optimization problem. We then derive an adaptive weighting optimization algorithm in which the learning process for MAR under each window is automatically weighted via a learning-to-learn paradigm based on the training and validation sets. This rationale is substantiated through theoretical analysis. Based on different network backbones, experimental comparisons on five datasets covering different body sites comprehensively validate the effectiveness of AdaW in improving generalization performance, as well as its broad applicability. We will release the code at https://github.com/hongwang01/AdaW.
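The learning-to-learn weighting idea can be sketched in miniature: each HU window gets a weight, windows with higher validation loss are up-weighted, and the weighted sum of per-window training losses becomes the objective. This hand-rolled stand-in only conveys the flavor; AdaW's actual bi-level derivation and the gradients through the MAR network are not reproduced, and the window names, losses, and temperature are made up.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

windows = ["soft-tissue", "bone", "lung"]   # example window names (assumed)
val_loss = np.array([0.40, 0.10, 0.25])     # per-window validation losses (assumed)
weights = softmax(val_loss / 0.1)           # temperature 0.1 (assumed); worse -> heavier

train_losses = np.array([0.5, 0.2, 0.3])    # per-window training losses (assumed)
weighted_loss = float(weights @ train_losses)  # objective used to update the MAR net
assert np.isclose(weights.sum(), 1.0)
```

In the bi-level view, the inner problem trains the MAR network under the current weights while the outer problem adjusts the weights to minimize validation loss.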

The Pivotal Role of Baseline LDCT for Lung Cancer Screening in the Era of Artificial Intelligence.

De Luca GR, Diciotti S, Mascalchi M

PubMed · Jun 1 2025
In this narrative review, we address the ongoing challenges of lung cancer (LC) screening using chest low-dose computerized tomography (LDCT) and explore the contributions of artificial intelligence (AI) in overcoming them. We focus on evaluating the initial (baseline) LDCT examination, which provides a wealth of information relevant to the screening participant's health. This includes the detection of large-size prevalent LC and small-size malignant nodules that are typically diagnosed as LCs upon growth in subsequent annual LDCT scans. Additionally, the baseline LDCT examination provides valuable information about smoking-related comorbidities, including cardiovascular disease, chronic obstructive pulmonary disease, and interstitial lung disease (ILD), by identifying relevant markers. Notably, these comorbidities, despite the slow progression of their markers, collectively exceed LC as ultimate causes of death at follow-up in LC screening participants. Computer-assisted diagnosis tools currently improve the reproducibility of radiologic readings and reduce the false-negative rate of LDCT. Deep learning (DL) tools that analyze the radiomic features of lung nodules are being developed to distinguish between benign and malignant nodules. Furthermore, AI tools can predict the risk of LC in the years following a baseline LDCT. AI tools that analyze baseline LDCT examinations can also compute the risk of cardiovascular disease or death, paving the way for personalized screening interventions. Additionally, DL tools are available for assessing osteoporosis and ILD, which helps refine the individual's current and future health profile. The primary obstacles to AI integration into the LDCT screening pathway are the generalizability of performance and explainability.

Semi-Supervised Gland Segmentation via Feature-Enhanced Contrastive Learning and Dual-Consistency Strategy.

Yu J, Li B, Pan X, Shi Z, Wang H, Lan R, Luo X

PubMed · Jun 1 2025
In the field of gland segmentation in histopathology, deep-learning methods have made significant progress. However, most existing methods not only require a large amount of high-quality annotated data but also tend to confuse the interior of the gland with the background. To address this challenge, we propose a new semi-supervised method named DCCL-Seg for gland segmentation, which follows the teacher-student framework. Our approach can be divided into the following steps. First, we design a contrastive learning module to improve the ability of the student model's feature extractor to distinguish between gland and background features. Then, we introduce a Signed Distance Field (SDF) prediction task and employ a dual-consistency strategy (across tasks and models) to better reinforce the learning of the gland interior. Next, we propose a pseudo-label filtering and reweighting mechanism, which filters and reweights the pseudo labels generated by the teacher model based on confidence. However, even after reweighting, the pseudo labels may still be influenced by unreliable pixels. Finally, we design an assistant predictor to learn the reweighted pseudo labels, which does not interfere with the student model's predictor and ensures the reliability of the student model's predictions. Experimental results on the publicly available GlaS and CRAG datasets demonstrate that our method outperforms other semi-supervised medical image segmentation methods.
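The SDF target and the cross-task consistency can be illustrated with a brute-force signed distance field on a tiny mask (negative inside the gland, positive outside) and a squared-error penalty between the SDF implied by the segmentation output and the SDF head's prediction. This is a generic sketch under assumed conventions, not DCCL-Seg's implementation; the perturbed prediction below is a stand-in for a second network head.

```python
import numpy as np

def signed_distance(mask: np.ndarray) -> np.ndarray:
    """Brute-force SDF on a small grid: negative inside the object, positive outside."""
    ys, xs = np.indices(mask.shape)
    pts = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    inside = pts[mask.ravel() == 1]
    outside = pts[mask.ravel() == 0]
    d_in = np.sqrt(((pts[:, None] - inside[None]) ** 2).sum(-1)).min(1)
    d_out = np.sqrt(((pts[:, None] - outside[None]) ** 2).sum(-1)).min(1)
    sdf = np.where(mask.ravel() == 1, -d_out, d_in)   # sign flips at the boundary
    return sdf.reshape(mask.shape)

mask = np.zeros((6, 6), dtype=int)
mask[2:4, 2:4] = 1                           # a 2x2 "gland" in the center

seg_sdf = signed_distance(mask)              # SDF derived from the segmentation output
sdf_pred = seg_sdf + 0.1                     # stand-in for the SDF head's prediction
consistency_loss = float(((seg_sdf - sdf_pred) ** 2).mean())  # penalise disagreement
```

Supervising distance rather than only a binary label is what pushes the network to learn the gland interior instead of just its edges.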

FeaInfNet: Diagnosis of Medical Images With Feature-Driven Inference and Visual Explanations.

Peng Y, He L, Hu D, Liu Y, Yang L, Shang S

PubMed · Jun 1 2025
Interpretable deep-learning models have received widespread attention in the field of image recognition. However, owing to the coexistence of similar medical-image categories and the challenge of identifying subtle decision-making regions, many proposed interpretable deep-learning models suffer from insufficient accuracy and interpretability in diagnosing images of medical diseases. Therefore, this study proposed a feature-driven inference network (FeaInfNet) that incorporates a feature-based network reasoning structure. First, local feature masks (LFM) were developed to extract feature vectors, thereby providing global information for these vectors and enhancing the expressive ability of FeaInfNet. Second, FeaInfNet compares the similarity of the feature vector corresponding to each subregion image patch with the disease and normal prototype templates that may appear in the region, and combines the comparisons across subregions when making the final diagnosis. This strategy simulates the diagnosis process of doctors, making the model interpretable during reasoning while avoiding misleading results caused by the participation of normal areas. Finally, we proposed adaptive dynamic masks (Adaptive-DM) to interpret feature vectors and prototypes as human-understandable image patches, providing an accurate visual interpretation. Extensive experiments on multiple publicly available medical datasets, including RSNA, iChallenge-PM, COVID-19, ChinaCXRSet, MontgomerySet, and CBIS-DDSM, demonstrated that our method achieves state-of-the-art classification accuracy and interpretability compared with baseline methods in the diagnosis of medical images. Additional ablation studies were performed to verify the effectiveness of each component.
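The per-region prototype comparison can be sketched as a cosine-similarity margin: each patch's feature vector is scored against a disease prototype and a normal prototype, and the per-patch evidence is aggregated into a diagnosis. The feature extractor, the prototypes, and the aggregation rule here are random stand-ins and a plain mean; FeaInfNet's actual similarity and combination functions are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)
d, n_patches = 16, 9
patch_feats = rng.normal(size=(n_patches, d))  # stand-in patch feature vectors
disease_proto = rng.normal(size=d)             # stand-in disease prototype
normal_proto = rng.normal(size=d)              # stand-in normal prototype

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Evidence per patch: how much closer the patch sits to the disease prototype.
margins = np.array([cos(f, disease_proto) - cos(f, normal_proto)
                    for f in patch_feats])
diagnosis = "disease" if margins.mean() > 0 else "normal"
```

Because each patch contributes an explicit margin, the final call can be traced back to the subregions that drove it, which is the interpretability claim above.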

A Machine Learning Algorithm to Estimate the Probability of a True Scaphoid Fracture After Wrist Trauma.

Bulstra AEJ

PubMed · Jun 1 2025
To identify predictors of a true scaphoid fracture among patients with radial wrist pain following acute trauma, train 5 machine learning (ML) algorithms in predicting scaphoid fracture probability, and design a decision rule to initiate advanced imaging in high-risk patients. Two prospective cohorts including 422 patients with radial wrist pain following wrist trauma were combined. There were 117 scaphoid fractures (28%) confirmed on computed tomography, magnetic resonance imaging, or radiographs. Eighteen fractures (15%) were occult. Predictors of a scaphoid fracture were identified among demographics, mechanism of injury, and examination maneuvers. Five ML algorithms were trained in calculating scaphoid fracture probability. The ML algorithms were assessed on their ability to discriminate between patients with and without a fracture (area under the receiver operating characteristic curve), agreement between observed and predicted probabilities (calibration), and overall performance (Brier score). The best-performing ML algorithm was incorporated into a probability calculator. A decision rule was proposed to initiate advanced imaging among patients with negative radiographs. Pain over the scaphoid on ulnar deviation, sex, age, and mechanism of injury were most strongly associated with a true scaphoid fracture. The best-performing ML algorithm yielded an area under the receiver operating characteristic curve, calibration slope, intercept, and Brier score of 0.77, 0.84, -0.01, and 0.159, respectively. The ML-derived decision rule proposes to initiate advanced imaging in patients with radial-sided wrist pain, negative radiographs, and a fracture probability of ≥10%. When applied to our cohort, this would yield 100% sensitivity and 38% specificity, and would have reduced the number of patients undergoing advanced imaging by 36% without missing a fracture.
The ML algorithm accurately calculated scaphoid fracture probability based on scaphoid pain on ulnar deviation, sex, age, and mechanism of injury. The ML decision rule may reduce the number of patients undergoing advanced imaging by a third with a small risk of missing a fracture. External validation is required before implementation. Level of evidence: Diagnostic II.
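The shape of such a decision rule can be sketched as a logistic model over the four reported predictors, thresholded at the 10% probability cut-off for advanced imaging. To be clear, this is NOT the published calculator: every coefficient below is invented for illustration, and only the predictor set and the 10% threshold come from the abstract.

```python
import math

# Hypothetical coefficients (made up; the paper's model is not public here).
COEF = {"intercept": -2.0, "scaphoid_pain_ulnar_dev": 1.5,
        "male_sex": 0.7, "age_per_decade": 0.2, "high_energy_mechanism": 0.8}

def fracture_probability(pain: int, male: int, age: float, high_energy: int) -> float:
    """Logistic model over the four predictors reported in the abstract."""
    z = (COEF["intercept"]
         + COEF["scaphoid_pain_ulnar_dev"] * pain
         + COEF["male_sex"] * male
         + COEF["age_per_decade"] * (age / 10)
         + COEF["high_energy_mechanism"] * high_energy)
    return 1 / (1 + math.exp(-z))

def needs_advanced_imaging(p: float, radiographs_negative: bool = True) -> bool:
    return radiographs_negative and p >= 0.10   # the rule's 10% threshold

p = fracture_probability(pain=1, male=1, age=30, high_energy=1)
```

Only patients with negative radiographs and a probability at or above the threshold would be routed to advanced imaging, which is how the rule trades a small specificity loss for 100% sensitivity in the cohort.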

Radiology: Cardiothoracic Imaging Highlights 2024.

Catania R, Mukherjee A, Chamberlin JH, Calle F, Philomina P, Mastrodicasa D, Allen BD, Suchá D, Abbara S, Hanneman K

PubMed · Jun 1 2025
Radiology: Cardiothoracic Imaging publishes research, technical developments, and reviews related to cardiac, vascular, and thoracic imaging. The current review article, led by the Radiology: Cardiothoracic Imaging trainee editorial board, highlights the most impactful articles published in the journal between November 2023 and October 2024. The review encompasses various aspects of cardiac, vascular, and thoracic imaging related to coronary artery disease, cardiac MRI, valvular imaging, congenital and inherited heart diseases, thoracic imaging, lung cancer, artificial intelligence, and health services research. Key highlights include the role of CT fractional flow reserve analysis to guide patient management, the role of MRI elastography in identifying age-related myocardial stiffness associated with increased risk of heart failure, review of MRI in patients with cardiovascular implantable electronic devices and fractured or abandoned leads, imaging of mitral annular disjunction, specificity of the Lung Imaging Reporting and Data System version 2022 for detecting malignant airway nodules, and a radiomics-based reinforcement learning model to analyze serial low-dose CT scans in lung cancer screening. Ongoing research and future directions include artificial intelligence tools for applications such as plaque quantification using coronary CT angiography and growing understanding of the interconnectedness of environmental sustainability and cardiovascular imaging. Keywords: CT, MRI, CT-Coronary Angiography, Cardiac, Pulmonary, Coronary Arteries, Heart, Lung, Mediastinum, Mitral Valve, Aortic Valve, Artificial Intelligence © RSNA, 2025.