
Harnessing Artificial Intelligence to Predict Spontaneous Stone Passage: Development and Testing of a Machine Learning-Based Calculator.

Gupta K, Ricapito A, Lundon D, Khargi R, Connors C, Yaghoubian AJ, Gallante B, Atallah WM, Gupta M

PubMed · Jun 2, 2025
Objective: We sought to use artificial intelligence (AI) to develop and test calculators that predict spontaneous stone passage (SSP) from radiographic and clinical data. Methods: Consecutive patients with a solitary ureteral stone ≤10 mm on CT were prospectively enrolled and managed according to American Urological Association guidelines. The first 70% of patients formed the "training group" used to develop the calculators; the latter 30% formed the "testing group" used to externally validate them. Exclusion criteria were contraindication to a trial of SSP, ureteral stent, and anatomical anomaly. Demographic, clinical, and radiographic data were collected and fed into machine learning (ML) platforms. SSP was defined as passage of the stone without intervention. Calculators were derived from the data using multivariate logistic regression. Discrimination, calibration, and clinical utility/net benefit of the developed models were assessed in the validation cohort, with receiver operating characteristic (ROC) curves constructed to measure discriminative ability. Results: Fifty-one percent of 131 "training" patients spontaneously passed their stones. Passed stones were significantly closer to the bladder (8.6 vs 11.8 cm, p = 0.01) and smaller in length, width, and height. Two ML calculators were developed, one by supervised machine learning (SML) and the other by unsupervised machine learning (USML), and compared with an existing tool, the Multi-centre Cohort Study Evaluating the Role of Inflammatory Markers In Patients Presenting with Acute Ureteric Colic (MIMIC) calculator. The SML calculator included maximum stone width (MSW), ureteral diameter above the stone (UDA), and distance from the ureterovesical junction (UVJ) to the bottom of the stone, and had an area under the curve (AUC) of 0.737 on external validation in 58 "test" patients. Parameters selected by USML included MSW, UDA, and use of an anticholinergic; its AUC was 0.706. The MIMIC calculator's AUC was 0.588 (0.489-0.686). Conclusion: We used AI to develop calculators that outperformed an existing tool and can help providers and patients make better-informed decisions about the treatment of ureteral stones.
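The multivariate-logistic-regression calculator and its ROC evaluation can be sketched in miniature. The coefficients, intercept, and scale below are hypothetical placeholders (the abstract names the SML features MSW, UDA, and UVJ-to-stone distance but does not report fitted weights); the rank-based AUC function is the standard way such discrimination is scored:

```python
import math

def ssp_probability(msw_mm, uda_mm, uvj_dist_mm,
                    coefs=(-0.9, -0.6, -0.15), intercept=6.0):
    """Logistic-regression style SSP calculator.
    Coefficients and intercept are illustrative, not the published model."""
    z = intercept + coefs[0] * msw_mm + coefs[1] * uda_mm + coefs[2] * uvj_dist_mm
    return 1.0 / (1.0 + math.exp(-z))

def auc(scores, labels):
    """Rank-based AUC: probability a passed stone (label 1) scores above a
    retained stone (label 0), counting score ties as half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

With these placeholder weights, a smaller stone yields a higher predicted passage probability, matching the direction of the reported findings.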

Decision support using machine learning for predicting adequate bladder filling in prostate radiotherapy: a feasibility study.

Saiyo N, Assawanuwat K, Janthawanno P, Paduka S, Prempetch K, Chanphol T, Sakchatchawan B, Thongsawad S

PubMed · Jun 2, 2025
This study aimed to develop a model for predicting the bladder volume ratio between daily CBCT and planning CT to determine adequate bladder filling in patients undergoing external beam radiation therapy (EBRT) for prostate cancer. The model was trained using 465 datasets obtained from 34 prostate cancer patients. Sixteen features were collected as input data, including basic patient information, patient health status, blood laboratory results, and specific radiation therapy information. The ratio of bladder volume between daily CBCT (dCBCT) and planning CT (pCT) served as the model response. The model was trained using a bootstrap aggregation (bagging) algorithm with two machine learning (ML) approaches: classification and regression. Model accuracy was validated using a further 93 datasets. For the regression approach, accuracy was evaluated with root mean square error (RMSE) and mean absolute error (MAE); performance of the classification approach was assessed with sensitivity, specificity, and accuracy. The ML model showed promising results in predicting the bladder volume ratio between dCBCT and pCT, with an RMSE of 0.244 and an MAE of 0.172 for the regression approach, and sensitivity of 95.24%, specificity of 92.16%, and accuracy of 93.55% for the classification approach. The prediction model could help the radiological technologist determine whether the bladder is full before treatment, thereby reducing the need for repeat CBCT scans. HIGHLIGHTS: The bagging model demonstrates strong performance in predicting optimal bladder filling. The model achieves promising results with 95.24% sensitivity and 92.16% specificity. It supports therapists in assessing bladder fullness prior to treatment. It helps reduce the risk of requiring repeat CBCT scans.
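A minimal sketch of the bootstrap-aggregation (bagging) idea and the classification metrics reported above. The 1-nearest-neighbour base learner and the single-feature toy input are stand-ins for illustration; the study bags stronger learners over 16 clinical and treatment features:

```python
import random
from statistics import mean

def bagged_predict(train_x, train_y, x_new, n_estimators=25, seed=0):
    """Bagging sketch: fit each base learner on a bootstrap resample and
    average their predictions (the regression form of aggregation).
    Base learner here is 1-nearest-neighbour on a single feature."""
    rng = random.Random(seed)
    n = len(train_x)
    preds = []
    for _ in range(n_estimators):
        boot = [rng.randrange(n) for _ in range(n)]        # bootstrap resample
        j = min(boot, key=lambda i: abs(train_x[i] - x_new))
        preds.append(train_y[j])
    return mean(preds)

def sensitivity_specificity(preds, truths):
    """Binary classification metrics as reported for the bladder-filling model."""
    tp = sum(p == 1 and t == 1 for p, t in zip(preds, truths))
    tn = sum(p == 0 and t == 0 for p, t in zip(preds, truths))
    fn = sum(p == 0 and t == 1 for p, t in zip(preds, truths))
    fp = sum(p == 1 and t == 0 for p, t in zip(preds, truths))
    return tp / (tp + fn), tn / (tn + fp)
```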

Performance Comparison of Machine Learning Using Radiomic Features and CNN-Based Deep Learning in Benign and Malignant Classification of Vertebral Compression Fractures Using CT Scans.

Yeom JC, Park SH, Kim YJ, Ahn TR, Kim KG

PubMed · Jun 2, 2025
Distinguishing benign from malignant vertebral compression fractures (VCFs) is critical for clinical management but remains challenging on contrast-enhanced abdominal CT, which lacks the soft-tissue contrast of MRI. This study evaluates and compares radiomic-feature-based machine learning and convolutional neural network (CNN)-based deep learning models for classifying VCFs on abdominal CT. A retrospective cohort of 447 VCFs (196 benign, 251 malignant) from 286 patients was analyzed. Radiomic features were extracted using PyRadiomics, with recursive feature elimination selecting six key texture-based features (e.g., Run Variance, Dependence Non-Uniformity Normalized), highlighting textural heterogeneity as a malignancy marker. Machine learning models (XGBoost, SVM, KNN, Random Forest) and a 3D CNN were trained on the CT data, with performance assessed via precision, recall, F1 score, accuracy, and AUC. The deep learning model achieved marginally superior overall performance, with a significantly higher AUC (77.66% vs. 75.91%, p < 0.05) and better precision, F1 score, and accuracy than the top-performing machine learning model (XGBoost). The deep learning model's attention maps localized diagnostically relevant regions, mimicking radiologists' focus, whereas radiomics lacked spatial interpretability despite offering quantifiable biomarkers. This study underscores the complementary strengths of the two approaches: radiomics provides interpretable features tied to tumor heterogeneity, while deep learning autonomously extracts high-dimensional patterns with spatial explainability. Integrating both could enhance diagnostic accuracy and clinician trust in abdominal CT-based VCF assessment. Limitations include retrospective single-center data and potential selection bias; future multi-center studies with diverse protocols and histopathological validation are warranted to generalize these findings.
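Recursive feature elimination, used above to whittle the radiomic features down to six, can be illustrated with a simplified ranking criterion. Real RFE re-fits a model each round and ranks features by its coefficients or importances; this sketch substitutes absolute Pearson correlation with the label as the per-round score:

```python
def pearson_r(xs, ys):
    """Pearson correlation; 0.0 for constant (zero-variance) inputs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def rfe(X, y, n_keep):
    """RFE sketch: repeatedly drop the remaining feature whose absolute
    correlation with the label is weakest, until n_keep survive."""
    keep = list(range(len(X[0])))
    while len(keep) > n_keep:
        cols = {f: [row[f] for row in X] for f in keep}
        weakest = min(keep, key=lambda f: abs(pearson_r(cols[f], y)))
        keep.remove(weakest)
    return keep
```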

AI model using CT-based imaging biomarkers to predict hepatocellular carcinoma in patients with chronic hepatitis B.

Shin H, Hur MH, Song BG, Park SY, Kim GA, Choi G, Nam JY, Kim MA, Park Y, Ko Y, Park J, Lee HA, Chung SW, Choi NR, Park MK, Lee YB, Sinn DH, Kim SU, Kim HY, Kim JM, Park SJ, Lee HC, Lee DH, Chung JW, Kim YJ, Yoon JH, Lee JH

PubMed · Jun 1, 2025
Various hepatocellular carcinoma (HCC) prediction models have been proposed for patients with chronic hepatitis B (CHB) using clinical variables. We aimed to develop an artificial intelligence (AI)-based HCC prediction model by incorporating imaging biomarkers derived from abdominal computed tomography (CT) images along with clinical variables. An AI prediction model employing a gradient-boosting machine algorithm was developed utilizing imaging biomarkers extracted by DeepFore, a deep learning-based CT auto-segmentation software. The derivation cohort (n = 5,585) was randomly divided into the training and internal validation sets at a 3:1 ratio. The external validation cohort included 2,883 patients. Six imaging biomarkers (i.e. abdominal visceral fat-total fat volume ratio, total fat-trunk volume ratio, spleen volume, liver volume, liver-spleen Hounsfield unit ratio, and muscle Hounsfield unit) and eight clinical variables were selected as the main variables of our model, PLAN-B-DF. In the internal validation set (median follow-up duration = 7.4 years), PLAN-B-DF demonstrated an excellent predictive performance with a c-index of 0.91 and good calibration function (p = 0.78 by the Hosmer-Lemeshow test). In the external validation cohort (median follow-up duration = 4.6 years), PLAN-B-DF showed a significantly better discrimination function compared to previous models, including PLAN-B, PAGE-B, modified PAGE-B, and CU-HCC (c-index, 0.89 vs. 0.65-0.78; all p <0.001), and maintained a good calibration function (p = 0.42 by the Hosmer-Lemeshow test). When patients were classified into four groups according to the risk probability calculated by PLAN-B-DF, the 10-year cumulative HCC incidence was 0.0%, 0.4%, 16.0%, and 46.2% in the minimal-, low-, intermediate-, and high-risk groups, respectively. 
This AI prediction model, integrating deep learning-based auto-segmentation of CT images, offers improved performance in predicting HCC risk among patients with CHB compared to previous models. The novel predictive model PLAN-B-DF, employing an automated computed tomography segmentation algorithm, significantly improves predictive accuracy and risk stratification for hepatocellular carcinoma in patients with chronic hepatitis B (CHB). Using a gradient-boosting algorithm and computed tomography metrics, such as visceral fat volume and myosteatosis, PLAN-B-DF outperforms previous models based solely on clinical and demographic data. This model not only shows a higher c-index compared to previous models, but also effectively classifies patients with CHB into different risk groups. This model uses machine learning to analyze the complex relationships among various risk factors contributing to hepatocellular carcinoma occurrence, thereby enabling more personalized surveillance for patients with CHB.
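The c-index reported for PLAN-B-DF and its comparators is Harrell's concordance index. A minimal sketch, assuming 0/1 event indicators and ignoring tied event times, looks like:

```python
def concordance_index(times, events, risks):
    """Harrell's c-index sketch: over comparable pairs (the earlier subject
    experienced the event), count pairs where the earlier-event subject also
    carries the higher predicted risk; risk ties count as half."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        if not events[i]:          # censored subjects cannot anchor a pair
            continue
        for j in range(n):
            if times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable
```

A perfectly ranked cohort scores 1.0, random ranking about 0.5, mirroring how the reported 0.91 and 0.89 c-indices should be read.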

ScreenDx, an artificial intelligence-based algorithm for the incidental detection of pulmonary fibrosis.

Touloumes N, Gagianas G, Bradley J, Muelly M, Kalra A, Reicher J

PubMed · Jun 1, 2025
Nonspecific symptoms and variability in radiographic reporting patterns contribute to delayed diagnosis of pulmonary fibrosis. An attractive solution is the use of machine-learning algorithms to screen for radiographic features suggestive of pulmonary fibrosis. We therefore developed and validated a machine learning classifier algorithm (ScreenDx) to screen computed tomography imaging and identify incidental cases of pulmonary fibrosis. ScreenDx is a deep learning convolutional neural network developed from a multi-source dataset (cohort A) of 3,658 normal and abnormal CTs, including CTs from patients with COPD, emphysema, and community-acquired pneumonia. Cohort B, a US-based cohort (n = 381), was used for tuning the algorithm, and external validation was performed on cohort C (n = 683), a separate international dataset. At the optimal threshold, sensitivity and specificity for detection of pulmonary fibrosis in cohort B were 0.91 (95% CI 0.88-0.94) and 0.95 (95% CI 0.93-0.97), respectively, with an AUC of 0.98. In the external validation dataset (cohort C), sensitivity and specificity were 1.00 (95% CI 0.999-1.000) and 0.98 (95% CI 0.979-0.996), respectively, with an AUC of 0.997. There were no significant differences in the ability of ScreenDx to identify pulmonary fibrosis by CT manufacturer (Philips, Toshiba, GE Healthcare, or Siemens) or slice thickness (2 mm vs 2-4 mm vs 4 mm). Regardless of CT manufacturer or slice thickness, ScreenDx demonstrated high performance across two multi-site datasets for identifying incidental cases of pulmonary fibrosis. This suggests that the algorithm may generalize across patient populations and different healthcare systems.
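The "optimal threshold" at which screening sensitivity and specificity are reported is commonly chosen by maximizing Youden's J on a tuning cohort. The abstract does not state which criterion ScreenDx used, so the following is an illustrative sketch of that common choice:

```python
def youden_threshold(scores, labels):
    """Sweep observed scores as candidate cutoffs and return the one that
    maximizes Youden's J = sensitivity + specificity - 1."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        sens = sum(s >= t for s in pos) / len(pos)
        spec = sum(s < t for s in neg) / len(neg)
        j = sens + spec - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j
```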

Comparing efficiency of an attention-based deep learning network with contemporary radiological workflow for pulmonary embolism detection on CTPA: A retrospective study.

Singh G, Singh A, Kainth T, Suman S, Sakla N, Partyka L, Phatak T, Prasanna P

PubMed · Jun 1, 2025
Pulmonary embolism (PE) is the third most fatal cardiovascular disease in the United States. Computed tomography pulmonary angiography (CTPA) currently serves as the diagnostic gold standard for detecting PE, but its efficacy is limited by factors such as contrast bolus timing, physician-dependent diagnostic accuracy, and scan interpretation time. To address these limitations, we propose an AI-based PE triaging model (AID-PE) designed to predict the presence and key characteristics of PE on CTPA, with the aim of improving diagnostic accuracy, efficiency, and speed of PE identification. We trained AID-PE on the RSNA-STR PE CT (RSPECT) dataset (N = 7,279) and subsequently tested it on an in-house dataset (n = 106). We evaluated efficiency in a separate dataset (D4, n = 200) by comparing the time from scan to report in the standard PE detection workflow versus AID-PE. In a comparative analysis, AID-PE had an AUC/accuracy of 0.95/0.88, whereas a convolutional neural network (CNN) classifier and a CNN-long short-term memory (LSTM) network without an attention module had AUC/accuracy of 0.5/0.74 and 0.88/0.65, respectively. Our model achieved AUCs of 0.82 and 0.95 for detecting PE on the validation dataset and the independent test set, respectively. On D4, AID-PE took an average of 1.32 s to screen for PE across 148 CTPA studies, compared with an average of 40 min in the contemporary workflow. AID-PE thus outperformed a baseline CNN classifier and a single-stage CNN-LSTM network without an attention module, and its efficiency compares favorably with the current radiological workflow.
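The attention module that separates AID-PE from the plain CNN and CNN-LSTM baselines can be sketched as dot-product attention pooling over per-slice features: each slice is scored against a query, scores are softmaxed into weights, and the study-level representation is the weighted sum. The feature vectors and query below are toy stand-ins for what the upstream CNN/LSTM stages would produce:

```python
import math

def attention_pool(slice_feats, query):
    """Dot-product attention over per-slice feature vectors (illustrative).
    Returns the attention-weighted pooled feature and the weights, which act
    like a per-slice saliency over the CTPA stack."""
    scores = [sum(q * f for q, f in zip(query, feat)) for feat in slice_feats]
    m = max(scores)                                  # stabilize the softmax
    exp = [math.exp(s - m) for s in scores]
    total = sum(exp)
    weights = [e / total for e in exp]
    dim = len(slice_feats[0])
    pooled = [sum(w * feat[d] for w, feat in zip(weights, slice_feats))
              for d in range(dim)]
    return pooled, weights
```

A slice whose features align with the query dominates the pooled representation, which is the mechanism by which attention focuses the classifier on PE-bearing slices.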

The Pivotal Role of Baseline LDCT for Lung Cancer Screening in the Era of Artificial Intelligence.

De Luca GR, Diciotti S, Mascalchi M

PubMed · Jun 1, 2025
In this narrative review, we address the ongoing challenges of lung cancer (LC) screening using chest low-dose computed tomography (LDCT) and explore the contributions of artificial intelligence (AI) in overcoming them. We focus on evaluating the initial (baseline) LDCT examination, which provides a wealth of information relevant to the screening participant's health. This includes the detection of large prevalent LCs and of small malignant nodules that are typically diagnosed as LC upon growth on subsequent annual LDCT scans. The baseline LDCT examination also provides valuable information about smoking-related comorbidities, including cardiovascular disease, chronic obstructive pulmonary disease, and interstitial lung disease (ILD), by identifying relevant markers. Notably, these comorbidities, despite the slow progression of their markers, collectively exceed LC as ultimate causes of death at follow-up in LC screening participants. Computer-assisted diagnosis tools currently improve the reproducibility of radiologic readings and reduce the false-negative rate of LDCT. Deep learning (DL) tools that analyze the radiomic features of lung nodules are being developed to distinguish benign from malignant nodules. Furthermore, AI tools can predict the risk of LC in the years following a baseline LDCT. AI tools that analyze baseline LDCT examinations can also compute the risk of cardiovascular disease or death, paving the way for personalized screening interventions. Additionally, DL tools are available for assessing osteoporosis and ILD, which helps refine the individual's current and future health profile. The primary obstacles to AI integration into the LDCT screening pathway are generalizability of performance and explainability.

Adaptive Weighting Based Metal Artifact Reduction in CT Images.

Wang H, Wu Y, Wang Y, Wei D, Wu X, Ma J, Zheng Y

PubMed · Jun 1, 2025
For the metal artifact reduction (MAR) task in computed tomography (CT) imaging, most existing deep-learning-based approaches select a single Hounsfield unit (HU) window followed by a normalization operation to preprocess CT images. In practical clinical scenarios, however, different body tissues and organs are inspected under varying window settings for good contrast, and methods trained on a fixed single window remove metal artifacts insufficiently when transferred to other windows. To alleviate this problem, a few works have proposed reconstructing CT images under multiple window configurations. Although they achieve good reconstruction performance across windows, they directly supervise learning for each window with equal weights on the training set. To improve learning flexibility and model generalizability, in this paper we propose an adaptive weighting algorithm, called AdaW, for multiple-window metal artifact reduction, which can be applied to different deep MAR network backbones. Specifically, we first formulate the multiple-window learning task as a bi-level optimization problem. We then derive an adaptive weighting optimization algorithm in which the learning process for MAR under each window is automatically weighted via a learning-to-learn paradigm based on the training and validation sets; this design is substantiated through theoretical analysis. Experimental comparisons on five datasets covering different body sites, with different network backbones, comprehensively validate the effectiveness of AdaW in improving generalization performance, as well as its broad applicability. We will release the code at https://github.com/hongwang01/AdaW.
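Two pieces of the setup above can be sketched: the single-window HU preprocessing the paper identifies as the standard practice, and a stand-in for adaptive per-window loss weighting. The softmax-over-validation-loss rule below is a deliberate simplification for illustration; AdaW derives its weights from the bi-level optimization, not from this heuristic:

```python
import math

def hu_window_normalize(hu, center, width):
    """Clip a Hounsfield value to a display window (center/width) and scale
    it to [0, 1] -- the fixed preprocessing most deep MAR methods use."""
    lo = center - width / 2.0
    x = (hu - lo) / width
    return min(max(x, 0.0), 1.0)

def window_loss_weights(val_losses, temperature=1.0):
    """Illustrative adaptive weighting: windows with higher validation loss
    receive proportionally more training weight (softmax over losses)."""
    m = max(val_losses)
    exp = [math.exp((loss - m) / temperature) for loss in val_losses]
    z = sum(exp)
    return [e / z for e in exp]
```

In a training loop, the per-window weights would be refreshed from validation losses each round and used to scale each window's MAR loss term.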

Ultra-Sparse-View Cone-Beam CT Reconstruction-Based Strictly Structure-Preserved Deep Neural Network in Image-Guided Radiation Therapy.

Song Y, Zhang W, Wu T, Luo Y, Shi J, Yang X, Deng Z, Qi X, Li G, Bai S, Zhao J, Zhong R

PubMed · Jun 1, 2025
Radiation therapy is regarded as a mainstay of cancer treatment in the clinic. Kilovoltage cone-beam CT (CBCT) images are acquired for most treatment sites as the clinical routine for image-guided radiation therapy (IGRT). However, repeated CBCT scanning delivers extra radiation dose to patients and decreases clinical efficiency. Sparse CBCT scanning is a possible solution to these problems, but at the cost of inferior image quality. To decrease the extra dose while maintaining CBCT quality, deep learning (DL) methods are widely adopted. In this study, the planning CT was used as prior information, and the corresponding strictly structure-preserved CBCT was simulated based on the attenuation information from the planning CT. We developed a hyper-resolution ultra-sparse-view CBCT reconstruction model, the planning CT-based strictly-structure-preserved neural network (PSSP-NET), using a generative adversarial network (GAN). The model uses clinical CBCT projections acquired at extremely low sampling rates for rapid reconstruction of high-quality CBCT images; its clinical performance was evaluated in head-and-neck cancer patients, and our experiments demonstrated enhanced performance and improved reconstruction speed.

CT-SDM: A Sampling Diffusion Model for Sparse-View CT Reconstruction Across Various Sampling Rates.

Yang L, Huang J, Yang G, Zhang D

PubMed · Jun 1, 2025
Sparse-view X-ray computed tomography has emerged as a contemporary technique to mitigate radiation dose. Because of the reduced number of projection views, traditional reconstruction methods can produce severe artifacts. Recently, research utilizing deep learning methods has made promising progress in removing artifacts for sparse-view computed tomography (SVCT). However, given the limited generalization capability of deep learning models, current methods usually train on fixed sampling rates, which restricts the usability and flexibility of model deployment in real clinical settings. To address this issue, we propose an adaptive reconstruction method that achieves high-performance SVCT reconstruction at various sampling rates. Specifically, we design a novel imaging degradation operator in the proposed sampling diffusion model for SVCT (CT-SDM) to simulate the projection process in the sinogram domain. The CT-SDM can thus gradually add projection views to highly undersampled measurements to generate the full-view sinograms. By choosing an appropriate starting point for diffusion inference, the proposed model can recover full-view sinograms from various sampling rates with a single trained model. Experiments on several datasets verify the effectiveness and robustness of our approach, demonstrating its superiority in reconstructing high-quality images from sparse-view CT scans across sampling rates.
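The imaging degradation operator, which simulates sparse-view acquisition in the sinogram domain, can be sketched as a view-subsampling mask applied at a chosen sampling rate. This is a simplified stand-in: the actual CT-SDM operator is embedded in the diffusion process rather than applied as a one-shot mask:

```python
def view_mask(n_views, sampling_rate):
    """Degradation-operator sketch: keep an evenly spaced subset of the
    projection angles at the requested sampling rate (fraction kept)."""
    step = max(1, round(1.0 / sampling_rate))
    return [i % step == 0 for i in range(n_views)]

def apply_mask(sinogram, mask):
    """Zero out the dropped projection rows (one sinogram row per view),
    yielding the undersampled measurement the model must complete."""
    return [row if keep else [0.0] * len(row)
            for row, keep in zip(sinogram, mask)]
```

Varying `sampling_rate` here mirrors how one trained CT-SDM can start inference from differently undersampled sinograms.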