
CXR-LLaVA: a multimodal large language model for interpreting chest X-ray images.

Lee S, Youn J, Kim H, Kim M, Yoon SH

PubMed · Jul 1, 2025
This study aimed to develop an open-source multimodal large language model (CXR-LLaVA) for interpreting chest X-ray images (CXRs), leveraging recent advances in large language models (LLMs) to potentially replicate the image interpretation skills of human radiologists. For training, we collected 592,580 publicly available CXRs, of which 374,881 had labels for certain radiographic abnormalities (Dataset 1) and 217,699 provided free-text radiology reports (Dataset 2). After pre-training a vision transformer with Dataset 1, we integrated it with an LLM influenced by the LLaVA network. Then, the model was fine-tuned, primarily using Dataset 2. The model's diagnostic performance for major pathological findings was evaluated, along with the acceptability of radiologic reports by human radiologists, to gauge its potential for autonomous reporting. The model demonstrated strong performance in both test sets, achieving an average F1 score of 0.81 for six major pathological findings in the MIMIC internal test set and 0.56 for the same findings in the external test set. The model's F1 scores surpassed those of GPT-4-vision and Gemini-Pro-Vision in both test sets. In human radiologist evaluations of the external test set, the model achieved a 72.7% success rate in autonomous reporting, slightly below the 84.0% rate of ground truth reports. This study highlights the significant potential of multimodal LLMs for CXR interpretation, while also acknowledging the performance limitations. Despite these challenges, we believe that making our model open-source will catalyze further research, expanding its effectiveness and applicability in various clinical contexts.
Question: How can a multimodal large language model be adapted to interpret chest X-rays and generate radiologic reports?
Findings: The developed CXR-LLaVA model effectively detects major pathological findings in chest X-rays and generates radiologic reports with higher accuracy than general-purpose models.
Clinical relevance: This study demonstrates the potential of multimodal large language models to support radiologists by autonomously generating chest X-ray reports, potentially reducing diagnostic workloads and improving radiologist efficiency.
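As an illustrative sketch of the evaluation described above, the snippet below computes per-finding and average F1 scores for a multi-label chest X-ray classifier. The finding names, synthetic arrays, and the 0.5 decision threshold are assumptions for demonstration, not details of CXR-LLaVA.

```python
# Illustrative sketch: averaging per-finding F1 scores for a multi-label CXR classifier.
# Finding names, arrays, and the 0.5 threshold are assumptions for demonstration only.
import numpy as np
from sklearn.metrics import f1_score

findings = ["atelectasis", "cardiomegaly", "consolidation",
            "edema", "pleural_effusion", "pneumothorax"]

# y_true: binary ground-truth labels, y_prob: model probabilities (n_images x 6)
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(100, len(findings)))
y_prob = rng.random(size=(100, len(findings)))
y_pred = (y_prob >= 0.5).astype(int)

per_finding_f1 = {f: f1_score(y_true[:, i], y_pred[:, i]) for i, f in enumerate(findings)}
print(per_finding_f1)
print("average F1:", np.mean(list(per_finding_f1.values())))
```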

Noninvasive identification of HER2 status by integrating multiparametric MRI-based radiomics model with the vesical imaging-reporting and data system (VI-RADS) score in bladder urothelial carcinoma.

Luo C, Li S, Han Y, Ling J, Wu X, Chen L, Wang D, Chen J

PubMed · Jul 1, 2025
HER2 expression is crucial for the application of HER2-targeted antibody-drug conjugates. This study aims to construct a predictive model by integrating multiparametric magnetic resonance imaging (mpMRI) based multimodal radiomics and the Vesical Imaging-Reporting and Data System (VI-RADS) score for noninvasive identification of HER2 status in bladder urothelial carcinoma (BUC). A total of 197 patients were retrospectively enrolled and randomly divided into a training cohort (n = 145) and a testing cohort (n = 52). Multimodal radiomics features were derived from mpMRI, which was also used for VI-RADS score evaluation. The LASSO algorithm and six machine learning methods were applied for radiomics feature screening and model construction. The optimal radiomics model was then integrated with the VI-RADS score to predict HER2 status, which was determined by immunohistochemistry. The performance of the predictive model was evaluated by receiver operating characteristic curve analysis with area under the curve (AUC). Among the enrolled patients, 110 (55.8%) were HER2-positive and 87 (44.2%) were HER2-negative. Eight features were selected to establish the radiomics signature. The optimal radiomics signature achieved AUC values of 0.841 (95% CI 0.779-0.904) in the training cohort and 0.794 (95% CI 0.650-0.938) in the testing cohort, respectively. The KNN model was selected to evaluate the significance of the radiomics signature and the VI-RADS score, which were integrated as a predictive nomogram. The AUC values for the nomogram in the training and testing cohorts were 0.889 (95% CI 0.840-0.938) and 0.826 (95% CI 0.702-0.950), respectively. Our study indicated that the predictive model based on the integration of mpMRI-based radiomics and the VI-RADS score could accurately predict HER2 status in BUC. The model might aid clinicians in tailoring individualized therapeutic strategies.
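A minimal sketch of the LASSO screening plus KNN classification step described above, on synthetic data; the feature matrix, cohort split, and hyperparameters are placeholder assumptions rather than the authors' pipeline.

```python
# Illustrative sketch of LASSO-based radiomics feature screening followed by a KNN classifier.
# Synthetic data and parameter choices are assumptions, not the study's implementation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LassoCV
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=197, n_features=100, n_informative=8, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.26, random_state=42)

scaler = StandardScaler().fit(X_train)
X_train_s, X_test_s = scaler.transform(X_train), scaler.transform(X_test)

# LASSO keeps features with nonzero coefficients (the radiomics "signature")
lasso = LassoCV(cv=5, random_state=42).fit(X_train_s, y_train)
selected = np.flatnonzero(lasso.coef_)
if selected.size == 0:          # fallback in the unlikely case LASSO zeros out everything
    selected = np.arange(X.shape[1])

knn = KNeighborsClassifier(n_neighbors=5).fit(X_train_s[:, selected], y_train)
auc = roc_auc_score(y_test, knn.predict_proba(X_test_s[:, selected])[:, 1])
print(f"{len(selected)} features selected, test AUC = {auc:.3f}")
```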

CT-based clinical-radiomics model to predict progression and drive clinical applicability in locally advanced head and neck cancer.

Bruixola G, Dualde-Beltrán D, Jimenez-Pastor A, Nogué A, Bellvís F, Fuster-Matanzo A, Alfaro-Cervelló C, Grimalt N, Salhab-Ibáñez N, Escorihuela V, Iglesias ME, Maroñas M, Alberich-Bayarri Á, Cervantes A, Tarazona N

PubMed · Jul 1, 2025
Definitive chemoradiation is the primary treatment for locally advanced head and neck squamous cell carcinoma (LAHNSCC). Optimising outcome predictions requires validated biomarkers, since TNM8 and HPV could have limitations. Radiomics may enhance risk stratification. This single-centre observational study collected clinical data and baseline CT scans from 171 LAHNSCC patients treated with chemoradiation. The dataset was divided into training (80%) and test (20%) sets, with a 5-fold cross-validation on the training set. Researchers extracted 108 radiomics features from each primary tumour and applied survival analysis and classification models to predict progression-free survival (PFS) and 5-year progression, respectively. Performance was evaluated using inverse probability of censoring weights and c-index for the PFS model and AUC, sensitivity, specificity, and accuracy for the 5-year progression model. Feature importance was measured by the SHapley Additive exPlanations (SHAP) method and patient stratification was assessed through Kaplan-Meier curves. The final dataset included 171 LAHNSCC patients, with 53% experiencing disease progression at 5 years. The random survival forest model best predicted PFS, with an AUC of 0.64 and a c-index of 0.66 on the test set, highlighting 4 radiomics features and TNM8 as significant contributors. It successfully stratified patients into low- and high-risk groups (log-rank p < 0.005). The extreme gradient boosting model most effectively predicted 5-year progression, incorporating 12 radiomics features and four clinical variables, achieving an AUC of 0.74, sensitivity of 0.53, specificity of 0.81, and accuracy of 0.66 on the test set. The combined clinical-radiomics model improved on standard TNM8 and clinical variables in predicting 5-year progression, though further validation is necessary.
Question: There is an unmet need for non-invasive biomarkers to guide treatment in locally advanced head and neck cancer.
Findings: Clinical data (TNM8 staging, primary tumour site, age, and smoking) plus radiomics improved 5-year progression prediction compared with the comprehensive clinical model or TNM staging alone.
Clinical relevance: SHAP simplifies complex machine learning radiomics models for clinicians by using easy-to-understand graphical representations, promoting explainability.
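As a hedged illustration of the SHAP-based explanation step, the sketch below trains a gradient-boosting classifier on synthetic data and ranks features by mean absolute SHAP value. The feature names and data are invented for demonstration; this is not the study's model.

```python
# Illustrative sketch of SHAP-based explanation of a gradient-boosting progression classifier.
# Feature names and synthetic data are placeholders; this is not the published model.
import numpy as np
import shap
from xgboost import XGBClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=171, n_features=16, n_informative=6, random_state=1)
feature_names = [f"radiomics_{i}" for i in range(12)] + ["tnm8", "site", "age", "smoking"]

model = XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss").fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Mean absolute SHAP value per feature ranks contributors to 5-year progression risk
importance = np.abs(shap_values).mean(axis=0)
for name, imp in sorted(zip(feature_names, importance), key=lambda t: -t[1])[:5]:
    print(f"{name}: {imp:.3f}")
```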

Learning-based motion artifact correction in the Z-spectral domain for chemical exchange saturation transfer MRI.

Singh M, Mahmud SZ, Yedavalli V, Zhou J, Kamson DO, van Zijl P, Heo HY

PubMed · Jul 1, 2025
To develop and evaluate a physics-driven, saturation contrast-aware, deep-learning-based framework for motion artifact correction in CEST MRI. A neural network was designed to correct motion artifacts directly in the Z-spectrum frequency (Ω) domain rather than in the image spatial domain. Motion artifacts were simulated by modeling 3D rigid-body motion and readout-related motion during k-space sampling. A saturation-contrast-specific loss function was added to preserve amide proton transfer (APT) contrast, as well as to enforce image alignment between motion-corrected and ground-truth images. The proposed neural network was evaluated on simulation data and demonstrated in healthy volunteers and brain tumor patients. The experimental results showed the effectiveness of motion artifact correction in the Z-spectrum frequency domain (MOCO<sub>Ω</sub>) compared with correction in the image spatial domain. In addition, a temporal convolution applied to the dynamic saturation image series was able to leverage motion artifacts to improve reconstruction results as a denoising process. MOCO<sub>Ω</sub> outperformed existing techniques for motion correction in terms of image quality and computational efficiency. At 3 T, human experiments showed that the root mean squared error (RMSE) of APT images decreased from 4.7% to 2.1% at 1 μT and from 6.2% to 3.5% at 1.5 μT in the case of "moderate" motion, and from 8.7% to 2.8% at 1 μT and from 12.7% to 4.5% at 1.5 μT in the case of "severe" motion, after motion artifact correction. MOCO<sub>Ω</sub> could effectively correct motion artifacts in CEST MRI without compromising saturation transfer contrast.
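As a rough sketch of the idea of a saturation-contrast-aware loss, the code below combines an image-fidelity term with a term that penalizes differences in a crude APT-like contrast computed from the Z-spectrum. The apt_contrast helper, the offset indices, and the weight lambda_apt are hypothetical assumptions, not the authors' published loss.

```python
# Minimal sketch of a saturation-contrast-aware loss: image fidelity plus an APT-contrast term.
# The apt_contrast() definition, offset indices, and lambda_apt are illustrative assumptions.
import torch
import torch.nn.functional as F

def apt_contrast(z_spectrum: torch.Tensor) -> torch.Tensor:
    """Crude MTR-asymmetry proxy; the indices are placeholders for real +/-3.5 ppm offsets."""
    idx_pos, idx_neg = 10, 30  # hypothetical frequency-offset indices
    return z_spectrum[..., idx_neg] - z_spectrum[..., idx_pos]

def saturation_aware_loss(pred, target, lambda_apt: float = 0.1):
    image_term = F.mse_loss(pred, target)                                  # align corrected and ground-truth images
    contrast_term = F.mse_loss(apt_contrast(pred), apt_contrast(target))   # preserve APT-like contrast
    return image_term + lambda_apt * contrast_term

# Toy usage: a batch of Z-spectra with 40 frequency offsets per voxel
pred = torch.rand(2, 64, 64, 40, requires_grad=True)
target = torch.rand(2, 64, 64, 40)
loss = saturation_aware_loss(pred, target)
loss.backward()
print(float(loss))
```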

Repeatability of AI-based, automatic measurement of vertebral and cardiovascular imaging biomarkers in low-dose chest CT: the ImaLife cohort.

Hamelink I, van Tuinen M, Kwee TC, van Ooijen PMA, Vliegenthart R

PubMed · Jul 1, 2025
To evaluate the repeatability of AI-based automatic measurement of vertebral and cardiovascular markers on low-dose chest CT. We included participants of the population-based Imaging in Lifelines (ImaLife) study with low-dose chest CT at baseline and 3-4 month follow-up. An AI system (AI-Rad Companion chest CT prototype) performed automatic segmentation and quantification of vertebral height and density, aortic diameters, heart volume (cardiac chambers plus pericardial fat), and coronary artery calcium volume (CACV). A trained researcher visually checked segmentation accuracy. We evaluated the repeatability of adequate AI-based measurements at baseline and repeat scans using the intraclass correlation coefficient (ICC), relative differences, and change in CACV risk categorization, assuming no physiological change. Overall, 632 participants (63 ± 11 years; 56.6% men) underwent short-term repeat CT (mean interval, 3.9 ± 1.8 months). Visual assessment showed adequate segmentation in both baseline and repeat scans for 98.7% of vertebral measurements, 80.1-99.4% of aortic measurements (except for the sinotubular junction (65.2%)), and 86.0% of CACV. For heart volume, 53.5% of segmentations were adequate at baseline and repeat scans. ICC for adequately segmented cases showed excellent agreement for all biomarkers (ICC > 0.9). The relative difference between baseline and repeat measurements was < 4% for vertebral and aortic measurements, 7.5% for heart volume, and 28.5% for CACV. There was high concordance in CACV risk categorization (81.2%). In low-dose chest CT, segmentation accuracy of the AI-based software was high for vertebral, aortic, and CACV evaluation and relatively low for heart volume. There was excellent repeatability of vertebral and aortic measurements and high concordance in overall CACV risk categorization.
Question: Can AI algorithms for opportunistic screening in chest CT obtain an accurate and repeatable result when applied to multiple CT scans of the same participant?
Findings: Vertebral and aortic analysis showed accurate segmentation and excellent repeatability; coronary calcium segmentation was generally accurate but showed modest repeatability due to a non-electrocardiogram-triggered protocol.
Clinical relevance: Opportunistic screening for diseases outside the primary purpose of the CT scan is time-consuming. AI allows automated vertebral, aortic, and coronary artery calcium (CAC) assessment, with highly repeatable outcomes of vertebral and aortic biomarkers and high concordance in overall CAC categorization.
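A minimal sketch of the test-retest analysis described above, using pingouin's intraclass_corr on synthetic paired measurements; the values and the choice of ICC(2,1) are assumptions for illustration only.

```python
# Illustrative sketch of test-retest agreement using an intraclass correlation coefficient.
# Synthetic baseline/repeat values stand in for the AI-derived biomarker measurements.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(7)
baseline = rng.normal(100, 15, size=632)          # e.g., a volume measurement at baseline
repeat = baseline + rng.normal(0, 3, size=632)    # repeat scan ~3-4 months later, no true change

df = pd.DataFrame({
    "subject": np.tile(np.arange(632), 2),
    "scan": ["baseline"] * 632 + ["repeat"] * 632,
    "value": np.concatenate([baseline, repeat]),
})

icc = pg.intraclass_corr(data=df, targets="subject", raters="scan", ratings="value")
print(icc.set_index("Type").loc["ICC2", "ICC"])   # two-way random, absolute agreement

# Relative difference between paired measurements
rel_diff = np.abs(repeat - baseline) / ((repeat + baseline) / 2) * 100
print(f"mean relative difference: {rel_diff.mean():.1f}%")
```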

Preoperative discrimination of absence or presence of myometrial invasion in endometrial cancer with an MRI-based multimodal deep learning radiomics model.

Chen Y, Ruan X, Wang X, Li P, Chen Y, Feng B, Wen X, Sun J, Zheng C, Zou Y, Liang B, Li M, Long W, Shen Y

PubMed · Jul 1, 2025
Accurate preoperative evaluation of myometrial invasion (MI) is essential for treatment decisions in endometrial cancer (EC). However, the diagnostic accuracy of commonly utilized magnetic resonance imaging (MRI) techniques for this assessment exhibits considerable variability. This study aims to enhance preoperative discrimination of the absence or presence of MI by developing and validating a multimodal deep learning radiomics (MDLR) model based on MRI. Between March 2010 and February 2023, 1139 EC patients (age 54.771 ± 8.465 years; range 24-89 years) from five independent centers were enrolled retrospectively. We utilized ResNet18 to extract multi-scale deep learning features from T2-weighted imaging, followed by feature selection via the Mann-Whitney U test. Subsequently, a deep learning signature (DLS) was formulated using an integrated sparse Bayesian extreme learning machine. Furthermore, we developed a clinical model (CM) based on clinical characteristics and an MDLR model by integrating clinical characteristics with the DLS. The area under the curve (AUC) was used for evaluating the diagnostic performance of the models. Decision curve analysis (DCA) and the integrated discrimination index (IDI) were used to assess the clinical benefit and compare the predictive performance of the models. The MDLR model, comprising age, histopathologic grade, subjective MR findings (TMD and Reading for MI status), and the DLS, demonstrated the best predictive performance. The AUC values for MDLR in the training set, internal validation set, external validation set 1, and external validation set 2 were 0.899 (95% CI, 0.866-0.926), 0.874 (95% CI, 0.829-0.912), 0.862 (95% CI, 0.817-0.899) and 0.867 (95% CI, 0.806-0.914), respectively. The IDI and DCA showed higher diagnostic performance and clinical net benefit for the MDLR than for the CM or DLS, suggesting that MDLR may enhance decision-making support. The MDLR, which incorporates clinical characteristics and the DLS, could improve preoperative accuracy in discriminating the absence or presence of MI. This improvement may facilitate individualized treatment decision-making for EC.
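An illustrative sketch of the two ingredients named above: extracting pooled ResNet18 features from T2-weighted slices, then screening them with a Mann-Whitney U test. The toy tensors, labels, and p < 0.05 cutoff are assumptions; no trained weights or patient data are used.

```python
# Illustrative sketch: deep-feature extraction from T2w slices with ResNet18, then
# Mann-Whitney U screening. Data, preprocessing, and thresholds are placeholder assumptions.
import numpy as np
import torch
from torchvision.models import resnet18
from scipy.stats import mannwhitneyu

backbone = resnet18(weights=None)
backbone.fc = torch.nn.Identity()          # keep the 512-dim global-average-pooled features
backbone.eval()

# Toy batch: 20 single-channel T2w slices replicated to 3 channels
slices = torch.rand(20, 1, 224, 224).repeat(1, 3, 1, 1)
with torch.no_grad():
    features = backbone(slices).numpy()    # shape (20, 512)

labels = np.array([0] * 10 + [1] * 10)     # 0 = no MI, 1 = MI (synthetic)

# Keep features whose distributions differ between groups (p < 0.05)
keep = [j for j in range(features.shape[1])
        if mannwhitneyu(features[labels == 0, j], features[labels == 1, j]).pvalue < 0.05]
print(f"{len(keep)} of {features.shape[1]} deep features retained")
```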

Generalizable, sequence-invariant deep learning image reconstruction for subspace-constrained quantitative MRI.

Hu Z, Chen Z, Cao T, Lee HL, Xie Y, Li D, Christodoulou AG

PubMed · Jul 1, 2025
To develop a deep subspace learning network that can function across different pulse sequences. A contrast-invariant component-by-component (CBC) network structure was developed and compared against a previously reported spatiotemporal multicomponent (MC) structure for reconstructing MR Multitasking images. A total of 130, 167, and 16 subjects were imaged using T<sub>1</sub>, T<sub>1</sub>-T<sub>2</sub>, and T<sub>1</sub>-T<sub>2</sub>-T<sub>2</sub>*-fat fraction (FF) mapping sequences, respectively. We compared CBC and MC networks in matched-sequence experiments (same sequence for training and testing), then examined their cross-sequence performance and generalizability in unmatched-sequence experiments (different sequences for training and testing). A "universal" CBC network was also evaluated using mixed-sequence training (combining data from all three sequences). Evaluation metrics included image normalized root mean squared error and Bland-Altman analyses of end-diastolic maps, both versus iteratively reconstructed references. The proposed CBC showed significantly better normalized root mean squared error than MC in both matched-sequence and unmatched-sequence experiments (p < 0.001), fewer structural details in quantitative error maps, and tighter limits of agreement. CBC was more generalizable than MC (smaller performance loss; p = 0.006 in T<sub>1</sub> and p < 0.001 in T<sub>1</sub>-T<sub>2</sub> from matched-sequence testing to unmatched-sequence testing) and additionally allowed training of a single universal network to reconstruct images from any of the three pulse sequences. The mixed-sequence CBC network performed similarly to matched-sequence CBC in T<sub>1</sub> (p = 0.178) and T<sub>1</sub>-T<sub>2</sub> (p = 0.121), where training data were plentiful, and performed better in T<sub>1</sub>-T<sub>2</sub>-T<sub>2</sub>*-FF (p < 0.001), where training data were scarce. Contrast-invariant learning of spatial features rather than spatiotemporal features improves performance and generalizability, addresses data scarcity, and offers a pathway to universal supervised deep subspace learning.
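A small sketch of the reported evaluation metrics, computing a normalized RMSE and Bland-Altman bias and limits of agreement between a synthetic reconstructed map and its reference; the arrays and the range-based normalization are assumptions for illustration.

```python
# Illustrative sketch of NRMSE and Bland-Altman limits of agreement for quantitative maps.
# The synthetic arrays are placeholders for reconstructed and reference T1 maps.
import numpy as np

rng = np.random.default_rng(3)
reference = rng.uniform(300, 1500, size=(64, 64))            # reference T1 map (ms)
reconstructed = reference + rng.normal(0, 30, size=(64, 64))  # network output with small errors

nrmse = np.sqrt(np.mean((reconstructed - reference) ** 2)) / (reference.max() - reference.min())
print(f"NRMSE: {nrmse:.4f}")

diff = (reconstructed - reference).ravel()
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
print(f"Bland-Altman bias: {bias:.1f} ms, limits of agreement: [{bias - loa:.1f}, {bias + loa:.1f}] ms")
```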

Longitudinal twin growth discordance patterns and adverse perinatal outcomes.

Prasad S, Ayhan I, Mohammed D, Kalafat E, Khalil A

PubMed · Jul 1, 2025
Growth discordance in twin pregnancies is associated with increased perinatal morbidity and mortality, yet the patterns of discordance progression and the utility of Doppler assessments remain underinvestigated. The objective of this study was to conduct a longitudinal assessment of intertwin growth and Doppler discordance to identify possible distinct patterns and to investigate the predictive value of longitudinal discordance patterns for adverse perinatal outcomes in twin pregnancies. This retrospective cohort study included twin pregnancies followed and delivered at a tertiary hospital in London (United Kingdom) between 2010 and 2023. We included pregnancies with at least 3 ultrasound assessments after 18 weeks and delivery beyond 34 weeks' gestation. Monoamniotic twin pregnancies, pregnancies with twin-to-twin transfusion syndrome, genetic or structural abnormalities, or incomplete data were excluded. Data on chorionicity, biometry, Doppler indices, maternal characteristics, and obstetric and neonatal outcomes were extracted from electronic records. Doppler assessment included velocimetry of the umbilical artery, middle cerebral artery, and cerebroplacental ratio. Intertwin growth discordance was calculated for each scan. The primary outcome was a composite of perinatal mortality and neonatal morbidity. Statistical analysis involved multilevel mixed effects regression models and unsupervised machine learning algorithms, specifically k-means clustering, to identify distinct patterns of intertwin discordance and their predictive value. Predictive models were compared using the area under the receiver operating characteristic curve, calibration intercept, and slope, validated with repeated cross-validation. Analyses were performed using R, with significance set at P<.05. Data from 823 twin pregnancies (647 dichorionic, 176 monochorionic) were analyzed. Five distinct patterns of intertwin growth discordance were identified using an unsupervised learning algorithm that clustered twin pairs based on the progression and patterns of discordance over gestation: low-stable (n=204, 24.8%), mild-decreasing (n=171, 20.8%), low-increasing (n=173, 21.0%), mild-increasing (n=189, 23.0%), and high-stable (n=86, 10.4%). In the high-stable cluster, the rates of perinatal morbidity (46.5%, 40/86) and mortality (9.3%, 8/86) were significantly higher compared to the low-stable (reference) cluster (P<.001). A high-stable growth pattern was also associated with a significantly higher risk of composite adverse perinatal outcomes (odds ratio: 70.19, 95% confidence interval: 24.18-299.03, P<.001; adjusted odds ratio: 76.44, 95% confidence interval: 25.39-333.02, P<.001). The model integrating discordance pattern with cerebroplacental ratio discordance at the last ultrasound before delivery demonstrated superior predictive accuracy, evidenced by the highest area under the receiver operating characteristic curve of 0.802 (95% confidence interval: 0.712-0.892, P<.001), compared with discordance patterns alone (area under the receiver operating characteristic curve: 0.785, 95% confidence interval: 0.697-0.873), intertwin weight discordance at the last ultrasound prior to delivery (area under the receiver operating characteristic curve: 0.677, 95% confidence interval: 0.545-0.809), the combination of single measurements of estimated fetal weight and cerebroplacental ratio discordance at the last ultrasound prior to delivery (area under the receiver operating characteristic curve: 0.702, 95% confidence interval: 0.586-0.818), and a single measurement of cerebroplacental ratio discordance only at the last ultrasound (area under the receiver operating characteristic curve: 0.633, 95% confidence interval: 0.515-0.751). Using an unsupervised machine learning algorithm, we identified 5 distinct trajectories of intertwin fetal growth discordance. Consistently high discordance is associated with increased rates of adverse perinatal outcomes, with a dose-response relationship. Moreover, a predictive model integrating discordance trajectory and cerebroplacental ratio discordance at the last visit demonstrated superior predictive accuracy for the prediction of composite adverse perinatal outcomes, compared with either of these measurements alone or a single value of estimated fetal weight discordance at the last ultrasound prior to delivery.
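As a hedged sketch of the k-means trajectory clustering described above, the code below clusters synthetic intertwin discordance curves sampled on a common gestational-age grid into five groups; the grid, trajectory generator, and k = 5 are illustrative assumptions rather than the study's implementation.

```python
# Illustrative sketch of clustering longitudinal intertwin discordance trajectories with k-means.
# The gestational-age grid, synthetic trajectories, and k = 5 are assumptions for demonstration.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(11)
ga_grid = np.linspace(20, 36, 9)                 # gestational weeks used as a common grid

def make_trajectory(start, slope):
    """Synthetic discordance (%) curve: linear trend plus noise."""
    return start + slope * (ga_grid - 20) + rng.normal(0, 1, size=ga_grid.size)

# Synthetic discordance trajectories for 100 twin pairs
trajectories = np.vstack(
    [make_trajectory(rng.uniform(2, 20), rng.uniform(-0.3, 0.6)) for _ in range(100)]
)

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(trajectories)
for c in range(5):
    mean_curve = trajectories[kmeans.labels_ == c].mean(axis=0)
    print(f"cluster {c}: start {mean_curve[0]:.1f}% -> end {mean_curve[-1]:.1f}%")
```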

Multi-site, multi-vendor development and validation of a deep learning model for liver stiffness prediction using abdominal biparametric MRI.

Ali R, Li H, Zhang H, Pan W, Reeder SB, Harris D, Masch W, Aslam A, Shanbhogue K, Bernieh A, Ranganathan S, Parikh N, Dillman JR, He L

PubMed · Jul 1, 2025
Chronic liver disease (CLD) is a substantial cause of morbidity and mortality worldwide. Liver stiffness, as measured by MR elastography (MRE), is well accepted as a surrogate marker of liver fibrosis. To develop and validate deep learning (DL) models for predicting MRE-derived liver stiffness using routine clinical non-contrast abdominal T1-weighted (T1w) and T2-weighted (T2w) data from multiple institutions/system manufacturers in pediatric and adult patients. We identified pediatric and adult patients with known or suspected CLD from four institutions, who underwent clinical MRI with MRE from 2011 to 2022. We used T1w and T2w data to train DL models for liver stiffness classification. Patients were categorized into two groups for binary classification using liver stiffness thresholds (≥ 2.5 kPa, ≥ 3.0 kPa, ≥ 3.5 kPa, ≥ 4 kPa, or ≥ 5 kPa), reflecting various degrees of liver stiffening. We identified 4695 MRI examinations from 4295 patients (mean ± SD age, 47.6 ± 18.7 years; 428 (10.0%) pediatric; 2159 males [50.2%]). With a primary liver stiffness threshold of 3.0 kPa, our model correctly classified patients into no/minimal (< 3.0 kPa) vs moderate/severe (≥ 3.0 kPa) liver stiffness with AUROCs of 0.83 (95% CI: 0.82, 0.84) in our internal multi-site cross-validation (CV) experiment, 0.82 (95% CI: 0.80, 0.84) in our temporal hold-out validation experiment, and 0.79 (95% CI: 0.75, 0.81) in our external leave-one-site-out CV experiment. The developed model is publicly available ( https://github.com/almahdir1/Multi-channel-DeepLiverNet2.0.git ). Our DL models exhibited reasonable diagnostic performance for categorical classification of liver stiffness on a large, diverse dataset using T1w and T2w MRI data.
Question: Can DL models accurately predict liver stiffness using routine clinical biparametric MRI in pediatric and adult patients with CLD?
Findings: DeepLiverNet2.0 used biparametric MRI data to classify liver stiffness, achieving AUROCs of 0.83, 0.82, and 0.79 for multi-site CV, hold-out validation, and external CV.
Clinical relevance: Our DeepLiverNet2.0 AI model can categorically classify the severity of liver stiffening using anatomic biparametric MR images in children and young adults. Model refinements and incorporation of clinical features may decrease the need for MRE.
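A brief sketch of the threshold-based labeling and AUROC evaluation described above, on synthetic stiffness values and model scores; the kPa cut-points mirror those reported, but the data and score distributions are invented.

```python
# Illustrative sketch of binarizing MRE-derived liver stiffness at several kPa thresholds
# and scoring a model's predictions with AUROC. Stiffness values and scores are synthetic.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
stiffness_kpa = rng.lognormal(mean=1.0, sigma=0.4, size=500)   # "measured" MRE stiffness
model_score = stiffness_kpa + rng.normal(0, 0.8, size=500)     # imperfect model output

for threshold in (2.5, 3.0, 3.5, 4.0, 5.0):
    labels = (stiffness_kpa >= threshold).astype(int)
    if labels.min() == labels.max():
        continue  # skip degenerate thresholds with a single class
    print(f">= {threshold} kPa: AUROC = {roc_auc_score(labels, model_score):.3f}")
```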

Malignancy risk stratification for pulmonary nodules: comparing a deep learning approach to multiparametric statistical models in different disease groups.

Piskorski L, Debic M, von Stackelberg O, Schlamp K, Welzel L, Weinheimer O, Peters AA, Wielpütz MO, Frauenfelder T, Kauczor HU, Heußel CP, Kroschke J

PubMed · Jul 1, 2025
Incidentally detected pulmonary nodules present a challenge in clinical routine, creating demand for reliable decision-support systems for risk classification. We aimed to evaluate the performance of the lung-cancer-prediction convolutional neural network (LCP-CNN), a deep learning-based approach, in comparison to multiparametric statistical methods (Brock model and Lung-RADS®) for risk classification of nodules in cohorts with different risk profiles and underlying pulmonary diseases. Retrospective analysis was conducted on non-contrast and contrast-enhanced CT scans containing pulmonary nodules measuring 5-30 mm. Ground truth was defined by histology or follow-up stability. The final analysis was performed on 297 patients with 422 eligible nodules, of which 105 nodules were malignant. Classification performance of the LCP-CNN, Brock model, and Lung-RADS® was evaluated in terms of diagnostic accuracy measurements including ROC analysis for different subcohorts (total, screening, emphysema, and interstitial lung disease). LCP-CNN demonstrated superior performance compared to the Brock model in the total and screening cohorts (AUC 0.92 (95% CI: 0.89-0.94) and 0.93 (95% CI: 0.89-0.96)). Superior sensitivity of LCP-CNN was demonstrated compared to the Brock model and Lung-RADS® in the total, screening, and emphysema cohorts for a risk threshold of 5%. At a threshold of 65%, LCP-CNN also showed superior sensitivity across all disease groups compared with the Brock model; compared with Lung-RADS®, sensitivity was better or equal. No significant differences in the performance of LCP-CNN were found between subcohorts. This study offers further evidence of the potential to integrate deep learning-based decision support systems into pulmonary nodule classification workflows, irrespective of the individual patient risk profile and underlying pulmonary disease.
Question: Is a deep-learning approach (LCP-CNN) superior to multiparametric models (Brock model, Lung-RADS®) in classifying pulmonary nodule risk across varied patient profiles?
Findings: LCP-CNN shows superior performance in risk classification of pulmonary nodules compared with multiparametric models, with no significant impact of risk profiles or structural pulmonary disease on its performance.
Clinical relevance: LCP-CNN offers efficiency and accuracy, addressing limitations of traditional models, such as variations in manual measurements or lack of patient data, while producing robust results. Such approaches may therefore impact clinical work by complementing or even replacing current approaches.
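As an illustrative sketch of comparing two nodule risk scores at the 5% malignancy threshold mentioned above, the code below computes sensitivity and specificity for two synthetic scores; the score distributions are invented and merely stand in for deep learning and statistical model outputs.

```python
# Illustrative sketch of comparing two nodule risk scores at a fixed 5% malignancy threshold.
# The synthetic scores are placeholders; no real model outputs or patient data are used.
import numpy as np

rng = np.random.default_rng(9)
malignant = np.array([1] * 105 + [0] * 317)                              # 105 malignant of 422 nodules
score_a = np.clip(0.4 * malignant + rng.normal(0.1, 0.15, 422), 0, 1)    # "deep learning" score
score_b = np.clip(0.3 * malignant + rng.normal(0.1, 0.20, 422), 0, 1)    # "statistical" score

def sens_spec(score, y, threshold=0.05):
    pred = (score >= threshold).astype(int)
    sens = (pred[y == 1] == 1).mean()
    spec = (pred[y == 0] == 0).mean()
    return sens, spec

for name, score in (("model A", score_a), ("model B", score_b)):
    sens, spec = sens_spec(score, malignant)
    print(f"{name}: sensitivity {sens:.2f}, specificity {spec:.2f} at 5% threshold")
```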