
Establishing a Deep Learning Model That Integrates Pretreatment and Midtreatment Computed Tomography to Predict Treatment Response in Non-Small Cell Lung Cancer.

Chen X, Meng F, Zhang P, Wang L, Yao S, An C, Li H, Zhang D, Li H, Li J, Wang L, Liu Y

PubMed | Aug 1, 2025
Patients with identical stages or similar tumor volumes can vary significantly in their responses to radiation therapy (RT) due to individual characteristics, making personalized RT for non-small cell lung cancer (NSCLC) challenging. This study aimed to develop a deep learning model integrating pretreatment and midtreatment computed tomography (CT) to predict treatment response in NSCLC patients. We retrospectively collected data from 168 NSCLC patients across 3 hospitals. Data from Shanghai General Hospital (SGH, 35 patients) and Shanxi Cancer Hospital (SCH, 93 patients) were used for model training and internal validation, while data from Linfen Central Hospital (LCH, 40 patients) were used for external validation. Deep learning, radiomics, and clinical features were extracted to establish a varying-time-interval long short-term memory network for response prediction. Furthermore, we derived a model-deduced personalized dose escalation (DE) for patients predicted to have suboptimal gross tumor volume regression. The area under the receiver operating characteristic curve (AUC) and predicted absolute error were used to evaluate the predicted Response Evaluation Criteria in Solid Tumors classification and the proportion of gross tumor volume residual. DE was calculated as the biological equivalent dose using an α/β ratio of 10 Gy. The model using only pretreatment CT achieved AUCs of 0.762 and 0.687 in internal and external validation, respectively, whereas the model integrating both pretreatment and midtreatment CT achieved AUCs of 0.869 and 0.798, with predicted absolute errors of 0.137 and 0.185, respectively. We performed personalized DE for 29 patients. Their original biological equivalent doses were approximately 72 Gy, within the range of 71.6 Gy to 75 Gy. DE ranged from 77.7 to 120 Gy for these 29 patients, with 17 patients exceeding 100 Gy and 8 patients reaching the model's preset upper limit of 120 Gy. Combining pretreatment and midtreatment CT enhances prediction performance for RT response and offers a promising approach for personalized DE in NSCLC.
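
For context, the biological equivalent dose used in the DE calculation follows the standard linear-quadratic formula, BED = n·d·(1 + d/(α/β)). A minimal sketch in Python, with a hypothetical 30 × 2 Gy fractionation chosen only because it reproduces the ~72 Gy baseline quoted above:

```python
def bed(n_fractions: int, dose_per_fraction: float, alpha_beta: float = 10.0) -> float:
    """Biological equivalent dose (Gy) under the linear-quadratic model."""
    total_dose = n_fractions * dose_per_fraction
    return total_dose * (1.0 + dose_per_fraction / alpha_beta)

# Hypothetical conventional schedule: 30 fractions of 2 Gy
print(bed(30, 2.0))  # 60 * (1 + 2/10) = 72.0 Gy
```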

Moving Beyond CT Body Composition Analysis: Using Style Transfer for Bringing CT-Based Fully-Automated Body Composition Analysis to T2-Weighted MRI Sequences.

Haubold J, Pollok OB, Holtkamp M, Salhöfer L, Schmidt CS, Bojahr C, Straus J, Schaarschmidt BM, Borys K, Kohnke J, Wen Y, Opitz M, Umutlu L, Forsting M, Friedrich CM, Nensa F, Hosch R

PubMed | Aug 1, 2025
Deep learning for body composition analysis (BCA) is gaining traction in clinical research, offering rapid and automated ways to measure body features like muscle or fat volume. However, most current methods prioritize computed tomography (CT) over magnetic resonance imaging (MRI). This study presents a deep learning approach for automatic BCA using MR T2-weighted sequences. Initial BCA segmentations (10 body regions and 4 body parts) were generated by mapping CT segmentations from the body and organ analysis (BOA) model onto synthetic MR images created using an in-house-trained CycleGAN. In total, 30 synthetic data pairs were used to train an initial nnU-Net V2 in 3D, and this preliminary model was then applied to segment 120 real T2-weighted MRI sequences from 120 patients (46% female) with a median age of 56 (interquartile range, 17.75), generating early segmentation proposals. These proposals were refined by human annotators, and nnU-Net V2 2D and 3D models were trained using 5-fold cross-validation on this optimized dataset of real MR images. Performance was evaluated using Sørensen-Dice, Surface Dice, and Hausdorff Distance metrics, including 95% confidence intervals, for cross-validation and ensemble models. The 3D ensemble segmentation model achieved the highest Dice scores for the body region classes: bone 0.926 (95% confidence interval [CI], 0.914-0.937), muscle 0.968 (95% CI, 0.961-0.975), subcutaneous fat 0.98 (95% CI, 0.971-0.986), nervous system 0.973 (95% CI, 0.965-0.98), thoracic cavity 0.978 (95% CI, 0.969-0.984), abdominal cavity 0.989 (95% CI, 0.986-0.991), mediastinum 0.92 (95% CI, 0.901-0.936), pericardium 0.945 (95% CI, 0.924-0.96), brain 0.966 (95% CI, 0.927-0.989), and glands 0.905 (95% CI, 0.886-0.921). Furthermore, the body part 2D ensemble model reached the highest Dice scores for all labels: arms 0.952 (95% CI, 0.937-0.965), head + neck 0.965 (95% CI, 0.953-0.976), legs 0.978 (95% CI, 0.968-0.988), and torso 0.99 (95% CI, 0.988-0.991). The overall average Dice scores of the ensemble models across body parts (2D = 0.971, 3D = 0.969, P = ns) and body regions (2D = 0.935, 3D = 0.955, P < 0.001) indicate stable performance across all classes. The presented approach facilitates efficient and automated extraction of BCA parameters from T2-weighted MRI sequences, providing precise and detailed body composition information across various regions and body parts.
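
As a rough illustration of the headline metric, a per-case Sørensen-Dice computation with a percentile-bootstrap confidence interval; the paper does not state how its CIs were derived, so the bootstrap here is an assumption:

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Sørensen-Dice coefficient for binary segmentation masks."""
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

def dice_with_ci(per_case_scores, n_boot=10_000, alpha=0.05, seed=0):
    """Mean Dice with a percentile-bootstrap 95% CI over cases."""
    scores = np.asarray(per_case_scores, dtype=float)
    rng = np.random.default_rng(seed)
    boot_means = [rng.choice(scores, size=scores.size, replace=True).mean()
                  for _ in range(n_boot)]
    lo, hi = np.percentile(boot_means, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return scores.mean(), lo, hi
```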

Deep Learning-Based Signal Amplification of T1-Weighted Single-Dose Images Improves Metastasis Detection in Brain MRI.

Haase R, Pinetz T, Kobler E, Bendella Z, Zülow S, Schievelkamp AH, Schmeel FC, Panahabadi S, Stylianou AM, Paech D, Foltyn-Dumitru M, Wagner V, Schlamp K, Heussel G, Holtkamp M, Heussel CP, Vahlensieck M, Luetkens JA, Schlemmer HP, Haubold J, Radbruch A, Effland A, Deuschl C, Deike K

PubMed | Aug 1, 2025
Double-dose contrast-enhanced brain imaging improves tumor delineation and detection of occult metastases but is limited by concerns about gadolinium-based contrast agents' effects on patients and the environment. The purpose of this study was to test the benefit of deep learning-based contrast signal amplification in true single-dose T1-weighted (T-SD) images, creating artificial double-dose (A-DD) images, for metastasis detection in brain magnetic resonance imaging. In this prospective, multicenter study, a deep learning-based method originally trained on noncontrast, low-dose, and T-SD brain images was applied to T-SD images of 30 participants (mean age ± SD, 58.5 ± 11.8 years; 23 women) acquired externally between November 2022 and June 2023. Four readers with different levels of experience independently reviewed T-SD and A-DD images for metastases, with 4 weeks between readings. A reference reader reviewed additionally acquired true double-dose images to determine any metastases present. Performance was compared using mid-p McNemar tests for sensitivity and Wilcoxon signed rank tests for false-positive findings. All readers found more metastases using A-DD images. The 2 experienced neuroradiologists achieved the same level of sensitivity using T-SD images (62 of 91 metastases, 68.1%). While the increase in sensitivity using A-DD images was only descriptive (not statistically significant) for 1 of them (A-DD: 65 of 91 metastases, +3.3%, P = 0.424), the second neuroradiologist benefited significantly, with a sensitivity increase of 12.1% (73 of 91 metastases, P = 0.008). The 2 less experienced readers (1 resident and 1 fellow) both found significantly more metastases on A-DD images (resident, T-SD: 61.5%, A-DD: 68.1%, P = 0.039; fellow, T-SD: 58.2%, A-DD: 70.3%, P = 0.008). They were therefore able to use A-DD images to raise their sensitivity to the neuroradiologists' initial level on regular T-SD images. False-positive findings did not differ significantly between sequences, although readers showed descriptively more false-positive findings on A-DD images. The benefit in sensitivity particularly applied to metastases ≤5 mm (5.7%-17.3% increase in sensitivity). A-DD images can improve the detectability of brain metastases without a significant loss of precision and could therefore represent a potentially valuable addition to regular single-dose brain imaging.
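
The mid-p McNemar test referenced above compares paired reader decisions on the same lesions through the discordant counts only; a sketch of the usual formulation (variable names are illustrative, not from the paper):

```python
from scipy.stats import binom

def mcnemar_midp(b: int, c: int) -> float:
    """Mid-p McNemar test for paired proportions.

    b: lesions detected on A-DD but missed on T-SD
    c: lesions detected on T-SD but missed on A-DD
    """
    n, k = b + c, min(b, c)
    if n == 0:
        return 1.0
    p_exact = min(1.0, 2.0 * binom.cdf(k, n, 0.5))
    return p_exact - binom.pmf(k, n, 0.5)  # mid-p: remove half the point mass
```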

First comparison between artificial intelligence-guided coronary computed tomography angiography versus single-photon emission computed tomography testing for ischemia in clinical practice.

Cho GW, Sayed S, D'Costa Z, Karlsberg DW, Karlsberg RP

PubMed | Aug 1, 2025
Noninvasive cardiac testing with coronary computed tomography angiography (CCTA) and single-photon emission computed tomography (SPECT) is becoming an alternative to invasive angiography for the evaluation of obstructive coronary artery disease. We aimed to evaluate whether a novel artificial intelligence (AI)-assisted CCTA program is comparable to SPECT imaging for ischemic testing. CCTA images were analyzed using an AI convolutional neural network machine-learning-based model, atherosclerosis imaging-quantitative computed tomography (AI-QCT ISCHEMIA). A total of 183 patients (75 females and 108 males; average age, 60.8 ± 12.3 years) were selected. All patients underwent AI-QCT ISCHEMIA-augmented CCTA, with 60 undergoing concurrent SPECT and 16 having invasive coronary angiograms. Eight studies were excluded from analysis due to incomplete data or coronary anomalies. A total of 175 patients (95%) had CCTA performed that was deemed acceptable for AI-QCT ISCHEMIA interpretation. Compared to invasive angiography, AI-QCT ISCHEMIA-driven CCTA showed a sensitivity of 75% and specificity of 70% for predicting coronary ischemia, versus 70% and 53%, respectively, for SPECT. The negative predictive value was high for female patients when using AI-QCT ISCHEMIA compared to SPECT (91% vs. 68%, P = 0.042). Areas under the receiver operating characteristic curves were similar between both modalities (0.81 for AI-CCTA, 0.75 for SPECT, P = 0.526). When comparing both modalities, the correlation coefficient was r = 0.71 (P < 0.04). AI-powered CCTA is a viable alternative to SPECT for detecting myocardial ischemia in patients with low- to intermediate-risk coronary artery disease, with significant positive and negative correlation in results. For patients who underwent confirmatory invasive angiography, the results of AI-CCTA and SPECT imaging were comparable. Future research focusing on prospective studies involving larger and more diverse patient populations is warranted to further investigate the benefits offered by AI-driven CCTA.
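
The reported rates follow directly from a 2×2 table against the invasive-angiography reference; the counts in the sketch below are hypothetical, chosen only so the output matches the quoted 75%/70% for AI-QCT:

```python
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard test-performance measures from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical counts; the abstract reports only the rates.
print(diagnostic_metrics(tp=9, fn=3, tn=7, fp=3))
# {'sensitivity': 0.75, 'specificity': 0.7, 'ppv': 0.75, 'npv': 0.7}
```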

Deep Learning Reconstruction Combined With Conventional Acceleration Improves Image Quality of 3 T Brain MRI and Does Not Impact Quantitative Diffusion Metrics.

Wilpert C, Russe MF, Weiss J, Voss C, Rau S, Strecker R, Reisert M, Bedin R, Urbach H, Zaitsev M, Bamberg F, Rau A

PubMed | Aug 1, 2025
Deep learning reconstruction of magnetic resonance imaging (MRI) makes it possible either to improve the image quality of accelerated sequences or to generate high-resolution data. We evaluated the interaction of conventional acceleration and Deep Resolve Boost (DRB)-based reconstruction techniques in a single-shot echo-planar imaging (ssEPI) diffusion-weighted imaging (DWI) sequence with respect to image quality features in cerebral 3 T brain MRI and compared it with a state-of-the-art DWI sequence. In this prospective study, 24 patients received a standard-of-care ssEPI DWI and 5 additional adapted ssEPI DWI sequences, 3 of them with DRB reconstruction. Qualitative analysis encompassed rating of image quality, noise, sharpness, and artifacts. Quantitative analysis compared apparent diffusion coefficient (ADC) values region-wise between the different DWI sequences. Intraclass correlations, paired-sample t tests, Wilcoxon signed rank tests, and weighted Cohen κ were used. Compared with the reference standard, acquisition time was significantly reduced in accelerated DWI, from 75 seconds by up to 50% (to 39 seconds; P < 0.001). All tested DRB-reconstructed sequences showed significantly improved image quality and sharpness and reduced noise (P < 0.001). The highest image quality was observed for the combination of conventional acceleration and DL reconstruction. In individual slices, more artifacts were observed for DRB-reconstructed sequences (P < 0.001). While high consistency was generally found between ADC values, increasing differences in ADC values were noted with increasing acceleration and application of DRB. Falsely pathological ADCs were rarely observed near the frontal poles and optic chiasm, attributable to susceptibility-related artifacts from the adjacent sinuses. In this comparative study, we found that the combination of conventional acceleration and DRB reconstruction improves image quality and enables faster acquisition of ssEPI DWI. Nevertheless, a tradeoff between increased acceleration, with its risk of stronger artifacts, and high resolution, with its longer acquisition time, needs to be considered, especially for application in cerebral MRI.
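
The ADC values compared region-wise above come from the standard mono-exponential two-point fit; a sketch, with the b-value assumed since the protocol details are not in the abstract:

```python
import numpy as np

def adc_map(s_b0: np.ndarray, s_b: np.ndarray, b: float = 1000.0) -> np.ndarray:
    """Apparent diffusion coefficient (mm^2/s) from a b=0 and a high-b image."""
    eps = 1e-6  # keep background voxels out of the logarithm
    return np.log(np.maximum(s_b0, eps) / np.maximum(s_b, eps)) / b
```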

Multimodal multiphasic preoperative image-based deep-learning predicts HCC outcomes after curative surgery.

Hui RW, Chiu KW, Lee IC, Wang C, Cheng HM, Lu J, Mao X, Yu S, Lam LK, Mak LY, Cheung TT, Chia NH, Cheung CC, Kan WK, Wong TC, Chan AC, Huang YH, Yuen MF, Yu PL, Seto WK

PubMed | Aug 1, 2025
HCC recurrence frequently occurs after curative surgery. Histological microvascular invasion (MVI) predicts recurrence but cannot provide preoperative prognostication, whereas clinical prediction scores have variable performance. Recurr-NET, a multimodal multiphasic residual-network random survival forest deep-learning model incorporating preoperative CT and clinical parameters, was developed to predict HCC recurrence. Preoperative triphasic CT scans were retrieved from patients with resected histology-confirmed HCC from 4 centers in Hong Kong (internal cohort). The internal cohort was randomly divided in an 8:2 ratio into training and internal validation sets. External testing was performed in an independent cohort from Taiwan. Among 1231 patients (age 62.4 years, 83.1% male, 86.8% viral hepatitis, median follow-up 65.1 months), cumulative HCC recurrence rates at years 2 and 5 were 41.8% and 56.4%, respectively. Recurr-NET achieved excellent accuracy in predicting recurrence from years 1 to 5 (internal cohort AUROC 0.770-0.857; external AUROC 0.758-0.798), significantly outperforming MVI (internal AUROC 0.518-0.590; external AUROC 0.557-0.615) and multiple clinical risk scores (ERASL-PRE, ERASL-POST, DFT, and Shim scores; internal AUROC 0.523-0.587, external AUROC 0.524-0.620) (all p < 0.001). Recurr-NET was superior to MVI in stratifying recurrence risks at year 2 (internal: 72.5% vs. 50.0% with MVI; external: 65.3% vs. 46.6% with MVI) and year 5 (internal: 86.4% vs. 62.5% with MVI; external: 81.4% vs. 63.8% with MVI) (all p < 0.001). Recurr-NET was also superior to MVI in stratifying liver-related and all-cause mortality (all p < 0.001). The performance of Recurr-NET remained robust in subgroup analyses. Recurr-NET accurately predicted HCC recurrence, outperforming MVI and clinical prediction scores, highlighting its potential in preoperative prognostication.
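
The year-by-year AUROCs quoted above are time-dependent discrimination measures; a simplified fixed-horizon approximation (the paper's exact estimator is not specified in the abstract) might look like:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auroc_at_year(risk_score, time_months, recurred, year: int) -> float:
    """AUROC for recurrence within a fixed horizon.

    Patients censored recurrence-free before the horizon are excluded --
    a crude approximation to a proper time-dependent AUROC.
    """
    horizon = 12.0 * year
    time_months = np.asarray(time_months, dtype=float)
    recurred = np.asarray(recurred, dtype=bool)
    event = recurred & (time_months <= horizon)
    evaluable = event | (time_months >= horizon)
    return roc_auc_score(event[evaluable], np.asarray(risk_score)[evaluable])
```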

M4CXR: Exploring Multitask Potentials of Multimodal Large Language Models for Chest X-Ray Interpretation.

Park J, Kim S, Yoon B, Hyun J, Choi K

PubMed | Aug 1, 2025
The rapid evolution of artificial intelligence, especially in large language models (LLMs), has significantly impacted various domains, including healthcare. In chest X-ray (CXR) analysis, previous studies have employed LLMs, but with limitations: either underutilizing the LLMs' capability for multitask learning or lacking clinical accuracy. This article presents M4CXR, a multimodal LLM designed to enhance CXR interpretation. The model is trained on a visual instruction-following dataset that integrates various task-specific datasets in a conversational format. As a result, the model supports multiple tasks such as medical report generation (MRG), visual grounding, and visual question answering (VQA). M4CXR achieves state-of-the-art clinical accuracy in MRG by employing a chain-of-thought (CoT) prompting strategy, in which it identifies findings in CXR images and subsequently generates the corresponding reports. The model is adaptable to various MRG scenarios depending on the available inputs, such as single-image, multi-image, and multi-study contexts. In addition to MRG, M4CXR performs visual grounding at a level comparable to specialized models and demonstrates outstanding performance in VQA. Both quantitative and qualitative assessments reveal M4CXR's versatility in MRG, visual grounding, and VQA, while consistently maintaining clinical accuracy.
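
The CoT strategy described — identify findings first, then generate the report conditioned on them — can be sketched as two chained calls; `chat` below is a hypothetical stand-in for whatever multimodal LLM interface is in use, not an API from the paper:

```python
def generate_report(chat, cxr_image):
    """Two-stage CoT prompting: findings first, then the report."""
    findings = chat(
        image=cxr_image,
        prompt="List the radiographic findings visible in this chest X-ray.",
    )
    report = chat(
        image=cxr_image,
        prompt=f"Findings identified: {findings}\n"
               "Write the corresponding radiology report.",
    )
    return findings, report
```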

Deep learning model for automated segmentation of sphenoid sinus and middle skull base structures in CBCT volumes using nnU-Net v2.

Gülşen İT, Kuran A, Evli C, Baydar O, Dinç Başar K, Bilgir E, Çelik Ö, Bayrakdar İŞ, Orhan K, Acu B

PubMed | Aug 1, 2025
The purpose of this study was to develop a deep learning model based on nnU-Net v2 for the automated segmentation of the sphenoid sinus and middle skull base anatomic structures in cone-beam computed tomography (CBCT) volumes, and to evaluate the model's performance. In this retrospective study, the sphenoid sinus and surrounding anatomical structures in 99 CBCT scans were annotated using web-based labeling software. Model training was conducted using the nnU-Net v2 deep learning model with a learning rate of 0.01 for 1000 epochs. The performance of the model in automatically segmenting these anatomical structures in CBCT scans was evaluated using a series of metrics, including accuracy, precision, recall, Dice coefficient (DC), 95% Hausdorff distance (95% HD), intersection over union (IoU), and AUC. The developed deep learning model demonstrated a high level of success in segmenting the sphenoid sinus, foramen rotundum, and Vidian canal. On evaluation of the DC values, the model segmented the sphenoid sinus best, with a DC value of 0.96. The nnU-Net v2-based deep learning model achieved high segmentation performance for the sphenoid sinus, foramen rotundum, and Vidian canal within the middle skull base, with the highest DC observed for the sphenoid sinus (DC: 0.96). However, the model demonstrated limited performance in segmenting other foramina of the middle skull base, indicating the need for further optimization for these structures.
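
Of the reported metrics, the overlap measures are straightforward to state precisely; a minimal sketch for binary masks (the 95% HD is usually taken as the 95th percentile of surface-to-surface distances and is best left to a dedicated surface-distance library):

```python
import numpy as np

def overlap_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Dice coefficient and intersection over union for binary masks."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    if union == 0:  # both masks empty: define perfect agreement
        return {"dice": 1.0, "iou": 1.0}
    return {
        "dice": 2.0 * inter / (pred.sum() + truth.sum()),
        "iou": inter / union,
    }
```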

FOCUS-DWI improves prostate cancer detection through deep learning reconstruction with IQMR technology.

Zhao Y, Xie XL, Zhu X, Huang WN, Zhou CW, Ren KX, Zhai RY, Wang W, Wang JW

PubMed | Aug 1, 2025
This study explored the effects of using Intelligent Quick Magnetic Resonance (IQMR) image post-processing on image quality in Field of View Optimized and Constrained Single-Shot Diffusion-Weighted Imaging (FOCUS-DWI) sequences for prostate cancer detection, and assessed its efficacy in distinguishing malignant from benign lesions. The clinical data and MRI images from 62 patients with prostate masses (31 benign and 31 malignant) were retrospectively analyzed. Axial T2-weighted imaging with fat saturation (T2WI-FS) and FOCUS-DWI sequences were acquired, and the FOCUS-DWI images were processed using the IQMR post-processing system to generate IQMR-FOCUS-DWI images. Two independent radiologists undertook subjective scoring, grading using the Prostate Imaging Reporting and Data System (PI-RADS), diagnosis of benign and malignant lesions, and diagnostic confidence scoring for images from the FOCUS-DWI and IQMR-FOCUS-DWI sequences. Additionally, quantitative analyses, specifically, the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM), were conducted using T2WI-FS as the reference standard. The apparent diffusion coefficients (ADCs) of malignant and benign lesions were compared between the two imaging sequences. Spearman correlation coefficients were calculated to evaluate the associations between diagnostic confidence scores and diagnostic accuracy rates of the two sequence groups, as well as between the ADC values of malignant lesions and Gleason grading in the two sequence groups. Receiver operating characteristic (ROC) curves were utilized to assess the efficacy of ADC in distinguishing lesions. The qualitative analysis revealed that IQMR-FOCUS-DWI images showed significantly better noise suppression, reduced geometric distortion, and enhanced overall quality relative to the FOCUS-DWI images (P < 0.001). There was no significant difference in the PI-RADS scores between IQMR-FOCUS-DWI and FOCUS-DWI images (P = 0.0875), while the diagnostic confidence scores of IQMR-FOCUS-DWI sequences were markedly higher than those of FOCUS-DWI sequences (P = 0.0002). The diagnostic results of the FOCUS-DWI sequences for benign and malignant prostate lesions were consistent with those of the pathological results (P < 0.05), as were those of the IQMR-FOCUS-DWI sequences (P < 0.05). The quantitative analysis indicated that the PSNR, SSIM, and ADC values were markedly greater in IQMR-FOCUS-DWI images relative to FOCUS-DWI images (P < 0.01). In both imaging sequences, benign lesions exhibited ADC values markedly greater than those of malignant lesions (P < 0.001). The diagnostic confidence scores of both groups of sequences were significantly positively correlated with the diagnostic accuracy rate. In malignant lesions, the ADC values of the FOCUS-DWI sequences showed moderate negative correlations with the Gleason grading, while the ADC values of the IQMR-FOCUS-DWI sequences were strongly negatively associated with the Gleason grading. ROC curves indicated the superior diagnostic performance of IQMR-FOCUS-DWI (AUC = 0.941) compared to FOCUS-DWI (AUC = 0.832) for differentiating prostate lesions (P = 0.0487). IQMR-FOCUS-DWI significantly enhances image quality and improves diagnostic accuracy for benign and malignant prostate lesions compared to conventional FOCUS-DWI.
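
The quantitative comparison uses PSNR and SSIM with T2WI-FS as the reference; a sketch with scikit-image, assuming the DWI and T2WI-FS images have already been registered and resampled to a common grid (a step the abstract does not detail):

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def image_quality(reference: np.ndarray, test: np.ndarray) -> dict:
    """PSNR and SSIM of a test image against a reference on the same grid."""
    data_range = float(reference.max() - reference.min())
    return {
        "psnr": peak_signal_noise_ratio(reference, test, data_range=data_range),
        "ssim": structural_similarity(reference, test, data_range=data_range),
    }
```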

Natural language processing and LLMs in liver imaging: a practical review of clinical applications.

López-Úbeda P, Martín-Noguerol T, Luna A

PubMed | Aug 1, 2025
Liver diseases pose a significant global health challenge due to their silent progression and high mortality. Proper interpretation of radiology reports is essential for the evaluation and management of these conditions but is limited by variability in reporting styles and the complexity of unstructured medical language. In this context, Natural Language Processing (NLP) techniques and Large Language Models (LLMs) have emerged as promising tools to extract relevant clinical information from unstructured liver radiology reports. This work reviews, from a practical point of view, the current state of NLP and LLM applications for liver disease classification, clinical feature extraction, diagnostic support, and staging from reports. It also discusses existing limitations, such as the need for high-quality annotated data, lack of explainability, and challenges in clinical integration. With responsible and validated implementation, these technologies have the potential to transform liver clinical management by enabling faster and more accurate diagnoses and optimizing radiology workflows, ultimately improving patient care in liver diseases.
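
As a concrete, if deliberately simple, example of the kind of information extraction the review covers: a rule-based baseline pulling LI-RADS categories out of free-text liver reports. The LLM approaches discussed generalize far beyond such fixed patterns, which is precisely their appeal:

```python
import re

def extract_li_rads(report_text: str) -> list[str]:
    """Find LI-RADS category mentions (LR-1..LR-5, LR-M, LR-TIV) in a report."""
    return re.findall(r"\bLR-(?:[1-5]|M|TIV)\b", report_text, flags=re.IGNORECASE)

print(extract_li_rads("Segment VII observation compatible with LR-4; no LR-TIV."))
# ['LR-4', 'LR-TIV']
```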