
MultiD4CAD: Multimodal Dataset composed of CT and Clinical Features for Coronary Artery Disease Analysis.

Prinzi F, Militello C, Sollami G, Toia P, La Grutta L, Vitabile S

PubMed · Sep 26 2025
Multimodal datasets offer valuable support for developing Clinical Decision Support Systems (CDSS), which leverage predictive models to enhance clinicians' decision-making. In this observational study, we present MultiD4CAD, a dataset of patients with suspected Coronary Artery Disease (CAD) comprising imaging and clinical data. The imaging data, obtained from Coronary Computed Tomography Angiography (CCTA), include epicardial (EAT) and pericoronary (PAT) adipose tissue segmentations; these metabolically active fat tissues play a key role in cardiovascular disease. The clinical data include a set of biomarkers recognized as CAD risk factors. The validated EAT and PAT segmentations make the dataset suitable for training predictive models based on radiomics and deep learning architectures, and the inclusion of CAD labels allows its use in supervised learning algorithms to predict CAD outcomes. MultiD4CAD has revealed important correlations between imaging features, clinical biomarkers, and CAD status. The article concludes by discussing challenges, such as classification, segmentation, radiomics, and deep learning tasks, that can be investigated and validated using the proposed dataset.
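For readers wanting to experiment with this kind of data, the sketch below shows a minimal supervised pipeline of the sort the abstract describes: radiomic features from EAT/PAT segmentations fused with clinical biomarkers and fed to a cross-validated classifier. All feature names and values are hypothetical placeholders, not the actual MultiD4CAD schema.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 150
data = pd.DataFrame({
    "eat_volume": rng.normal(120, 30, n),     # EAT radiomic feature (placeholder)
    "pat_mean_hu": rng.normal(-75, 10, n),    # PAT attenuation feature (placeholder)
    "age": rng.normal(63, 9, n),              # clinical biomarkers (placeholders)
    "ldl": rng.normal(130, 25, n),
    "cad_label": rng.integers(0, 2, n),       # CAD outcome label (placeholder)
})

X = data.drop(columns="cad_label")
y = data["cad_label"]

model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])

# Cross-validated AUC as a first baseline on the fused imaging + clinical feature set.
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"5-fold ROC-AUC: {auc.mean():.3f} +/- {auc.std():.3f}")
```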

Deep learning-driven contactless ECG in MRI via beat pilot tone for motion-resolved image reconstruction and heart rate monitoring.

Sun H, Ding Q, Zhong S, Zhang Z

PubMed · Sep 26 2025
Electrocardiogram (ECG) is crucial for synchronizing cardiovascular magnetic resonance imaging (CMRI) acquisition with the cardiac cycle and for continuous heart rate monitoring during prolonged scans. However, conventional electrode-based ECG systems in clinical MRI environments suffer from tedious setup, magnetohydrodynamic (MHD) waveform distortion, skin burn risks, and patient discomfort. This study proposes a contactless ECG measurement method in MRI to address these challenges. We integrated Beat Pilot Tone (BPT) - a contactless, highly motion-sensitive, and easily integrable RF motion-sensing modality - into CMRI to capture cardiac motion without direct patient contact. A deep neural network was trained to map the BPT-derived cardiac mechanical motion signals to corresponding ECG waveforms. The reconstructed ECG was evaluated against simultaneously acquired ground-truth ECG using multiple metrics: Pearson correlation coefficient, relative root mean square error (RRMSE), cardiac trigger timing accuracy, and heart rate estimation error. Additionally, we performed retrospective binning MRI reconstruction using the reconstructed ECG as a reference and evaluated image quality under both standard clinical conditions and challenging scenarios involving arrhythmias and subject motion. To examine the scalability of the approach across field strengths, the model pretrained on 1.5T data was applied to 3T BPT cardiac acquisitions. In optimal acquisition scenarios, the reconstructed ECG achieved a median Pearson correlation of 89% relative to the ground truth, cardiac triggering accuracy reached 94%, and heart rate estimation error remained below 1 bpm. The quality of the reconstructed images was comparable to that of ground-truth synchronization. The method exhibited a degree of adaptability to irregular heart rate patterns and subject motion, and scaled effectively across MRI systems operating at different field strengths. The proposed contactless ECG measurement method has the potential to streamline CMRI workflows, improve patient safety and comfort, mitigate MHD distortion challenges, and find robust clinical application.
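A minimal sketch of the reported evaluation metrics (Pearson correlation, relative RMSE, and heart-rate error between reconstructed and ground-truth ECG) is shown below. The signals and the sampling rate are synthetic stand-ins; the paper's network and BPT data are not reproduced.

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.stats import pearsonr

fs = 500                                                # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
ecg_ref = np.sin(2 * np.pi * 1.2 * t) ** 15             # crude periodic surrogate (~72 bpm)
ecg_rec = ecg_ref + 0.05 * np.random.randn(t.size)      # stand-in "reconstructed" signal

r, _ = pearsonr(ecg_rec, ecg_ref)
rrmse = np.sqrt(np.mean((ecg_rec - ecg_ref) ** 2)) / np.sqrt(np.mean(ecg_ref ** 2))

def heart_rate_bpm(sig):
    # Surrogate R-peak detection; a real pipeline would use a dedicated QRS detector.
    peaks, _ = find_peaks(sig, height=0.5, distance=int(0.4 * fs))
    return 60.0 * fs / np.median(np.diff(peaks))

hr_error = abs(heart_rate_bpm(ecg_rec) - heart_rate_bpm(ecg_ref))
print(f"Pearson r = {r:.3f}, RRMSE = {rrmse:.3f}, HR error = {hr_error:.2f} bpm")
```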

Evaluating the Accuracy and Efficiency of AI-Generated Radiology Reports Based on Positive Findings - A Qualitative Assessment of AI in Radiology.

Rajmohamed RF, Chapala S, Shazahan MA, Wali P, Botchu R

PubMed · Sep 26 2025
With increasing imaging demands, radiologists face growing workload pressures, often resulting in delays and reduced diagnostic efficiency. Recent advances in artificial intelligence (AI) have introduced tools for automated report generation, particularly for simpler imaging modalities such as X-rays. However, limited research has assessed AI performance in complex studies such as MRI and CT, where report accuracy and clinical interpretation are critical. To evaluate the performance of a semi-automated AI-based reporting platform in generating radiology reports for complex imaging studies, and to compare its accuracy, efficiency, and user confidence with the traditional dictation method. This study involved 100 imaging cases: MRI knee (n=21), MRI lumbar spine (n=30), CT head (n=23), and CT abdomen and pelvis (n=26). Consultant musculoskeletal radiologists reported each case using both traditional dictation and the AI platform. The radiologist first identified and entered the key positive findings, from which the AI system generated a full draft report. Reporting time was recorded, and both methods were evaluated on accuracy, user confidence, and overall reporting experience (rated on a scale of 1-5). Statistical analysis was conducted using two-tailed t-tests and 95% confidence intervals. AI-generated reports demonstrated significantly improved performance across all parameters. The mean reporting time decreased from 6.1 to 3.43 min (p<0.0001) with AI-assisted report generation. Accuracy ratings improved from 3.81 to 4.65 (p<0.0001), confidence ratings increased from 3.91 to 4.67 (p<0.0001), and overall reporting experience favored the AI platform (mean 4.7 vs. 3.69, p<0.0001). Minor formatting errors and occasional anatomical misinterpretations were observed in AI-generated reports but could be easily corrected by the radiologist during review. The AI-assisted reporting platform significantly improved efficiency and radiologist confidence without compromising accuracy. Although the tool performs well when provided with key clinical findings, it still requires expert oversight, especially in anatomically complex reporting. These findings support the use of AI as a supportive tool in radiology practice, with a focus on data integrity, consistency, and human validation.
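The sketch below illustrates the kind of statistical comparison reported (reporting time under dictation vs. the AI-assisted workflow) using a two-tailed test and a 95% confidence interval. The data are illustrative random draws, and a paired test is assumed here because the same cases were reported with both methods.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
time_dictation = rng.normal(6.10, 1.5, 100)   # minutes per case, illustrative
time_ai = rng.normal(3.43, 1.0, 100)          # minutes per case, illustrative

# Paired two-tailed t-test on the per-case reporting times.
t_stat, p_value = stats.ttest_rel(time_dictation, time_ai)

# 95% confidence interval for the mean paired difference.
diff = time_dictation - time_ai
ci = stats.t.interval(0.95, df=diff.size - 1, loc=diff.mean(), scale=stats.sem(diff))
print(f"t = {t_stat:.2f}, p = {p_value:.2g}, "
      f"95% CI for time saved = ({ci[0]:.2f}, {ci[1]:.2f}) min")
```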

Hybrid Fusion Model for Effective Distinguishing Benign and Malignant Parotid Gland Tumors in Gray-Scale Ultrasonography.

Mao Y, Jiang LP, Wang JL, Chen FQ, Zhang WP, Peng XQ, Chen L, Liu ZX

PubMed · Sep 26 2025
To develop a hybrid fusion model - deep learning radiomics nomograms (DLRN) - integrating radiomics and transfer learning to assist sonographers in differentiating benign from malignant parotid gland tumors. This study retrospectively analyzed 328 patients with pathologically confirmed parotid gland tumors from two centers. Radiomics features extracted from ultrasound images were input into eight machine learning classifiers to construct a radiomics (Rad) model. The images were also input into seven transfer learning networks to construct a deep transfer learning (DTL) model. The prediction probabilities from these two models were combined through decision fusion to construct a DLR model, and clinical features were further integrated with the DLR prediction probabilities to develop the DLRN model. Model performance was evaluated using receiver operating characteristic curve analysis, calibration curves, decision curve analysis, and the Hosmer-Lemeshow test. In the internal and external validation cohorts, compared with the Clinic (AUC = 0.891 and 0.734), Rad (AUC = 0.809 and 0.860), DTL (AUC = 0.905 and 0.782), and DLR (AUC = 0.932 and 0.828) models, the DLRN model demonstrated the greatest discriminative ability (AUC = 0.931 and 0.934). With the assistance of DLR, the diagnostic accuracy of resident, attending, and chief physicians increased by 6.6%, 6.5%, and 1.2%, respectively. The hybrid fusion model DLRN significantly enhances diagnostic performance for benign and malignant parotid gland tumors and can effectively assist sonographers in making more accurate diagnoses.
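A minimal sketch of the decision-fusion idea is given below: probabilities from a radiomics (Rad) model and a deep transfer learning (DTL) model are fused into a DLR score, which is then combined with clinical features in a final logistic model standing in for the DLRN. All data are random placeholders, and simple probability averaging is one plausible fusion choice rather than the paper's exact scheme.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 200
p_rad = rng.uniform(0, 1, n)          # Rad model malignancy probabilities (placeholder)
p_dtl = rng.uniform(0, 1, n)          # DTL model malignancy probabilities (placeholder)
clinical = rng.normal(size=(n, 3))    # e.g. age, lesion size, margin score (hypothetical)
y = rng.integers(0, 2, n)             # benign / malignant labels (placeholder)

# Decision fusion: average the two model probabilities into a DLR score.
p_dlr = (p_rad + p_dtl) / 2.0

# DLRN stand-in: clinical features plus the fused probability feed a final logistic model.
X_dlrn = np.column_stack([clinical, p_dlr])
dlrn = LogisticRegression(max_iter=1000).fit(X_dlrn, y)
print("Coefficient on the fused DLR probability:", dlrn.coef_[0, -1])
```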

[Advances in the application of multimodal image fusion technique in stomatology].

Ma TY, Zhu N, Zhang Y

PubMed · Sep 26 2025
In modern stomatology, obtaining high-quality preoperative information is key to accurate intraoperative planning, implementation, and prognostic judgment. Traditional single-modality images, however, have obvious shortcomings, such as limited content and unstable measurement accuracy, and can hardly meet the diversified needs of oral patients. Multimodal medical image fusion (MMIF) was introduced into stomatology research in the 1990s with the aim of enabling personalized analysis of patient data through fusion algorithms that combine the advantages of multiple imaging modalities, laying a stable foundation for new treatment techniques. Recently, artificial intelligence (AI) has significantly increased the precision and efficiency of MMIF registration: advanced algorithms and networks have confirmed the strong compatibility between AI and MMIF. This article systematically reviews the development history of multimodal image fusion and its current applications in stomatology, and analyzes technological progress in the field against the background of AI's rapid development, in order to provide new ideas for further advances in stomatology.

AI-driven MRI biomarker for triple-class HER2 expression classification in breast cancer: a large-scale multicenter study.

Wong C, Yang Q, Liang Y, Wei Z, Dai Y, Xu Z, Chen X, Du S, Han C, Liang C, Zhang L, Liu Z, Wang Y, Shi Z

PubMed · Sep 26 2025
Accurate classification of Human epidermal growth factor receptor 2 (HER2) expression is crucial for guiding treatment in breast cancer, especially with emerging therapies like trastuzumab deruxtecan (T-DXd) for HER2-low patients. Current gold-standard methods relying on invasive biopsy and immunohistochemistry suffer from sampling bias and interobserver variability, highlighting the need for reliable non-invasive alternatives. We developed an artificial intelligence framework that integrates a pretrained foundation model with a task-specific classifier to predict HER2 expression categories (HER2-zero, HER2-low, HER2-positive) directly from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). The model was trained and validated using multicenter datasets. Model interpretability was assessed through feature visualization using t-SNE and UMAP dimensionality reduction techniques, complemented by SHAP analysis for post-hoc interpretation of critical predictive imaging features. The developed model demonstrated robust performance across datasets, achieving micro-average AUCs of 0.821 (95% CI 0.795–0.846) and 0.835 (95% CI 0.797–0.864), and macro-average AUCs of 0.833 (95% CI 0.818–0.847) and 0.857 (95% CI 0.837–0.872) in external validation. Subgroup analysis demonstrated strong discriminative power in distinguishing HER2 categories, particularly HER2-zero and HER2-low cases. Visualization techniques revealed distinct, biologically plausible clustering patterns corresponding to HER2 expression categories. This study presents a reproducible, non-invasive solution for comprehensive HER2 phenotyping using DCE-MRI, addressing fundamental limitations of biopsy-dependent assessment. Our approach enables accurate identification of HER2-low patients who may benefit from novel therapies like T-DXd. This framework represents a significant advancement in precision oncology, with potential to transform diagnostic workflows and guide targeted therapy selection in breast cancer care. The online version contains supplementary material available at 10.1186/s13058-025-02118-2.
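For clarity, the sketch below shows how micro- and macro-averaged AUCs can be computed for the three HER2 categories (HER2-zero, HER2-low, HER2-positive) via one-vs-rest binarization. Labels and scores are random placeholders; the paper's foundation-model classifier and MRI data are not reproduced.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

rng = np.random.default_rng(0)
y_true = rng.integers(0, 3, size=300)            # 0: HER2-zero, 1: HER2-low, 2: HER2-positive
scores = rng.dirichlet(np.ones(3), size=300)     # predicted class probabilities (placeholder)

# One-vs-rest binarization, then micro- and macro-averaged AUC over the three classes.
y_bin = label_binarize(y_true, classes=[0, 1, 2])
micro_auc = roc_auc_score(y_bin, scores, average="micro")
macro_auc = roc_auc_score(y_bin, scores, average="macro")
print(f"micro-average AUC = {micro_auc:.3f}, macro-average AUC = {macro_auc:.3f}")
```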

Efficacy of PSMA PET/CT radiomics analysis for risk stratification in newly diagnosed prostate cancer: a multicenter study.

Jafari E, Zarei A, Dadgar H, Keshavarz A, Abdollahi H, Samimi R, Manafi-Farid R, Divband G, Nikkholgh B, Fallahi B, Amini H, Ahmadzadehfar H, Rahmim A, Zohrabi F, Assadi M

PubMed · Sep 26 2025
Prostate-specific membrane antigen (PSMA) PET/CT plays an increasing role in prostate cancer management, and radiomics analysis of PSMA PET/CT images may provide additional information for risk stratification. This study aimed to evaluate the performance of PSMA PET/CT radiomics analysis in differentiating between Gleason Grade Groups (GGG 1–3 vs. GGG 4–5) and predicting PSA levels (below vs. at or above 20 ng/ml) in patients with newly diagnosed prostate cancer. In this multicenter study, patients with confirmed primary prostate cancer who underwent [68Ga]Ga-PSMA PET/CT for staging were enrolled. Inclusion criteria required intraprostatic lesions on PET and available International Society of Urological Pathology (ISUP) grade information. Three segments were delineated: intraprostatic PSMA-avid lesions on PET, the whole prostate on PET, and the whole prostate on CT. Radiomic features (RFs) were extracted from all segments. Dimensionality was reduced with principal component analysis (PCA) prior to model training on data from two centers (186 cases) with 10-fold cross-validation. Model performance was validated on an external dataset (57 cases) using various machine learning models, including random forest, nearest centroid, support vector machine (SVM), calibrated classifier CV, and logistic regression. In this retrospective study, 243 patients with a median age of 69 years (range: 46–89) were enrolled. For distinguishing GGG 1–3 from GGG 4–5, the nearest centroid classifier using RFs from the whole prostate on PET achieved the best performance in the internal test set, while the random forest classifier using RFs from PSMA-avid lesions on PET performed best in the external test set. However, when considering both internal and external test sets, a calibrated classifier CV using RFs from PSMA-avid PET data showed slightly improved overall performance. For PSA level classification (<20 ng/ml vs. ≥20 ng/ml), the nearest centroid classifier using RFs from the whole prostate on PET achieved the best performance in the internal test set; in the external test set, the highest performance was observed using RFs derived from the concatenation of PET and CT. Notably, when combining both internal and external test sets, the best performance was again achieved with RFs from the concatenated PET/CT data. Our results suggest that [68Ga]Ga-PSMA PET/CT radiomic features, particularly those derived from intraprostatic PSMA-avid lesions, may provide valuable information for pre-biopsy risk stratification in newly diagnosed prostate cancer.
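A minimal sketch of the described modelling pipeline appears below: radiomic features reduced with PCA and classified (GGG 1–3 vs. GGG 4–5) under 10-fold cross-validation. The feature matrix and labels are random placeholders, and the random forest shown is just one of the classifiers the study compared.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(186, 120))     # e.g. 186 training cases x 120 radiomic features (placeholder)
y = rng.integers(0, 2, size=186)    # 0: GGG 1-3, 1: GGG 4-5 (placeholder labels)

pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("pca", PCA(n_components=0.95)),   # keep components explaining 95% of the variance
    ("clf", RandomForestClassifier(n_estimators=300, random_state=1)),
])

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)
auc = cross_val_score(pipeline, X, y, cv=cv, scoring="roc_auc")
print(f"10-fold ROC-AUC: {auc.mean():.3f} +/- {auc.std():.3f}")
```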

Model-driven individualized transcranial direct current stimulation for the treatment of insomnia disorder: protocol for a randomized, sham-controlled, double-blind study.

Wang Y, Jia W, Zhang Z, Bai T, Xu Q, Jiang J, Wang Z

PubMed · Sep 26 2025
Insomnia disorder is a prevalent condition associated with significant negative impacts on health and daily functioning. Transcranial direct current stimulation (tDCS) has emerged as a potential technique for improving sleep. However, questions remain regarding its clinical efficacy, and there is a lack of standardized individualized stimulation protocols. This study aims to evaluate the efficacy of model-driven, individualized tDCS for treating insomnia disorder through a randomized, double-blind, sham-controlled trial. A total of 40 patients diagnosed with insomnia disorder will be recruited and randomly assigned to either an active tDCS group or a sham stimulation group. Individualized stimulation parameters will be determined through machine learning-based electric field modeling incorporating structural MRI and EEG data. Participants will undergo 10 sessions of tDCS (5 days/week for 2 consecutive weeks), with follow-up assessments conducted at 2 and 4 weeks after treatment. The primary outcome is the reduction in the Insomnia Severity Index (ISI) score at two weeks post-treatment. Secondary outcomes include changes in sleep parameters, anxiety, and depression scores. This study is expected to provide evidence for the effectiveness of individualized tDCS in improving sleep quality and reducing insomnia symptoms. This integrative approach, combining advanced neuroimaging and electrophysiological biomarkers, has the potential to establish an evidence-based framework for individualized brain stimulation, optimizing therapeutic outcomes. This study was registered at ClinicalTrials.gov (Identifier: NCT06671457) on 4 November 2024. The online version contains supplementary material available at 10.1186/s12888-025-07347-5.

The Evolution and Clinical Impact of Deep Learning Technologies in Breast MRI.

Fujioka T, Fujita S, Ueda D, Ito R, Kawamura M, Fushimi Y, Tsuboyama T, Yanagawa M, Yamada A, Tatsugami F, Kamagata K, Nozaki T, Matsui Y, Fujima N, Hirata K, Nakaura T, Tateishi U, Naganawa S

PubMed · Sep 26 2025
The integration of deep learning (DL) in breast MRI has revolutionized the field of medical imaging, notably enhancing diagnostic accuracy and efficiency. This review discusses the substantial influence of DL technologies across various facets of breast MRI, including image reconstruction, classification, object detection, segmentation, and prediction of clinical outcomes such as response to neoadjuvant chemotherapy and recurrence of breast cancer. Utilizing sophisticated models such as convolutional neural networks, recurrent neural networks, and generative adversarial networks, DL has improved image quality and precision, enabling more accurate differentiation between benign and malignant lesions and providing deeper insights into disease behavior and treatment responses. DL's predictive capabilities for patient-specific outcomes also suggest potential for more personalized treatment strategies. The advancements in DL are pioneering a new era in breast cancer diagnostics, promising more personalized and effective healthcare solutions. Nonetheless, the integration of this technology into clinical practice faces challenges, necessitating further research, validation, and development of legal and ethical frameworks to fully leverage its potential.

Generating Synthetic MR Spectroscopic Imaging Data with Generative Adversarial Networks to Train Machine Learning Models.

Maruyama S, Takeshima H

PubMed · Sep 26 2025
To develop a new method for generating synthetic MR spectroscopic imaging (MRSI) data for training machine learning models. This study targeted routine MRI examination protocols with single-voxel spectroscopy (SVS). A novel model derived from pix2pix generative adversarial networks was proposed to generate synthetic MRSI data using MRI and SVS data as inputs. T1- and T2-weighted, SVS, and reference MRSI data were acquired from healthy brains with clinically available sequences, and the proposed model was trained to generate synthetic MRSI data. Quantitative evaluation comprised the mean squared error (MSE) against the reference data and metabolite ratio values. The effect of the location and number of SVS acquisitions on the quality of the synthetic MRSI data was investigated using the MSE. The synthetic MRSI data generated by the proposed model were visually closer to the reference. The 95% confidence interval (CI) of the metabolite ratio values of the synthetic MRSI data overlapped with the reference for seven of eight metabolite ratios. MSEs tended to be lower when the SVS data came from the same location than from different locations, and MSEs among groups with different numbers of SVS acquisitions were not significantly different. A new method was developed to generate MRSI data by integrating MRI and SVS data. This method can potentially increase the volume of MRSI training data for other machine learning models by adding SVS acquisition to routine MRI examinations.
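A minimal sketch of the quantitative evaluation described above is shown below: mean squared error of synthetic spectra against the reference, plus a 95% confidence interval for a metabolite ratio across voxels. All arrays are random placeholders; the pix2pix-derived model and acquisitions are not reproduced.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_voxels, n_points = 64, 512
reference = rng.normal(size=(n_voxels, n_points))                 # reference MRSI spectra (placeholder)
synthetic = reference + 0.1 * rng.normal(size=reference.shape)    # synthetic spectra (placeholder)

# Mean squared error of the synthetic spectra against the reference.
mse = np.mean((synthetic - reference) ** 2)

# Per-voxel metabolite ratio (e.g. NAA/Cr), here placeholder values, with a 95% CI across voxels.
ratio = rng.normal(1.5, 0.2, n_voxels)
ci = stats.t.interval(0.95, df=n_voxels - 1, loc=ratio.mean(), scale=stats.sem(ratio))
print(f"MSE = {mse:.4f}, metabolite ratio 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```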