Page 3 of 6036030 results

Tripathi A, Waqas A, Schabath MB, Yilmaz Y, Rasool G

pubmed · Oct 23 2025
Harmonized ONcologY Biomedical Embedding Encoder (HONeYBEE) is an open-source framework that integrates multimodal biomedical data for oncology applications. It processes clinical data (structured and unstructured), whole-slide images, radiology scans, and molecular profiles to generate unified patient-level embeddings using domain-specific foundation models and fusion strategies. These embeddings enable survival prediction, cancer-type classification, patient similarity retrieval, and cohort clustering. Evaluated on 11,400+ patients across 33 cancer types from The Cancer Genome Atlas (TCGA), clinical embeddings showed the strongest single-modality performance with 98.5% classification accuracy and 96.4% precision@10 in patient retrieval. They also achieved the highest survival prediction concordance indices across most cancer types. Multimodal fusion provided complementary benefits for specific cancers, improving overall survival prediction beyond clinical features alone. Comparative evaluation of four large language models revealed that general-purpose models like Qwen3 outperformed specialized medical models for clinical text representation, though task-specific fine-tuning improved performance on heterogeneous data such as pathology reports.
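The precision@10 retrieval metric reported above is straightforward to sketch: rank patients by cosine similarity of their embeddings and score how many of the top-k neighbours share the query's label. The snippet below uses synthetic toy data and our own function names, not HONeYBEE code.

```python
import numpy as np

def precision_at_k(embeddings, labels, k=10):
    """Mean precision@k for embedding-based patient retrieval: for each
    query patient, take the k nearest neighbours by cosine similarity
    and count how many share the query's label (e.g. cancer type)."""
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = X @ X.T
    np.fill_diagonal(sim, -np.inf)           # never retrieve the query itself
    hits = []
    for i in range(len(X)):
        top_k = np.argsort(sim[i])[::-1][:k]
        hits.append(np.mean(labels[top_k] == labels[i]))
    return float(np.mean(hits))

# Toy cohort: two "cancer types" whose embeddings point in opposite directions.
rng = np.random.default_rng(0)
emb = np.vstack([rng.normal(1.0, 0.1, (20, 8)),
                 rng.normal(-1.0, 0.1, (20, 8))])
lab = np.array([0] * 20 + [1] * 20)
print(precision_at_k(emb, lab, k=10))  # → 1.0 on this separable toy cohort
```

On real patient embeddings the score drops below 1.0 wherever cancer types overlap in embedding space, which is exactly what the reported 96.4% precision@10 quantifies.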

Tang Z, Tang Z, Liu Y, Chen L, Tang Z, Liu X, Salami T

pubmed · Oct 23 2025
Kidney stone disease is a common and highly recurrent condition, carrying roughly a 50% chance of recurrence within ten years, and it may lead to serious complications such as ureteral obstruction and severe pain. Because timely intervention is of paramount importance, early and accurate detection on computed tomography (CT) scans is critical. Existing diagnostic systems are challenged by factors such as image noise, low contrast, and class imbalance, which can hamper their performance. This work develops an optimized deep learning framework for the detection of kidney stones in CT images to address these drawbacks. The proposed approach consists of a preprocessing scheme that normalizes the data using Wang-Mendel (WM) de-noising and global contrast enhancement, followed by data augmentation using SdSmote to overcome class imbalance. The pre-processed images are fed into a modified Bidirectional Recurrent Neural Network (BRNN), whose weights and biases are optimized with a newly implemented Bald Eagle Search (BES) algorithm incorporating quasi-oppositional learning and chaotic initialization to improve convergence and global search capability. The proposed method is applied to the public CT Kidney Dataset and compared with state-of-the-art techniques such as ensemble learning, Exemplar Darknet19, DE/SVM, and Decision Tree solutions. It attained superior performance, with 96.96% accuracy, 95.62% sensitivity, 91.67% specificity, 94.38% precision, a 94.99% F1-score, and a 91.61% Jaccard index, confirming the model's effectiveness and robustness for clinical decision-making in kidney stone diagnosis.
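The two BES enhancements mentioned, chaotic initialization and quasi-oppositional learning, follow standard metaheuristic recipes. A minimal sketch, where the bounds, population size, and sphere objective are our own stand-ins rather than the paper's settings:

```python
import numpy as np

def logistic_map_population(n, dim, lower, upper, x0=0.7):
    """Chaotic initialisation: iterate the logistic map x <- 4x(1-x)
    and linearly scale the iterates into [lower, upper]."""
    vals = np.empty((n, dim))
    x = x0
    for i in range(n):
        for j in range(dim):
            x = 4.0 * x * (1.0 - x)
            vals[i, j] = x
    return lower + vals * (upper - lower)

def quasi_opposite(pop, lower, upper, rng):
    """Quasi-oppositional learning: sample each coordinate uniformly
    between the search-space centre and the mirrored (opposite)
    coordinate lower + upper - x."""
    centre = (lower + upper) / 2.0
    opposite = lower + upper - pop
    lo = np.minimum(centre, opposite)
    hi = np.maximum(centre, opposite)
    return rng.uniform(lo, hi)

rng = np.random.default_rng(1)
low, high = -5.0, 5.0
pop = logistic_map_population(8, 3, low, high)
qo = quasi_opposite(pop, low, high, rng)
# Merge originals and quasi-opposites, keep the fittest half
# (sphere function as a stand-in objective).
merged = np.vstack([pop, qo])
fitness = np.sum(merged ** 2, axis=1)
survivors = merged[np.argsort(fitness)[:len(pop)]]
```

In BES proper, the surviving population would then go through the eagle's select/search/swoop phases; the sketch covers only the initialization tricks named in the abstract.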

Ahmad S, Neal Joshua ES, Rao NT, Ghoniem RM, Taye BM, Bharany S

pubmed · Oct 23 2025
Mammography is a routine imaging technique used by radiologists to detect breast lesions, such as tumors and lumps. Precise lesion detection is critical for early treatment and diagnosis planning, yet lesion detection and segmentation remain problematic due to inconsistencies in image quality and lesion properties. Hence, this work presents a new Multi-Stage Deep Lesion Model (MSDLM) for enhancing breast lesion segmentation and classification. The model is a Three-Unit Two-Parameter Gaussian model combining U-Net, EfficientNetV2 B0, and a domain CNN classifier: U-Net performs lesion segmentation, EfficientNetV2 B0 extracts deep image features, and the CNN classifier performs lesion classification. The MSDLM feature cascade is designed to enhance computational efficiency while retaining the most relevant features, identifying the minimum set of features most important for detecting and classifying breast lesions in mammograms. The MSDLM was validated on two benchmark datasets, CBIS-DDSM and the Wisconsin breast cancer dataset. It achieved 97.6% accuracy on the segmentation and classification tasks, indicating reliable breast lesion detection, and a sensitivity of 91.25%, demonstrating dependable detection of positive cases, a basic requirement in medical diagnosis. It also achieved an Area Under the Curve (AUC) of 95.75%, reflecting strong diagnostic performance irrespective of threshold, and an Intersection over Union (IoU) of 85.59%, verifying its ability to localize lesion areas in mammograms. MSDLM with a Gaussian distribution provides precise localization and classification of breast lesions, allows improved information flow and feature refinement, and outperforms baseline models in both effectiveness and computational cost. Its performance on two diverse datasets demonstrates its generalizability and provides radiologists with accurate automated diagnoses.
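The IoU of 85.59% reported above is the standard mask-overlap metric. For reference, a minimal implementation on toy masks (not the MSDLM pipeline):

```python
import numpy as np

def iou(pred, target):
    """Intersection over Union (Jaccard index) of two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(np.logical_and(pred, target).sum() / union)

pred = np.zeros((8, 8), dtype=int); pred[2:6, 2:6] = 1  # 4x4 predicted lesion
gt   = np.zeros((8, 8), dtype=int); gt[3:7, 3:7] = 1    # shifted 4x4 ground truth
print(iou(pred, gt))  # overlap 9 px, union 23 px → ≈ 0.3913
```

An IoU of 0.8559, as reported for MSDLM, means the predicted and expert lesion masks overlap far more tightly than this deliberately misaligned toy pair.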

Kunishima A, Inaba D, Iyoshi S, Ikeda Y, Goto M, Muramatsu R, Hashimoto M, Yoshida K, Mogi K, Yoshihara M, Nagao Y, Tamauchi S, Yokoi A, Yoshikawa N, Niimi K, Koizumi N, Kajiyama H

pubmed · Oct 23 2025
Malignant ovarian tumors (MOTs) and borderline ovarian tumors (BOTs) differ in treatment strategies and prognosis. However, accurate preoperative diagnosis remains challenging, and improving diagnostic accuracy is crucial. We developed and validated a system using artificial intelligence (AI) to integrate machine learning (ML) models based on blood test data and deep learning (DL) models based on magnetic resonance imaging (MRI) findings to distinguish between MOT and BOT. We analyzed 78 patients with malignant serous ovarian tumors and 31 with borderline serous ovarian tumors treated at our institution. A classification model was developed using ML for blood test data, and a DL model was constructed using MRI data. By integrating these models, we developed three fusion models as multimodal diagnostic AI and compared them with standalone models. The performance was evaluated using precision, recall, and accuracy. The classification model using Light Gradient Boosting Machine achieved an accuracy of 0.825, and the DL model using Visual Geometry Group 16-layer network achieved an accuracy of 0.722 for discriminating BOT from MOT. The intermediate, late, and dense fusion models achieved accuracies of 0.809, 0.776, and 0.825, respectively. Integrating multimodal information such as blood test and imaging data may enhance learning efficiency and improve diagnostic accuracy.
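Late fusion, one of the three fusion strategies compared above, is typically a weighted average of the two models' class probabilities. A toy sketch (weights and probability values are ours, not the study's):

```python
import numpy as np

def late_fusion(prob_ml, prob_dl, w=0.5):
    """Late fusion: weighted average of per-patient class probabilities
    from the blood-test ML model and the MRI DL model, with the argmax
    as the fused prediction."""
    fused = w * prob_ml + (1.0 - w) * prob_dl
    return fused, fused.argmax(axis=1)

# Toy probabilities for three patients over the classes {BOT, MOT}.
p_ml = np.array([[0.8, 0.2], [0.4, 0.6], [0.3, 0.7]])
p_dl = np.array([[0.6, 0.4], [0.7, 0.3], [0.2, 0.8]])
fused, pred = late_fusion(p_ml, p_dl)
print(pred)  # → [0 0 1]
```

Note how the second patient's prediction flips from the ML model's MOT vote to BOT once the imaging model's probabilities are averaged in; intermediate and dense fusion instead combine the models at the feature level before classification.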

Feng Y, Qiu S, Wang R, Ma S, Yan F, Yang GZ

pubmed · Oct 23 2025
The in vivo characterization of biomechanical properties in soft biological tissues offers critical insights for both scientific research and clinical diagnostics. Magnetic resonance elastography (MRE) is a noninvasive technique that enables 3D measurements of the biomechanical properties of various soft tissues. While numerous inversion algorithms have been developed based on wave fields from MRE, robust and multi-parameter estimation of biomechanical properties remains an area of active development. Here we present comprehensive MRE datasets, including phantom, human liver, and human brain data. The phantom data serves as a benchmark for validation, while the liver and brain datasets represent typical application scenarios for MRE. All wave images were acquired using 3 T scanners, ensuring high-quality data. Additionally, a state-of-the-art inversion algorithm, the Traveling Wave Expansion-Based Neural Network (TWENN), is also provided for comparative analysis. These datasets provide a diverse range of application scenarios, facilitating the development and refinement of MRE inversion algorithms. By making these resources available, we aim to advance the field of MRE research and improve the inversion of biomechanical parameters.
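While the inversion algorithms themselves are beyond a short example, the quantity MRE inversion recovers can be illustrated with the standard local-homogeneous relation |G*| ≈ ρ(fλ)², relating shear stiffness to vibration frequency and measured shear wavelength. The values below are illustrative only; this is not the TWENN algorithm provided with the datasets.

```python
# Shear stiffness from wave speed: mu = rho * v^2, with v = f * wavelength.
rho = 1000.0        # tissue density, kg/m^3 (approx. for soft tissue)
f = 60.0            # mechanical vibration frequency, Hz
wavelength = 0.02   # measured shear wavelength, m
mu = rho * (f * wavelength) ** 2  # shear stiffness in Pa (≈ 1.44 kPa here)
print(mu)
```

Real inversion must estimate the local wavelength (or wave field derivatives) at every voxel from noisy 3D displacement data, which is why robust algorithms such as TWENN remain an active area of development.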

Xiong Y, Rabe M, Kawula M, Marschner S, Corradini S, Belka C, Landry G, Kurz C

pubmed · Oct 23 2025
Objective: Online adaptation in magnetic resonance imaging-guided radiotherapy (MRgRT) for lung cancer is hindered by time-consuming organs-at-risk (OAR) recontouring on daily MR images (dMRIs) and inter-/intra-observer variability. Deep learning auto-segmentation (DLAS) of OARs offers an efficient alternative. While baseline models (BMs) provide general segmentation, patient-specific (PS) training using expert-delineated planning MR images (pMRIs) can enhance accuracy. This study evaluated accumulated dose differences between BM and PS OAR models without manual modification in online plan adaptation. Approach: Eleven lung cancer patients treated with a 0.35 T magnetic resonance linear accelerator were retrospectively analyzed. Pre-trained population-based 3D U-Nets (BM) for nine thoracic OARs served as initial models for PS fine-tuning on planning MRIs. BM- and PS-generated OAR contours per fraction were imported into an MRgRT treatment planning system, along with clinical expert target contours. Online adaptive doses were re-optimized using both models' OAR contours with the same clinical objective functions. Fraction doses were accumulated on the pMRI, and dose-volume histogram (DVH) parameters of the PTV, GTV, and other OARs within PTV + 3 cm were calculated using clinical contours on the pMRI. A Wilcoxon signed-rank test was used to test for statistical differences (α = 0.05) compared to accumulated clinical doses. Main results: PS models improved segmentation accuracy for all OARs compared to BM. They also mitigated substantial outliers in D_1cc^BM versus D_1cc^clinical and resulted in higher PTV D_95% and GTV D_98% than clinical plans. Overall, the BM doses met 48/61 OAR constraints, while the PS doses met 53. For PTVs, both PS and BM doses satisfied 21/25 constraints. Significance: Unmodified BM and PS model contours yielded median accumulated doses comparable to clinically delivered doses. However, PS models demonstrated superior geometric alignment, improved OAR sparing, and enhanced target coverage compared to BM, potentially benefiting MRgRT lung cancer patients.
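The DVH parameters used above (e.g. PTV D95%) are percentile statistics over a structure's voxel doses: D_x% is the minimum dose received by the hottest x% of the volume. A minimal sketch on toy values (not the treatment planning system's computation):

```python
import numpy as np

def dvh_dose(voxel_doses, volume_pct):
    """D_x%: the minimum dose received by the hottest x% of a structure's
    volume, i.e. the (100 - x)th percentile of its voxel doses."""
    return float(np.percentile(voxel_doses, 100.0 - volume_pct))

doses = np.linspace(40.0, 60.0, 1001)  # toy voxel doses in Gy
d95 = dvh_dose(doses, 95.0)            # PTV D95% on the toy volume
print(d95)  # → 41.0
```

D_1cc (the minimum dose to the hottest 1 cm³) works the same way, except the volume fraction is derived from the voxel size rather than fixed at a percentage.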

Truong NCD, Bangalore Yogananda CG, Wagner BC, Holcomb JM, Reddy DD, Saadat N, Bowerman J, Hatanpaa KJ, Patel TR, Fei B, Lee MD, Jain R, Bruce RJ, Madhuranthakam AJ, Pinho MC, Maldjian JA

pubmed · Oct 23 2025
Multi-center collaborations are crucial in developing robust and generalizable machine learning models in medical imaging. Traditional methods, such as centralized data sharing or federated learning (FL), face challenges, including privacy issues, communication burdens, and synchronization complexities. We present CATegorical and PHenotypic Image SyntHetic learnING (CATphishing), an alternative to FL using Latent Diffusion Models (LDM) to generate synthetic multi-contrast three-dimensional magnetic resonance imaging data for downstream tasks, eliminating the need for raw data sharing or iterative inter-site communication. Each institution trains an LDM to capture site-specific data distributions, producing synthetic samples aggregated at a central server. We evaluate CATphishing using data from 2491 patients across seven institutions for isocitrate dehydrogenase mutation classification and three-class tumor-type classification. CATphishing achieves accuracy comparable to centralized training and FL, with synthetic data exhibiting high fidelity. This method addresses privacy, scalability, and communication challenges, offering a promising alternative for collaborative artificial intelligence development in medical imaging.

Wang B, Liang J, Ye C, Yan T, Liu M, Yan T

pubmed · Oct 23 2025
Functional magnetic resonance imaging (fMRI) is a non-invasive neuroimaging technique that allows the observation of brain functional connectivity patterns. Attention-based diagnostic models such as Transformers have been widely applied to fMRI data for brain disease diagnosis. However, the Transformer's global attention mechanism struggles to adaptively identify and focus on the brain regions and connections relevant to disease diagnosis while reducing attention to non-relevant ones, and it suffers from a degradation of its focusing ability, limiting gains in diagnostic accuracy. To address these problems, we propose a connection-mask-residual focused attention network (Trifocal Transformer) based on fMRI data for brain disease diagnosis. In the Trifocal Transformer, a Connection Focus Module simulates brain functional connectivity, enhancing the attention mechanism's ability to focus on regions and connections relevant to disease diagnosis more effectively than standard self-attention. To mitigate the potential negative impact of non-focused regions in the attention map, a learnable Mask Focus Module adaptively reduces attention to non-relevant regions and connections, addressing a limitation of conventional Transformer-based models. To counteract the degradation of the attention mechanism's focusing ability, Residual Focus Connections are established between attention maps, reinforcing the focusing effect across layers and ensuring stable attention to significant features. Comprehensive experiments show that the Trifocal Transformer achieves superior diagnostic accuracies of 74.1% and 71.2% on the ADHD-200 and ABIDE I datasets, respectively. Furthermore, our method reveals potentially disease-related regions of interest (ROIs), providing a new neuroimaging perspective for brain disease diagnosis and treatment.
Code is available at https://github.com/Jiarui-Liang/Trifocal-Transformer.
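The released repository holds the actual model. Purely as a toy illustration of the three mechanisms the abstract names (a mask entering attention as additive logits, and a residual mix between successive attention maps; all shapes, names, and weightings here are ours):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def focused_attention(q, k, v, mask_logits, prev_attn=None, alpha=0.5):
    """Toy sketch: scaled dot-product scores between ROI features, a
    (learnable, in the real model) additive mask that suppresses
    non-relevant ROI pairs, and a residual mix with the previous
    layer's attention map to counter focusing degradation."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    attn = softmax(scores + mask_logits)                 # mask enters as logits
    if prev_attn is not None:
        attn = (1.0 - alpha) * attn + alpha * prev_attn  # residual focus
        attn = attn / attn.sum(axis=-1, keepdims=True)
    return attn @ v, attn

rng = np.random.default_rng(0)
n_roi, d = 6, 4                     # 6 brain ROIs with 4-dim features
q, k, v = (rng.normal(size=(n_roi, d)) for _ in range(3))
mask = np.zeros((n_roi, n_roi))
mask[:, 3] = -1e9                   # suppress attention to ROI 3
out, attn = focused_attention(q, k, v, mask)
```

In the paper the mask and the connection-focus weights are learned end-to-end from fMRI connectivity, whereas here the mask is fixed by hand to show its effect.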

Wang X, Yang M, Tosun S, Nakamura K, Li S, Li X

pubmed · Oct 23 2025
Low reliability has consistently been a challenge in applying deep learning models to high-risk decision-making scenarios. In medical image segmentation, for instance, multiple expert annotations can be consulted to reduce subjective bias and reach a consensus, thereby enhancing segmentation accuracy and reliability. To develop a reliable lesion segmentation model, we leverage the uncertainty introduced by multiple annotations, enabling the model to better capture real-world diagnostic variability and provide more informative predictions. Since a reliable model should produce calibrated uncertainty estimates that align with actual predictive performance, we propose CalDiff, a novel framework designed to calibrate model uncertainty in lesion segmentation and mitigate the risk of overconfident yet incorrect predictions. To harness the superior generative ability of diffusion models, a dual step-wise and sequence-aware calibration mechanism is proposed on the basis of the sequential nature of diffusion models. We evaluate the calibrated model through comprehensive quantitative and visual analysis, addressing the previously overlooked challenge of assessing uncertainty calibration and model reliability in scenarios with multiple annotations and multiple predictions. Experimental results on two multi-annotated lesion segmentation datasets demonstrate that CalDiff produces uncertainty maps that highlight informative low-confidence areas, which can in turn indicate predictions the model is likely to get wrong. By calibrating the uncertainty during training, the uncertain areas produced by our model correlate more closely with the areas where the model makes errors at inference. In summary, the uncertainty captured by the CalDiff framework can serve as a powerful indicator that helps mitigate the risks of adopting the model's outputs, allowing clinicians to prioritize reviewing areas or slices with higher uncertainty and enhancing the model's reliability and trustworthiness in real clinical practice.
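One common way to quantify the calibration that CalDiff targets is the expected calibration error (ECE), which measures the gap between predicted confidence and empirical accuracy. A minimal sketch on toy per-pixel values (the paper's own calibration metric may differ):

```python
import numpy as np

def expected_calibration_error(confidence, correct, n_bins=10):
    """ECE: bin pixels by predicted confidence and accumulate the
    |empirical accuracy - mean confidence| gap per bin, weighted by
    the fraction of pixels falling in that bin."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for i in range(n_bins):
        lo, hi = edges[i], edges[i + 1]
        in_bin = (confidence >= lo) & (confidence < hi)
        if i == n_bins - 1:                  # include confidence == 1.0
            in_bin = (confidence >= lo) & (confidence <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidence[in_bin].mean())
            ece += in_bin.mean() * gap
    return float(ece)

# Perfectly calibrated toy pixels: 70%-confident and right 70% of the time.
conf = np.full(10, 0.7)
corr = np.array([1, 1, 1, 1, 1, 1, 1, 0, 0, 0], dtype=float)
print(expected_calibration_error(conf, corr))  # → 0.0
```

A large ECE signals overconfident (or underconfident) uncertainty maps; calibrating during training, as CalDiff does, pushes this gap toward zero so that high-uncertainty regions genuinely flag likely errors.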

Nucifora G, Muser D, Bradley J, Tsoumani Z, De Angelis G, Caiffa T, Schmitt M, Sinagra G, Miller C

pubmed · Oct 23 2025
Ischemic cardiomyopathy (ICM) shows significant heterogeneity in clinical outcomes, challenging traditional risk stratification methods. Cardiac magnetic resonance (CMR) imaging offers detailed insights into myocardial structure and function, yet integrating this multidimensional data remains complex. The aim of the current study was to assess whether unsupervised machine learning could help identify distinct phenotypic subgroups and enhance prognostic accuracy. This study included 319 clinically stable ICM patients. CMR-derived variables, including left ventricular ejection fraction (LVEF), ventricular volumes, and myocardial scar burden, were analysed using the KAMILA clustering algorithm. The optimal number of clusters was determined through silhouette analysis, within-cluster sum of squares, and gap statistics. Principal Component Analysis (PCA) visualized the clustering results, and prognostic value was assessed using Cox regression and Kaplan-Meier survival analysis. SHAP (SHapley Additive exPlanations) values were used to evaluate feature importance. Two distinct phenotypic clusters were identified. Cluster 1 (n = 219) demonstrated better cardiac function, with higher LVEF, smaller ventricular volumes, and lower scar burden. Cluster 2 (n = 100) represented advanced disease, with lower LVEF, larger volumes, higher scar burden, and greater midwall fibrosis. PCA confirmed clear separation between clusters, explaining 62.6% of the variance. After a median follow-up of 13 months, the composite endpoint was observed in 37 (12%) patients. Patients in Cluster 2 had a significantly higher risk of experiencing the composite outcome (HR = 3.96, p < 0.001). SHAP analysis identified ischaemic scar burden, sphericity index, and midwall fibrosis as key predictors of outcomes. Unsupervised clustering of CMR-derived variables identified distinct ICM phenotypes with important prognostic implications.
This method improves risk stratification and could help tailor personalised treatment plans, highlighting the potential of machine learning in understanding ICM heterogeneity.
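The PCA explained-variance figure quoted above (62.6% for the leading components) comes from the eigenvalues of the feature covariance matrix. A minimal sketch on synthetic data (the cohort size matches the study, but the features are random stand-ins, not CMR variables):

```python
import numpy as np

def explained_variance_ratio(X, n_components=2):
    """Fraction of total variance captured by the first principal
    components: top eigenvalues of the covariance matrix of the
    centred feature matrix, divided by their sum."""
    Xc = X - X.mean(axis=0)
    eigvals = np.linalg.eigvalsh(np.cov(Xc, rowvar=False))[::-1]
    return float(eigvals[:n_components].sum() / eigvals.sum())

# Toy feature table: 319 patients, 6 features driven by 2 latent directions
# plus small noise, so 2 components explain nearly all the variance.
rng = np.random.default_rng(0)
X = rng.normal(size=(319, 2)) @ rng.normal(size=(2, 6)) \
    + 0.1 * rng.normal(size=(319, 6))
ratio = explained_variance_ratio(X, n_components=2)
```

Real clinical features are far less redundant than this toy table, which is why two components explaining 62.6% of the variance already indicates a strong low-dimensional structure in the CMR data.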
