
Multimodal prediction based on ultrasound for response to neoadjuvant chemotherapy in triple negative breast cancer.

Lyu M, Yi S, Li C, Xie Y, Liu Y, Xu Z, Wei Z, Lin H, Zheng Y, Huang C, Lin X, Liu Z, Pei S, Huang B, Shi Z

PubMed · Jul 25, 2025
Pathological complete response (pCR) can guide surgical strategy and postoperative treatments in triple-negative breast cancer (TNBC). In this study, we developed a Breast Cancer Response Prediction (BCRP) model to predict pCR in patients with TNBC. The BCRP model integrated multi-dimensional longitudinal quantitative imaging features, clinical factors, and features from the Breast Imaging Reporting and Data System (BI-RADS). The multi-dimensional longitudinal quantitative imaging features, comprising deep learning features and radiomics features, were extracted from multiview B-mode and colour Doppler ultrasound images acquired before and after treatment. The BCRP model achieved areas under the receiver operating characteristic curves (AUCs) of 0.94 [95% confidence interval (CI), 0.91-0.98] and 0.84 [95% CI, 0.75-0.92] in the training and external test cohorts, respectively. Additionally, a low BCRP score was an independent risk factor for event-free survival (P < 0.05). The BCRP model showed promising ability to predict response to neoadjuvant chemotherapy in TNBC and could provide valuable prognostic information.
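
As a rough illustration of the feature-level fusion such a model performs, the sketch below concatenates hypothetical pre-/post-treatment deep features, radiomics features, and clinical/BI-RADS descriptors and scores a simple classifier by AUC. All names and sizes are stand-ins, not the BCRP implementation.

```python
# Illustrative feature-level fusion for pCR prediction; every array below is
# a hypothetical stand-in for the extracted features described in the abstract.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200
deep_pre = rng.normal(size=(n, 64))        # deep features, pre-treatment ultrasound
deep_post = rng.normal(size=(n, 64))       # deep features, post-treatment ultrasound
radiomics = rng.normal(size=(n, 30))       # handcrafted radiomics features
clin_birads = rng.normal(size=(n, 8))      # clinical factors + BI-RADS descriptors
y = rng.integers(0, 2, size=n)             # toy pCR labels

X = np.hstack([deep_pre, deep_post, radiomics, clin_birads])   # simple fusion
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```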

Automated characterization of abdominal MRI exams using deep learning.

Kim J, Chae A, Duda J, Borthakur A, Rader DJ, Gee JC, Kahn CE, Witschey WR, Sagreiya H

PubMed · Jul 25, 2025
Advances in magnetic resonance imaging (MRI) have revolutionized disease detection and treatment planning. However, the growing volume and complexity of MRI data, along with heterogeneity in imaging protocols, scanner technology, and labeling practices, create a need for standardized tools that automatically identify and characterize key imaging attributes. Such tools are essential for large-scale, multi-institutional studies that rely on harmonized data to train robust machine learning models. In this study, we developed convolutional neural networks (CNNs) to automatically classify three core attributes of abdominal MRI: pulse sequence type, imaging orientation, and contrast enhancement status. Three distinct CNNs with similar backbone architectures were trained to classify single image slices into one of 12 pulse sequences, 4 orientations, or 2 contrast classes. The models achieved high classification accuracies of 99.51%, 99.87%, and 99.99% for pulse sequence, orientation, and contrast, respectively. We applied Grad-CAM to visualize the image regions influencing pulse sequence predictions and to highlight relevant anatomical features. To enhance performance, we implemented a majority-voting approach to aggregate slice-level predictions, achieving 100% accuracy at the volume level for all tasks. External validation using the Duke Liver Dataset demonstrated strong generalizability; after adjusting for class-label mismatch, volume-level accuracies exceeded 96.9% across all classification tasks.
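
The volume-level aggregation step is simple to illustrate. Below is a minimal sketch of slice-to-volume majority voting; the class labels are hypothetical examples, not the study's label set.

```python
# Minimal sketch: aggregate per-slice CNN predictions into one volume label
# by majority vote.
from collections import Counter

def volume_label(slice_preds):
    """Return the most frequent slice-level prediction for one volume."""
    return Counter(slice_preds).most_common(1)[0][0]

# e.g. seven slices of one series classified by the pulse-sequence CNN
slices = ["T2-FS", "T2-FS", "T1", "T2-FS", "T2-FS", "T2-FS", "T2-FS"]
print(volume_label(slices))  # -> "T2-FS"
```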

Digitalizing English-language CT Interpretation for Positive Haemorrhage Evaluation Reporting: the DECIPHER study.

Bloom B, Haimovich A, Pott J, Williams SL, Cheetham M, Langsted S, Skene I, Astin-Chamberlain R, Thomas SH

PubMed · Jul 25, 2025
Identifying whether there is a traumatic intracranial bleed (ICB+) on head CT is critical for clinical care and research. Free-text CT reports are unstructured and therefore must undergo time-consuming manual review. Existing artificial intelligence classification schemes are not optimised for the emergency department endpoint of classifying reports as ICB+ or ICB-. We sought to assess three methods for classifying CT reports: a text classification (TC) programme, a commercial natural language processing programme (Clinithink), and a generative pretrained transformer large language model (Digitalizing English-language CT Interpretation for Positive Haemorrhage Evaluation Reporting (DECIPHER)-LLM). The primary objective was to determine the diagnostic classification performance of the dichotomous categorisation of each of the three approaches; the secondary objective was to determine whether the LLM could achieve a substantial reduction in CT report review workload while maintaining 100% sensitivity. Anonymised radiology reports of head CT scans performed for trauma were manually labelled as ICB+/-; 898 reports were labelled in total. Training and validation sets were randomly created to train the TC and natural language processing models, and prompts were written for the LLM. Sensitivity and specificity (95% CI) were, respectively: TC, 87.9% (76.7% to 95.0%) and 98.2% (96.3% to 99.3%); Clinithink, 75.9% (62.8% to 86.1%) and 96.2% (93.8% to 97.8%); and DECIPHER-LLM (with the probability-of-ICB threshold set at 10%), 100% (93.8% to 100%) and 97.4% (95.3% to 98.8%). With the DECIPHER-LLM probability-of-ICB+ threshold of 10% used to identify CT reports requiring manual evaluation, the number of reports requiring manual classification fell by an estimated 385/449 cases (85.7%; 95% CI 82.1% to 88.9%) while maintaining 100% sensitivity. DECIPHER-LLM outperformed the other tested free-text classification methods.
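
The workload-reduction analysis amounts to probability-threshold triage: reports scored below the cut-off are auto-cleared, and the rest are sent for manual review. A sketch with toy values follows (10% cut-off as in the study); this is not the authors' code.

```python
# Threshold triage sketch: P(ICB+) below the cut-off -> auto-cleared;
# at or above the cut-off -> manual review. Toy probabilities and labels.
def triage(probs, labels, threshold=0.10):
    """probs: model P(ICB+); labels: 1 = ICB+. Returns (sensitivity, workload reduction)."""
    flagged = [p >= threshold for p in probs]
    positives = sum(labels)
    caught = sum(1 for f, y in zip(flagged, labels) if f and y == 1)
    sensitivity = caught / positives if positives else 1.0
    workload_reduction = 1 - sum(flagged) / len(probs)
    return sensitivity, workload_reduction

print(triage([0.02, 0.40, 0.95, 0.05], [0, 1, 1, 0]))  # -> (1.0, 0.5)
```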

A novel approach for breast cancer detection using a Nesterov-accelerated Adam optimizer with an attention mechanism.

Saber A, Emara T, Elbedwehy S, Hassan E

PubMed · Jul 25, 2025
Image-based automatic breast tumor detection has become a significant research focus, driven by recent advancements in machine learning (ML) algorithms. Traditional disease detection methods often involve manual feature extraction from images, a process requiring extensive expertise from specialists and pathologists. This labor-intensive approach is not only time-consuming but also impractical for widespread application. However, advancements in digital technologies and computer vision have enabled convolutional neural networks (CNNs) to learn features automatically, thereby overcoming these challenges. This paper presents a deep neural network model based on the MobileNet-V2 architecture, enhanced with a convolutional block attention mechanism for identifying tumor types in ultrasound images. The attention module improves the MobileNet-V2 model's performance by highlighting disease-affected areas within the images. The proposed model refines features extracted by MobileNet-V2 using the Nesterov-accelerated Adaptive Moment Estimation (Nadam) optimizer. This integration enhances convergence and stability, leading to improved classification accuracy. The proposed approach was evaluated on the BUSI ultrasound image dataset. Experimental results demonstrated strong performance, achieving an accuracy of 99.1%, sensitivity of 99.7%, specificity of 99.5%, precision of 97.7%, and an area under the curve (AUC) of 1.0 using an 80-20 data split. Additionally, under 10-fold cross-validation, the model achieved an accuracy of 98.7%, sensitivity of 99.1%, specificity of 98.3%, precision of 98.4%, F1-score of 98.04%, and an AUC of 0.99.
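
A hedged sketch of this kind of architecture follows: a MobileNet-V2 backbone, a compact CBAM-style attention block, and the Nesterov-accelerated Adam (NAdam) optimizer. Layer sizes, the attention details, and the three-class head are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch, not the authors' implementation: MobileNet-V2 features + CBAM-style
# attention, optimized with NAdam.
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2

class CBAM(nn.Module):
    """Compact CBAM-style block: channel attention, then spatial attention."""
    def __init__(self, ch, ratio=8, kernel=7):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(ch, ch // ratio), nn.ReLU(),
                                 nn.Linear(ch // ratio, ch))
        self.conv = nn.Conv2d(2, 1, kernel, padding=kernel // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))         # channel attention, avg-pooled
        mx = self.mlp(x.amax(dim=(2, 3)))          # channel attention, max-pooled
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.conv(s))     # spatial attention map

backbone = mobilenet_v2(weights=None).features     # MobileNet-V2 feature extractor
model = nn.Sequential(backbone, CBAM(1280),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                      nn.Linear(1280, 3))          # benign / malignant / normal (BUSI)
optimizer = torch.optim.NAdam(model.parameters(), lr=1e-4)
```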

Deep learning-based image classification for integrating pathology and radiology in AI-assisted medical imaging.

Lu C, Zhang J, Liu R

PubMed · Jul 25, 2025
The integration of pathology and radiology in medical imaging has emerged as a critical need for advancing diagnostic accuracy and improving clinical workflows. Current AI-driven approaches for medical image analysis, despite significant progress, face several challenges, including handling multi-modal imaging, imbalanced datasets, and the lack of robust interpretability and uncertainty quantification. These limitations often hinder the deployment of AI systems in real-world clinical settings, where reliability and adaptability are essential. To address these issues, this study introduces a novel framework, the Domain-Informed Adaptive Network (DIANet), combined with an Adaptive Clinical Workflow Integration (ACWI) strategy. DIANet leverages multi-scale feature extraction, domain-specific priors, and Bayesian uncertainty modeling to enhance interpretability and robustness. The proposed model is tailored for multi-modal medical imaging tasks, integrating adaptive learning mechanisms to mitigate domain shifts and imbalanced datasets. Complementing the model, the ACWI strategy ensures seamless deployment through explainable AI (XAI) techniques, uncertainty-aware decision support, and modular workflow integration compatible with clinical systems like PACS. Experimental results demonstrate significant improvements in diagnostic accuracy, segmentation precision, and reconstruction fidelity across diverse imaging modalities, validating the potential of this framework to bridge the gap between AI innovation and clinical utility.
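
The abstract does not detail the uncertainty machinery; one common approximation of Bayesian uncertainty modeling is Monte Carlo dropout, sketched generically below (illustrative only, not the DIANet implementation).

```python
# Generic Monte Carlo dropout sketch: run several stochastic forward passes
# and report the predictive mean and per-class standard deviation.
import torch

@torch.no_grad()
def mc_dropout_predict(model, x, passes=30):
    model.train()  # keep dropout layers active at inference time
    probs = torch.stack([model(x).softmax(dim=-1) for _ in range(passes)])
    return probs.mean(dim=0), probs.std(dim=0)  # mean prediction, uncertainty
```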

DeepJIVE: Learning Joint and Individual Variation Explained from Multimodal Data Using Deep Learning

Matthew Drexler, Benjamin Risk, James J Lah, Suprateek Kundu, Deqiang Qiu

arXiv preprint · Jul 25, 2025
Conventional multimodal data integration methods provide a comprehensive assessment of the shared or unique structure within each individual data type but suffer from several limitations, such as the inability to handle high-dimensional data and to identify nonlinear structures. In this paper, we introduce DeepJIVE, a deep-learning approach to performing Joint and Individual Variation Explained (JIVE). We present mathematical derivations and experimental validations using both synthetic and real-world 1D, 2D, and 3D datasets. Different strategies for achieving the identity and orthogonality constraints in DeepJIVE were explored, resulting in three viable loss functions. We found that DeepJIVE can successfully uncover joint and individual variations in multimodal datasets. Our application of DeepJIVE to Alzheimer's Disease Neuroimaging Initiative (ADNI) data also identified biologically plausible covariation patterns between amyloid positron emission tomography (PET) and magnetic resonance (MR) images. In conclusion, the proposed DeepJIVE can be a useful tool for multimodal data analysis.
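
The orthogonality constraint at the core of (Deep)JIVE is easy to state in code: penalize the cross-product of the joint and individual score matrices. The sketch below is a generic formulation, not one of the paper's three loss functions.

```python
# Generic orthogonality penalty between joint and individual latent scores:
# ||J^T A||_F^2 is zero exactly when the two subspaces are orthogonal.
import torch

def orthogonality_penalty(J, A):
    """J: (n, k_joint) joint scores; A: (n, k_indiv) individual scores."""
    return (J.T @ A).pow(2).sum()

J, A = torch.randn(128, 5), torch.randn(128, 3)
penalty = orthogonality_penalty(J, A)  # added to the per-modality reconstruction losses
```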

Deep Learning-Based Multi-View Echocardiographic Framework for Comprehensive Diagnosis of Pericardial Disease

Jeong, S., Moon, I., Jeon, J., Jeong, D., Lee, J., Kim, J., Lee, S.-A., Jang, Y., Yoon, Y. E., Chang, H.-J.

medRxiv preprint · Jul 25, 2025
Background: Pericardial disease exhibits a wide clinical spectrum, ranging from mild effusions to life-threatening tamponade or constrictive pericarditis. While transthoracic echocardiography (TTE) is the primary diagnostic modality, its effectiveness is limited by operator dependence and incomplete evaluation of functional impact. Existing artificial intelligence models focus primarily on effusion detection and lack comprehensive disease assessment. Methods: We developed a deep learning (DL)-based framework that sequentially assesses pericardial disease: (1) morphological changes, including pericardial effusion amount (normal/small/moderate/large) and pericardial thickening or adhesion (yes/no), using five B-mode views; and (2) hemodynamic significance (yes/no), incorporating additional inputs from Doppler and inferior vena cava measurements. The development dataset comprised 2,253 TTEs from multiple Korean institutions (225 for internal testing), and the independent external test set consisted of 274 TTEs. Results: In the internal test set, the model achieved diagnostic accuracies of 81.8-97.3% for pericardial effusion classification, 91.6% for pericardial thickening/adhesion, and 86.2% for hemodynamic significance. Corresponding accuracies in the external test set were 80.3-94.2%, 94.5%, and 85.5%, respectively. Areas under the receiver operating characteristic curves (AUROCs) for the three tasks were 0.92-0.99, 0.90, and 0.79 in the internal test set, and 0.95-0.98, 0.85, and 0.76 in the external test set. Sensitivity for detecting pericardial thickening/adhesion and hemodynamic significance was modest (66.7% and 68.8% in the internal test set) but improved substantially when cases with poor image quality were excluded (77.3% and 80.8%). Similar performance gains were observed in subgroups with complete target views and a higher number of available video clips. Conclusions: This study presents the first DL-based TTE model capable of comprehensive evaluation of pericardial disease, integrating both morphological and functional assessments. The proposed framework demonstrated strong generalizability and aligned with real-world diagnostic workflow. However, caution is warranted when interpreting results under suboptimal imaging conditions.
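
A schematic of the two-stage flow might look like the sketch below. The model objects and view names are placeholders, and gating the hemodynamic stage on abnormal morphology is an assumption made here for illustration.

```python
# Hypothetical two-stage assessment: morphology from B-mode views, then
# hemodynamic significance with Doppler and IVC inputs. Placeholders only.
def assess_pericardium(bmode_views, doppler, ivc, morph_model, hemo_model):
    effusion, thickening = morph_model(bmode_views)          # stage 1: morphology
    significant = None
    if effusion != "normal" or thickening:                   # assumed gating (not stated in the abstract)
        significant = hemo_model(bmode_views, doppler, ivc)  # stage 2: hemodynamics
    return {"effusion": effusion, "thickening": thickening,
            "hemodynamically_significant": significant}
```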

An approach for cancer outcomes modelling using a comprehensive synthetic dataset.

Tu L, Choi HHF, Clark H, Lloyd SAM

PubMed · Jul 24, 2025
Limited patient data availability presents a challenge for efficient machine learning (ML) model development. Recent studies have proposed methods to generate synthetic medical images but lack the corresponding prognostic information required for predicting outcomes. We present a cancer outcomes modelling approach that involves generating a comprehensive synthetic dataset which can accurately mimic a real dataset. A real public dataset containing computed tomography-based radiomic features and clinical information for 132 non-small cell lung cancer patients was used. A synthetic dataset of virtual patients was generated using a conditional tabular generative adversarial network. Models to predict two-year overall survival were trained on real or synthetic data using combinations of four feature selection methods (mutual information, ANOVA F-test, recursive feature elimination, random forest (RF) importance weights) and six ML algorithms (RF, k-nearest neighbours, logistic regression, support vector machine, XGBoost, Gaussian Naïve Bayes). Models were tested on withheld real data and externally validated. Real and synthetic datasets were similar: the average value of one minus the Kolmogorov-Smirnov test statistic was 0.871 for continuous features, and a chi-square test confirmed agreement for discrete features (p < 0.001). XGBoost using RF importance-based features performed the most consistently for both datasets, with percent differences in balanced accuracy and area under the precision-recall curve of < 1.3%. Preliminary findings demonstrate the potential application of synthetic radiomic and clinical data augmentation for cancer outcomes modelling, although further validation with larger, more diverse datasets is crucial. While our approach was described in a lung context, it may be applied to other sites or endpoints.
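
Under stated assumptions (the `ctgan` package, a hypothetical CSV with a 0/1 `survival_2yr` label and a discrete `stage` column), the pipeline could be outlined as follows; this is a sketch, not the authors' code.

```python
# Sketch: fit a conditional tabular GAN on the real table, sample virtual
# patients, select features by random-forest importance, train XGBoost,
# then evaluate on the withheld real patients. Column names are hypothetical.
import pandas as pd
from ctgan import CTGAN
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.metrics import balanced_accuracy_score
from xgboost import XGBClassifier

df = pd.read_csv("radiomics_clinical.csv")            # real cohort (placeholder path)
synth = CTGAN(epochs=300)
synth.fit(df, discrete_columns=["stage", "survival_2yr"])
virtual = synth.sample(1000)                           # synthetic virtual patients

X, y = virtual.drop(columns="survival_2yr"), virtual["survival_2yr"]
selector = SelectFromModel(RandomForestClassifier(random_state=0)).fit(X, y)
clf = XGBClassifier().fit(selector.transform(X), y)

X_real, y_real = df.drop(columns="survival_2yr"), df["survival_2yr"]
pred = clf.predict(selector.transform(X_real))
print("balanced accuracy on real data:", balanced_accuracy_score(y_real, pred))
```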

Deep Learning to Differentiate Parkinsonian Syndromes Using Multimodal Magnetic Resonance Imaging: A Proof-of-Concept Study.

Mattia GM, Chougar L, Foubert-Samier A, Meissner WG, Fabbri M, Pavy-Le Traon A, Rascol O, Grabli D, Degos B, Pyatigorskaya N, Faucher A, Vidailhet M, Corvol JC, Lehéricy S, Péran P

PubMed · Jul 24, 2025
The differentiation between multiple system atrophy (MSA) and Parkinson's disease (PD) based on clinical diagnostic criteria can be challenging, especially at an early stage. Leveraging deep learning methods and magnetic resonance imaging (MRI) data has shown great potential in aiding automatic diagnosis. The aim was to determine the feasibility of a three-dimensional convolutional neural network (3D CNN)-based approach using multimodal, multicentric MRI data for differentiating MSA and its variants from PD. MRI data were retrospectively collected from three French MSA reference centers. We computed quantitative maps of gray matter density (GD) from a T1-weighted sequence and mean diffusivity (MD) from diffusion tensor imaging. These maps were used as input to a 3D CNN, either individually ("monomodal," "GD" or "MD") or in combination ("bimodal," "GD-MD"). Classification tasks included the differentiation of PD and MSA patients. Model interpretability was investigated by analyzing misclassified patients and providing a visual interpretation of the most activated regions in CNN predictions. The study population included 92 patients with MSA (50 with MSA-P, parkinsonian variant; 33 with MSA-C, cerebellar variant; 9 with MSA-PC, mixed variant) and 64 with PD. The best accuracies were obtained for the PD/MSA (0.88 ± 0.03 with GD-MD), PD/MSA-C&PC (0.84 ± 0.08 with MD), and PD/MSA-P (0.78 ± 0.09 with GD) tasks. Patients misclassified by the CNN exhibited fewer and milder image alterations, as shown by an image-based z-score analysis. Activation maps highlighted regions involved in MSA pathophysiology, namely the putamen and cerebellum. Our findings hold promise for developing an efficient, MRI-based, and user-independent diagnostic tool suitable for differentiating parkinsonian syndromes in clinical practice. © 2025 The Author(s). Movement Disorders published by Wiley Periodicals LLC on behalf of International Parkinson and Movement Disorder Society.
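
A minimal sketch of a bimodal ("GD-MD") 3D CNN follows, with the two quantitative maps entering as two input channels; the layer sizes are illustrative, not the authors' architecture.

```python
# Toy bimodal 3D CNN: channel 0 = gray matter density map, channel 1 = mean
# diffusivity map. Illustrative sizes only.
import torch
import torch.nn as nn

class Small3DCNN(nn.Module):
    def __init__(self, n_classes=2):                   # e.g. PD vs MSA
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(2, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1))
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                              # x: (batch, 2, D, H, W)
        return self.classifier(self.features(x).flatten(1))

logits = Small3DCNN()(torch.randn(1, 2, 64, 64, 64))
```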

Patient Perspectives on Artificial Intelligence in Health Care: Focus Group Study for Diagnostic Communication and Tool Implementation.

Foresman G, Biro J, Tran A, MacRae K, Kazi S, Schubel L, Visconti A, Gallagher W, Smith KM, Giardina T, Haskell H, Miller K

PubMed · Jul 24, 2025
Artificial intelligence (AI) is rapidly transforming health care, offering potential benefits in diagnosis, treatment, and workflow efficiency. However, limited research explores patient perspectives on AI, especially in its role in diagnosis and communication. This study examines patient perceptions of various AI applications, focusing on the diagnostic process and communication. This study aimed to examine patient perspectives on AI use in health care, particularly in diagnostic processes and communication, identifying key concerns, expectations, and opportunities to guide the development and implementation of AI tools. This study used a qualitative focus group methodology with co-design principles to explore patient and family member perspectives on AI in clinical practice. A single 2-hour session was conducted with 17 adult participants. The session included interactive activities and breakout sessions focused on five specific AI scenarios relevant to diagnosis and communication: (1) portal messaging, (2) radiology review, (3) digital scribe, (4) virtual human, and (5) decision support. The session was audio-recorded and transcribed, with facilitator notes and demographic questionnaires collected. Data were analyzed using inductive thematic analysis by 2 independent researchers (GF and JB), with discrepancies resolved via consensus. Participants reported varying comfort levels with AI applications contingent on the level of patient interaction, with digital scribe (average 4.24, range 2-5) and radiology review (average 4.00, range 2-5) being the highest, and virtual human (average 1.68, range 1-4) being the lowest. In total, five cross-cutting themes emerged: (1) validation (concerns about model reliability), (2) usability (impact on diagnostic processes), (3) transparency (expectations for disclosing AI usage), (4) opportunities (potential for AI to improve care), and (5) privacy (concerns about data security). Participants valued the co-design session and felt they had a significant say in the discussions. This study highlights the importance of incorporating patient perspectives in the design and implementation of AI tools in health care. Transparency, human oversight, clear communication, and data privacy are crucial for patient trust and acceptance of AI in diagnostic processes. These findings inform strategies for individual clinicians, health care organizations, and policy makers to ensure responsible and patient-centered AI deployment in health care.