Page 141 of 175 · 1742 results

Feasibility of improving vocal fold pathology image classification with synthetic images generated by DDPM-based GenAI: a pilot study.

Khazrak I, Zainaee S, Rezaee MM, Ghasemi M, Green RC

PubMed · May 17, 2025
Voice disorders (VD) are often linked to vocal fold structural pathologies (VFSPs). Laryngeal imaging plays a vital role in assessing VFSPs and VD in clinical and research settings, but challenges such as scarce and imbalanced datasets can limit the generalizability of findings. Denoising Diffusion Probabilistic Models (DDPMs), a subtype of generative AI, have gained attention for their ability to generate high-quality, realistic synthetic images that can address these challenges. This study explores the feasibility of improving VFSP image classification by generating synthetic images with DDPMs. A total of 404 laryngoscopic images depicting vocal folds with and without VFSPs were included, and DDPMs were used to generate synthetic images to augment this original dataset. Two convolutional neural network architectures, VGG16 and ResNet50, were trained first on the original dataset alone and then on the augmented datasets. Evaluation metrics were analyzed for both binary classification (with/without VFSPs) and multi-class classification (seven specific VFSPs). Realistic, high-quality synthetic images were generated for dataset augmentation. The models failed to converge when trained only on the original dataset but converged successfully, with low loss and high accuracy, when trained on the augmented datasets. For both binary and multi-class classification, the best performance was achieved by models trained on an augmented dataset. Generating realistic VFSP images with DDPMs is therefore feasible; it can enhance AI-based classification of VFSPs and may support VD screening and diagnosis.
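
The DDPM forward (noising) process that underpins this kind of augmentation has a closed form, x_t = sqrt(abar_t) * x_0 + sqrt(1 - abar_t) * eps with eps ~ N(0, 1). A minimal stdlib-Python sketch follows; the schedule constants and function names are illustrative, not taken from the paper:

```python
import math
import random

def linear_beta_schedule(timesteps, beta_start=1e-4, beta_end=0.02):
    """Linearly spaced per-step noise variances beta_1..beta_T."""
    step = (beta_end - beta_start) / (timesteps - 1)
    return [beta_start + i * step for i in range(timesteps)]

def cumulative_alpha_bar(betas):
    """alpha_bar_t = prod_{s<=t} (1 - beta_s), the closed-form signal scale."""
    out, prod = [], 1.0
    for b in betas:
        prod *= 1.0 - b
        out.append(prod)
    return out

def q_sample(x0, t, alpha_bars, rng):
    """Draw x_t ~ q(x_t | x_0): sqrt(abar_t)*x0 + sqrt(1-abar_t)*noise."""
    a = alpha_bars[t]
    return [math.sqrt(a) * x + math.sqrt(1.0 - a) * rng.gauss(0.0, 1.0)
            for x in x0]

betas = linear_beta_schedule(1000)
abars = cumulative_alpha_bar(betas)
rng = random.Random(0)
x0 = [0.5] * 4                        # a toy "image" of four pixels
xT = q_sample(x0, 999, abars, rng)    # at t=T this is almost pure noise
```

Training a DDPM then amounts to learning to predict the injected noise at a random timestep; sampling runs the process in reverse to produce synthetic images.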

Fair ultrasound diagnosis via adversarial protected attribute aware perturbations on latent embeddings.

Xu Z, Tang F, Quan Q, Yao Q, Kong Q, Ding J, Ning C, Zhou SK

PubMed · May 17, 2025
Deep learning techniques have significantly enhanced the convenience and precision of ultrasound image diagnosis, particularly in the crucial step of lesion segmentation. However, recent studies reveal that both train-from-scratch models and pre-trained models often exhibit performance disparities across sex and age attributes, leading to biased diagnoses for different subgroups. In this paper, we propose APPLE, a novel approach designed to mitigate unfairness without altering the parameters of the base model. APPLE achieves this by learning fair perturbations in the latent space through a generative adversarial network. Extensive experiments on both a publicly available dataset and an in-house ultrasound image dataset demonstrate that our method improves segmentation and diagnostic fairness across all sensitive attributes and various backbone architectures compared to the base models. Through this study, we aim to highlight the critical importance of fairness in medical segmentation and contribute to the development of a more equitable healthcare system.
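
The abstract does not give APPLE's training details; as a minimal sketch of the quantity such a method targets, the code below (all names and toy scores are hypothetical) computes the subgroup disparity in mean Dice that a fair segmentation model should shrink:

```python
def dice(pred, target):
    """Dice overlap between two binary masks given as flat 0/1 lists."""
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 1.0 if total == 0 else 2.0 * inter / total

def fairness_gap(scores_by_group):
    """Max difference in mean Dice across protected subgroups
    (e.g. sex or age bands): the disparity a fair model minimizes."""
    means = [sum(s) / len(s) for s in scores_by_group.values()]
    return max(means) - min(means)

# Toy subgroup scores: a biased model segments one subgroup worse.
by_sex = {"female": [0.91, 0.88, 0.90], "male": [0.74, 0.70, 0.72]}
gap = fairness_gap(by_sex)
```

In APPLE's setting the base model is frozen and an adversarially trained perturbation of the latent embeddings is optimized so that this kind of gap shrinks without degrading overall segmentation quality.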

MRI-based radiomics for differentiating high-grade from low-grade clear cell renal cell carcinoma: a systematic review and meta-analysis.

Broomand Lomer N, Ghasemi A, Ahmadzadeh AM, Torigian DA

PubMed · May 17, 2025
High-grade clear cell renal cell carcinoma (ccRCC) is linked to lower survival rates and more aggressive disease progression. This study aims to assess the diagnostic performance of MRI-derived radiomics as a non-invasive approach for pre-operative differentiation of high-grade from low-grade ccRCC. A systematic search was conducted across PubMed, Scopus, and Embase. Quality assessment was performed using QUADAS-2 and METRICS. Pooled sensitivity, specificity, positive likelihood ratio (PLR), negative likelihood ratio (NLR), diagnostic odds ratio (DOR), and area under the curve (AUC) were estimated using a bivariate model. Separate meta-analyses were conducted for radiomics models and combined models, where the latter integrated clinical and radiological features with radiomics. Subgroup analysis was performed to identify potential sources of heterogeneity. Sensitivity analysis was conducted to identify potential outliers. A total of 15 studies comprising 2,265 patients were included, with seven and six studies contributing to the meta-analysis of radiomics and combined models, respectively. The pooled estimates of the radiomics model were as follows: sensitivity, 0.78; specificity, 0.84; PLR, 4.17; NLR, 0.28; DOR, 17.34; and AUC, 0.84. For the combined model, the pooled sensitivity, specificity, PLR, NLR, DOR, and AUC were 0.87, 0.81, 3.78, 0.21, 28.57, and 0.90, respectively. Radiomics models trained on smaller cohorts exhibited a significantly higher pooled specificity and PLR than those trained on larger cohorts. Also, radiomics models based on single-user segmentation demonstrated a significantly higher pooled specificity compared to multi-user segmentation. Radiomics has demonstrated potential as a non-invasive tool for grading ccRCC, with combined models achieving superior performance.
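
The likelihood ratios and DOR relate to sensitivity and specificity through standard formulas; the small sketch below (illustrative only) applies them to the pooled radiomics summary point. Note that these naive ratios differ from the reported pooled PLR of 4.17 because a bivariate model pools the study-level 2x2 data jointly rather than dividing the pooled summary estimates:

```python
def likelihood_ratios(sens, spec):
    """Positive/negative likelihood ratios and diagnostic odds ratio
    from a single sensitivity/specificity pair."""
    plr = sens / (1.0 - spec)      # PLR = sens / (1 - spec)
    nlr = (1.0 - sens) / spec      # NLR = (1 - sens) / spec
    dor = plr / nlr                # DOR = PLR / NLR
    return plr, nlr, dor

# Applied to the pooled radiomics summary point (sens 0.78, spec 0.84):
plr, nlr, dor = likelihood_ratios(0.78, 0.84)   # PLR ~ 4.88, not 4.17
```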

Development of a deep-learning algorithm for etiological classification of subarachnoid hemorrhage using non-contrast CT scans.

Chen L, Wang X, Li Y, Bao Y, Wang S, Zhao X, Yuan M, Kang J, Sun S

PubMed · May 17, 2025
This study aims to develop a deep learning algorithm for differentiating aneurysmal subarachnoid hemorrhage (aSAH) from non-aneurysmal subarachnoid hemorrhage (naSAH) using non-contrast computed tomography (NCCT) scans. This retrospective study included 618 patients diagnosed with SAH. The dataset was divided into a training and internal validation cohort (533 cases: aSAH = 305, naSAH = 228) and an external test cohort (85 cases: aSAH = 55, naSAH = 30). Hemorrhage regions were automatically segmented using a U-Net++ architecture, and a ResNet-based deep learning model was trained to classify the etiology of SAH. The model achieved robust performance in distinguishing aSAH from naSAH. In the internal validation cohort, it yielded an average sensitivity of 0.898, specificity of 0.877, accuracy of 0.889, Matthews correlation coefficient (MCC) of 0.777, and area under the curve (AUC) of 0.948 (95% CI: 0.929-0.967). In the external test cohort, it demonstrated an average sensitivity of 0.891, specificity of 0.880, accuracy of 0.887, MCC of 0.761, and AUC of 0.914 (95% CI: 0.889-0.940), outperforming junior radiologists (average accuracy: 0.836; MCC: 0.660). The study presents a deep learning architecture capable of accurately identifying SAH etiology from NCCT scans; its high diagnostic performance highlights its potential to support rapid and precise clinical decision-making in emergency settings.
Question: Differentiating aSAH from naSAH is crucial for timely treatment, yet existing imaging modalities are not universally accessible or convenient for rapid diagnosis.
Findings: A ResNet-variant deep learning model using non-contrast CT scans demonstrated high accuracy in classifying SAH etiology and enhanced junior radiologists' diagnostic performance.
Clinical relevance: AI-driven analysis of non-contrast CT scans provides a fast, cost-effective, and non-invasive solution for preoperative SAH diagnosis, facilitating early identification of patients who need aneurysm surgery while minimizing unnecessary angiography in non-aneurysmal cases.
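
The Matthews correlation coefficient reported above is computed from the 2x2 confusion counts. A short sketch follows; the counts below are hypothetical, chosen only to roughly match the external cohort's size and reported sensitivity/specificity:

```python
import math

def mcc(tp, fp, fn, tn):
    """Matthews correlation coefficient from 2x2 confusion counts."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if den == 0 else num / den

# Hypothetical counts for an 85-case cohort (55 aSAH, 30 naSAH)
# with sensitivity ~0.89 and specificity ~0.87:
tp, fn = 49, 6      # 49/55 ~ 0.891 sensitivity
tn, fp = 26, 4      # 26/30 ~ 0.867 specificity
score = mcc(tp, fp, fn, tn)   # ~0.75, in the vicinity of the reported 0.761
```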

Intracranial hemorrhage segmentation and classification framework in computer tomography images using deep learning techniques.

Ahmed SN, Prakasam P

PubMed · May 17, 2025
Automated diagnosis and computed tomography (CT) hemorrhage segmentation could benefit neurosurgeons by helping them create treatment strategies that increase survival rates. Owing to the significance of medical image segmentation and the difficulty of manual delineation, a wide variety of automated techniques have been developed, typically targeting particular image modalities. In this paper, a MUNet (Multiclass-UNet) based Intracranial Hemorrhage Segmentation and Classification Framework (IHSNet) is proposed that segments multiple kinds of hemorrhage, with fully connected layers classifying the hemorrhage type. The proposed approach achieves a segmentation accuracy of 98.53% and a classification accuracy of 98.71%. Intraventricular hemorrhage (IVH), epidural hemorrhage (EDH), intraparenchymal hemorrhage (IPH), subdural hemorrhage (SDH), and subarachnoid hemorrhage (SAH) are the intracranial hemorrhage (ICH) subtypes considered, with Dice coefficients of 0.77, 0.84, 0.64, 0.80, and 0.92, respectively. The method has considerable clinical potential for computer-aided diagnosis and could be extended in the future to other medical image segmentation problems.
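
The per-subtype Dice coefficient generalizes naturally to a multiclass label map. A minimal sketch follows; the function name and toy labels are illustrative, not from the paper:

```python
def per_class_dice(pred, target, classes):
    """Per-class Dice for a multiclass label map given as flat label lists."""
    scores = {}
    for c in classes:
        inter = sum(1 for p, t in zip(pred, target) if p == c and t == c)
        size = (sum(1 for p in pred if p == c)
                + sum(1 for t in target if t == c))
        scores[c] = 1.0 if size == 0 else 2.0 * inter / size
    return scores

# The paper's reported subtype Dice scores, macro-averaged:
reported = {"IVH": 0.77, "EDH": 0.84, "IPH": 0.64, "SDH": 0.80, "SAH": 0.92}
macro = sum(reported.values()) / len(reported)   # ~0.794 mean subtype Dice
```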

Brain metabolic imaging-based model identifies cognitive stability in prodromal Alzheimer's disease.

Perron J, Scramstad C, Ko JH

PubMed · May 17, 2025
The recent approval of anti-amyloid pharmaceuticals for the treatment of Alzheimer's disease (AD) has created a pressing need to accurately identify optimal candidates for anti-amyloid therapy, specifically those with evidence of incipient cognitive decline, since patients with mild cognitive impairment (MCI) may remain stable for several years even with positive AD biomarkers. Using fluorodeoxyglucose PET and biomarker data from 594 ADNI patients, a neural network ensemble was trained to forecast cognition from the MCI diagnostic baseline. Training data comprised PET studies of patients with biological AD. In the test set, the ensemble discriminated between progressive and stable prodromal subjects (MCI with positive amyloid and tau) at baseline with an area under the curve of 88.6%, 88.6% (39/44) accuracy, 73.7% (14/19) sensitivity, and 100% (25/25) specificity. It also classified all other test subjects (healthy or AD-continuum subjects across the cognitive spectrum) with 86.4% (206/239) accuracy, 77.4% (33/42) sensitivity, and 88.2% (165/197) specificity. By identifying patients with prodromal AD who will not progress to dementia, the model could significantly reduce overall societal burden and cost if implemented as a screening tool. Its high positive predictive value in the prodromal test set makes it a practical means of selecting candidates for anti-amyloid therapy and trials.
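
The headline test-set figures follow directly from the reported counts; the short sketch below recomputes sensitivity, specificity, accuracy, and the positive predictive value the authors highlight, using 14/19 progressors detected and 25/25 stable subjects correctly classified:

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, accuracy, and PPV from confusion counts."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / (tp + fp + fn + tn)
    ppv = tp / (tp + fp) if tp + fp else float("nan")
    return sens, spec, acc, ppv

# Prodromal test-set counts reported in the abstract:
# 14 of 19 progressors caught, all 25 stable subjects correct.
sens, spec, acc, ppv = screening_metrics(tp=14, fp=0, fn=5, tn=25)
```

With zero false positives, the PPV is 1.0, which is what makes the model attractive for selecting therapy candidates despite its moderate sensitivity.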

An integrated deep learning model for early and multi-class diagnosis of Alzheimer's disease from MRI scans.

Vinukonda ER, Jagadesh BN

PubMed · May 17, 2025
Alzheimer's disease (AD) is a progressive neurodegenerative disorder that severely affects memory, behavior, and cognitive function. Early and accurate diagnosis is crucial for effective intervention, yet detecting subtle changes in the early stages remains a challenge. In this study, we propose a hybrid deep learning-based multi-class classification system for AD using magnetic resonance imaging (MRI). The proposed approach integrates an improved DeepLabV3+ (IDeepLabV3+) model for lesion segmentation, followed by feature extraction using the LeNet-5 model. A novel feature selection method based on average correlation and error probability is employed to enhance classification efficiency. Finally, an Enhanced ResNext (EResNext) model is used to classify AD into four stages: non-dementia (ND), very mild dementia (VMD), mild dementia (MD), and moderate dementia (MOD). The proposed model achieves an accuracy of 98.12%, demonstrating its superior performance over existing methods. The area under the ROC curve (AUC) further validates its effectiveness, with the highest score of 0.97 for moderate dementia. This study highlights the potential of hybrid deep learning models in improving early AD detection and staging, contributing to more accurate clinical diagnosis and better patient care.
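
The abstract does not specify how "average correlation" enters the feature selection; one plausible reading, sketched below with hypothetical names and toy data, ranks features by their mean absolute Pearson correlation with the remaining features so that the least redundant ones are kept first:

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length numeric lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return 0.0 if sx == 0 or sy == 0 else cov / (sx * sy)

def rank_by_redundancy(features):
    """Order feature columns by mean |correlation| with the others,
    least redundant first."""
    scores = []
    for i, fi in enumerate(features):
        others = [abs(pearson(fi, fj))
                  for j, fj in enumerate(features) if j != i]
        scores.append(sum(others) / len(others))
    return sorted(range(len(features)), key=lambda i: scores[i])

# Three toy feature columns: f0 and f1 are near-duplicates, f2 is distinct.
f0 = [1.0, 2.0, 3.0, 4.0]
f1 = [1.1, 2.0, 2.9, 4.2]
f2 = [4.0, 1.0, 3.0, 2.0]
order = rank_by_redundancy([f0, f1, f2])   # f2 ranked least redundant
```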

A self-supervised multimodal deep learning approach to differentiate post-radiotherapy progression from pseudoprogression in glioblastoma.

Gomaa A, Huang Y, Stephan P, Breininger K, Frey B, Dörfler A, Schnell O, Delev D, Coras R, Donaubauer AJ, Schmitter C, Stritzelberger J, Semrau S, Maier A, Bayer S, Schönecker S, Heiland DH, Hau P, Gaipl US, Bert C, Fietkau R, Schmidt MA, Putz F

PubMed · May 17, 2025
Accurate differentiation of pseudoprogression (PsP) from true progression (TP) following radiotherapy (RT) in glioblastoma patients is crucial for optimal treatment planning. However, this task remains challenging due to the overlapping imaging characteristics of PsP and TP. This study therefore proposes a multimodal deep-learning approach that utilizes complementary information from routine anatomical MR images, clinical parameters, and RT treatment planning information for improved predictive accuracy. The approach uses a self-supervised Vision Transformer (ViT) to encode multi-sequence MR brain volumes, effectively capturing both global and local context from the high-dimensional input. The encoder is trained in a self-supervised upstream task on unlabeled glioma MRI datasets from the open BraTS2021, UPenn-GBM, and UCSF-PDGM datasets (n = 2317 MRI studies) to generate compact, clinically relevant representations from FLAIR and T1 post-contrast sequences. These encoded MR inputs are then integrated with clinical data and RT treatment planning information through guided cross-modal attention, improving progression classification accuracy. The work was developed using two datasets from different centers: the Burdenko Glioblastoma Progression Dataset (n = 59) for training and validation, and the GlioCMV progression dataset from the University Hospital Erlangen (UKER) (n = 20) for testing. The proposed method achieved competitive performance, with an AUC of 75.3%, outperforming current state-of-the-art data-driven approaches. Importantly, the approach relies solely on readily available anatomical MRI sequences, clinical data, and RT treatment planning information, enhancing its clinical feasibility. It addresses the challenge of limited data availability for PsP and TP differentiation and could allow for improved clinical decision-making and optimized treatment plans for glioblastoma patients.
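
Guided cross-modal attention is not specified in detail in the abstract. Below is a bare-bones sketch of the underlying scaled dot-product cross-attention, with imaging tokens as queries and clinical/RT-plan tokens as keys and values; all names and toy vectors are hypothetical:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention: each query row (e.g. an imaging
    token) attends over keys/values (e.g. clinical/RT-plan tokens)."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, values))
                    for j in range(len(values[0]))])
    return out

# One imaging token attending over two clinical tokens (toy dimensions).
img = [[1.0, 0.0]]
clin_k = [[1.0, 0.0], [0.0, 1.0]]
clin_v = [[10.0, 0.0], [0.0, 10.0]]
fused = cross_attention(img, clin_k, clin_v)
```

Because the query aligns with the first key, the fused output is dominated by the first value row; the "guided" variant in such architectures additionally conditions the attention weights on task-specific signals.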

Exploring interpretable echo analysis using self-supervised parcels.

Majchrowska S, Hildeman A, Mokhtari R, Diethe T, Teare P

PubMed · May 17, 2025
The application of AI to predicting critical heart failure endpoints from echocardiography is a promising avenue for improving patient care and treatment planning. However, fully supervised training of deep learning models in medical imaging requires a substantial amount of labelled data, posing significant challenges because skilled medical professionals must annotate image sequences. Our study addresses this limitation by exploring the potential of self-supervised learning, emphasising interpretability, robustness, and safety as crucial factors in cardiac imaging analysis. We leverage self-supervised learning on a large unlabelled dataset, facilitating the discovery of features applicable to various downstream tasks. The backbone model not only generates informative features for training smaller models using simple techniques but also produces features that are interpretable by humans. The study employs a modified Self-supervised Transformer with Energy-based Graph Optimisation (STEGO) network on top of self-DIstillation with NO labels (DINO) as a backbone model, pre-trained on diverse medical and non-medical data. This approach facilitates the generation of self-segmented outputs, termed "parcels", which identify distinct anatomical sub-regions of the heart. Our findings highlight the robustness of these self-learned parcels across diverse patient profiles and phases of the cardiac cycle. Moreover, the parcels offer high interpretability and effectively encapsulate clinically relevant cardiac substructures. We conduct a comprehensive evaluation of the proposed self-supervised approach on publicly available datasets, demonstrating its adaptability to a wide range of requirements.
Our results underscore the potential of self-supervised learning to address labelled data scarcity in medical imaging, offering a path to improve cardiac imaging analysis and enhance the efficiency and interpretability of diagnostic procedures, thus positively impacting patient care and clinical decision-making.
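
STEGO's parcel generation involves learned feature refinement and clustering; as a deliberately simplified stand-in, the sketch below (names and toy features hypothetical) assigns patch embeddings to their nearest centroid, which is the final labeling step such clustering produces:

```python
def assign_parcels(patch_embeddings, centroids):
    """Label each patch with its nearest centroid (squared L2 distance),
    a toy stand-in for clustering DINO patch features into 'parcels'."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(centroids)), key=lambda c: sq_dist(e, centroids[c]))
            for e in patch_embeddings]

# Toy 2-D patch features and two parcel centroids.
patches = [[0.1, 0.0], [0.2, 0.1], [0.9, 1.0], [1.1, 0.9]]
centroids = [[0.0, 0.0], [1.0, 1.0]]
labels = assign_parcels(patches, centroids)   # -> [0, 0, 1, 1]
```

Each contiguous run of same-labeled patches then corresponds to one anatomical "parcel" in the echo frame.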

The Role of Digital Technologies in Personalized Craniomaxillofacial Surgical Procedures.

Daoud S, Shhadeh A, Zoabi A, Redenski I, Srouji S

PubMed · May 17, 2025
Craniomaxillofacial (CMF) surgery addresses complex challenges, balancing aesthetic and functional restoration. Digital technologies, including advanced imaging, virtual surgical planning, computer-aided design, and 3D printing, have revolutionized this field. These tools improve accuracy and optimize processes across all surgical phases, from diagnosis to postoperative evaluation. CMF's unique demands are met through patient-specific solutions that optimize outcomes. Emerging technologies like artificial intelligence, extended reality, robotics, and bioprinting promise to overcome limitations, driving the future of personalized, technology-driven CMF care.