
The role of deep learning in diagnostic imaging of spondyloarthropathies: a systematic review.

Omar M, Watad A, McGonagle D, Soffer S, Glicksberg BS, Nadkarni GN, Klang E

Jun 1 2025
Diagnostic imaging is an integral part of identifying spondyloarthropathies (SpA), yet the interpretation of these images can be challenging. This review evaluated the use of deep learning models to enhance the diagnostic accuracy of SpA imaging. Following PRISMA guidelines, we systematically searched major databases up to February 2024, focusing on studies that applied deep learning to SpA imaging. Performance metrics, model types, and diagnostic tasks were extracted and analyzed. Study quality was assessed using QUADAS-2. We analyzed 21 studies employing deep learning in SpA imaging diagnosis across MRI, CT, and X-ray modalities. These models, particularly advanced CNNs and U-Nets, demonstrated high accuracy in diagnosing SpA, differentiating arthritis forms, and assessing disease progression. Performance metrics frequently surpassed traditional methods, with some models achieving AUCs up to 0.98 and matching expert radiologist performance. This systematic review underscores the effectiveness of deep learning in SpA imaging diagnostics across MRI, CT, and X-ray modalities. The studies reviewed demonstrated high diagnostic accuracy. However, the small sample sizes in some studies highlight the need for more extensive datasets and further prospective and external validation to enhance the generalizability of these AI models.
Question: How can deep learning models improve diagnostic accuracy in imaging for spondyloarthropathies (SpA), addressing challenges in early detection and differentiation from other forms of arthritis?
Findings: Deep learning models, especially CNNs and U-Nets, showed high accuracy in SpA imaging across MRI, CT, and X-ray, often matching or surpassing expert radiologists.
Clinical relevance: Deep learning models can enhance diagnostic precision in SpA imaging, potentially reducing diagnostic delays and improving treatment decisions, but further validation on larger datasets is required for clinical integration.
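Most of the CNN classifiers surveyed follow a standard transfer-learning recipe; a minimal sketch of that pattern is shown below, assuming torchvision's ResNet-18 as the backbone and RGB-converted MRI or radiograph slices (illustrative only, not any single reviewed study's model):

```python
import torch
import torch.nn as nn
from torchvision import models

# Illustrative binary classifier for SpA vs. non-SpA image slices, following
# the common transfer-learning pattern seen across the reviewed CNN studies.
def build_spa_classifier(num_classes: int = 2) -> nn.Module:
    model = models.resnet18(weights=None)  # swap in ResNet18_Weights.DEFAULT for ImageNet pretraining
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # replace the classification head
    return model

model = build_spa_classifier()
dummy_batch = torch.randn(4, 3, 224, 224)     # 4 RGB-converted slices
logits = model(dummy_batch)                   # shape: (4, 2)
probs = torch.softmax(logits, dim=1)[:, 1]    # per-slice probability of SpA
print(probs.shape)
```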

Improving predictability, reliability, and generalizability of brain-wide associations for cognitive abilities via multimodal stacking.

Tetereva A, Knodt AR, Melzer TR, van der Vliet W, Gibson B, Hariri AR, Whitman ET, Li J, Lal Khakpoor F, Deng J, Ireland D, Ramrakha S, Pat N

Jun 1 2025
Brain-wide association studies (BWASs) have attempted to relate cognitive abilities with brain phenotypes, but have been challenged by issues such as predictability, test-retest reliability, and cross-cohort generalizability. To tackle these challenges, we proposed a machine learning "stacking" approach that draws information from whole-brain MRI across different modalities, from task-functional MRI (fMRI) contrasts and functional connectivity during tasks and rest to structural measures, into one prediction model. We benchmarked the benefits of stacking using the Human Connectome Project Young Adults (n = 873, 22-35 years old), the Human Connectome Project Aging (n = 504, 35-100 years old), and the Dunedin Multidisciplinary Health and Development Study (Dunedin Study, n = 754, 45 years old). For predictability, stacked models led to out-of-sample r ~ 0.5-0.6 when predicting cognitive abilities at the time of scanning, primarily driven by task-fMRI contrasts. Notably, using the Dunedin Study, we were able to predict participants' cognitive abilities at ages 7, 9, and 11 years using their multimodal MRI at age 45 years, with an out-of-sample r of 0.52. For test-retest reliability, stacked models reached an excellent level of reliability (intraclass correlation > 0.75), even when we stacked only task-fMRI contrasts together. For generalizability, a stacked model with nontask MRI built from one dataset significantly predicted cognitive abilities in other datasets. Altogether, stacking is a viable approach to address the three challenges of BWAS for cognitive abilities.
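A rough sketch of the stacking idea described above, using scikit-learn with synthetic feature blocks standing in for task-fMRI, connectivity, and structural measures; the RidgeCV base and meta learners and the 5-fold out-of-fold setup are placeholders, not the authors' pipeline:

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold, cross_val_predict

rng = np.random.default_rng(0)
n = 300
# Placeholder feature blocks standing in for different MRI modalities.
modalities = {
    "task_fmri": rng.normal(size=(n, 50)),
    "rest_connectivity": rng.normal(size=(n, 80)),
    "structural": rng.normal(size=(n, 30)),
}
g = rng.normal(size=n)  # stand-in for a general cognitive-ability score

cv = KFold(n_splits=5, shuffle=True, random_state=0)

# Level 1: one ridge model per modality, evaluated out-of-fold so the
# meta-learner never sees in-sample predictions.
level1_preds = np.column_stack([
    cross_val_predict(RidgeCV(), X, g, cv=cv) for X in modalities.values()
])

# Level 2: "stacked" model combining the modality-wise predictions.
stacked = RidgeCV().fit(level1_preds, g)
print("stacked weights per modality:", dict(zip(modalities, stacked.coef_.round(3))))
```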

Multimodal Artificial Intelligence Using Endoscopic USG, CT, and MRI to Differentiate Between Serous and Mucinous Cystic Neoplasms.

Seza K, Tawada K, Kobayashi A, Nakamura K

Jun 1 2025
Introduction: Serous cystic neoplasms (SCN) and mucinous cystic neoplasms (MCN) often exhibit similar imaging features when evaluated with a single imaging modality. Differentiating between SCN and MCN typically necessitates the utilization of multiple imaging techniques, including computed tomography (CT), magnetic resonance imaging (MRI), and endoscopic ultrasonography (EUS). Recent research indicates that artificial intelligence (AI) can effectively distinguish between SCN and MCN using single-modal imaging. Despite these advancements, the diagnostic performance of AI has not yet reached an optimal level. This study compares the efficacy of AI in classifying SCN and MCN using multimodal imaging versus single-modal imaging. The objective was to assess the effectiveness of AI utilizing multimodal imaging with EUS, CT, and MRI to classify these two types of pancreatic cysts.
Methods: We retrospectively gathered data from 25 patients with surgically confirmed SCN and 24 patients with surgically confirmed MCN as part of a multicenter study. Imaging was conducted using four modalities: EUS, early-phase contrast-enhanced abdominal CT, T2-weighted MRI, and magnetic resonance pancreatography. Four images per modality were obtained for each tumor. Data augmentation techniques were applied, resulting in a final dataset of 39,200 images per modality. An AI model based on ResNet was employed to categorize the cysts as SCN or MCN, incorporating clinical features and combinations of imaging modalities (single, double, triple, and all four modalities). The classification outcomes were compared with those of five gastroenterologists, each with over 10 years of experience. The comparison was based on three performance metrics: sensitivity, specificity, and accuracy.
Results: For AI utilizing a single imaging modality, the sensitivity, specificity, and accuracy were 87.0%, 92.7%, and 90.8%, respectively. Combining two imaging modalities improved the sensitivity, specificity, and accuracy to 95.3%, 95.1%, and 94.9%. With three modalities, AI achieved a sensitivity of 96.0%, a specificity of 99.0%, and an accuracy of 97.0%. Employing all four imaging modalities, AI achieved 98.0% sensitivity, 100% specificity, and 99.0% accuracy. In contrast, experts utilizing all four modalities attained a sensitivity of 78.0%, specificity of 82.0%, and accuracy of 81.0%. The AI models consistently outperformed the experts across all metrics. Performance improved continuously with each additional imaging modality, with AI utilizing three and four modalities significantly surpassing single-modal imaging AI.
Conclusion: AI utilizing multimodal imaging offers better performance than both single-modal imaging AI and experienced human experts in classifying SCN and MCN.
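The multimodal design described here can be sketched as one ResNet encoder per modality with concatenated pooled features feeding a shared head; the PyTorch snippet below is schematic, and the four-encoder late-fusion layout, ResNet-18 trunks, and layer sizes are assumptions rather than the study's exact architecture:

```python
import torch
import torch.nn as nn
from torchvision import models

class MultimodalCystClassifier(nn.Module):
    """Illustrative late-fusion model: one ResNet-18 trunk per imaging modality
    (EUS, CT, T2 MRI, MR pancreatography), pooled features concatenated."""
    def __init__(self, n_modalities: int = 4, n_classes: int = 2):
        super().__init__()
        self.encoders = nn.ModuleList()
        for _ in range(n_modalities):
            enc = models.resnet18(weights=None)   # pretrained weights optional
            enc.fc = nn.Identity()                # keep the 512-d pooled features
            self.encoders.append(enc)
        self.head = nn.Linear(512 * n_modalities, n_classes)

    def forward(self, images):                    # images: list of (B, 3, H, W) tensors
        feats = [enc(x) for enc, x in zip(self.encoders, images)]
        return self.head(torch.cat(feats, dim=1))

model = MultimodalCystClassifier()
batch = [torch.randn(2, 3, 224, 224) for _ in range(4)]
print(model(batch).shape)  # torch.Size([2, 2])
```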

Diagnostic value of deep learning of multimodal imaging of thyroid for TI-RADS category 3-5 classification.

Qian T, Feng X, Zhou Y, Ling S, Yao J, Lai M, Chen C, Lin J, Xu D

Jun 1 2025
Thyroid nodules classified within the Thyroid Imaging Reporting and Data System (TI-RADS) categories 3-5 are typically regarded as having varying degrees of malignancy risk, with the risk increasing from TI-RADS 3 to TI-RADS 5. While some of these nodules may undergo fine-needle aspiration (FNA) biopsy to assess their nature, this procedure carries a risk of false negatives and inherent complications. To avoid unnecessary biopsies, we explored a method for distinguishing benign from malignant thyroid TI-RADS 3-5 nodules based on deep learning of ultrasound images combined with computed tomography (CT). Thyroid nodules assessed as American College of Radiology (ACR) TI-RADS category 3-5 on conventional ultrasound, all of which had postoperative pathology results, were examined with both conventional ultrasound and CT before operation. We investigated the effectiveness of deep-learning models based on ultrasound alone, CT alone, and a combination of both imaging modalities using the following metrics: area under the curve (AUC), sensitivity, accuracy, and positive predictive value (PPV). Additionally, we compared the diagnostic efficacy of the combined method with manual readings of ultrasound and CT. A total of 768 thyroid nodules falling within TI-RADS categories 3-5 were identified across 768 patients. The dataset comprised 499 malignant and 269 benign cases. For the automatic identification of thyroid TI-RADS category 3-5 nodules, deep learning combining ultrasound and CT demonstrated a significantly higher AUC (0.930; 95% CI: 0.892, 0.969) than ultrasound alone (AUC 0.901; 95% CI: 0.856, 0.947) or CT alone (AUC 0.776; 95% CI: 0.713, 0.840). The AUC of the combined modalities also surpassed radiologists' assessments using ultrasound alone (mean AUC 0.725; 95% CI: 0.677, 0.773) or CT alone (mean AUC 0.617; 95% CI: 0.564, 0.669). A deep learning method combining ultrasound and CT imaging of the thyroid allows more accurate and precise classification of nodules within TI-RADS categories 3-5.
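Because the comparison above hinges on AUCs with 95% confidence intervals, here is a hedged sketch of estimating such an interval with a percentile bootstrap over predicted probabilities; the synthetic labels and scores are placeholders, and the study's actual statistical procedure is not specified in the abstract:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
y_true = rng.integers(0, 2, size=500)                                 # 0 = benign, 1 = malignant
y_score = np.clip(y_true * 0.3 + rng.normal(0.4, 0.25, 500), 0, 1)    # toy model scores

def bootstrap_auc_ci(y, s, n_boot=2000, alpha=0.05):
    """Percentile bootstrap CI for the AUC; resamples cases with replacement."""
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))
        if len(np.unique(y[idx])) < 2:     # need both classes in the resample
            continue
        aucs.append(roc_auc_score(y[idx], s[idx]))
    lo, hi = np.percentile(aucs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return roc_auc_score(y, s), lo, hi

auc, lo, hi = bootstrap_auc_ci(y_true, y_score)
print(f"AUC {auc:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```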

Data Augmentation for Medical Image Classification Based on Gaussian Laplacian Pyramid Blending With a Similarity Measure.

Kumar A, Sharma A, Singh AK, Singh SK, Saxena S

Jun 1 2025
Breast cancer is a devastating disease that affects women worldwide, and computer-aided algorithms have shown potential in automating cancer diagnosis. Recently, Generative Artificial Intelligence (GenAI) has opened new possibilities for addressing the challenges of labeled-data scarcity and accurate prediction in critical applications. However, a lack of diversity, as well as unrealistic and unreliable data, have a detrimental impact on performance. Therefore, this study proposes an augmentation scheme to address the scarcity of labeled data and data imbalance in medical datasets. The approach integrates the Gaussian-Laplacian pyramid and pyramid blending with similarity measures. To maintain the structural properties of images and capture the inter-variability of patient images within the same category, similarity-metric-based intermixing is introduced, which helps maintain the overall quality and integrity of the dataset. Subsequently, a deep learning approach with significant modifications, which leverages transfer learning through concatenated pre-trained models, is applied to classify breast cancer histopathological images. The effectiveness of the proposal, including the impact of data augmentation, is demonstrated through a detailed analysis of three different medical datasets, showing significant performance improvement over baseline models. The proposal has the potential to contribute to the development of a more accurate and reliable approach for breast cancer diagnosis.
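A minimal sketch of the pyramid-blending idea, assuming OpenCV and scikit-image: two same-class images are blended level by level in Laplacian space, with SSIM acting as the similarity gate before intermixing; the pyramid depth, SSIM threshold, and blend weight are illustrative choices, not the paper's settings:

```python
import cv2
import numpy as np
from skimage.metrics import structural_similarity as ssim

def laplacian_pyramid(img, levels=4):
    """Gaussian pyramid via pyrDown, Laplacian levels as Gaussian minus upsampled next level."""
    gp = [img.astype(np.float32)]
    for _ in range(levels):
        gp.append(cv2.pyrDown(gp[-1]))
    lp = [gp[i] - cv2.pyrUp(gp[i + 1], dstsize=gp[i].shape[1::-1]) for i in range(levels)]
    lp.append(gp[-1])                     # keep the coarsest Gaussian level
    return lp

def blend_same_class(img_a, img_b, levels=4, ssim_threshold=0.3, alpha=0.5):
    """Blend two same-class images (same shape assumed) in Laplacian space
    only if their SSIM similarity exceeds the gate threshold."""
    score = ssim(cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY),
                 cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY), data_range=255)
    if score < ssim_threshold:            # similarity gate: skip dissimilar pairs
        return None
    lp_a = laplacian_pyramid(img_a, levels)
    lp_b = laplacian_pyramid(img_b, levels)
    blended = [alpha * a + (1 - alpha) * b for a, b in zip(lp_a, lp_b)]
    out = blended[-1]
    for lap in reversed(blended[:-1]):    # collapse the pyramid back to full size
        out = cv2.pyrUp(out, dstsize=lap.shape[1::-1]) + lap
    return np.clip(out, 0, 255).astype(np.uint8)
```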

Automated Ensemble Multimodal Machine Learning for Healthcare.

Imrie F, Denner S, Brunschwig LS, Maier-Hein K, van der Schaar M

Jun 1 2025
The application of machine learning in medicine and healthcare has led to the creation of numerous diagnostic and prognostic models. However, despite their success, current approaches generally issue predictions using data from a single modality. This stands in stark contrast with clinician decision-making, which draws on diverse information from multiple sources. While several multimodal machine learning approaches exist, significant challenges remain in developing multimodal systems, hindering clinical adoption. In this paper, we introduce a multimodal framework, AutoPrognosis-M, that enables the integration of structured clinical (tabular) data and medical imaging using automated machine learning. AutoPrognosis-M incorporates 17 imaging models, including convolutional neural networks and vision transformers, and three distinct multimodal fusion strategies. In an illustrative application using a multimodal skin lesion dataset, we highlight the importance of multimodal machine learning and the power of combining multiple fusion strategies using ensemble learning. We have open-sourced our framework as a tool for the community and hope it will accelerate the uptake of multimodal machine learning in healthcare and spur further innovation.
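To make the fusion-strategy distinction concrete, the sketch below contrasts early fusion (concatenating tabular and imaging features) with late fusion (averaging per-modality predictions) and a simple ensemble over both; it uses scikit-learn with synthetic data and is not AutoPrognosis-M's actual API:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 200
tabular = rng.normal(size=(n, 12))        # structured clinical variables
image_emb = rng.normal(size=(n, 64))      # embedding produced by an imaging model
y = rng.integers(0, 2, size=n)

# Early fusion: concatenate the raw feature blocks and train one model.
early = LogisticRegression(max_iter=1000).fit(np.hstack([tabular, image_emb]), y)

# Late fusion: one model per modality, average their predicted probabilities.
clf_tab = LogisticRegression(max_iter=1000).fit(tabular, y)
clf_img = LogisticRegression(max_iter=1000).fit(image_emb, y)
late_probs = (clf_tab.predict_proba(tabular)[:, 1] +
              clf_img.predict_proba(image_emb)[:, 1]) / 2

# Ensemble over strategies: average the early- and late-fusion probabilities.
early_probs = early.predict_proba(np.hstack([tabular, image_emb]))[:, 1]
ensemble_probs = (early_probs + late_probs) / 2
print(ensemble_probs[:5].round(3))
```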

A Survey of Surrogates and Health Care Professionals Indicates Support of Cognitive Motor Dissociation-Assisted Prognostication.

Heinonen GA, Carmona JC, Grobois L, Kruger LS, Velazquez A, Vrosgou A, Kansara VB, Shen Q, Egawa S, Cespedes L, Yazdi M, Bass D, Saavedra AB, Samano D, Ghoshal S, Roh D, Agarwal S, Park S, Alkhachroum A, Dugdale L, Claassen J

Jun 1 2025
Prognostication of patients with acute disorders of consciousness is imprecise, but more accurate technology-supported predictions, such as cognitive motor dissociation (CMD), are emerging. CMD refers to the detection of willful brain activation following motor commands using functional magnetic resonance imaging or machine learning-supported analysis of the electroencephalogram in clinically unresponsive patients. CMD is associated with long-term recovery, but acceptance by surrogates and health care professionals is uncertain. The objective of this study was to determine receptiveness for CMD to inform goals of care (GoC) decisions and research participation among health care professionals and surrogates of behaviorally unresponsive patients. This was a two-center study of surrogates of, and health care professionals caring for, unconscious patients with severe neurological injury who were enrolled in two prospective US-based studies. Participants completed a 13-item survey to assess demographics, religiosity, minimal acceptable level of recovery, enthusiasm for research participation, and receptiveness for CMD to support GoC decisions. Completed surveys were obtained from 196 participants (133 health care professionals and 63 surrogates). Across all respondents, 93% indicated that they would want their loved one or the patient they cared for to participate in a research study that supports recovery of consciousness if CMD were detected, compared to 58% if CMD were not detected. Health care professionals were more likely than surrogates to change GoC with a positive (78% vs. 59%, p = 0.005) or negative (83% vs. 59%, p = 0.0002) CMD result. Participants who reported that religion was the most important part of their life were least likely to change GoC with or without CMD. Participants who identified as Black (odds ratio [OR] 0.12, 95% confidence interval [CI] 0.04-0.36) or Hispanic/Latino (OR 0.39, 95% CI 0.2-0.75) and those for whom religion was the most important part of their life (OR 0.18, 95% CI 0.05-0.64) were more likely to accept a lower minimum level of recovery. Receptiveness to technology-supported prognostication and enthusiasm for clinical trial participation were evident across a diverse spectrum of health care professionals and surrogate decision-makers. Education for surrogates and health care professionals should accompany integration of technology-supported prognostication.

P2TC: A Lightweight Pyramid Pooling Transformer-CNN Network for Accurate 3D Whole Heart Segmentation.

Cui H, Wang Y, Zheng F, Li Y, Zhang Y, Xia Y

Jun 1 2025
Cardiovascular disease is a leading global cause of death, requiring accurate heart segmentation for diagnosis and surgical planning. Deep learning methods have demonstrated superior performance in cardiac structure segmentation. However, limitations remain in 3D whole heart segmentation, such as inadequate spatial context modeling, difficulty in capturing long-distance dependencies, high computational complexity, and limited representation of local high-level semantic information. To tackle these problems, we propose a lightweight Pyramid Pooling Transformer-CNN (P2TC) network for accurate 3D whole heart segmentation. The proposed architecture comprises a dual encoder-decoder structure with a 3D pyramid pooling Transformer for multi-scale information fusion and a lightweight large-kernel Convolutional Neural Network (CNN) for local feature extraction. The decoder has two branches for precise segmentation and contextual residual handling. The first branch generates segmentation masks for pixel-level classification based on the features extracted by the encoder, achieving accurate segmentation of cardiac structures. The second branch highlights contextual residuals across slices, enabling the network to better handle variations and boundaries. Extensive experimental results on the Multi-Modality Whole Heart Segmentation (MM-WHS) 2017 challenge dataset demonstrate that P2TC outperforms the most advanced methods, achieving Dice scores of 92.6% and 88.1% in the Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) modalities respectively, surpassing the baseline model by 1.5% and 1.7% and achieving state-of-the-art segmentation results.
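The pyramid-pooling component can be illustrated with a PSP-style 3D block that pools the volume at several scales, projects each pooled map, and fuses the upsampled results with the input; the snippet below is a simplification of that idea in PyTorch, not the P2TC architecture itself:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidPooling3D(nn.Module):
    """Schematic 3D pyramid pooling block: pool the volume at several scales,
    project each with a 1x1x1 convolution, upsample, and fuse with the input.
    A simplification of the pyramid-pooling idea, not the P2TC network."""
    def __init__(self, in_ch: int, pool_sizes=(1, 2, 4)):
        super().__init__()
        branch_ch = in_ch // len(pool_sizes)
        self.stages = nn.ModuleList([
            nn.Sequential(nn.AdaptiveAvgPool3d(p),
                          nn.Conv3d(in_ch, branch_ch, kernel_size=1))
            for p in pool_sizes
        ])
        self.fuse = nn.Conv3d(in_ch + branch_ch * len(pool_sizes), in_ch, kernel_size=1)

    def forward(self, x):                     # x: (B, C, D, H, W)
        size = x.shape[2:]
        feats = [x] + [F.interpolate(stage(x), size=size, mode="trilinear",
                                     align_corners=False) for stage in self.stages]
        return self.fuse(torch.cat(feats, dim=1))

block = PyramidPooling3D(in_ch=32)
print(block(torch.randn(1, 32, 16, 32, 32)).shape)  # torch.Size([1, 32, 16, 32, 32])
```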

MedKAFormer: When Kolmogorov-Arnold Theorem Meets Vision Transformer for Medical Image Representation.

Wang G, Zhu Q, Song C, Wei B, Li S

Jun 1 2025
Vision Transformers (ViTs) suffer from high parameter complexity because they rely on Multi-layer Perceptrons (MLPs) for nonlinear representation. This issue is particularly challenging in medical image analysis, where labeled data is limited, leading to inadequate feature representation. Existing methods have attempted to optimize either the patch-embedding stage or the non-embedding stage of ViTs, but they have struggled to balance effective modeling, parameter complexity, and data availability. Recently, the Kolmogorov-Arnold Network (KAN) was introduced as an alternative to MLPs, offering a potential solution to the large parameter issue in ViTs. However, KAN cannot be directly integrated into ViTs due to challenges such as handling 2D structured data and the curse of dimensionality. To solve this problem, we propose MedKAFormer, the first ViT model to incorporate the Kolmogorov-Arnold (KA) theorem for medical image representation. It includes a Dynamic Kolmogorov-Arnold Convolution (DKAC) layer for flexible nonlinear modeling in the patch-embedding stage. Additionally, it introduces a Nonlinear Sparse Token Mixer (NSTM) and a Nonlinear Dynamic Filter (NDF) in the non-embedding stage. These components provide comprehensive nonlinear representation while reducing model overfitting. MedKAFormer reduces parameter complexity by 85.61% compared to ViT-Base and achieves competitive results on 14 medical datasets across various imaging modalities and structures.
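Since the core idea is replacing an MLP's fixed activations with learnable univariate functions on every input-output edge, the toy layer below sketches a KAN-style mapping using fixed Gaussian basis functions with learnable coefficients; it is a didactic simplification, not the paper's DKAC, NSTM, or NDF modules:

```python
import torch
import torch.nn as nn

class SimpleKANLayer(nn.Module):
    """Toy Kolmogorov-Arnold-style layer: every input-output edge gets its own
    learnable univariate function, represented here as a weighted sum of fixed
    Gaussian radial basis functions over the scalar input."""
    def __init__(self, in_dim: int, out_dim: int, n_basis: int = 8):
        super().__init__()
        self.register_buffer("centers", torch.linspace(-2.0, 2.0, n_basis))
        # one coefficient vector per (output, input) edge
        self.coeffs = nn.Parameter(torch.randn(out_dim, in_dim, n_basis) * 0.1)

    def forward(self, x):                                    # x: (B, in_dim)
        # Gaussian RBF features of each scalar input: (B, in_dim, n_basis)
        phi = torch.exp(-((x.unsqueeze(-1) - self.centers) ** 2))
        # sum the learnable univariate functions over the input dimension
        return torch.einsum("bik,oik->bo", phi, self.coeffs)

layer = SimpleKANLayer(in_dim=16, out_dim=4)
print(layer(torch.randn(2, 16)).shape)   # torch.Size([2, 4])
```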

A Trusted Medical Image Zero-Watermarking Scheme Based on DCNN and Hyperchaotic System.

Xiang R, Liu G, Dang M, Wang Q, Pan R

Jun 1 2025
Zero-watermarking methods provide lossless copyright protection, making them suitable for medical images that require high integrity. However, most existing studies have focused only on robustness, with little analysis or experimentation on discriminability. Therefore, this paper proposes a trusted robust zero-watermarking scheme for medical images based on a deep convolutional neural network (DCNN) and a hyperchaotic encryption system. First, the medical image is converted into several feature map matrices by a specific convolution layer of the DCNN. Then, a stable Gram matrix is obtained by calculating the colinear correlation between different channels of the feature map matrices. Finally, the Gram matrices of the medical image and the feature map matrices of the watermark image are fused by the trained DCNN to generate the zero-watermark. Meanwhile, we propose two feature evaluation criteria for finding differentiated eigenvalues. The eigenvalue is used as the explicit key to encrypt the generated zero-watermark with Lorenz hyperchaotic encryption, which enhances security and discriminability. The experimental results show that the proposed scheme can resist common image attacks and geometric attacks and is distinguishable in experiments, making it applicable to the copyright protection of medical images.
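A rough sketch of the zero-watermark generation pipeline described above: feature maps from a convolutional network yield a channel Gram matrix, which is binarized and XOR-combined with a chaos-derived keystream. The VGG-16 feature extractor, the standard (non-hyperchaotic) Lorenz system, and the bit-extraction rule below are stand-ins for the paper's trained DCNN and hyperchaotic cipher:

```python
import numpy as np
import torch
from torchvision import models

def gram_from_features(image_batch, layer_idx=8):
    """Channel Gram matrix from an early VGG-16 conv block (illustrative
    stand-in for the paper's trained DCNN feature extractor)."""
    vgg = models.vgg16(weights=None).features[:layer_idx].eval()
    with torch.no_grad():
        fmap = vgg(image_batch)                        # (1, C, H, W)
    c = fmap.shape[1]
    flat = fmap.view(c, -1)
    return (flat @ flat.T).numpy()                     # channel co-linearity (C, C)

def lorenz_keystream(n_bits, key=(1.0, 1.0, 1.0), dt=0.01, burn_in=1000):
    """Binary keystream from a Lorenz system integrated with simple Euler steps;
    the initial condition plays the role of the explicit key."""
    x, y, z = key
    bits = []
    for i in range(burn_in + n_bits):
        dx, dy, dz = 10 * (y - x), x * (28 - z) - y, x * y - 8 / 3 * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        if i >= burn_in:
            bits.append(int(abs(x) * 1e6) % 2)         # crude bit extraction
    return np.array(bits, dtype=np.uint8)

image = torch.randn(1, 3, 224, 224)                    # placeholder medical image
gram = gram_from_features(image)
feature_bits = (gram > np.median(gram)).astype(np.uint8).ravel()
zero_watermark = feature_bits ^ lorenz_keystream(feature_bits.size)   # XOR "encryption"
print(zero_watermark.shape)
```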