
Sumshun Nahar Eity, Mahin Montasir Afif, Tanisha Fairooz, Md. Mortuza Ahmmed, Md Saef Ullah Miah

arXiv preprint · Jun 17, 2025
Accurate diagnosis of brain disorders such as Alzheimer's disease and brain tumors remains a critical challenge in medical imaging. Conventional methods based on manual MRI analysis are often inefficient and error-prone. To address this, we propose DGG-XNet, a hybrid deep learning model integrating VGG16 and DenseNet121 to enhance feature extraction and classification. DenseNet121 promotes feature reuse and efficient gradient flow through dense connectivity, while VGG16 contributes strong hierarchical spatial representations. Their fusion enables robust multiclass classification of neurological conditions. Grad-CAM is applied to visualize salient regions, enhancing model transparency. Trained on a combined dataset from BraTS 2021 and Kaggle, DGG-XNet achieved a test accuracy of 91.33%, with precision, recall, and F1-score all exceeding 91%. These results highlight DGG-XNet's potential as an effective and interpretable tool for computer-aided diagnosis (CAD) of neurodegenerative and oncological brain disorders.
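The Grad-CAM step mentioned in the abstract can be sketched independently of any particular network: given a convolutional layer's activations and the gradient of the class score with respect to them, the heatmap is a ReLU-ed, gradient-weighted sum over channels. A minimal NumPy illustration with synthetic activations (not the DGG-XNet model itself):

```python
import numpy as np

def grad_cam(activations: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Grad-CAM heatmap from a conv layer's activations and the gradients
    of the class score w.r.t. those activations.

    activations, gradients: (channels, H, W)
    returns: (H, W) map, ReLU-ed and scaled to [0, 1].
    """
    # Channel weights: global-average-pool the gradients.
    weights = gradients.mean(axis=(1, 2))                       # (channels,)
    # Weighted sum of activation maps, then ReLU.
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0.0)
    # Normalize to [0, 1] for visualization (guard against all-zero maps).
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Toy example: one channel carries the salient region.
acts = np.zeros((4, 8, 8)); acts[0, 2:5, 2:5] = 1.0
grads = np.zeros((4, 8, 8)); grads[0] = 1.0   # class score depends on channel 0
heatmap = grad_cam(acts, grads)
print(heatmap.shape)  # (8, 8)
```

In practice the activations and gradients come from a forward and backward pass through the fused backbone, and the heatmap is upsampled onto the input MRI slice.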

Hu H, Ye L, Wu P, Shi Z, Chen G, Li Y

PubMed · Jun 17, 2025
The study aimed to identify factors influencing the evolution of chronic lesions in multiple sclerosis (MS) using a machine learning approach. Longitudinal data were collected from individuals with relapsing-remitting multiple sclerosis (RRMS). The "iron rim" sign was identified using quantitative susceptibility mapping (QSM), and microstructural damage was quantified via T1/fluid attenuated inversion recovery (FLAIR) ratios. Additional data included baseline lesion volume, cerebral T2-hyperintense lesion volume, iron rim lesion volume, the proportion of iron rim lesion volume, gender, age, disease duration (DD), disability and cognitive scores, use of disease-modifying therapy, and follow-up intervals. These features were integrated into machine learning models (logistic regression (LR), random forest (RF), and support vector machine (SVM)) to predict lesion volume change, with the most predictive model selected for feature importance analysis. The study included 47 RRMS individuals (mean age, 30.6 ± 8.0 years [standard deviation]; 6 males) and 833 chronic lesions. The SVM model demonstrated superior predictive efficiency, with an AUC of 0.90 in the training set and 0.81 in the testing set. Feature importance analysis identified the top three features as the "iron rim" sign of lesions, DD, and the T1/FLAIR ratios of the lesions. This study developed a machine learning model to predict the volume outcome of MS lesions; the analysis identified chronic inflammation around the lesion, DD, and microstructural damage as key factors influencing volume change in chronic MS lesions.
Question: The evolution of different chronic lesions in MS is variable, and the factors driving these outcomes remain to be investigated.
Findings: An SVM model was developed to predict chronic MS lesion volume changes, integrating lesion characteristics, lesion burden, and clinical data.
Clinical relevance: Chronic inflammation surrounding lesions, DD, and microstructural damage are key factors influencing the evolution of chronic MS lesions.
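The modeling step described here (tabular features into an SVM, evaluated by AUC) can be sketched with scikit-learn. The data below are synthetic and the feature columns are stand-ins for the study's variables, not its actual dataset:

```python
# Hypothetical sketch of the study's modeling step: an SVM predicting
# lesion volume change from tabular features, evaluated by AUC.
# All data are synthetic; feature semantics are stand-ins.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 400
X = np.column_stack([
    rng.integers(0, 2, n),          # "iron rim" sign (binary)
    rng.uniform(0, 15, n),          # disease duration, years
    rng.normal(0.7, 0.1, n),        # T1/FLAIR ratio
])
# Synthetic label: rim-positive lesions with longer duration tend to grow.
logits = 1.5 * X[:, 0] + 0.2 * X[:, 1] - 3.0 + rng.normal(0, 1, n)
y = (logits > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
model = make_pipeline(StandardScaler(),
                      SVC(kernel="rbf", probability=True, random_state=0))
model.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"test AUC: {auc:.2f}")
```

Scaling before an RBF-kernel SVM matters because the kernel is distance-based; the pipeline keeps the scaler's statistics fit on the training split only.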

Madhavan AA, Zhou Z, Thorne J, Kodet ML, Cutsforth-Gregory JK, Schievink WI, Mark IT, Schueler BA, Yu L

PubMed · Jun 17, 2025
Cone beam CT is an imaging modality that provides high-resolution, cross-sectional imaging in the fluoroscopy suite. In neuroradiology, cone beam CT has been used for various applications including temporal bone imaging and during spinal and cerebral angiography. Furthermore, cone beam CT has been shown to improve imaging of spinal CSF leaks during myelography. One drawback of cone beam CT is that images have a relatively high noise level. In this technical report, we describe the first application of a high-resolution convolutional neural network to denoise cone beam CT myelographic images. We show examples of the resulting improvement in image quality for a variety of types of spinal CSF leaks. Further application of this technique is warranted to demonstrate its clinical utility and potential use for other cone beam CT applications. ABBREVIATIONS: CBCT = cone beam CT; CB-CTM = cone beam CT myelography; CTA = CT angiography; CVF = CSF-venous fistula; DSM = digital subtraction myelography; EID = energy integrating detector; FBP = filtered back-projection; SNR = signal-to-noise ratio.
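The SNR metric listed in the abbreviations, and the kind of noise reduction a denoiser targets, can be illustrated with a classical baseline. A 3x3 box filter here is only a stand-in for the paper's CNN, used to show how denoising is scored on a smooth phantom:

```python
import numpy as np

def box_filter3(img: np.ndarray) -> np.ndarray:
    """3x3 mean filter with edge padding -- a classical stand-in for a
    learned denoiser, used only to illustrate the SNR comparison."""
    p = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / 9.0

def snr_db(clean: np.ndarray, test: np.ndarray) -> float:
    """Signal-to-noise ratio in dB, with 'noise' = deviation from clean."""
    noise = test - clean
    return 10 * np.log10((clean ** 2).mean() / (noise ** 2).mean())

rng = np.random.default_rng(1)
yy, xx = np.mgrid[0:64, 0:64]
clean = np.sin(yy / 16.0) + np.cos(xx / 16.0) + 2.0   # smooth synthetic phantom
noisy = clean + rng.normal(0, 0.3, clean.shape)
denoised = box_filter3(noisy)
print(f"SNR noisy:    {snr_db(clean, noisy):.1f} dB")
print(f"SNR denoised: {snr_db(clean, denoised):.1f} dB")
```

A learned denoiser is preferred over such linear filters precisely because it can suppress noise without the resolution loss a box filter introduces at edges.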

Wajih Hassan Raza, Aamir Bader Shah, Yu Wen, Yidan Shen, Juan Diego Martinez Lemus, Mya Caryn Schiess, Timothy Michael Ellmore, Renjie Hu, Xin Fu

arXiv preprint · Jun 17, 2025
The integration of multi-modal Magnetic Resonance Imaging (MRI) and clinical data holds great promise for enhancing the diagnosis of neurological disorders (NDs) in real-world clinical settings. Deep Learning (DL) has recently emerged as a powerful tool for extracting meaningful patterns from medical data to aid in diagnosis. However, existing DL approaches struggle to effectively leverage multi-modal MRI and clinical data, leading to suboptimal performance. To address this challenge, we utilize a unique, proprietary multi-modal clinical dataset curated for ND research. Based on this dataset, we propose a novel transformer-based Mixture-of-Experts (MoE) framework for ND classification, leveraging multiple MRI modalities, namely anatomical (aMRI), diffusion tensor imaging (DTI), and functional (fMRI), alongside clinical assessments. Our framework employs transformer encoders to capture spatial relationships within volumetric MRI data while utilizing modality-specific experts for targeted feature extraction. A gating mechanism with adaptive fusion dynamically integrates expert outputs, ensuring optimal predictive performance. Comprehensive experiments and comparisons with multiple baselines demonstrate that our multi-modal approach significantly enhances diagnostic accuracy, particularly in distinguishing overlapping disease states. Our framework achieves a validation accuracy of 82.47%, outperforming baseline methods by over 10%, highlighting its potential to improve ND diagnosis by applying multi-modal learning to real-world clinical data.
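The gating-with-adaptive-fusion idea can be sketched in a few lines: a gating network scores each modality expert, the scores are softmaxed into convex weights, and the fused output is the weighted sum. The expert/modality names below are assumptions for illustration, not the paper's architecture:

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gated_fusion(expert_outputs: np.ndarray, gate_logits: np.ndarray) -> np.ndarray:
    """Adaptive fusion of per-modality expert outputs.

    expert_outputs: (n_experts, n_classes) class logits from each expert
                    (e.g. aMRI, DTI, fMRI experts -- names assumed here).
    gate_logits:    (n_experts,) scores from a gating network.
    returns:        (n_classes,) fused logits, a gate-weighted sum.
    """
    gates = softmax(gate_logits)                 # convex combination weights
    return (gates[:, None] * expert_outputs).sum(axis=0)

experts = np.array([[2.0, -1.0],    # "aMRI" expert favors class 0
                    [0.5,  0.5],    # "DTI" expert is undecided
                    [-1.0, 2.0]])   # "fMRI" expert favors class 1
# Gate trusts the first expert for this (hypothetical) input.
fused = gated_fusion(experts, np.array([3.0, 0.0, 0.0]))
print(fused)
```

In a trained MoE the gate logits are themselves a function of the input, so the fusion weights change per case rather than being fixed.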

Xinkai Zhao, Yuta Tokuoka, Junichiro Iwasawa, Keita Oda

arXiv preprint · Jun 17, 2025
The increasing use of diffusion models for image generation, especially in sensitive areas like medical imaging, has raised significant privacy concerns. Membership Inference Attack (MIA) has emerged as a potential approach to determine if a specific image was used to train a diffusion model, thus quantifying privacy risks. Existing MIA methods often rely on diffusion reconstruction errors, where member images are expected to have lower reconstruction errors than non-member images. However, applying these methods directly to medical images faces challenges. Reconstruction error is influenced by inherent image difficulty, and diffusion models struggle with high-frequency detail reconstruction. To address these issues, we propose a Frequency-Calibrated Reconstruction Error (FCRE) method for MIAs on medical image diffusion models. By focusing on reconstruction errors within a specific mid-frequency range and excluding both high-frequency (difficult to reconstruct) and low-frequency (less informative) regions, our frequency-selective approach mitigates the confounding factor of inherent image difficulty. Specifically, we analyze the reverse diffusion process, obtain the mid-frequency reconstruction error, and compute the structural similarity index score between the reconstructed and original images. Membership is determined by comparing this score to a threshold. Experiments on several medical image datasets demonstrate that our FCRE method outperforms existing MIA methods.
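The frequency-selective part of the method (keep a mid-frequency band, discard low and high frequencies) can be sketched with a radial FFT mask. The band cutoffs below are illustrative assumptions, and the error shown is a plain magnitude difference rather than the paper's full SSIM-based score:

```python
import numpy as np

def mid_freq_error(original: np.ndarray, reconstruction: np.ndarray,
                   lo: float = 0.1, hi: float = 0.4) -> float:
    """Reconstruction error restricted to a mid-frequency annulus.

    Frequencies with normalized radius below `lo` (less informative) or
    above `hi` (hard to reconstruct) are masked out, in the spirit of a
    frequency-calibrated error; the cutoffs here are assumptions.
    """
    h, w = original.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    radius = np.sqrt(fy ** 2 + fx ** 2)          # 0 .. ~0.707
    band = (radius >= lo) & (radius <= hi)
    diff = np.fft.fft2(original) - np.fft.fft2(reconstruction)
    return float(np.abs(diff[band]).mean())

rng = np.random.default_rng(0)
img = rng.normal(size=(32, 32))
# A reconstruction that only loses high-frequency detail:
spec = np.fft.fft2(img)
fy = np.fft.fftfreq(32)[:, None]; fx = np.fft.fftfreq(32)[None, :]
spec[np.sqrt(fy ** 2 + fx ** 2) > 0.4] = 0
recon_hf_loss = np.fft.ifft2(spec).real
print(mid_freq_error(img, img))            # 0.0 for a perfect reconstruction
print(mid_freq_error(img, recon_hf_loss))  # still ~0: high-freq loss is ignored
```

The second call shows the calibration effect: an image degraded only in the high band scores the same as a perfect reconstruction, so "hard to reconstruct" detail no longer confounds the membership signal.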

Wang Y, Sheng H, Wang X

PubMed · Jun 17, 2025
Alzheimer's disease is a debilitating neurological disorder that requires accurate diagnosis for the most effective therapy and care. This article presents a new vision transformer model created to classify cases of Alzheimer's disease from magnetic resonance imaging data in the Alzheimer's Disease Neuroimaging Initiative dataset. Unlike models that rely on convolutional neural networks, the vision transformer can capture long-range relationships between far-apart pixels in the images. The proposed architecture has shown strong results: its precision underscores its capacity to detect and distinguish salient features in MRI scans, enabling accurate classification of Alzheimer's disease subtypes and stages. The model combines elements of convolutional neural networks and vision transformers to extract both local and global visual patterns, facilitating accurate categorization across Alzheimer's disease classes. We specifically use the term 'dementia in patients with Alzheimer's disease' to describe individuals who have progressed to the dementia stage as a result of AD, distinguishing them from those in earlier stages of the disease. Precise categorization of Alzheimer's disease has significant therapeutic importance, as it enables timely identification, tailored treatment strategies, disease monitoring, and prognostic assessment. The reported high accuracy suggests that the proposed vision transformer model can assist healthcare providers and researchers in making well-informed, precise evaluations of individuals with Alzheimer's disease.
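The claim about far-apart pixels rests on the ViT recipe: split the image into patch tokens, then let self-attention relate every patch to every other in a single step. A generic NumPy sketch of those two operations (not the paper's specific model, which also has convolutional components):

```python
import numpy as np

def patchify(img: np.ndarray, p: int) -> np.ndarray:
    """Split an (H, W) image into flattened non-overlapping p x p patches."""
    h, w = img.shape
    patches = img.reshape(h // p, p, w // p, p).swapaxes(1, 2)
    return patches.reshape(-1, p * p)            # (n_patches, p*p)

def self_attention(x: np.ndarray) -> np.ndarray:
    """Single-head self-attention (queries = keys = values = x for brevity):
    every patch attends to every other, which is how a ViT relates
    far-apart image regions in one layer."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                # (n, n) pairwise affinities
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn = e / e.sum(axis=-1, keepdims=True)     # row-wise softmax
    return attn @ x

img = np.arange(64.0).reshape(8, 8)
tokens = patchify(img, 4)          # 4 patches of 16 pixels each
out = self_attention(tokens)
print(tokens.shape, out.shape)     # (4, 16) (4, 16)
```

A real ViT adds learned query/key/value projections, positional embeddings, and stacked layers; the all-pairs attention matrix is the part that gives the global receptive field a CNN only builds up over many layers.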

Wang Y, Xiong H, Sun K, Bai S, Dai L, Ding Z, Liu J, Wang Q, Liu Q, Shen D

PubMed · Jun 17, 2025
Multimodal brain magnetic resonance imaging (MRI) offers complementary insights into brain structure and function, thereby improving the diagnostic accuracy of neurological disorders and advancing brain-related research. However, the widespread applicability of MRI is substantially limited by restricted scanner accessibility and prolonged acquisition times. Here, we present TUMSyn, a text-guided universal MRI synthesis model capable of generating brain MRI specified by textual imaging metadata from routinely acquired scans. We ensure the reliability of TUMSyn by constructing a brain MRI database comprising 31,407 3D images across 7 MRI modalities from 13 worldwide centers and pre-training an MRI-specific text encoder to process text prompts effectively. Experiments on diverse datasets and physician assessments indicate that TUMSyn-generated images can be utilized along with acquired MRI scan(s) to facilitate large-scale MRI-based screening and diagnosis of multiple brain diseases, substantially reducing the time and cost of MRI in the healthcare system.

Carocha A, Vicente M, Bernardeco J, Rijo C, Cohen Á, Cruz J

PubMed · Jun 17, 2025
The second-trimester ultrasound is a crucial tool in prenatal care, typically conducted between 18 and 24 weeks of gestation to evaluate fetal anatomy and growth and to perform mid-trimester screening. This article provides a comprehensive overview of the best practices and guidelines for performing this examination, with a focus on detecting fetal anomalies. The ultrasound assesses key structures and evaluates fetal growth by measuring biometric parameters, which are essential for estimating fetal weight. Additionally, the article discusses the importance of placental evaluation, measurement of amniotic fluid levels, and assessment of preterm birth risk through cervical length measurement. Factors that can affect the accuracy of the scan, such as the skill of the operator, the quality of the equipment, and maternal conditions such as obesity, are discussed. The article also addresses the limitations of the procedure, including variability in detection rates. Despite these challenges, the second-trimester ultrasound remains a valuable screening and diagnostic tool, providing essential information for managing pregnancies, especially in high-risk cases. Future directions include improving imaging technology, integrating artificial intelligence for anomaly detection, and standardizing ultrasound protocols to enhance diagnostic accuracy and ensure consistent prenatal care.
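Fetal weight estimation from biometric parameters is typically done with a published regression. One commonly cited variant is the Hadlock (1985) model using head circumference, abdominal circumference, and femur length; the coefficients below are quoted as widely published and should be verified against an institutional reference before any real use:

```python
def hadlock_efw(hc_cm: float, ac_cm: float, fl_cm: float) -> float:
    """Estimated fetal weight (grams) from head circumference (HC),
    abdominal circumference (AC), and femur length (FL), all in cm,
    using one commonly cited Hadlock (1985) regression. Coefficients
    as widely published -- verify locally; illustration only.
    """
    log10_efw = (1.326 - 0.00326 * ac_cm * fl_cm
                 + 0.0107 * hc_cm + 0.0438 * ac_cm + 0.158 * fl_cm)
    return 10 ** log10_efw

# Illustrative biometry, roughly consistent with ~28-30 weeks:
efw = hadlock_efw(hc_cm=28.0, ac_cm=25.0, fl_cm=5.5)
print(f"estimated fetal weight: {efw:.0f} g")
```

The estimate is then plotted against a gestational-age growth chart, which is why accurate biometric measurement during the scan matters for growth assessment.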

Gülmez B

PubMed · Jun 17, 2025
This comprehensive review examines the current state and evolution of artificial intelligence applications in colorectal cancer detection through medical imaging from 2019 to 2025. The study presents a quantitative analysis of 110 high-quality publications and 9 publicly accessible medical image datasets used for training and validation. Various convolutional neural network architectures, including ResNet (40 implementations), VGG (18 implementations), and emerging transformer-based models (12 implementations), are systematically categorized and evaluated for classification, object detection, and segmentation tasks. The investigation encompasses hyperparameter optimization techniques utilized to enhance model performance, with particular focus on genetic algorithms and particle swarm optimization approaches. The role of explainable AI methods in medical diagnosis interpretation is analyzed through visualization techniques such as Grad-CAM and SHAP. Technical limitations, including dataset scarcity, computational constraints, and standardization challenges, are identified through trend analysis. Research gaps in current methodologies are highlighted through comparative assessment of performance metrics across different architectural implementations. Potential future research directions, including multimodal learning and federated learning approaches, are proposed based on publication trend analysis. This review serves as a comprehensive reference for researchers in medical image analysis and clinical practitioners implementing AI-based colorectal cancer detection systems.
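The genetic-algorithm style of hyperparameter optimization the review surveys can be sketched minimally: a population of candidate settings is scored, the best half is kept, and mutated copies refill the population. For brevity this sketch is mutation-only (no crossover), and the fitness function is a synthetic stand-in for validation accuracy with an assumed optimum at lr = 1e-3, dropout = 0.3:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(lr_log10: float, dropout: float) -> float:
    """Synthetic stand-in for validation accuracy as a function of two
    hyperparameters; peaks at lr = 1e-3, dropout = 0.3 (assumed shape)."""
    return 1.0 - (lr_log10 + 3.0) ** 2 / 16.0 - (dropout - 0.3) ** 2

def genetic_search(pop_size: int = 20, generations: int = 30) -> np.ndarray:
    # Individuals: [log10 learning rate in [-5, -1], dropout in [0, 0.8]].
    pop = np.column_stack([rng.uniform(-5, -1, pop_size),
                           rng.uniform(0, 0.8, pop_size)])
    for _ in range(generations):
        scores = np.array([fitness(*ind) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]     # keep best half
        kids = parents[rng.integers(0, len(parents), pop_size - len(parents))]
        kids = kids + rng.normal(0, [0.1, 0.02], kids.shape)   # mutate copies
        pop = np.vstack([parents, kids])
    return pop[np.argmax([fitness(*ind) for ind in pop])]

lr_log10, dropout = genetic_search()
print(f"best lr ~ 1e{lr_log10:.2f}, dropout ~ {dropout:.2f}")
```

In the reviewed papers the fitness call is a full train/validate cycle, which is what makes these searches expensive and motivates small populations and early stopping.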

Seyed Mohsen Hosseini

arXiv preprint · Jun 17, 2025
Class imbalance and difficulty imbalance are the two types of data imbalance that affect the performance of neural networks in medical segmentation tasks. In class imbalance, the loss is dominated by the majority classes; in difficulty imbalance, it is dominated by easy-to-classify pixels. Both lead to ineffective training. Dice loss, which is based on a geometric metric, is far more effective at addressing class imbalance than cross-entropy (CE) loss, which is adopted directly from classification tasks. To address difficulty imbalance, the common approach is a re-weighted CE loss or a modified Dice loss that focuses training on difficult-to-classify areas; existing modifications are computationally costly and have had limited success. In this study we propose a simple modification to the Dice loss with minimal computational cost. With a pixel-level modulating term, we exploit the effectiveness of Dice loss at handling class imbalance to also handle difficulty imbalance. Results on three commonly used medical segmentation tasks show that the proposed Pixel-wise Modulated Dice loss (PM Dice loss) outperforms other methods designed to tackle the difficulty imbalance problem.
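The idea of a pixel-wise modulating term inside a Dice loss can be sketched as follows. The specific modulation used here, w = (1 - p_t)^gamma with p_t the predicted probability of the true class, is a focal-style assumption chosen to illustrate up-weighting hard pixels; it is not necessarily the paper's exact term:

```python
import numpy as np

def pm_dice_loss(probs: np.ndarray, target: np.ndarray,
                 gamma: float = 2.0, eps: float = 1e-6) -> float:
    """Soft Dice loss with a pixel-wise modulating term.

    probs:  predicted foreground probabilities in [0, 1]
    target: binary ground-truth mask
    The weight w = (1 - p_t)^gamma is ~0 for easy pixels and ~1 for hard
    ones, so hard pixels dominate both the intersection and the denominator.
    (Illustrative modulation; the paper's exact term may differ.)
    """
    p_t = np.where(target == 1, probs, 1.0 - probs)   # prob of the true class
    w = (1.0 - p_t) ** gamma
    inter = (w * probs * target).sum()
    denom = (w * (probs + target)).sum()
    return 1.0 - (2.0 * inter + eps) / (denom + eps)

target = np.array([[1.0, 1.0], [0.0, 0.0]])
good = np.array([[0.9, 0.9], [0.1, 0.1]])   # confident, mostly correct
bad  = np.array([[0.6, 0.2], [0.4, 0.7]])   # uncertain and partly wrong
print(pm_dice_loss(good, target) < pm_dice_loss(bad, target))  # True
```

Because the modulation is a per-pixel elementwise weight, it adds essentially no computational cost over plain soft Dice, which is the paper's stated motivation.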
