Page 325 of 6636627 results

Wallace T, Heng IS, Subasic S, Messenger C

pubmed · Jul 30 2025
Synthetic images are an option for augmenting limited medical imaging datasets to improve the performance of various machine learning models. A common metric for evaluating synthetic image quality is the Fréchet Inception Distance (FID), which measures the similarity of two image datasets. In this study we evaluate the relationship between this metric and the improvement that synthetic images, generated by a Progressively Growing Generative Adversarial Network (PGGAN), provide when augmenting Diabetes-related Macular Edema (DME) intraretinal fluid segmentation performed by a U-Net model trained with limited data. We find that the behaviour of augmenting with standard and synthetic images agrees with previously conducted experiments. Additionally, we show that dissimilar (high-FID) datasets do not improve segmentation significantly. As the FID between the training and augmenting datasets decreases, the augmentation datasets contribute significant and robust improvements in image segmentation. Finally, we find significant evidence that synthetic and standard augmentations follow separate log-normal trends between FID and improvement in model performance, with synthetic data proving more effective than standard augmentation techniques. Our findings show that more similar datasets (lower FID) are more effective at improving U-Net performance; however, the results also suggest that this improvement may only occur when images are sufficiently dissimilar.
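FID itself is a closed-form distance between two Gaussians fitted to image features. A minimal sketch of the computation, with toy feature vectors standing in for the Inception activations used in practice:

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Frechet distance between Gaussians N(mu1, sigma1) and N(mu2, sigma2):
    ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 (sigma1 sigma2)^(1/2))."""
    diff = mu1 - mu2
    covmean = sqrtm(sigma1 @ sigma2)
    # Numerical error can introduce a tiny imaginary component.
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))

def stats(x):
    """Mean and covariance of a (n_samples, n_features) feature matrix."""
    return x.mean(axis=0), np.cov(x, rowvar=False)

# Toy "datasets": in real FID these rows are Inception features of
# real vs. synthetic images.
rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(500, 4))
fake = rng.normal(0.5, 1.0, size=(500, 4))
fid = frechet_distance(*stats(real), *stats(fake))
```

Identical feature distributions give a distance of zero; the shifted toy "synthetic" set gives a positive value.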

Gao R, Mai S, Wang S, Hu W, Chang Z, Wu G, Guan H

pubmed · Jul 30 2025
In recent years, the application of deep learning (DL) in the thyroid field has grown exponentially, greatly promoting innovation in thyroid disease research. Thyroid cancer is the most common malignancy of the endocrine system, and its precise diagnosis and treatment have been a key focus of clinical research. This article systematically reviews the latest progress in DL for the diagnosis and treatment of thyroid malignancies, focusing on the breakthrough application of advanced models such as convolutional neural networks (CNNs), long short-term memory networks (LSTMs), and generative adversarial networks (GANs) in key areas such as ultrasound image analysis of thyroid nodules, automatic classification of pathological images, and assessment of extrathyroidal extension. Furthermore, the review highlights the great potential of DL techniques in individualized treatment planning and prognosis prediction. In addition, it analyzes the technical bottlenecks and clinical challenges facing current DL applications in thyroid cancer diagnosis and treatment and looks ahead to future directions. The aim of this review is to provide the latest research insights for clinical practitioners, promote further improvement of the precision diagnosis and treatment system for thyroid cancer, and ultimately achieve better diagnostic and therapeutic outcomes for thyroid cancer patients.

Xie H, Gan W, Ji W, Chen X, Alashi A, Thorn SL, Zhou B, Liu Q, Xia M, Guo X, Liu YH, An H, Kamilov US, Wang G, Sinusas AJ, Liu C

pubmed · Jul 30 2025
Myocardial perfusion imaging using SPECT is widely utilized to diagnose coronary artery disease, but image quality can be negatively affected in low-dose and few-view acquisition settings. Although various deep learning methods have been introduced to improve image quality from low-dose or few-view SPECT data, previous approaches often fail to generalize across different acquisition settings, limiting realistic applicability. This work introduces DiffSPECT-3D, a diffusion framework for 3D cardiac SPECT imaging that effectively adapts to different acquisition settings without requiring network re-training or fine-tuning. Using both image and projection data, a consistency strategy is proposed to ensure that diffusion sampling at each step aligns with the low-dose/few-view projection measurements, the image data, and the scanner geometry, enabling generalization to different low-dose/few-view settings. Incorporating anatomical spatial information from CT and a total variation constraint, we propose a 2.5D conditional strategy that allows DiffSPECT-3D to observe 3D contextual information from the entire image volume, addressing the memory and computational issues of 3D diffusion models. We extensively evaluated the proposed method on 1,325 clinical <sup>99m</sup>Tc tetrofosmin stress/rest studies from 795 patients. Each study was reconstructed into 5 low-count levels (ranging from 1% to 50%) and 5 few-view levels (ranging from 1 to 9 views) for model evaluation. Validated against cardiac catheterization results and diagnostic review by nuclear cardiologists, the presented results show the potential to achieve low-dose and few-view SPECT imaging without compromising clinical performance. Additionally, DiffSPECT-3D can be applied directly to full-dose SPECT images to further improve image quality, especially in a low-dose stress-first cardiac SPECT imaging protocol.
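The consistency idea, nudging each sampling step back toward agreement with the measured projections, builds on a standard least-squares data-fidelity update. A generic illustration with a toy system matrix (this is the underlying primitive, not the authors' exact DiffSPECT-3D update):

```python
import numpy as np

# Generic measurement-consistency update: pull an image estimate toward
# agreement with projection data y = A x.
rng = np.random.default_rng(1)
A = rng.normal(size=(20, 10))   # toy system matrix (stands in for scanner geometry)
x_true = rng.normal(size=10)
y = A @ x_true                  # toy "few-view/low-dose" measurements

x = np.zeros(10)                # stands in for the current diffusion sample
step = 1.0 / np.linalg.norm(A, 2) ** 2
for _ in range(2000):
    # x <- x - step * A^T (A x - y): gradient step on ||A x - y||^2 / 2
    x = x - step * A.T @ (A @ x - y)

residual = float(np.linalg.norm(A @ x - y))
```

In a diffusion sampler, an update of this kind is interleaved with the denoising steps so each intermediate sample stays consistent with the measurements.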

Ashkan Moradi, Fadila Zerka, Joeran S. Bosma, Mohammed R. S. Sunoqrot, Bendik S. Abrahamsen, Derya Yakar, Jeroen Geerdink, Henkjan Huisman, Tone Frost Bathen, Mattijs Elschot

arxiv preprint · Jul 30 2025
Purpose: To develop and optimize a federated learning (FL) framework across multiple clients for biparametric MRI prostate segmentation and clinically significant prostate cancer (csPCa) detection. Materials and Methods: A retrospective study was conducted using Flower FL to train an nnU-Net-based architecture for MRI prostate segmentation and csPCa detection, using data collected from January 2010 to August 2021. Model development included training and optimizing local epochs, federated rounds, and aggregation strategies for FL-based prostate segmentation on T2-weighted MRIs (four clients, 1294 patients) and csPCa detection using biparametric MRIs (three clients, 1440 patients). Performance was evaluated on independent test sets using the Dice score for segmentation and the Prostate Imaging: Cancer Artificial Intelligence (PI-CAI) score, defined as the average of the area under the receiver operating characteristic curve and the average precision, for csPCa detection. P-values for performance differences were calculated using permutation testing. Results: The FL configurations were optimized independently for each task, with the best performance at 1 local epoch over 300 federated rounds using FedMedian for prostate segmentation, and 5 local epochs over 200 rounds using FedAdagrad for csPCa detection. Compared with the average performance of the clients, the optimized FL model significantly improved performance in prostate segmentation and csPCa detection on the independent test set. The optimized FL model showed higher lesion detection performance than the FL-baseline model, but no evidence of a difference was observed for prostate segmentation. Conclusions: FL enhanced the performance and generalizability of MRI prostate segmentation and csPCa detection compared with local models, and optimizing its configuration further improved lesion detection performance.
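The PI-CAI score is defined explicitly in the abstract as the mean of AUROC and average precision, so it can be computed directly. A small sketch with hypothetical case-level labels and model confidences:

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

def pi_cai_score(y_true, y_score):
    """PI-CAI ranking score: mean of AUROC and average precision (AP),
    as defined in the abstract above."""
    auroc = roc_auc_score(y_true, y_score)
    ap = average_precision_score(y_true, y_score)
    return 0.5 * (auroc + ap)

# Hypothetical labels (1 = csPCa) and model confidences.
y_true = np.array([0, 0, 1, 1, 0, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9])
score = pi_cai_score(y_true, y_score)
```

A perfect ranking yields a score of 1.0; the toy values above land between 0 and 1.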

M PJ, M SK

pubmed · Jul 30 2025
Early-stage brain tumor detection is critical for timely diagnosis and effective treatment. We propose a hybrid deep learning method: a Convolutional Neural Network (CNN) integrated with YOLO (You Only Look Once) and SAM (Segment Anything Model) for diagnosing tumors. This novel framework combines a CNN with YOLOv11 for real-time object detection and SAM for precise segmentation: the CNN backbone is enhanced with deeper convolutional layers for robust feature extraction, YOLOv11 localizes tumor regions, and SAM refines tumor boundaries through detailed mask generation. A dataset of 896 MRI brain images, including both tumors and healthy brains, is used for training, testing, and validating the model. The proposed model achieves a precision of 94.2%, a recall of 95.6%, and an mAP50(B) of 96.5%, highlighting the effectiveness of the approach for early-stage brain tumor diagnosis. Conclusion: The validation is demonstrated through a comprehensive ablation study. The robustness of the system makes it well suited for clinical deployment.
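The reported precision, recall, and mAP50(B) all hinge on when a predicted box counts as a true positive; the "50" denotes an IoU threshold of 0.5. A simplified, generic sketch of that matching (not the paper's evaluation code; full mAP additionally averages precision over confidence thresholds):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def precision_recall(preds, gts, thr=0.5):
    """Greedy one-to-one matching of predicted boxes to ground truths:
    a prediction is a true positive if it overlaps an unmatched ground
    truth with IoU >= thr."""
    matched, tp = set(), 0
    for p in preds:
        for i, g in enumerate(gts):
            if i not in matched and iou(p, g) >= thr:
                matched.add(i)
                tp += 1
                break
    fp = len(preds) - tp
    fn = len(gts) - tp
    precision = tp / (tp + fp) if preds else 0.0
    recall = tp / (tp + fn) if gts else 0.0
    return precision, recall
```

For example, one well-overlapping prediction plus one spurious box against a single ground truth gives precision 0.5 and recall 1.0.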

Batool Z, Hu S, Kamal MA, Greig NH, Shen B

pubmed · Jul 30 2025
Neurological disorders are marked by neurodegeneration, leading to impaired cognition, psychosis, and mood alterations. These symptoms are typically associated with functional changes in both emotional and cognitive processes, which are often correlated with anatomical variations in the brain. Hence, brain structural magnetic resonance imaging (MRI) data have become a critical focus in research, particularly for predictive modeling. The involvement of large MRI data consortia, such as the Alzheimer's Disease Neuroimaging Initiative (ADNI), has facilitated numerous MRI-based classification studies utilizing advanced artificial intelligence models. Among these, convolutional neural networks (CNNs) and non-convolutional artificial neural networks (NC-ANNs) have been prominently employed for brain image processing tasks. These deep learning models have shown significant promise in enhancing predictive performance for the diagnosis of neurological disorders, with a particular emphasis on Alzheimer's disease (AD). This review provides a comprehensive summary of these deep learning studies, critically evaluating their methodologies and outcomes. By categorizing the studies into sub-fields, we highlight the strengths and limitations of MRI-based deep learning approaches for diagnosing brain disorders. Furthermore, we discuss the potential implications of these advancements in clinical practice, considering the challenges and future directions for improving diagnostic accuracy and patient outcomes. Through this detailed analysis, we seek to contribute to the ongoing efforts in harnessing AI for better understanding and management of AD.

Yuan C, Jia X, Wang L, Yang C

pubmed · Jul 30 2025
Magnetic Resonance Imaging (MRI) is a crucial method for clinical diagnosis. Different abdominal MRI sequences provide tissue and structural information from various perspectives, offering reliable evidence for doctors to make accurate diagnoses. In recent years, with the rapid development of intelligent medical imaging, some studies have begun exploring deep learning methods for MRI sequence recognition. However, due to the significant intra-class variations and subtle inter-class differences in MRI sequences, traditional deep learning algorithms still struggle to handle such complex distributed data effectively. In addition, the key features for identifying MRI sequence categories often lie in subtle details, while significant discrepancies can be observed among sequences from individual samples. Current deep-learning-based MRI sequence classification methods tend to overlook these fine-grained differences across diverse samples. To overcome these challenges, this paper proposes a fine-grained prototype network, SequencesNet, for MRI sequence classification. A network combining convolutional neural networks (CNNs) with an improved vision transformer is constructed for feature extraction, considering both local and global information. Specifically, a Feature Selection Module (FSM) is added to the vision transformer, and fine-grained features for sequence discrimination are selected based on fused attention weights from multiple layers. A Prototype Classification Module (PCM) then classifies MRI sequences based on these fine-grained representations. Comprehensive experiments were conducted on a public abdominal MRI sequence classification dataset and a private dataset. SequencesNet achieved the highest accuracy on both datasets, 96.73% and 95.98% respectively, outperforming comparative prototype-based and fine-grained models.
Visualization results show that SequencesNet better captures fine-grained information. The proposed SequencesNet shows promising performance in MRI sequence classification, excelling at distinguishing subtle inter-class differences and handling large intra-class variability. Specifically, FSM enhances clinical interpretability by focusing on fine-grained features, and PCM improves clustering by optimizing prototype-sample distances. Compared to baselines such as 3DResNet18 and TransFG, SequencesNet achieves higher recall and precision, particularly for similar sequences such as DCE-LAP and DCE-PVP. SequencesNet thus addresses the subtle inter-class differences and significant intra-class variations present in medical images. Its modular design can be extended to other medical imaging tasks, including multimodal image fusion, lesion detection, and disease staging. Future work will aim to decrease computational complexity and improve the generalization of the model.
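Prototype classification of the kind PCM performs can be sketched generically: each class is represented by a prototype vector, and a sample is assigned to the nearest one, with a softmax over negative squared distances giving class probabilities. The feature values below are illustrative, not from the paper:

```python
import numpy as np

def prototype_classify(x, prototypes):
    """Assign x to the class whose prototype is nearest in feature space;
    softmax over negative squared distances gives class probabilities."""
    d2 = np.array([np.sum((x - p) ** 2) for p in prototypes])
    logits = -d2
    probs = np.exp(logits - logits.max())  # subtract max for stability
    probs /= probs.sum()
    return int(np.argmin(d2)), probs

# Toy 2-D feature space with three class prototypes (hypothetical values;
# in the paper these would be learned fine-grained MRI representations).
prototypes = [np.array([0.0, 0.0]), np.array([5.0, 0.0]), np.array([0.0, 5.0])]
label, probs = prototype_classify(np.array([0.4, 0.3]), prototypes)
```

A sample near the first prototype is assigned class 0, with the bulk of the probability mass on that class.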

Todd M, Kang S, Wu S, Adhin D, Yoon DY, Willcocks R, Kim S

pubmed · Jul 29 2025
Duchenne muscular dystrophy (DMD) is a rare X-linked genetic muscle disorder that primarily affects pediatric males and leads to limited life expectancy. This systematic review of 85 DMD trials and non-interventional studies (2010-2022) evaluated how magnetic resonance imaging biomarkers, particularly fat fraction and T2 relaxation time, are currently used to quantitatively track disease progression, and how their use compares to traditional mobility-based functional endpoints. Imaging biomarker studies lasted 4.50 years on average, approximately 11 months longer than those using only ambulatory functional endpoints. While 93% of biologic intervention trials (n = 28) included ambulatory functional endpoints, only 13.3% (n = 4) incorporated imaging biomarkers. Small molecule trials and natural history studies were the predominant contributors to imaging biomarker use, each comprising 30.4% of such studies. Small molecule trials used imaging biomarkers more frequently than biologic trials, likely because biologics often target dystrophin, an established surrogate biomarker, while small molecules lack regulatory-approved biomarkers. Notably, following the finalization of the 2018 FDA guidance, we observed a significant decrease in new trials using imaging biomarkers despite earlier regulatory encouragement. This analysis demonstrates that while imaging biomarkers are increasingly used in natural history studies, their integration into interventional trials remains limited. In an XGBoost machine learning analysis, trial duration and start year were the strongest predictors of biomarker usage, with a decline observed after the 2018 FDA guidance. Despite their potential to objectively track disease progression, imaging biomarkers have not yet been widely adopted as primary endpoints in therapeutic trials, likely due to regulatory and logistical challenges.
Future research should examine whether standardizing imaging protocols or integrating hybrid endpoint models could bridge the regulatory gap currently limiting biomarker adoption in therapeutic trials.

Zhou M, Mi R, Zhao A, Wen X, Niu Y, Wu X, Dong Y, Xu Y, Li Y, Xiang J

pubmed · Jul 29 2025
Major Depressive Disorder (MDD) is a common mental disorder, and early, accurate diagnosis is crucial for effective treatment. Functional Connectivity Networks (FCNs) constructed from functional Magnetic Resonance Imaging (fMRI) have demonstrated the potential to reveal the mechanisms underlying brain abnormalities. Deep learning has been widely employed to extract features from FCNs, but existing methods typically operate directly on the network, failing to fully exploit its deep information. Although graph coarsening techniques offer advantages in extracting the brain's complex structure, they may also result in the loss of critical information. To address this issue, we propose the Multi-Granularity Brain Networks Fusion (MGBNF) framework. MGBNF models brain networks through multi-granularity analysis and constructs combinatorial modules to enhance feature extraction. Finally, a Constrained Attention Pooling (CAP) mechanism is employed to achieve effective integration of multi-channel features. In the feature extraction stage, a parameter-sharing mechanism is applied across multiple channels to capture similar connectivity patterns between channels while reducing the number of parameters. We validate the effectiveness of MGBNF on multiple classification tasks and various brain atlases. The results demonstrate that MGBNF outperforms baseline models in classification performance, and ablation experiments further validate its effectiveness. In addition, we conducted a thorough analysis of the variability of different MDD subtypes across multiple classification tasks, and the results support further clinical application.
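The abstract does not spell out CAP's constraint, but the underlying operation, attention-weighted pooling of multi-channel features into a single vector, can be sketched generically (the fixed scoring vector and dimensions below are illustrative assumptions, not the paper's learned parameters):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

def attention_pool(features, w):
    """features: (C, D), one vector per channel; w: (D,) scoring vector.
    Each channel gets a scalar attention weight; the output is the
    weighted sum of channel features, a single (D,) vector."""
    weights = softmax(features @ w)
    return weights @ features, weights

rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))  # 4 channels (granularities), 8-dim features
pooled, weights = attention_pool(feats, rng.normal(size=8))
```

The attention weights form a distribution over channels, so the pooled vector is a convex combination of the per-channel features.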

Ghotbi E, Hadidchi R, Hathaway QA, Bancks MP, Bluemke DA, Barr RG, Smith BM, Post WS, Budoff M, Lima JAC, Demehri S

pubmed · Jul 29 2025
To investigate the longitudinal association between diabetes and changes in vertebral bone mineral density (BMD) derived from conventional chest CT, and to evaluate whether kidney function (estimated glomerular filtration rate, eGFR) modifies this relationship. This longitudinal study included 1046 participants from the Multi-Ethnic Study of Atherosclerosis Lung Study with vertebral BMD measurements from chest CTs at Exam 5 (2010-2012) and Exam 6 (2016-2018). Diabetes was classified based on American Diabetes Association criteria, and those with impaired fasting glucose (i.e., prediabetes) were excluded. Volumetric BMD was derived using a validated deep learning model to segment the trabecular bone of the thoracic vertebrae. Linear mixed-effects models estimated the association between diabetes and BMD changes over time. Following a significant interaction between diabetes status and eGFR, additional stratified analyses examined the impact of kidney function (i.e., diabetic nephropathy), categorized by eGFR (≥ 60 vs. < 60 mL/min/body surface area). Participants with diabetes had a higher baseline vertebral BMD than those without (202 vs. 190 mg/cm<sup>3</sup>) and experienced a significant increase over a median follow-up of 6.2 years (β = 0.62 mg/cm<sup>3</sup>/year; 95% CI 0.26, 0.98). This increase was more pronounced among individuals with diabetes and reduced kidney function (β = 1.52 mg/cm<sup>3</sup>/year; 95% CI 0.66, 2.39) than among diabetic individuals with preserved kidney function (β = 0.48 mg/cm<sup>3</sup>/year; 95% CI 0.10, 0.85). Individuals with diabetes exhibited an increase in vertebral BMD over time compared with the non-diabetes group, a change that was more pronounced in those with diabetic nephropathy. These findings suggest that conventional BMD measurements may not fully capture the well-known fracture risk in diabetes.
Further studies incorporating bone microarchitecture using advanced imaging and fracture outcomes are needed to refine skeletal health assessments in the diabetic population.