
Yuan C, Jia X, Wang L, Yang C

PubMed · Jul 30, 2025
Magnetic Resonance Imaging (MRI) is a crucial method for clinical diagnosis. Different abdominal MRI sequences provide tissue and structural information from various perspectives, offering reliable evidence for doctors to make accurate diagnoses. In recent years, with the rapid development of intelligent medical imaging, some studies have begun exploring deep learning methods for MRI sequence recognition. However, due to the significant intra-class variations and subtle inter-class differences in MRI sequences, traditional deep learning algorithms still struggle to handle such complex data distributions effectively. In addition, the key features for identifying MRI sequence categories often lie in subtle details, while significant discrepancies can be observed among sequences from individual samples. Current deep learning-based MRI sequence classification methods tend to overlook these fine-grained differences across diverse samples. To overcome these challenges, this paper proposes a fine-grained prototype network, SequencesNet, for MRI sequence classification. A network combining convolutional neural networks (CNNs) with an improved vision transformer is constructed for feature extraction, considering both local and global information. Specifically, a Feature Selection Module (FSM) is added to the vision transformer, and fine-grained features for sequence discrimination are selected based on attention weights fused across multiple layers. A Prototype Classification Module (PCM) is then proposed to classify MRI sequences based on these fine-grained representations. Comprehensive experiments are conducted on a public abdominal MRI sequence classification dataset and a private dataset. The proposed SequencesNet achieved the highest accuracy on both datasets, 96.73% and 95.98% respectively, outperforming the compared prototype-based and fine-grained models. The visualization results show that SequencesNet better captures fine-grained information. SequencesNet thus shows promising performance in MRI sequence classification, excelling in distinguishing subtle inter-class differences and handling large intra-class variability. Specifically, the FSM enhances clinical interpretability by focusing on fine-grained features, and the PCM improves clustering by optimizing prototype-sample distances. Compared to baselines such as 3DResNet18 and TransFG, SequencesNet achieves higher recall and precision, particularly for similar sequences such as DCE-LAP and DCE-PVP. The proposed model addresses the problem of subtle inter-class differences and significant intra-class variations in medical images, and its modular design can be extended to other medical imaging tasks, including but not limited to multimodal image fusion, lesion detection, and disease staging. Future work will aim to decrease the computational complexity and improve the generalization of the model.
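For readers unfamiliar with prototype-based classification, the sketch below illustrates the general idea behind a module like the PCM: each class is represented by a learnable prototype, and a sample is scored by its (negative) distance to every prototype. This is a minimal PyTorch illustration of the generic technique under stated assumptions, not the authors' implementation; the class name and feature dimension are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeClassifier(nn.Module):
    """Minimal prototype-based classifier: one learnable prototype per class;
    a sample is assigned to the class whose prototype is nearest in feature space."""
    def __init__(self, num_classes: int, feat_dim: int):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (batch, feat_dim); dists: (batch, num_classes)
        dists = torch.cdist(features, self.prototypes)
        # Smaller distance -> larger logit
        return -dists

# Usage: the logits feed a standard cross-entropy loss, which pulls samples
# toward their class prototype and pushes them away from the other prototypes.
feats = torch.randn(8, 256)
logits = PrototypeClassifier(num_classes=12, feat_dim=256)(feats)
loss = F.cross_entropy(logits, torch.randint(0, 12, (8,)))
```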

Todd M, Kang S, Wu S, Adhin D, Yoon DY, Willcocks R, Kim S

PubMed · Jul 29, 2025
Duchenne muscular dystrophy (DMD) is a rare X-linked genetic muscle disorder affecting primarily pediatric males and leading to limited life expectancy. This systematic review of 85 DMD trials and non-interventional studies (2010-2022) evaluated how magnetic resonance imaging biomarkers, particularly fat fraction and T2 relaxation time, are currently being used to quantitatively track disease progression, and how their use compares to traditional mobility-based functional endpoints. Imaging biomarker studies lasted 4.50 years on average, approximately 11 months longer than those using only ambulatory functional endpoints. While 93% of biologic intervention trials (n = 28) included ambulatory functional endpoints, only 13.3% (n = 4) incorporated imaging biomarkers. Small molecule trials and natural history studies were the predominant contributors to imaging biomarker use, each comprising 30.4% of such studies. Small molecule trials used imaging biomarkers more frequently than biologic trials, likely because biologics often target dystrophin, an established surrogate biomarker, while small molecules lack regulatory-approved biomarkers. Notably, following the finalization of the 2018 FDA guidance, we observed a significant decrease in new trials using imaging biomarkers despite earlier regulatory encouragement. This analysis demonstrates that while imaging biomarkers are increasingly used in natural history studies, their integration into interventional trials remains limited. In an XGBoost machine learning analysis, trial duration and start year were the strongest predictors of biomarker usage, with a decline observed following the 2018 FDA guidance. Despite their potential to objectively track disease progression, imaging biomarkers have not yet been widely adopted as primary endpoints in therapeutic trials, likely due to regulatory and logistical challenges. Future research should examine whether standardizing imaging protocols or integrating hybrid endpoint models could bridge the regulatory gap currently limiting biomarker adoption in therapeutic trials.
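As an illustration of the kind of analysis described (predicting imaging-biomarker usage from trial characteristics with XGBoost and inspecting feature importances), a minimal sketch might look like the following; the table and its column names are hypothetical stand-ins, and this is not the authors' code.

```python
import numpy as np
import pandas as pd
from xgboost import XGBClassifier

# Hypothetical trial-level table; in the review this would come from the
# extracted study characteristics rather than random numbers.
rng = np.random.default_rng(0)
trials = pd.DataFrame({
    "duration_years": rng.uniform(1, 8, 85),
    "start_year": rng.integers(2010, 2023, 85),
    "n_participants": rng.integers(10, 300, 85),
    "is_biologic": rng.integers(0, 2, 85),
    "uses_imaging_biomarker": rng.integers(0, 2, 85),
})

X = trials.drop(columns="uses_imaging_biomarker")
y = trials["uses_imaging_biomarker"]

model = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
model.fit(X, y)

# Feature importances indicate which trial characteristics drive the prediction
for name, importance in zip(X.columns, model.feature_importances_):
    print(f"{name}: {importance:.3f}")
```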

Zhou M, Mi R, Zhao A, Wen X, Niu Y, Wu X, Dong Y, Xu Y, Li Y, Xiang J

PubMed · Jul 29, 2025
Major Depressive Disorder (MDD) is a common mental disorder, and an early and accurate diagnosis is crucial for effective treatment. Functional Connectivity Networks (FCNs) constructed from functional Magnetic Resonance Imaging (fMRI) have demonstrated the potential to reveal the mechanisms underlying brain abnormalities. Deep learning has been widely employed to extract features from FCNs, but existing methods typically operate directly on the network, failing to fully exploit its deep information. Although graph coarsening techniques offer certain advantages in extracting the brain's complex structure, they may also result in the loss of critical information. To address this issue, we propose the Multi-Granularity Brain Networks Fusion (MGBNF) framework. MGBNF models brain networks through multi-granularity analysis and constructs combinatorial modules to enhance feature extraction. Finally, a Constrained Attention Pooling (CAP) mechanism is employed to achieve effective integration of multi-channel features. In the feature extraction stage, a parameter-sharing mechanism is introduced and applied across multiple channels to capture similar connectivity patterns between different channels while reducing the number of parameters. We validate the effectiveness of the MGBNF model on multiple classification tasks and various brain atlases. The results demonstrate that MGBNF outperforms baseline models in classification performance, and ablation experiments further validate its effectiveness. In addition, we conducted a thorough analysis of the variability of different MDD subtypes through multiple classification tasks, and the results support further clinical applications.
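The sketch below illustrates, in generic PyTorch, the two ingredients highlighted in the abstract: an encoder whose parameters are shared across every channel, and an attention-based pooling that fuses the per-channel features into one representation. It is an illustrative approximation under stated assumptions, not the MGBNF implementation; the module name and dimensions are hypothetical.

```python
import torch
import torch.nn as nn

class SharedEncoderAttentionPool(nn.Module):
    """Sketch: one encoder shared across granularity channels (parameter sharing),
    followed by attention pooling that weights and fuses the per-channel features."""
    def __init__(self, in_dim: int, hidden_dim: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.attn = nn.Linear(hidden_dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, in_dim) -- one feature vector per granularity channel
        h = self.encoder(x)                     # shared weights applied to every channel
        w = torch.softmax(self.attn(h), dim=1)  # (batch, channels, 1) attention weights
        return (w * h).sum(dim=1)               # fused representation: (batch, hidden_dim)

# Hypothetical usage: 3 granularity channels, 116 node-level features each
fused = SharedEncoderAttentionPool(in_dim=116, hidden_dim=64)(torch.randn(4, 3, 116))
```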

Ghotbi E, Hadidchi R, Hathaway QA, Bancks MP, Bluemke DA, Barr RG, Smith BM, Post WS, Budoff M, Lima JAC, Demehri S

PubMed · Jul 29, 2025
To investigate the longitudinal association between diabetes and changes in vertebral bone mineral density (BMD) derived from conventional chest CT, and to evaluate whether kidney function (estimated glomerular filtration rate, eGFR) modifies this relationship. This longitudinal study included 1046 participants from the Multi-Ethnic Study of Atherosclerosis Lung Study with vertebral BMD measurements from chest CTs at Exam 5 (2010-2012) and Exam 6 (2016-2018). Diabetes was classified based on the American Diabetes Association criteria, and those with impaired fasting glucose (i.e., prediabetes) were excluded. Volumetric BMD was derived using a validated deep learning model to segment the trabecular bone of the thoracic vertebrae. Linear mixed-effects models estimated the association between diabetes and BMD changes over time. Following a significant interaction between diabetes status and eGFR, additional stratified analyses examined the impact of kidney function (i.e., diabetic nephropathy), categorized by eGFR (≥ 60 vs. < 60 mL/min/body surface area). Participants with diabetes had a higher baseline vertebral BMD than those without (202 vs. 190 mg/cm³) and experienced a significant increase over a median follow-up of 6.2 years (β = 0.62 mg/cm³/year; 95% CI 0.26, 0.98). This increase was more pronounced among individuals with diabetes and reduced kidney function (β = 1.52 mg/cm³/year; 95% CI 0.66, 2.39) than among diabetic individuals with preserved kidney function (β = 0.48 mg/cm³/year; 95% CI 0.10, 0.85). Individuals with diabetes exhibited an increase in vertebral BMD over time compared with the non-diabetes group, an increase that was more pronounced in those with diabetic nephropathy. These findings suggest that conventional BMD measurements may not fully capture the well-known fracture risk in diabetes. Further studies incorporating bone microarchitecture from advanced imaging and fracture outcomes are needed to refine skeletal health assessments in the diabetic population.
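For context, a linear mixed-effects model of the kind described can be specified in a few lines with statsmodels. The sketch below is illustrative only: the data are simulated and the variable names are hypothetical stand-ins for the study's actual columns, but the model form (random intercept per participant, diabetes-by-time interaction, eGFR stratification) mirrors the analysis described above.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated long-format data: one row per participant per exam (hypothetical columns)
rng = np.random.default_rng(0)
n = 200
base = pd.DataFrame({
    "participant": np.arange(n),
    "diabetes": rng.integers(0, 2, n),
    "egfr": rng.normal(75, 20, n),
    "bmd0": rng.normal(195, 35, n),
})
rows = []
for years in (0.0, 6.2):
    d = base.copy()
    d["years"] = years
    d["bmd"] = d["bmd0"] + years * (0.1 + 0.6 * d["diabetes"]) + rng.normal(0, 5, n)
    rows.append(d)
df = pd.concat(rows, ignore_index=True)

# Random intercept per participant; the interaction term estimates the difference
# in annual BMD change between diabetic and non-diabetic participants.
fit = smf.mixedlm("bmd ~ years * diabetes", data=df, groups=df["participant"]).fit()
print(fit.params.filter(like="years"))

# Stratified version (eGFR >= 60 vs < 60), mirroring the nephropathy analysis
for preserved, subset in df.groupby(df["egfr"] >= 60):
    strat = smf.mixedlm("bmd ~ years * diabetes", data=subset, groups=subset["participant"]).fit()
    print("eGFR >= 60:" if preserved else "eGFR < 60:", strat.params.filter(like="years:").to_dict())
```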

Peng Y, Hu Z, Wen M, Deng Y, Zhao D, Yu Y, Liang W, Dai X, Wang Y

PubMed · Jul 29, 2025
Periventricular-intraventricular haemorrhage (IVH) is the most prevalent type of neonatal intracranial haemorrhage. It is especially threatening to preterm infants, in whom it is associated with significant morbidity and mortality. Cranial ultrasound has become an important means of screening for periventricular IVH in infants. The integration of artificial intelligence with neonatal ultrasound is promising for enhancing diagnostic accuracy, reducing physician workload, and consequently improving periventricular IVH outcomes. This study investigated whether deep learning-based analysis of infant cranial ultrasound images could detect and grade periventricular IVH. This multicentre observational study included 1,060 cases and healthy controls from two hospitals. The retrospective modelling dataset encompassed 773 participants from January 2020 to July 2023, while the prospective two-centre validation dataset included 287 participants from August 2023 to January 2024. The periventricular IVH net model, a deep learning model incorporating the convolutional block attention module (CBAM) mechanism, was developed. The model's effectiveness was assessed by randomly dividing the retrospective data into training and validation sets, followed by independent validation with the prospective two-centre data. To evaluate the model, we measured its recall, precision, accuracy, F1-score, and area under the curve (AUC). The regions of interest (ROI) that influenced the model's detections were visualised in significance maps, and the t-distributed stochastic neighbour embedding (t-SNE) algorithm was used to visualise the clustering of model detection parameters. The final retrospective dataset included 773 participants (mean (standard deviation (SD)) gestational age, 32.7 (4.69) weeks; mean (SD) weight, 1,862.60 (855.49) g). For the retrospective data, the model's AUC was 0.99 (95% confidence interval (CI), 0.98-0.99), precision was 0.92 (0.89-0.95), recall was 0.93 (0.89-0.95), and F1-score was 0.93 (0.90-0.95). For the prospective two-centre validation data, the model's AUC was 0.961 (95% CI, 0.94-0.98) and accuracy was 0.89 (95% CI, 0.86-0.92). The two-centre prospective validation results demonstrate the model's considerable potential for paediatric clinical applications. Combining artificial intelligence with paediatric ultrasound can enhance the accuracy and efficiency of periventricular IVH diagnosis, especially in primary or community hospitals.
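For reference, a convolutional block attention module of the kind cited applies channel attention followed by spatial attention to a feature map. The PyTorch sketch below shows the standard CBAM formulation, not the authors' specific network; the channel count and reduction ratio are hypothetical.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Sketch of a convolutional block attention module: channel attention computed
    from pooled channel descriptors, then spatial attention from channel-pooled maps."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))                          # average-pooled descriptor
        mx = self.mlp(x.amax(dim=(2, 3)))                           # max-pooled descriptor
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)            # channel attention
        pooled = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(pooled))              # spatial attention

out = CBAM(channels=64)(torch.randn(2, 64, 32, 32))
```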

Kotti J, Chalasani V, Rajan C

PubMed · Jul 29, 2025
Brain tumour (BT) is characterised by the uncontrolled proliferation of cells within the brain, which can result in cancer. Detecting BT at an early stage significantly increases the patient's chances of survival. Existing BT detection methods often struggle with high computational complexity, limited feature discrimination, and poor generalisation. To mitigate these issues, an effective brain tumour detection and segmentation method for Magnetic Resonance Imaging (MRI) is developed based on a hybrid network named MobileNet-Deep Batch-Normalized eLU AlexNet (M-DbneAlexnet). Image enhancement is performed with a Piecewise Linear Transformation (PLT) function. The BT region is segmented using Transformer Brain Tumour Segmentation (TransBTSV2), after which features are extracted. Finally, BT is detected using the M-DbneAlexnet model, which is devised by combining MobileNet and Deep Batch-Normalized eLU AlexNet (DbneAlexnet). Results: The proposed model achieved an accuracy of 92.68%, sensitivity of 93.02%, and specificity of 92.85%, demonstrating its effectiveness in accurately detecting brain tumours from MRI images. The proposed model enhances training speed and performs well on limited datasets, making it effective for distinguishing between tumour and healthy tissue. Its practical utility lies in enabling early detection and diagnosis of brain tumours, which can significantly reduce mortality rates.
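As a concrete illustration of a piecewise linear transformation for image enhancement, the sketch below implements standard three-segment contrast stretching; the breakpoints are hypothetical and this is not the paper's exact preprocessing code.

```python
import numpy as np

def piecewise_linear_transform(img: np.ndarray, r1: int, s1: int, r2: int, s2: int) -> np.ndarray:
    """Contrast-stretching piecewise linear transform for an 8-bit image:
    intensities below r1, between r1 and r2, and above r2 are mapped by three
    line segments through (0, 0), (r1, s1), (r2, s2), and (255, 255)."""
    img = img.astype(np.float32)
    out = np.empty_like(img)
    low, high = img < r1, img > r2
    mid = ~low & ~high
    out[low] = (s1 / r1) * img[low]
    out[mid] = s1 + (s2 - s1) / (r2 - r1) * (img[mid] - r1)
    out[high] = s2 + (255 - s2) / (255 - r2) * (img[high] - r2)
    return out.clip(0, 255).astype(np.uint8)

# Example: stretch the mid-range intensities of a synthetic 8-bit slice
enhanced = piecewise_linear_transform(
    np.random.randint(0, 256, (128, 128), dtype=np.uint8), r1=70, s1=30, r2=180, s2=220)
```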

Saadh MJ, Hussain QM, Albadr RJ, Doshi H, Rekha MM, Kundlas M, Pal A, Rizaev J, Taher WM, Alwan M, Jawad MJ, Al-Nuaimi AMA, Farhood B

PubMed · Jul 29, 2025
Objective: This study aimed to develop a robust framework for breast cancer diagnosis by integrating advanced segmentation and classification approaches. Transformer-based and U-Net segmentation models were combined with radiomic feature extraction and machine learning classifiers to improve segmentation precision and classification accuracy in mammographic images. Materials and Methods: A multi-center dataset of 8000 mammograms (4200 normal, 3800 abnormal) was used. Segmentation was performed using Transformer-based and U-Net models, evaluated through Dice Coefficient (DSC), Intersection over Union (IoU), Hausdorff Distance (HD95), and Pixel-Wise Accuracy. Radiomic features were extracted from segmented masks, with Recursive Feature Elimination (RFE) and Analysis of Variance (ANOVA) employed to select significant features. Classifiers including Logistic Regression, XGBoost, CatBoost, and a Stacking Ensemble model were applied to classify tumors into benign or malignant. Classification performance was assessed using accuracy, sensitivity, F1 score, and AUC-ROC. SHAP analysis validated feature importance, and Q-value heatmaps evaluated statistical significance. Results: The Transformer-based model achieved superior segmentation results with DSC (0.94 ± 0.01 training, 0.92 ± 0.02 test), IoU (0.91 ± 0.01 training, 0.89 ± 0.02 test), HD95 (3.0 ± 0.3 mm training, 3.3 ± 0.4 mm test), and Pixel-Wise Accuracy (0.96 ± 0.01 training, 0.94 ± 0.02 test), consistently outperforming U-Net across all metrics. For classification, Transformer-segmented features with the Stacking Ensemble achieved the highest test results: 93% accuracy, 92% sensitivity, 93% F1 score, and 95% AUC. U-Net-segmented features achieved lower metrics, with the best test accuracy at 84%. SHAP analysis confirmed the importance of features like Gray-Level Non-Uniformity and Zone Entropy. Conclusion: This study demonstrates the superiority of Transformer-based segmentation integrated with radiomic feature selection and robust classification models. The framework provides a precise and interpretable solution for breast cancer diagnosis, with potential for scalability to 3D imaging and multimodal datasets.
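The feature-selection and stacking pipeline described above can be approximated with scikit-learn as in the sketch below; the estimator choices, hyperparameters, and the synthetic data are illustrative assumptions, not the study's reported configuration.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE, SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import StackingClassifier
from sklearn.pipeline import make_pipeline
from xgboost import XGBClassifier
from catboost import CatBoostClassifier

# Synthetic stand-in for a radiomic feature matrix with benign/malignant labels
X, y = make_classification(n_samples=300, n_features=120, n_informative=30, random_state=0)

# ANOVA filter followed by recursive feature elimination, as in the described workflow
selector = make_pipeline(
    SelectKBest(f_classif, k=50),
    RFE(LogisticRegression(max_iter=1000), n_features_to_select=20),
)

# Stacking ensemble over the three base classifiers named in the abstract
stack = StackingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("xgb", XGBClassifier(n_estimators=200)),
        ("cat", CatBoostClassifier(iterations=200, verbose=0)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
)

model = make_pipeline(selector, stack)
model.fit(X, y)
print(model.predict_proba(X[:5]))
```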

Peiran Gu, Teng Yao, Mengshen He, Fuhao Duan, Feiyan Liu, RenYuan Peng, Bao Ge

arXiv preprint · Jul 29, 2025
In recent years, artificial intelligence has been increasingly applied in the field of medical imaging. Among these applications, fundus image analysis presents special challenges, including small lesion areas in certain fundus diseases and subtle inter-disease differences, which can lead to reduced prediction accuracy and overfitting in models. To address these challenges, this paper proposes SwinECAT, a Transformer-based model that combines Shifted Window (Swin) attention with Efficient Channel Attention (ECA). SwinECAT leverages the Swin attention mechanism in the Swin Transformer backbone to effectively capture local spatial structures and long-range dependencies within fundus images. The lightweight ECA mechanism is incorporated to guide SwinECAT's attention toward critical feature channels, enabling more discriminative feature representation. In contrast to previous studies that typically classify fundus images into 4 to 6 categories, this work expands fundus disease classification to 9 distinct types, thereby enhancing the granularity of diagnosis. We evaluate our method on the Eye Disease Image Dataset (EDID), containing 16,140 fundus images, for 9-category classification. Experimental results demonstrate that SwinECAT achieves 88.29% accuracy, with a weighted F1-score of 0.88 and a macro F1-score of 0.90. The classification results of SwinECAT significantly outperform the baseline Swin Transformer and multiple other baseline models. To our knowledge, this represents the highest reported performance for 9-category classification on this public dataset.
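For readers unfamiliar with ECA, the module replaces a squeeze-and-excitation bottleneck with a small 1D convolution across globally pooled channel descriptors, so it adds very few parameters. The PyTorch sketch below shows this standard formulation, not the SwinECAT source code; the kernel size and tensor shapes are illustrative.

```python
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Sketch of Efficient Channel Attention: global average pooling followed by a
    1D convolution across channels, producing per-channel gating weights without
    any dimensionality reduction."""
    def __init__(self, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, H, W)
        y = x.mean(dim=(2, 3))                     # (batch, channels) channel descriptors
        y = self.conv(y.unsqueeze(1)).squeeze(1)   # local cross-channel interaction
        return x * torch.sigmoid(y).unsqueeze(-1).unsqueeze(-1)

out = ECA()(torch.randn(2, 96, 56, 56))
```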

Yutao Hu, Ying Zheng, Shumei Miao, Xiaolei Zhang, Jiahao Xia, Yaolei Qi, Yiyang Zhang, Yuting He, Qian Chen, Jing Ye, Hongyan Qiao, Xiuhua Hu, Lei Xu, Jiayin Zhang, Hui Liu, Minwen Zheng, Yining Wang, Daimin Zhang, Ji Zhang, Wenqi Shao, Yun Liu, Longjiang Zhang, Guanyu Yang

arXiv preprint · Jul 29, 2025
Foundation models have demonstrated remarkable potential in the medical domain. However, their application to complex cardiovascular diagnostics remains underexplored. In this paper, we present Cardiac-CLIP, a multi-modal foundation model designed for 3D cardiac CT images. Cardiac-CLIP is developed through a two-stage pre-training strategy. The first stage employs a 3D masked autoencoder (MAE) to perform self-supervised representation learning from large-scale unlabeled volumetric data, enabling the visual encoder to capture rich anatomical and contextual features. In the second stage, contrastive learning is introduced to align visual and textual representations, facilitating cross-modal understanding. To support the pre-training, we collect 16,641 real clinical CT scans, supplemented by 114k publicly available samples. We also standardize free-text radiology reports into unified templates and construct pathology vectors according to diagnostic attributes, from which a soft-label matrix is generated to supervise the contrastive learning process. To comprehensively evaluate the effectiveness of Cardiac-CLIP, we collect 6,722 real clinical samples from 12 independent institutions and combine them with open-source data to construct the evaluation dataset. Cardiac-CLIP is evaluated across multiple tasks, including cardiovascular abnormality classification, information retrieval, and clinical analysis. Experimental results demonstrate that Cardiac-CLIP achieves state-of-the-art performance across various downstream tasks on both internal and external data. In particular, Cardiac-CLIP is highly effective in supporting complex clinical tasks such as the prospective prediction of acute coronary syndrome, which is notoriously difficult in real-world scenarios.
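The soft-label contrastive objective described above can be written compactly: instead of the usual one-hot image-text targets, each row of the target matrix encodes graded similarity between an image and the reports in the batch. The sketch below is a generic illustration under that assumption, not the Cardiac-CLIP implementation; embedding sizes and the temperature value are hypothetical.

```python
import torch
import torch.nn.functional as F

def soft_label_contrastive_loss(img_emb, txt_emb, soft_labels, temperature=0.07):
    """Sketch of contrastive alignment with a soft-label target matrix: each image-text
    pair is supervised by a similarity target derived from shared diagnostic attributes
    (soft_labels is row-normalized to sum to 1)."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature       # (batch, batch) similarity scores
    log_probs = F.log_softmax(logits, dim=-1)
    return -(soft_labels * log_probs).sum(dim=-1).mean()

# Hypothetical usage: embeddings from the visual and text encoders, and a
# row-normalized target matrix built from pathology vectors.
img = torch.randn(16, 512)
txt = torch.randn(16, 512)
targets = torch.softmax(torch.randn(16, 16), dim=-1)
loss = soft_label_contrastive_loss(img, txt, targets)
```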

Julia Wolleb, Florentin Bieder, Paul Friedrich, Hemant D. Tagare, Xenophon Papademetris

arXiv preprint · Jul 29, 2025
Ultrasound is widely used in clinical care, yet standard deep learning methods often struggle with full video analysis due to non-standardized acquisition and operator bias. We offer a new perspective on ultrasound video analysis through implicit neural representations (INRs). We build on Functa, an INR framework in which each image is represented by a modulation vector that conditions a shared neural network. However, its extension to the temporal domain of medical videos remains unexplored. To address this gap, we propose VidFuncta, a novel framework that leverages Functa to encode variable-length ultrasound videos into compact, time-resolved representations. VidFuncta disentangles each video into a static video-specific vector and a sequence of time-dependent modulation vectors, capturing both temporal dynamics and dataset-level redundancies. Our method outperforms 2D and 3D baselines on video reconstruction and enables downstream tasks to operate directly on the learned 1D modulation vectors. We validate VidFuncta on three public ultrasound video datasets (cardiac, lung, and breast) and evaluate its downstream performance on ejection fraction prediction, B-line detection, and breast lesion classification. These results highlight the potential of VidFuncta as a generalizable and efficient representation framework for ultrasound videos. Our code is publicly available at https://github.com/JuliaWolleb/VidFuncta_public.
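To make the Functa idea concrete, the sketch below shows a shared coordinate MLP whose hidden activations are shifted by a modulation vector, with a video-level vector and a frame-level vector summed in the spirit of the disentanglement described above; the architecture details are hypothetical simplifications, not the released VidFuncta code.

```python
import torch
import torch.nn as nn

class ModulatedINR(nn.Module):
    """Sketch of a Functa-style implicit representation: a shared coordinate MLP whose
    hidden activations are shifted by a per-sample modulation vector."""
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.inp = nn.Linear(2, hidden)      # (x, y) pixel coordinates in
        self.hid = nn.Linear(hidden, hidden)
        self.out = nn.Linear(hidden, 1)      # grayscale intensity out

    def forward(self, coords: torch.Tensor, modulation: torch.Tensor) -> torch.Tensor:
        h = torch.sin(self.inp(coords) + modulation)   # shift-modulated activations
        h = torch.sin(self.hid(h) + modulation)
        return self.out(h)

# Hypothetical usage: a static per-video vector plus a per-frame modulation vector
video_vec, frame_vec = torch.randn(128), torch.randn(128)
coords = torch.rand(1024, 2)                            # sampled pixel locations
pred = ModulatedINR()(coords, video_vec + frame_vec)
```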