T-Mai Bui, Fares Bougourzi, Fadi Dornaika, Vinh Truong Hoang

arXiv preprint · Oct 4, 2025
In recent years, deep learning has shown near-expert performance in segmenting complex medical tissues and tumors. However, existing models are often task-specific, with performance varying across modalities and anatomical regions. Balancing model complexity and performance remains challenging, particularly in clinical settings where both accuracy and efficiency are critical. To address these issues, we propose a hybrid segmentation architecture featuring a three-branch encoder that integrates CNNs, Transformers, and a Mamba-based Attention Fusion (MAF) mechanism to capture local, global, and long-range dependencies. A multi-scale attention-based CNN decoder reconstructs fine-grained segmentation maps while preserving contextual consistency. Additionally, a co-attention gate enhances feature selection by emphasizing relevant spatial and semantic information across scales during both encoding and decoding, improving feature interaction and cross-scale communication. Extensive experiments on multiple benchmark datasets show that our approach outperforms state-of-the-art methods in accuracy and generalization, while maintaining comparable computational complexity. By effectively balancing efficiency and effectiveness, our architecture offers a practical and scalable solution for diverse medical imaging tasks. Source code and trained models will be publicly released upon acceptance to support reproducibility and further research.
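
The paper's code is not yet released; as a rough, hypothetical illustration of the kind of gating the co-attention mechanism describes, the PyTorch sketch below re-weights an encoder feature stream using channel and spatial attention derived from a decoder gating signal. Module names, layer choices, and shapes are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class CoAttentionGate(nn.Module):
    """Minimal co-attention gate sketch: modulate encoder features `x` with
    channel and spatial attention computed from decoder features `g`.
    Illustrative placeholder only, not the paper's implementation."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Channel attention from the gating signal (squeeze-and-excite style).
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention from the concatenated streams.
        self.spatial_att = nn.Sequential(
            nn.Conv2d(2 * channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor, g: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_att(g)                     # emphasize semantically relevant channels
        s = self.spatial_att(torch.cat([x, g], dim=1))  # highlight relevant spatial locations
        return x * s


if __name__ == "__main__":
    gate = CoAttentionGate(channels=64)
    skip = torch.randn(2, 64, 56, 56)    # encoder skip features
    gating = torch.randn(2, 64, 56, 56)  # decoder gating features at the same scale
    print(gate(skip, gating).shape)      # torch.Size([2, 64, 56, 56])
```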

Fu J, Wang Z, Zhang H, Li X, Ni X, Zhang C, Zhao T

PubMed · Oct 3, 2025
To evaluate the ability of two-dimensional ultrasound radiomics, integrated with clinical features, to predict central lymph node metastasis (CLNM) in papillary thyroid carcinoma (PTC). We conducted a retrospective study of PTC patients treated at the Second People's Hospital of Changzhou from January 2018 to February 2023. A total of 725 eligible patients were randomly allocated to training and test cohorts in a 7:3 ratio. Radiomic features were extracted from the primary PTC nodule region on two-dimensional ultrasound images. Dimensionality reduction was performed using Mann-Whitney <i>U</i> tests, Spearman correlation analysis, and least absolute shrinkage and selection operator regression, yielding a radiomics signature (Rad-score). Seven machine-learning algorithms-logistic regression, support vector machine, k-nearest neighbors, decision tree, random forest, light gradient boosting machine, and Gaussian naïve Bayes-were compared to identify the optimal classifier. A joint predictive model was then constructed by integrating the Rad-score with clinically significant variables identified by univariate and multivariate logistic regression, and implemented using the optimal machine-learning classifier. Model performance was comprehensively evaluated using the area under the receiver operating characteristic curve (AUC), calibration curves, and decision curve analysis. Among the seven algorithms, Gaussian naïve Bayes achieved the highest predictive performance. Univariate and multivariate logistic regression revealed that sex, age, and tumor aspect ratio were independent predictors of CLNM. These variables were integrated with the Rad-score to yield a joint model that achieved AUCs of 0.840 (95% CI, 0.806-0.873) and 0.811 (95% CI, 0.746-0.866) in the training and test cohorts, respectively. Calibration curves and decision curve analysis indicated that the joint model was well-calibrated and afforded favorable clinical utility. The joint model integrating two-dimensional ultrasound radiomics with clinical features enables effective preoperative prediction of CLNM in PTC.
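
As a hedged illustration of the workflow described above, the sketch below chains Mann-Whitney U filtering, LASSO-based selection, and a Gaussian naïve Bayes classifier evaluated by AUC on synthetic data with scikit-learn. The feature matrix, labels, and thresholds are placeholders, not the study's data or settings.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.linear_model import LassoCV
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(725, 100))      # placeholder radiomic features
y = rng.integers(0, 2, size=725)     # placeholder CLNM labels
X[:, :5] += y[:, None] * 1.0         # plant a signal so selection has something to find

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Step 1: univariate Mann-Whitney U filtering (keep p < 0.05).
keep = [j for j in range(X_tr.shape[1])
        if mannwhitneyu(X_tr[y_tr == 0, j], X_tr[y_tr == 1, j]).pvalue < 0.05]

# Step 2: LASSO to shrink the filtered set to a sparse signature (Rad-score weights).
lasso = LassoCV(cv=5, random_state=0).fit(X_tr[:, keep], y_tr)
selected = [keep[j] for j, w in enumerate(lasso.coef_) if w != 0]

# Step 3: Gaussian naïve Bayes on the selected features, evaluated by AUC.
clf = GaussianNB().fit(X_tr[:, selected], y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te[:, selected])[:, 1])
print(f"test AUC: {auc:.3f}")
```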

Wang Q, Li S, Sun H, Cui S, Song W

PubMed · Oct 3, 2025
Assessing the degree of nasal obstruction is valuable for disease diagnosis, quality-of-life assessment, and epidemiological studies. To this end, this article proposes a multimodal nasal obstruction degree classification model based on cone beam computed tomography (CBCT) images and nasal resistance measurements. The model consists of four modules: image feature extraction, tabular feature extraction, feature fusion, and classification. In the image feature extraction module, pre-trained parameters from the MedicalNet large model are transferred to a three-dimensional convolutional neural network (3D CNN) feature extractor. For the tabular nasal resistance measurement data, a method based on extreme gradient boosting (XGBoost) feature importance analysis is proposed to select key features and reduce the data dimension. To fuse the two modalities, a feature fusion method based on local and global features was designed. Finally, the fused features are classified using the tabular network (TabNet) model. Comparison and ablation experiments verify the effectiveness of the proposed method: the accuracy and recall of the multimodal classification model reach 0.93 and 0.90, respectively, significantly higher than competing methods.
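
To make the tabular branch concrete, the sketch below shows one plausible way to rank and keep the most important nasal-resistance features with XGBoost's importance scores before fusion. The data, number of classes, and the number of retained features are assumptions for illustration only.

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 40))        # placeholder nasal-resistance measurement features
y = rng.integers(0, 3, size=300)      # placeholder obstruction-degree labels (3 classes assumed)

# Fit a gradient-boosted classifier and rank features by importance.
model = xgb.XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1,
                          eval_metric="mlogloss")
model.fit(X, y)

importance = model.feature_importances_
top_k = 10                            # assumed number of features to keep
keep = np.argsort(importance)[::-1][:top_k]
X_reduced = X[:, keep]                # reduced tabular block passed on to fusion / TabNet
print("kept feature indices:", sorted(keep.tolist()))
```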

Zhang J, Han F, Wang X, Wu F, Song X, Liu Q, Wang J, Grecucci A, Zhang Y, Yi X, Chen BT

PubMed · Oct 3, 2025
Amyotrophic lateral sclerosis (ALS) is a fatal neurodegenerative disorder characterized by significant clinicopathologic heterogeneity. This study aimed to identify distinct ALS phenotypes by applying consensus clustering to brain 18F-fluorodeoxyglucose positron emission tomography-computed tomography (18F-FDG PET-CT) metabolic imaging. This study prospectively enrolled 127 patients with ALS and 128 healthy controls. All participants underwent brain 18F-FDG PET-CT metabolic imaging, psychological questionnaires, and functional screening. K-means consensus clustering was applied to define neuroimaging-based phenotypes. Survival analyses were also performed. Whole exome sequencing (WES) was utilized to detect ALS-related genetic mutations, followed by GO/KEGG pathway enrichment and imaging-transcriptome analysis based on brain metabolic activity on 18F-FDG PET-CT imaging. Consensus clustering identified two metabolic phenotypes, i.e., a metabolic attenuation phenotype and a metabolic non-attenuation phenotype, according to their glucose metabolic activity patterns. The metabolic attenuation phenotype was associated with worse survival (p = 0.022), poorer physical function (p = 0.005), more severe depression (p = 0.026), and greater anxiety (p = 0.05). WES and neuroimaging-transcriptome analysis identified specific gene mutations and molecular pathways associated with each phenotype. We identified two distinct ALS phenotypes with differing clinicopathologic features, indicating that unsupervised machine learning applied to PET imaging may effectively classify metabolic subtypes of ALS. These findings contribute novel insights into the heterogeneous pathophysiology of ALS and should inform personalized therapeutic strategies for patients with ALS.
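
Below is a minimal sketch of K-means consensus clustering, assuming bootstrap-style subsampling and a pairwise co-assignment (consensus) matrix; the regional metabolic features and subsample settings are invented placeholders, not the study's pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

def consensus_kmeans(X, k=2, n_iter=100, subsample=0.8, seed=0):
    """Toy consensus clustering: run K-means on random subsamples and average
    how often each pair of samples lands in the same cluster."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    co_counts = np.zeros((n, n))
    pair_counts = np.zeros((n, n))
    for it in range(n_iter):
        idx = rng.choice(n, size=int(subsample * n), replace=False)
        labels = KMeans(n_clusters=k, n_init=10, random_state=it).fit_predict(X[idx])
        same = (labels[:, None] == labels[None, :]).astype(float)
        co_counts[np.ix_(idx, idx)] += same
        pair_counts[np.ix_(idx, idx)] += 1.0
    consensus = np.divide(co_counts, pair_counts,
                          out=np.zeros_like(co_counts), where=pair_counts > 0)
    # Final assignment: cluster the rows of the consensus matrix itself.
    final = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(consensus)
    return final, consensus

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Placeholder regional metabolic values: 127 "patients" x 90 brain regions.
    X = np.vstack([rng.normal(0.0, 1.0, (60, 90)), rng.normal(1.0, 1.0, (67, 90))])
    labels, consensus = consensus_kmeans(X, k=2)
    print("cluster sizes:", np.bincount(labels))
```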

Cukur T, Dar SU, Nezhad VA, Jun Y, Kim TH, Fujita S, Bilgic B

PubMed · Oct 3, 2025
MRI is an indispensable clinical tool, offering a rich variety of tissue contrasts to support broad diagnostic and research applications. Protocols can incorporate multiple structural, functional, diffusion, spectroscopic, or relaxometry sequences to provide complementary information for differential diagnosis, and to capture multidimensional insights into tissue structure and composition. However, these capabilities come at the cost of prolonged scan times, which reduce patient throughput, increase susceptibility to motion artifacts, and may require trade-offs in image quality or diagnostic scope. Over the last two decades, advances in image reconstruction algorithms, alongside improvements in hardware and pulse sequence design, have made it possible to accelerate acquisitions while preserving diagnostic quality. Central to this progress is the ability to incorporate prior information to regularize the solutions to the reconstruction problem. In this tutorial, we overview the basics of MRI reconstruction and highlight state-of-the-art approaches, beginning with classical methods that rely on explicit hand-crafted priors, and then turning to deep learning methods that leverage a combination of learned and crafted priors to further push the performance envelope. We also explore the translational aspects and eventual clinical implications of these methods. We conclude by discussing future directions to address remaining challenges in MRI reconstruction. The tutorial is accompanied by a Python toolbox (https://github.com/tutorial-MRI-recon/tutorial) to demonstrate select methods discussed in the article.
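
The tutorial's own toolbox (linked above) demonstrates the methods it discusses; purely as a self-contained taste of the classical, hand-crafted-prior end of that spectrum, the sketch below solves a toy sparsity-regularized reconstruction with ISTA. The measurement operator and problem sizes are arbitrary assumptions, unrelated to the tutorial's examples.

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=200):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1,
    a classic reconstruction scheme with an explicit sparsity prior."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the data-fidelity gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)           # gradient of the data-fidelity term
        z = x - grad / L                   # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # proximal (soft-threshold) step
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m, k = 256, 96, 8                   # signal length, measurements, sparsity level
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
    A = rng.normal(size=(m, n)) / np.sqrt(m)   # toy "undersampled" measurement operator
    y = A @ x_true + 0.01 * rng.normal(size=m)
    x_hat = ista(A, y, lam=0.02)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```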

Sandeep D, Baranitharan K, Padmavathi A, Guganathan L

PubMed · Oct 3, 2025
Manual segmentation of retinal blood vessels in fundus images has been widely used for detecting vascular occlusion, diabetic retinopathy, and other retinal conditions. However, existing automated methods face challenges in accurately segmenting fine vessels and optimizing loss functions effectively. This study aims to develop an integrated framework that enhances vessel segmentation accuracy and robustness for clinical applications. The proposed pipeline integrates multiple advanced techniques to address the limitations of current approaches. In preprocessing, Quasi-Cross Bilateral Filtering (QCBF) is applied to reduce noise and enhance vessel visibility. Feature extraction is performed using a Directed Acyclic Graph Neural Network with VGG16 (DAGNN-VGG16) for hierarchical and topologically-aware representation learning. Segmentation is achieved using a Dense Generative Adversarial Network with Quick Attention Network (Dense GAN-QAN), which balances loss and emphasizes critical vessel features. To further optimize training convergence, the Swarm Bipolar Algorithm (SBA) is employed for loss minimization. The method was evaluated on three benchmark retinal vessel segmentation datasets-CHASE-DB1, STARE, and DRIVE-using sixfold cross-validation. The proposed approach achieved consistently high performance with mean results of accuracy: 99.87%, F1-score: 99.82%, precision: 99.84%, recall: 99.78%, and specificity: 99.87% across all datasets, demonstrating strong generalization and robustness. The integrated QCBF-DAGNN-VGG16-Dense GAN-QAN-SBA framework advances the state-of-the-art in retinal vessel segmentation by effectively handling fine vessel structures and ensuring optimized training. Its consistently high performance across multiple datasets highlights its potential for reliable clinical deployment in retinal disease detection and diagnosis.
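
Quasi-Cross Bilateral Filtering is not a standard library routine; as a rough stand-in for the preprocessing step's role, the sketch below denoises the green channel of a fundus image with a conventional bilateral filter and boosts vessel contrast with CLAHE. This approximates the purpose of QCBF, not the authors' algorithm, and the parameters are assumptions.

```python
import cv2
import numpy as np

def preprocess_fundus(image_bgr: np.ndarray) -> np.ndarray:
    """Simple fundus preprocessing stand-in: bilateral-filter denoising plus
    CLAHE contrast enhancement on the green channel, where vessels are most visible."""
    green = image_bgr[:, :, 1]
    denoised = cv2.bilateralFilter(green, d=9, sigmaColor=75, sigmaSpace=75)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(denoised)

if __name__ == "__main__":
    # Placeholder input; replace with e.g. cv2.imread("drive_01.png") for a DRIVE image.
    dummy = (np.random.rand(584, 565, 3) * 255).astype(np.uint8)
    enhanced = preprocess_fundus(dummy)
    print(enhanced.shape, enhanced.dtype)
```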

Moda NA, Suleiman ME, Hooshmand S, Reed WM

PubMed · Oct 3, 2025
Breast cancer is the most commonly diagnosed cancer among women worldwide, and concerns regarding radiation exposure from mammography screening remain a potential barrier to participation. This scoping review explores existing models estimating long-term radiation risks associated with repeated mammography screening. A structured search across five databases (Medline, Embase, Scopus, Web of Science and CINAHL) along with manual searching identified 24 studies published between 2014 and 2024. These were categorised into three themes: (1) models estimating dose-risk profiles, (2) factors affecting radiation dose and (3) the use of artificial intelligence (AI) in dose estimation and mammographic breast density (MBD) estimation. Studies showed that breast density, compressed breast thickness (CBT) and technical imaging parameters significantly influence mean glandular dose (MGD). Modelling studies highlighted the low risk of radiation-induced cancer, inconsistencies in protocols and vendor-specific limitations. AI applications are emerging as promising tools for improving individualised dose-risk assessments but require further development for compatibility across different imaging platforms.

Sundari MS, Sailaja NV, Swapna D, Vikkurty S, Jadala VC, Durga K, Thottempudi P

PubMed · Oct 3, 2025
Polycystic Ovarian Disease (PCOD), also known as Polycystic Ovary Syndrome (PCOS), is a prevalent hormonal and metabolic condition primarily affecting women of reproductive age worldwide. It is typically marked by disrupted ovulation, an increase in circulating androgen hormones, and the presence of multiple small ovarian follicles, which collectively result in menstrual irregularities, infertility challenges, and associated metabolic disturbances. This study presents an automated diagnostic framework for PCOD detection from transvaginal ultrasound images, leveraging an Enhanced [Formula: see text] convolutional neural network architecture. The model incorporates attention mechanisms, batch normalization, and dropout regularization to improve feature learning and generalization. Bayesian Optimization was employed to fine-tune critical hyperparameters, including learning rate, batch size, and dropout rate, ensuring optimal model performance. The proposed system was trained and validated on a curated ovarian ultrasound image dataset, applying data augmentation and SMOTE techniques to address class imbalance. Experimental evaluation demonstrated that the Enhanced [Formula: see text] model achieved a classification accuracy of 94.8%, sensitivity of 93.2%, specificity of 95.5%, precision of 94.0%, and an F1-score of 93.6% on the independent test set. Interpretability was enhanced through Grad-CAM visualization, which effectively localized diagnostically significant regions within the ultrasound images, corroborating clinical findings. These results highlight the potential of the proposed deep learning-based framework to serve as a reliable, scalable, and interpretable decision-support tool for PCOD diagnosis, offering improved diagnostic consistency and reducing operator dependency in clinical workflows.
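
The abstract does not specify the Bayesian optimization implementation; as a hedged sketch, the example below tunes learning rate, batch size, and dropout with Optuna's default TPE sampler against a placeholder objective. The search ranges and the surrogate validation function are assumptions standing in for the real training loop.

```python
import optuna

def train_and_validate(learning_rate: float, batch_size: int, dropout: float) -> float:
    """Placeholder for training the enhanced CNN and returning validation accuracy.
    In the real pipeline this would train on the ultrasound dataset (with
    augmentation and SMOTE) and evaluate on a held-out split."""
    # Toy surrogate so the sketch runs end to end: peak near lr=1e-3, dropout=0.3.
    return 1.0 - abs(dropout - 0.3) - abs(learning_rate - 1e-3) * 100 - abs(batch_size - 32) / 256

def objective(trial: optuna.Trial) -> float:
    lr = trial.suggest_float("learning_rate", 1e-5, 1e-2, log=True)
    batch_size = trial.suggest_categorical("batch_size", [16, 32, 64])
    dropout = trial.suggest_float("dropout", 0.1, 0.5)
    return train_and_validate(lr, batch_size, dropout)

study = optuna.create_study(direction="maximize")   # maximize validation accuracy
study.optimize(objective, n_trials=50)
print("best hyperparameters:", study.best_params)
```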

Li Y, Xu W, Zhao C, Zhang J, Zhang Z, Shen P, Wang X, Yang G, Du J, Zhang H, Tan Y

PubMed · Oct 3, 2025
Overall survival varies among patients with isocitrate dehydrogenase wild-type glioblastoma (IDH-wt GBM) as defined by the 2021 World Health Organization classification. The aim of this study was to develop a combined model for stratifying survival risk in IDH-wt GBM and to explore its biological foundation. A total of 369 IDH-wt GBM patients were retrospectively collected: 273 patients from three local hospitals (training set: n = 192, testing set: n = 81) and 96 patients from the TCIA database (validation set). Radiomics features were extracted from tumor and peritumoral edema on preoperative CE-T1WI and T2-FLAIR. Univariate and least absolute shrinkage and selection operator Cox regression analyses selected significant radiomics features to construct the radiomics model, while univariable and multivariable analyses identified clinical risk factors for the clinical model. High-risk and low-risk patients defined by the radiomics and clinical models underwent subgroup analysis. The combined model was constructed using a Random Survival Forest. Additionally, differentially expressed genes between the combined high-risk and low-risk groups were identified, with enrichment analyses exploring their biological mechanisms. The radiomics model categorized patients into high-risk and low-risk groups with superior performance (C-index: 0.762/0.715/0.690 for training/testing/validation sets) compared to the clinical model (C-index: 0.700/0.656/0.643). The combined model demonstrated the highest value for survival risk stratification (C-index: 0.788/0.725/0.709 for training/testing/validation sets), an approximately 12.57% improvement in stratification ability over the clinical model. Activation of gamma-aminobutyric acid (GABA) receptor-related pathways was closely associated with malignant progression and prognosis of IDH-wt GBM and may be a biological feature of combined high-risk IDH-wt GBM. The radiomics model might serve as a new prognostic biomarker for IDH-wt GBM.
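
As a hedged sketch of the survival-modeling step, the example below fits a Random Survival Forest with scikit-survival on synthetic feature, time, and event arrays and reports the C-index. The features, follow-up times, and hyperparameters are placeholders, not the study's combined model.

```python
import numpy as np
from sksurv.ensemble import RandomSurvivalForest
from sksurv.metrics import concordance_index_censored
from sksurv.util import Surv

rng = np.random.default_rng(0)
n = 273
X = rng.normal(size=(n, 12))                 # placeholder: selected radiomics + clinical features
time = rng.exponential(scale=18.0, size=n)   # placeholder survival times (months)
event = rng.random(n) < 0.7                  # placeholder event indicator (death observed)
y = Surv.from_arrays(event=event, time=time)

split = int(0.7 * n)                         # simple 70/30 train/test split
rsf = RandomSurvivalForest(n_estimators=300, min_samples_leaf=10, random_state=0)
rsf.fit(X[:split], y[:split])

risk = rsf.predict(X[split:])                # higher score = higher predicted risk
cindex = concordance_index_censored(event[split:], time[split:], risk)[0]
print(f"test C-index: {cindex:.3f}")
```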

Goya-Maldonado R, Erwin-Grabner T, Zeng LL, Ching CRK, Aleman A, Amod AR, Basgoze Z, Benedetti F, Besteher B, Brosch K, Bülow R, Colle R, Connolly CG, Corruble E, Couvy-Duchesne B, Cullen K, Dannlowski U, Davey CG, Dols A, Ernsting J, Evans JW, Fisch L, Fuentes-Claramonte P, Gonul AS, Gotlib IH, Grabe HJ, Groenewold NA, Grotegerd D, Hahn T, Hamilton JP, Han LKM, Harrison BJ, Ho TC, Jahanshad N, Jamieson AJ, Karuk A, Kircher T, Klimes-Dougan B, Koopowitz SM, Lancaster T, Leenings R, Li M, Linden DEJ, MacMaster FP, Mehler DMA, Meinert S, Melloni E, Mueller BA, Mwangi B, Nenadić I, Ojha A, Okamoto Y, Oudega ML, Penninx BWJH, Poletti S, Pomarol-Clotet E, Portella MJ, Radua J, Rodríguez-Cano E, Sacchet MD, Salvador R, Schrantee A, Sim K, Soares JC, Solanes A, Stein DJ, Stein F, Stolicyn A, Thomopoulos SI, Toenders YJ, Uyar-Demir A, Vieta E, Vives-Gilabert Y, Völzke H, Walter M, Whalley HC, Whittle S, Winter N, Wittfeld K, Wright MJ, Wu MJ, Yang TT, Zarate C, Veltman DJ, Schmaal L, Thompson PM

PubMed · Oct 3, 2025
Major depressive disorder (MDD) is a complex psychiatric disorder that affects the lives of hundreds of millions of individuals around the globe. Even today, researchers debate whether morphological alterations in the brain are linked to MDD, likely due to the heterogeneity of this disorder. The application of deep learning tools to neuroimaging data, capable of capturing complex non-linear patterns, has the potential to provide diagnostic and predictive biomarkers for MDD. However, previous attempts to distinguish MDD patients from healthy controls (HC) based on segmented cortical features via linear machine learning approaches have reported low accuracies. In this study, we used globally representative data from the ENIGMA-MDD working group containing 7012 participants from 31 sites (N = 2772 MDD and N = 4240 HC), which allows a comprehensive analysis with generalizable results. Based on the hypothesis that integration of vertex-wise cortical features can improve classification performance, we evaluated the classification performance of a DenseNet and a Support Vector Machine (SVM), with the expectation that the former would outperform the latter. As we analyzed a multi-site sample, we additionally applied the ComBat harmonization tool to remove potential nuisance effects of site. We found that both classifiers exhibited close to chance performance (balanced accuracy DenseNet: 51%; SVM: 53%) when estimated on unseen sites. Slightly higher classification performance (balanced accuracy DenseNet: 58%; SVM: 55%) was found when the cross-validation folds contained subjects from all sites, indicating a site effect. In conclusion, the integration of vertex-wise morphometric features and the use of a non-linear classifier did not enable differentiation between MDD and HC. Our results support the notion that MDD classification on this combination of features and classifiers is not feasible. Future studies are needed to determine whether more sophisticated integration of information from other MRI modalities, such as fMRI and DWI, will lead to higher performance in this diagnostic task.
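
To illustrate the stricter "unseen sites" evaluation described above, the sketch below runs a linear SVM with leave-sites-out cross-validation (grouping folds by site) and reports balanced accuracy on synthetic data; the ComBat harmonization step is omitted. All array shapes and labels are placeholders, not ENIGMA-MDD data.

```python
import numpy as np
from sklearn.model_selection import GroupKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import balanced_accuracy_score

rng = np.random.default_rng(0)
n_subjects, n_features, n_sites = 1200, 150, 10
X = rng.normal(size=(n_subjects, n_features))     # placeholder vertex-wise cortical features
y = rng.integers(0, 2, size=n_subjects)           # placeholder MDD (1) vs HC (0) labels
site = rng.integers(0, n_sites, size=n_subjects)  # placeholder acquisition site per subject

# Leave-sites-out cross-validation: each test fold contains only sites unseen in training,
# mimicking the stricter of the two evaluation schemes described in the abstract.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear", class_weight="balanced"))
scores = []
for train_idx, test_idx in GroupKFold(n_splits=5).split(X, y, groups=site):
    clf.fit(X[train_idx], y[train_idx])
    scores.append(balanced_accuracy_score(y[test_idx], clf.predict(X[test_idx])))

print(f"mean balanced accuracy across held-out sites: {np.mean(scores):.2f}")
```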