
Marina Grifell i Plana, Vladyslav Zalevskyi, Léa Schmidt, Yvan Gomez, Thomas Sanchez, Vincent Dunet, Mériam Koob, Vanessa Siffredi, Meritxell Bach Cuadra

arXiv preprint · Aug 28, 2025
Accurate fetal brain segmentation is crucial for extracting biomarkers and assessing neurodevelopment, especially in conditions such as corpus callosum dysgenesis (CCD), which can induce drastic anatomical changes. However, the rarity of CCD severely limits annotated data, hindering the generalization of deep learning models. To address this, we propose a pathology-informed domain randomization strategy that embeds prior knowledge of CCD manifestations into a synthetic data generation pipeline. By simulating diverse brain alterations from healthy data alone, our approach enables robust segmentation without requiring pathological annotations. We validate our method on a cohort comprising 248 healthy fetuses, 26 with CCD, and 47 with other brain pathologies, achieving substantial improvements on CCD cases while maintaining performance on both healthy fetuses and those with other pathologies. From the predicted segmentations, we derive clinically relevant biomarkers, such as corpus callosum length (LCC) and volume, and show their utility in distinguishing CCD subtypes. Our pathology-informed augmentation reduces the LCC estimation error from 1.89 mm to 0.80 mm in healthy cases and from 10.9 mm to 0.7 mm in CCD cases. Beyond these quantitative gains, our approach yields segmentations with improved topological consistency relative to available ground truth, enabling more reliable shape-based analyses. Overall, this work demonstrates that incorporating domain-specific anatomical priors into synthetic data pipelines can effectively mitigate data scarcity and enhance analysis of rare but clinically significant malformations.
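The pathology-informed domain randomization described above can be illustrated with a minimal sketch: a healthy label map is perturbed to mimic partial corpus callosum agenesis, and a SynthSeg-style image is then synthesized from the altered labels. The label id, perturbation ranges, and function names below are hypothetical, not taken from the paper.

```python
import numpy as np

def randomize_ccd(label_map, cc_label=48, rng=None):
    """Simulate corpus-callosum dysgenesis on a healthy label map by removing
    a random posterior fraction of the (hypothetical) corpus callosum label."""
    rng = rng or np.random.default_rng()
    out = label_map.copy()
    cc = np.argwhere(out == cc_label)
    if len(cc) == 0:
        return out
    frac = rng.uniform(0.2, 0.9)                 # how much of the CC to keep
    cutoff = np.quantile(cc[:, 1], frac)         # cut along the antero-posterior axis
    removed = cc[cc[:, 1] > cutoff]
    out[tuple(removed.T)] = 0                    # relabel removed voxels as background
    return out

def synthesize_image(label_map, rng=None):
    """SynthSeg-style synthesis: draw per-label Gaussian intensities."""
    rng = rng or np.random.default_rng()
    image = np.zeros(label_map.shape, dtype=np.float32)
    for lab in np.unique(label_map):
        mu, sigma = rng.uniform(0, 255), rng.uniform(1, 25)
        mask = label_map == lab
        image[mask] = rng.normal(mu, sigma, size=mask.sum())
    return image

# labels_ccd = randomize_ccd(healthy_labels)
# train_image = synthesize_image(labels_ccd)
```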

Zizhao Tang, Changhao Liu, Nuo Tong, Shuiping Gou, Mei Shi

arXiv preprint · Aug 28, 2025
Metastasis remains the major challenge in the clinical management of head and neck squamous cell carcinoma (HNSCC). Reliable pre-treatment prediction of metastatic risk is crucial for optimizing treatment strategies and prognosis. This study develops a deep learning-based multimodal framework to predict metastasis risk in HNSCC patients by integrating computed tomography (CT) images, radiomics, and clinical data. 1497 HNSCC patients were included. Tumor and organ masks were derived from pretreatment CT images. A 3D Swin Transformer extracted deep features from tumor regions. Meanwhile, 1562 radiomics features were obtained using PyRadiomics, followed by correlation filtering and random forest selection, leaving 36 features. Clinical variables including age, sex, smoking, and alcohol status were encoded and fused with imaging-derived features. Multimodal features were fed into a fully connected network to predict metastasis risk. Performance was evaluated using five-fold cross-validation with area under the curve (AUC), accuracy (ACC), sensitivity (SEN), and specificity (SPE). The proposed fusion model outperformed single-modality models. The 3D deep learning module alone achieved an AUC of 0.715, and when combined with radiomics and clinical features, predictive performance improved (AUC = 0.803, ACC = 0.752, SEN = 0.730, SPE = 0.758). Stratified analysis showed generalizability across tumor subtypes. Ablation studies indicated complementary information from different modalities. Evaluation showed the 3D Swin Transformer provided more robust representation learning than conventional networks. This multimodal fusion model demonstrated high accuracy and robustness in predicting metastasis risk in HNSCC, offering a comprehensive representation of tumor biology. The interpretable model has potential as a clinical decision-support tool for personalized treatment planning.
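A minimal sketch of the late-fusion step described above: the deep, radiomics, and clinical feature vectors are concatenated and passed through a fully connected head that outputs a metastasis-risk logit. The deep feature dimension (768) is an assumption; the 36 radiomics and 4 clinical features follow the abstract.

```python
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    """Late-fusion classifier over deep, radiomics, and clinical features."""

    def __init__(self, deep_dim=768, radiomics_dim=36, clinical_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(deep_dim + radiomics_dim + clinical_dim, 256),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(256, 1),   # logit of metastasis risk
        )

    def forward(self, deep_feat, radiomics_feat, clinical_feat):
        x = torch.cat([deep_feat, radiomics_feat, clinical_feat], dim=1)
        return self.net(x)

# Example: a batch of 8 patients with randomly generated feature vectors.
head = FusionHead()
risk_logits = head(torch.randn(8, 768), torch.randn(8, 36), torch.randn(8, 4))
```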

Chen T, Li X

PubMed paper · Aug 28, 2025
Recently, Transformers have been widely used in medical imaging analysis for their competitive potential when given enough data. However, the Transformer conducts attention on a global scale by utilizing self-attention mechanisms across all input patches, thereby requiring substantial computational power and memory, especially when dealing with large 3D images such as MRI volumes. In this study, we proposed the Residual Global Scoring Network (ResGSNet), a novel architecture combining ResNet with a Global Scoring Module (GSM), achieving high computational efficiency while incorporating both local and global features. First, our proposed GSM utilized local attention to conduct information exchange within local brain regions, subsequently assigning global scores to each of these local regions, demonstrating the capability to encapsulate local and global information with reduced computational burden and superior performance compared to existing methods. Second, we utilized Grad-CAM++ and the Attention Map to interpret model predictions, uncovering brain regions related to Alzheimer's Disease (AD) detection. Third, our extensive experiments on the ADNI dataset show that our proposed ResGSNet achieved satisfactory performance with 95.1% accuracy in predicting AD, a 1.3% increase compared to state-of-the-art methods, and 93.4% accuracy for Mild Cognitive Impairment (MCI). Our model for detecting MCI can potentially serve as a screening tool for identifying individuals at high risk of developing AD and allow for early intervention. Furthermore, Grad-CAM++ and the Attention Map not only identified brain regions commonly associated with AD and MCI but also revealed previously undiscovered regions, including the putamen, cerebellar cortex, and caudate nucleus, holding promise for further research into the etiology of AD.
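A rough sketch of the local-attention-plus-global-scoring idea: self-attention is run only within each local region, every region summary then receives a learned global score, and the scored summaries are pooled into one global feature. Layer sizes and the region partition are assumptions, not the authors' exact GSM.

```python
import torch
import torch.nn as nn

class GlobalScoringModule(nn.Module):
    """Local attention within regions followed by learned global scores."""

    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.local_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.score = nn.Linear(dim, 1)    # one global score per region

    def forward(self, regions):
        # regions: (batch, n_regions, tokens_per_region, dim)
        b, r, t, d = regions.shape
        x = regions.reshape(b * r, t, d)
        x, _ = self.local_attn(x, x, x)               # attention stays inside each region
        summaries = x.mean(dim=1).reshape(b, r, d)    # pooled per-region summaries
        weights = torch.softmax(self.score(summaries), dim=1)  # global scores
        return (weights * summaries).sum(dim=1)       # (batch, dim) global feature

gsm = GlobalScoringModule()
feat = gsm(torch.randn(2, 8, 64, 64))   # e.g. 8 brain regions of 64 patch tokens each
```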

Roy M. Gabriel, Mohammadreza Zandehshahvar, Marly van Assen, Nattakorn Kittisut, Kyle Peters, Carlo N. De Cecco, Ali Adibi

arXiv preprint · Aug 28, 2025
To reduce the amount of required labeled data for lung disease severity classification from chest X-rays (CXRs) under class imbalance, this study applied deep active learning with a Bayesian Neural Network (BNN) approximation and weighted loss function. This retrospective study collected 2,319 CXRs from 963 patients (mean age, 59.2 ± 16.6 years; 481 female) at Emory Healthcare affiliated hospitals between January and November 2020. All patients had clinically confirmed COVID-19. Each CXR was independently labeled by 3 to 6 board-certified radiologists as normal, moderate, or severe. A deep neural network with Monte Carlo Dropout was trained using active learning to classify disease severity. Various acquisition functions were used to iteratively select the most informative samples from an unlabeled pool. Performance was evaluated using accuracy, area under the receiver operating characteristic curve (AU ROC), and area under the precision-recall curve (AU PRC). Training time and acquisition time were recorded. Statistical analysis included descriptive metrics and performance comparisons across acquisition strategies. Entropy Sampling achieved 93.7% accuracy (AU ROC, 0.91) in binary classification (normal vs. diseased) using 15.4% of the training data. In the multi-class setting, Mean STD sampling achieved 70.3% accuracy (AU ROC, 0.86) using 23.1% of the labeled data. These methods outperformed more complex and computationally expensive acquisition functions and significantly reduced labeling needs. Deep active learning with BNN approximation and weighted loss effectively reduces labeled data requirements while addressing class imbalance, maintaining or exceeding diagnostic performance.
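A minimal sketch of the entropy-sampling acquisition step with Monte Carlo Dropout: dropout is kept active at inference, the softmax is averaged over several stochastic passes, and the pool samples with the highest predictive entropy are selected for labeling. The function name and batch interface are assumptions.

```python
import numpy as np
import torch

def entropy_acquire(model, pool_loader, n_query=50, n_mc=20, device="cpu"):
    """Return indices of the most uncertain pool samples under MC Dropout."""
    model.train()                                   # keep dropout layers stochastic
    entropies = []
    with torch.no_grad():
        for x, _ in pool_loader:
            x = x.to(device)
            probs = torch.stack(
                [torch.softmax(model(x), dim=1) for _ in range(n_mc)]
            ).mean(dim=0)                           # MC-averaged class probabilities
            ent = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
            entropies.append(ent.cpu())
    entropies = torch.cat(entropies).numpy()
    return np.argsort(-entropies)[:n_query]         # highest-entropy samples first
```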

Yidong Zhao, Peter Kellman, Hui Xue, Tongyun Yang, Yi Zhang, Yuchi Han, Orlando Simonetti, Qian Tao

arXiv preprint · Aug 28, 2025
Pretrained segmentation models for cardiac magnetic resonance imaging (MRI) struggle to generalize across different imaging sequences due to significant variations in image contrast. These variations arise from changes in imaging protocols, yet the same fundamental spin properties, including proton density, T1, and T2 values, govern all acquired images. With this core principle, we introduce Reverse Imaging, a novel physics-driven method for cardiac MRI data augmentation and domain adaptation to fundamentally solve the generalization problem. Our method reversely infers the underlying spin properties from observed cardiac MRI images, by solving ill-posed nonlinear inverse problems regularized by the prior distribution of spin properties. We acquire this "spin prior" by learning a generative diffusion model from the multiparametric SAturation-recovery single-SHot acquisition sequence (mSASHA) dataset, which offers joint cardiac T1 and T2 maps. Our method enables approximate but meaningful spin-property estimates from MR images, which provide an interpretable "latent variable" that leads to highly flexible image synthesis of arbitrary novel sequences. We show that Reverse Imaging enables highly accurate segmentation across vastly different image contrasts and imaging protocols, realizing wide-spectrum generalization of cardiac MRI segmentation.
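A hedged sketch of the regularized inverse problem at the core of this idea: per-voxel spin properties are optimized so that a forward signal model reproduces the observed image while a prior keeps the estimates plausible. The toy signal model and the prior placeholder below are illustrative assumptions, not the authors' mSASHA-based diffusion-prior implementation.

```python
import torch

def reverse_imaging(observed, signal_model, prior_logp, n_iter=200, lam=0.1, lr=1e-2):
    """Recover per-voxel theta = (PD, T1, T2) from one observed image by
    minimizing data fidelity minus a (learned) prior log-probability."""
    theta = torch.ones(observed.shape + (3,), requires_grad=True)
    opt = torch.optim.Adam([theta], lr=lr)
    for _ in range(n_iter):
        opt.zero_grad()
        fidelity = ((signal_model(theta) - observed) ** 2).mean()
        loss = fidelity - lam * prior_logp(theta)    # regularized inverse problem
        loss.backward()
        opt.step()
    return theta.detach()

def toy_signal(theta, tr=0.005, te=0.002):
    """Toy gradient-echo-like forward model (illustrative only)."""
    pd, t1, t2 = theta.unbind(-1)
    return pd * (1 - torch.exp(-tr / t1.abs().clamp_min(1e-3))) \
              * torch.exp(-te / t2.abs().clamp_min(1e-3))

# theta_hat = reverse_imaging(observed_image, toy_signal,
#                             prior_logp=lambda t: torch.tensor(0.0))  # flat prior stand-in
```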

Kumar R, Zeng S, Kumar J, Mao X

PubMed paper · Aug 28, 2025
Positron Emission Tomography-Computed Tomography (PET-CT) evaluation is critical for liver lesion diagnosis. However, data scarcity, privacy concerns, and cross-institutional imaging heterogeneity impede accurate deep learning model deployment. We propose a Federated Transfer Learning (FTL) framework that integrates federated learning's privacy-preserving collaboration with transfer learning's pre-trained model adaptation, enhancing liver lesion segmentation in PET-CT imaging. By leveraging a Feature Co-learning Block (FCB) and privacy-enhancing technologies, namely Differential Privacy (DP) and Homomorphic Encryption (HE), our approach ensures robust segmentation without sharing sensitive patient data. Our contributions are: (1) a privacy-preserving FTL framework combining federated learning and adaptive transfer learning; (2) a multi-modal FCB for improved PET-CT feature integration; and (3) an extensive evaluation across diverse institutions with privacy-enhancing technologies such as DP and HE. Experiments on simulated multi-institutional PET-CT datasets demonstrate superior performance compared to baselines, with robust privacy guarantees. The FTL framework reduces data requirements and enhances generalizability, advancing liver lesion diagnostics.
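A minimal FedAvg-style sketch of the federated transfer learning loop: each institution fine-tunes a copy of a shared pretrained model on its own data, and only the weights are averaged on the server. The privacy layers (DP noise, homomorphic encryption) and the Feature Co-learning Block are omitted; names, losses, and hyperparameters are assumptions.

```python
import copy
import torch

def federated_round(global_model, site_loaders, local_epochs=1, lr=1e-3):
    """One federated round: local fine-tuning per site, then weight averaging."""
    site_states = []
    for loader in site_loaders:
        local = copy.deepcopy(global_model)
        opt = torch.optim.SGD(local.parameters(), lr=lr)
        loss_fn = torch.nn.BCEWithLogitsLoss()
        local.train()
        for _ in range(local_epochs):
            for x, y in loader:                  # local data never leaves the site
                opt.zero_grad()
                loss = loss_fn(local(x), y)
                loss.backward()
                opt.step()
        site_states.append(local.state_dict())
    # Server-side averaging with equal site weights, preserving original dtypes.
    avg = {k: torch.stack([s[k].float() for s in site_states]).mean(0)
              .to(site_states[0][k].dtype)
           for k in site_states[0]}
    global_model.load_state_dict(avg)
    return global_model
```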

AboArab MA, Anić M, Potsika VT, Saeed H, Zulfiqar M, Skalski A, Stretti E, Kostopoulos V, Psarras S, Pennati G, Berti F, Spahić L, Benolić L, Filipović N, Fotiadis DI

PubMed paper · Aug 28, 2025
Peripheral artery disease (PAD) is a progressive vascular condition affecting >237 million individuals worldwide. Accurate diagnosis and patient-specific treatment planning are critical but are often hindered by limited access to advanced imaging tools and real-time analytical support. This study presents DECODE, an open-source, cloud-based platform that integrates artificial intelligence, interactive 3D visualization, and computational modeling to improve the noninvasive management of PAD. The DECODE platform was designed as a modular backend (Django) and frontend (React) architecture that combines deep learning-based segmentation, real-time volume rendering, and finite element simulations. Peripheral artery and intima-media thickness segmentation were implemented via convolutional neural networks, including extended U-Net and nnU-Net architectures. Centreline extraction algorithms provide quantitative vascular geometry analysis. Balloon angioplasty simulations were conducted via nonlinear finite element models calibrated with experimental data. Usability was evaluated via the System Usability Scale (SUS), and user acceptance was assessed via the Technology Acceptance Model (TAM). Peripheral artery segmentation achieved an average Dice coefficient of 0.91 and a 95th percentile Hausdorff distance of 1.0 mm across 22 computed tomography datasets. Intima-media segmentation evaluated on 300 intravascular optical coherence tomography images demonstrated Dice scores of 0.992 for the lumen boundaries and 0.980 for the intima boundaries, with corresponding Hausdorff distances of 0.056 mm and 0.101 mm, respectively. Finite element simulations successfully reproduced the mechanical interactions between balloon and artery models in both idealized and subject-specific geometries, identifying pressure and stress distributions relevant to treatment outcomes. The platform received an average SUS score of 87.5, indicating excellent usability, and an overall TAM score of 4.21 out of 5, reflecting high user acceptance. DECODE provides an automated, cloud-integrated solution for PAD diagnosis and intervention planning, combining deep learning, computational modeling, and high-fidelity visualization. The platform enables precise vascular analysis, real-time procedural simulation, and interactive clinical decision support. By streamlining image processing, enhancing segmentation accuracy, and enabling in-silico trials, DECODE offers a scalable infrastructure for personalized vascular care and sets a new benchmark in digital health technologies for PAD.
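For reference, the two segmentation metrics reported above can be computed as in this generic sketch (not DECODE code); it assumes binary masks and a known voxel spacing.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice_and_hd95(pred, gt, spacing=(1.0, 1.0, 1.0)):
    """Dice coefficient and 95th-percentile Hausdorff distance for two masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    dice = 2.0 * (pred & gt).sum() / (pred.sum() + gt.sum())
    # Boundary voxels of each mask.
    pred_surf = pred ^ binary_erosion(pred)
    gt_surf = gt ^ binary_erosion(gt)
    # Distance from every voxel to the nearest surface voxel of the other mask.
    d_to_gt = distance_transform_edt(~gt_surf, sampling=spacing)
    d_to_pred = distance_transform_edt(~pred_surf, sampling=spacing)
    surface_dists = np.concatenate([d_to_gt[pred_surf], d_to_pred[gt_surf]])
    return float(dice), float(np.percentile(surface_dists, 95))
```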

Bagheri M, Velasco-Annis C, Wang J, Faghihpirayesh R, Khan S, Calixto C, Jaimes C, Vasung L, Ouaalam A, Afacan O, Warfield SK, Rollins CK, Gholipour A

PubMed paper · Aug 28, 2025
Accurate characterization of in-utero brain development is essential for understanding typical and atypical neurodevelopment. Building upon previous efforts to construct spatiotemporal fetal brain MRI atlases, we present the CRL-2025 fetal brain atlas, which is a spatiotemporal (4D) atlas of the developing fetal brain between 21 and 37 gestational weeks. This atlas is constructed from carefully processed MRI scans of 160 fetuses with typically-developing brains using a diffeomorphic deformable registration framework integrated with kernel regression on age. CRL-2025 uniquely includes detailed tissue segmentations, transient white matter compartments, and parcellation into 126 anatomical regions. This atlas offers significantly enhanced anatomical details over the CRL-2017 atlas, and is released along with the CRL diffusion MRI atlas with its newly created tissue segmentation and labels as well as deep learning-based multiclass segmentation models for fine-grained fetal brain MRI segmentation. The CRL-2025 atlas and its associated tools provide a robust and scalable platform for fetal brain MRI segmentation, groupwise analysis, and early neurodevelopmental research, and these materials are publicly released to support the broader research community.
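A small sketch of the kernel regression on gestational age used to build week-specific templates: each spatially normalized scan is weighted by a Gaussian kernel centered on the target age, and the weighted average forms that week's template. The kernel width and function names are assumptions, and the diffeomorphic registration step is assumed to have been performed already.

```python
import numpy as np

def age_conditioned_template(images, ages, target_age, sigma_weeks=1.0):
    """Weighted average of co-registered images with Gaussian weights on age."""
    ages = np.asarray(ages, dtype=float)
    weights = np.exp(-0.5 * ((ages - target_age) / sigma_weeks) ** 2)
    weights /= weights.sum()
    template = np.zeros_like(images[0], dtype=float)
    for img, w in zip(images, weights):
        template += w * img
    return template

# e.g. templates = {wk: age_conditioned_template(imgs, ages, wk) for wk in range(21, 38)}
```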

Musinguzi, D., Katumba, A., Kawooya, M. G., Malumba, R., Nakatumba-Nabende, J., Achuka, S. A., Adewole, M., Anazodo, U.

medRxiv preprint · Aug 28, 2025
Introduction: Breast cancer is one of the most common cancers globally. Its incidence in Africa has increased sharply, surpassing that in high-income countries. Mortality remains high due to late-stage diagnosis, when treatment is less effective. We propose the first open, longitudinal breast imaging dataset from Africa, comprising point-of-care ultrasound scans, mammograms, biopsy pathology, and clinical profiles, to support early detection using machine learning. Methods and Analysis: We will engage women through community outreach and train them in self-examination. Those with suspected lesions, particularly with a family history of breast cancer, will be invited to participate. A total of 100 women will undergo baseline assessment at medical centers, including clinical exams, blood tests, and mammograms. Follow-up point-of-care ultrasound scans and clinical data will be collected at 3 and 6 months, with final assessments at 9 months including mammograms. Ethics and Dissemination: The study has been approved by the Institutional Review Boards at ECUREI and the MAI Lab. Findings will be disseminated through peer-reviewed journals and scientific conferences.

Ngo TH, Tran MH, Nguyen HB, Hoang VN, Le TL, Vu H, Tran TK, Nguyen HK, Can VM, Nguyen TB, Tran TH

PubMed paper · Aug 27, 2025
Traumatic brain injury (TBI) is one of the most prevalent health conditions, with severity assessment serving as an initial step for management, prognosis, and targeted therapy. Existing studies on automated outcome prediction using machine learning (ML) often overlook the importance of TBI features in decision-making and the challenges posed by limited and imbalanced training data. Furthermore, many attempts have focused on quantitatively evaluating ML algorithms without explaining the decisions, making the outcomes difficult for less-experienced doctors to interpret and apply. This study presents a novel supportive tool, named E-TBI (explainable outcome prediction after TBI), designed with a user-friendly web-based interface to assist doctors in outcome prediction after TBI using machine learning. The tool is developed with the capability to visualize rules applied in the decision-making process. At the tool's core is a feature selection and classification module that receives multimodal data from TBI patients (demographic data, clinical data, laboratory test results, and CT findings). It then infers one of four TBI severity levels. This research investigates various machine learning models and feature selection techniques, ultimately identifying the optimal combination of gradient boosting machine and random forest for the task, which we refer to as GBMRF. This method enabled us to identify a small set of essential features, reducing patient testing costs by 35%, while achieving the highest accuracy rates of 88.82% and 89.78% on two datasets (a public TBI dataset and our self-collected dataset, TBI_MH103). Classification modules are available at https://github.com/auverngo110/Traumatic_Brain_Injury_103.
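A hedged sketch of the gradient-boosting-plus-random-forest combination named above, using scikit-learn: a random forest's feature importances select a small feature subset, and a gradient boosting classifier is trained on it to predict one of the four severity levels. This is not the authors' exact GBMRF pipeline; the number of retained features is an assumption.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.pipeline import make_pipeline

# Random-forest importances pick a small, low-cost feature subset; a gradient
# boosting classifier then predicts TBI severity from the selected features.
gbmrf = make_pipeline(
    SelectFromModel(
        RandomForestClassifier(n_estimators=300, random_state=0),
        threshold=-np.inf,   # rank purely by importance ...
        max_features=15,     # ... and keep only the top features (assumed count)
    ),
    GradientBoostingClassifier(random_state=0),
)
# gbmrf.fit(X_train, y_train); gbmrf.predict(X_new)
```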