Page 45 of 3533529 results

SPARSE Data, Rich Results: Few-Shot Semi-Supervised Learning via Class-Conditioned Image Translation

Guido Manni, Clemente Lauretti, Loredana Zollo, Paolo Soda

arXiv preprint · Aug 8 2025
Deep learning has revolutionized medical imaging, but its effectiveness is severely limited by insufficient labeled training data. This paper introduces a novel GAN-based semi-supervised learning framework specifically designed for low labeled-data regimes, evaluated across settings with 5 to 50 labeled samples per class. Our approach integrates three specialized neural networks -- a generator for class-conditioned image translation, a discriminator for authenticity assessment and classification, and a dedicated classifier -- within a three-phase training framework. The method alternates between supervised training on limited labeled data and unsupervised learning that leverages abundant unlabeled images through image-to-image translation rather than generation from noise. We employ ensemble-based pseudo-labeling that combines confidence-weighted predictions from the discriminator and classifier with temporal consistency through exponential moving averaging, enabling reliable label estimation for unlabeled data. Comprehensive evaluation across eleven MedMNIST datasets demonstrates that our approach achieves statistically significant improvements over six state-of-the-art GAN-based semi-supervised methods, with particularly strong performance in the extreme 5-shot setting where the scarcity of labeled data is most challenging. The framework maintains its superiority across all evaluated settings (5, 10, 20, and 50 shots per class). Our approach offers a practical solution for medical imaging applications where annotation costs are prohibitive, enabling robust classification performance even with minimal labeled data. Code is available at https://github.com/GuidoManni/SPARSE.
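The ensemble pseudo-labeling step described above can be sketched in a few lines; the confidence weighting (max-probability weights), acceptance threshold, and EMA coefficient below are illustrative assumptions, not the paper's exact choices:

```python
import numpy as np

def ensemble_pseudo_labels(p_disc, p_clf, ema_probs, alpha=0.9, threshold=0.8):
    """Combine discriminator and classifier predictions, weighting each
    model by its own confidence, then smooth over epochs with an
    exponential moving average. Samples whose smoothed confidence clears
    the threshold get a pseudo-label; the rest get -1 (rejected)."""
    w_disc = p_disc.max(axis=1, keepdims=True)   # per-sample confidence
    w_clf = p_clf.max(axis=1, keepdims=True)
    combined = (w_disc * p_disc + w_clf * p_clf) / (w_disc + w_clf)
    ema_probs = alpha * ema_probs + (1 - alpha) * combined  # temporal consistency
    conf = ema_probs.max(axis=1)
    labels = np.where(conf >= threshold, ema_probs.argmax(axis=1), -1)
    return ema_probs, labels

# Two unlabeled samples, three classes: one confident, one ambiguous.
p_disc = np.array([[0.90, 0.05, 0.05], [0.40, 0.35, 0.25]])
p_clf = np.array([[0.85, 0.10, 0.05], [0.30, 0.40, 0.30]])
ema = np.full((2, 3), 1 / 3)          # uninformative starting estimate
for _ in range(30):                    # repeated epochs let the EMA settle
    ema, labels = ensemble_pseudo_labels(p_disc, p_clf, ema)
```

The confident sample receives a pseudo-label while the ambiguous one stays rejected, which is the behavior that keeps unreliable labels out of the supervised phase.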

Explainable Cryobiopsy AI Model, CRAI, to Predict Disease Progression for Transbronchial Lung Cryobiopsies with Interstitial Pneumonia

Uegami, W., Okoshi, E. N., Lami, K., Nei, Y., Ozasa, M., Kataoka, K., Kitamura, Y., Kohashi, Y., Cooper, L. A. D., Sakanashi, H., Saito, Y., Kondoh, Y., the study group on CRYOSOLUTION, Fukuoka, J.

medRxiv preprint · Aug 8 2025
Background: Interstitial lung disease (ILD) encompasses diverse pulmonary disorders with varied prognoses. Current pathological diagnoses suffer from inter-observer variability, necessitating more standardized approaches. We developed CRAI, an explainable ensemble artificial intelligence model that analyzes transbronchial lung cryobiopsy (TBLC) specimens and predicts patient outcomes. Methods: CRAI comprises seven modules for detecting histological features, generating 19 pathologically significant findings. A downstream XGBoost classifier was developed to predict disease progression from these findings. The model's performance was evaluated using respiratory function changes and survival analysis in cross-validation and external test cohorts. Findings: In internal cross-validation (135 cases), the model predicted 105 cases without disease progression and 30 with disease progression. The annual Δ%FVC was -1.293 in the non-progressive group versus -5.198 in the progressive group, outperforming most pathologists' diagnoses. In the external test cohort (48 cases), the model predicted 38 non-progressive and 10 progressive cases. Survival analysis demonstrated significantly shorter survival in the progressive group (p = 0.034). Interpretation: CRAI provides a comprehensive, interpretable approach to analyzing TBLC specimens, offering potential for standardizing ILD diagnosis and predicting disease progression. The model could facilitate early identification of progressive cases and guide personalized therapeutic interventions. Funding: New Energy and Industrial Technology Development Organization (NEDO) and the Japanese Ministry of Health, Labour and Welfare.
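The downstream stage (19 histological findings in, progression risk out) can be sketched on synthetic data; logistic regression here is a simple stand-in for the paper's XGBoost classifier, and the finding weights driving the labels are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: 19 histological-finding scores per case, with
# progression driven by three of them (the real findings and their
# weights are not given in the abstract).
X = rng.random((200, 19))
y = (X[:, 0] + X[:, 1] - X[:, 2] > 0.5).astype(float)

# Logistic regression fit by gradient descent, standing in for the
# paper's XGBoost classifier (same role: findings -> progression risk).
w, b = np.zeros(19), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # predicted progression probability
    w -= 0.5 * X.T @ (p - y) / len(y)
    b -= 0.5 * (p - y).mean()

p = 1 / (1 + np.exp(-(X @ w + b)))
train_acc = ((p > 0.5) == y).mean()
```

The two-stage design matters: the classifier sees interpretable findings rather than raw pixels, which is what makes the model's decisions explainable.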

Advanced Deep Learning Techniques for Accurate Lung Cancer Detection and Classification

Mobarak Abumohsen, Enrique Costa-Montenegro, Silvia García-Méndez, Amani Yousef Owda, Majdi Owda

arXiv preprint · Aug 8 2025
Lung cancer (LC) ranks among the most frequently diagnosed cancers and is one of the most common causes of death for men and women worldwide. Computed Tomography (CT) is the preferred diagnostic imaging method because of its low cost and fast processing times. Many researchers have proposed ways of identifying lung cancer from CT images; however, such techniques suffer from significant false positives, leading to low accuracy, largely because they are trained on small, imbalanced datasets. This paper introduces an approach for LC detection and classification from CT images based on the DenseNet201 model. Our approach combines several techniques, including Focal Loss, data augmentation, and regularization, to overcome the class-imbalance and overfitting challenges. The findings show the effectiveness of the proposal, attaining a promising accuracy of 98.95%.
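Focal loss, the paper's main remedy for class imbalance, down-weights well-classified examples so training concentrates on hard ones. A minimal NumPy sketch of the standard binary formulation (the γ and α values are conventional defaults, not necessarily the paper's settings):

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss: the (1 - p_t)^gamma factor shrinks the loss of
    easy examples, focusing training on hard, misclassified ones."""
    p = np.clip(p, 1e-7, 1 - 1e-7)
    pt = np.where(y == 1, p, 1 - p)           # probability of the true class
    at = np.where(y == 1, alpha, 1 - alpha)   # class-balancing weight
    return float(-(at * (1 - pt) ** gamma * np.log(pt)).mean())

y = np.array([1, 1, 0, 0])
easy = np.array([0.95, 0.90, 0.10, 0.05])   # confident and correct
hard = np.array([0.55, 0.40, 0.60, 0.45])   # uncertain or wrong
loss_easy = focal_loss(easy, y)
loss_hard = focal_loss(hard, y)
```

With γ = 2, the confident batch contributes orders of magnitude less loss than the uncertain one, which is exactly the rebalancing effect the paper relies on.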

Automated coronary artery segmentation / tissue characterization and detection of lipid-rich plaque: An integrated backscatter intravascular ultrasound study.

Masuda Y, Takeshita R, Tsujimoto A, Sahashi Y, Watanabe T, Fukuoka D, Hara T, Kanamori H, Okura H

PubMed · Aug 8 2025
Intravascular ultrasound (IVUS)-based tissue characterization has been used to detect vulnerable or lipid-rich plaque (LRP). Recently, advances in artificial intelligence (AI) have enabled automated coronary arterial plaque segmentation and tissue characterization. The purpose of this study was to evaluate the feasibility and diagnostic accuracy of a deep learning model for plaque segmentation, tissue characterization, and identification of LRP. A total of 1,098 IVUS images from 67 patients who underwent IVUS-guided percutaneous coronary intervention were selected for the training group, while 1,100 IVUS images from 100 vessels (88 patients) were used for the validation group. A 7-layer U-Net++ was applied for automated coronary artery segmentation and tissue characterization. Segmentation and quantification of the external elastic membrane (EEM), lumen, and guidewire artifact were performed and compared with manual measurements. Plaque tissue characterization was conducted using integrated backscatter (IB)-IVUS as the gold standard, with LRP defined as a %lipid area of ≥65%. The deep learning model accurately segmented the EEM and lumen. AI-predicted %lipid area (R = 0.90, P < 0.001), %fibrosis area (R = 0.89, P < 0.001), %dense fibrosis area (R = 0.81, P < 0.001), and %calcification area (R = 0.89, P < 0.001) showed strong correlations with IB-IVUS measurements. The model predicted LRP with a sensitivity of 62%, specificity of 94%, positive predictive value of 69%, negative predictive value of 92%, and an area under the receiver operating characteristic curve of 0.919 (95% CI: 0.902-0.934). The deep learning model demonstrated accurate automatic segmentation and tissue characterization of human coronary arteries, showing promise for identifying LRP.
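The reported diagnostic measures follow from standard confusion-matrix formulas; a small sketch with illustrative counts (the abstract reports the rates but not the underlying per-image confusion matrix, so the counts below are invented to roughly match):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard measures used to report LRP detection performance."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),   # positive predictive value
        "npv": tn / (tn + fn),   # negative predictive value
    }

# Illustrative counts only, chosen to roughly reproduce the reported
# 62% sensitivity and 94% specificity.
m = diagnostic_metrics(tp=62, fp=28, tn=438, fn=38)
```

Note how a high NPV can coexist with modest sensitivity when LRP is the minority class: most negatives are genuinely negative, so a negative call is trustworthy even though some LRPs are missed.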

Three-dimensional pulp chamber volume quantification in first molars using CBCT: Implications for machine learning-assisted age estimation

Ding, Y., Zhong, T., He, Y., Wang, W., Zhang, S., Zhang, X., Shi, W., Jin, B.

medRxiv preprint · Aug 8 2025
Accurate adult age estimation is a critical component of forensic individual identification. However, traditional methods relying on skeletal developmental characteristics are susceptible to preservation status and developmental variation. Teeth, owing to their exceptional taphonomic resistance and minimal postmortem alteration, are among the best biological samples available. Utilizing the high-resolution capabilities of Cone Beam Computed Tomography (CBCT), this study retrospectively analyzed 1,857 right first molars from Han Chinese adults in Sichuan Province (883 males, 974 females; aged 18-65 years). Pulp chamber volume (PCV) was measured using semi-automatic segmentation in Mimics software (v21.0). Statistically significant differences in PCV were observed by sex and tooth position (maxillary vs. mandibular), and significant negative correlations existed between PCV and age (r = -0.86 to -0.81), with the strongest correlation (r = -0.88) in female maxillary first molars. Eleven curvilinear regression models and six machine learning models (Linear Regression, Lasso Regression, Neural Network, Random Forest, Gradient Boosting, and XGBoost) were developed. Among the curvilinear regression models, the cubic model performed best, with the female maxillary-specific model achieving a mean absolute error (MAE) of 4.95 years. Machine learning models demonstrated superior accuracy: the sex- and tooth position-specific XGBoost model for female maxillary first molars achieved an MAE of 3.14 years (R² = 0.87), a 36.5% reduction in error compared with the optimal cubic regression model. These findings demonstrate that PCV measurements in first molars, combined with machine learning algorithms (specifically XGBoost), effectively overcome the limitations of traditional methods, providing a highly precise and reproducible approach for forensic age estimation.
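The curvilinear-regression baseline (cubic fit of age on PCV, scored by MAE) can be sketched on synthetic data; the volumes, slope, and noise level below are invented, chosen only to mimic the reported strong negative correlation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in: pulp chamber volume (mm^3) shrinking with age,
# mimicking the reported negative correlation (r around -0.86 to -0.81);
# the slope and noise are not from the paper.
age = rng.uniform(18, 65, 300)
pcv = 40 - 0.45 * age + rng.normal(0, 2.0, 300)
r = np.corrcoef(pcv, age)[0, 1]

# Cubic regression of age on PCV (the best of the paper's eleven
# curvilinear models), evaluated by mean absolute error.
coeffs = np.polyfit(pcv, age, deg=3)
mae = np.abs(np.polyval(coeffs, pcv) - age).mean()
```

The paper's gain from XGBoost over this baseline comes from combining PCV with sex- and position-specific structure rather than from a better single-variable curve.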

Towards MR-Based Trochleoplasty Planning

Michael Wehrli, Alicia Durrer, Paul Friedrich, Sidaty El Hadramy, Edwin Li, Luana Brahaj, Carol C. Hasler, Philippe C. Cattin

arXiv preprint · Aug 8 2025
Current approaches to treating Trochlear Dysplasia (TD) rely mainly on low-resolution clinical Magnetic Resonance (MR) scans and surgical intuition. Surgeries are planned based on the surgeon's experience, make limited use of minimally invasive techniques, and lead to inconsistent outcomes. We propose a pipeline that generates super-resolved, patient-specific 3D pseudo-healthy target morphologies from conventional clinical MR scans. First, we compute an isotropic super-resolved MR volume using an Implicit Neural Representation (INR). Next, we segment the femur, tibia, patella, and fibula with a custom-trained multi-label network. Finally, we train a Wavelet Diffusion Model (WDM) to generate pseudo-healthy target morphologies of the trochlear region. In contrast to prior work producing pseudo-healthy low-resolution 3D MR images, our approach generates sub-millimeter-resolved 3D shapes suitable for pre- and intraoperative use. These can serve as preoperative blueprints for reshaping the femoral groove while preserving the native patellar articulation. Furthermore, and in contrast to other work, our pipeline does not require a CT scan, reducing radiation exposure. We evaluated our approach on 25 TD patients and show that our target morphologies significantly improve the sulcus angle (SA) and trochlear groove depth (TGD). The code and interactive visualization are available at https://wehrlimi.github.io/sr-3d-planning/.
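The sulcus angle outcome can be illustrated geometrically; this assumes SA is measured at the deepest groove point between the two facet ridges (the usual definition, not spelled out in the abstract), and the landmark coordinates are invented:

```python
import numpy as np

def sulcus_angle(medial_peak, groove, lateral_peak):
    """Angle (degrees) at the deepest groove point between the two
    trochlear facet ridges; larger values mean a shallower trochlea."""
    u = np.asarray(medial_peak, float) - np.asarray(groove, float)
    v = np.asarray(lateral_peak, float) - np.asarray(groove, float)
    cosang = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

# Invented landmark coordinates (mm) on one axial slice, not patient data;
# the y offset of the ridges above the groove point is the groove depth.
dysplastic = sulcus_angle((-20, 3), (0, 0), (20, 3))   # shallow groove
target = sulcus_angle((-20, 8), (0, 0), (20, 8))       # deeper pseudo-healthy groove
```

Deepening the groove in the generated target morphology lowers the SA, which is the direction of improvement the paper reports.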

Deep Learning Chest X-Ray Age, Epigenetic Aging Clocks and Associations with Age-Related Subclinical Disease in the Project Baseline Health Study.

Chandra J, Short S, Rodriguez F, Maron DJ, Pagidipati N, Hernandez AF, Mahaffey KW, Shah SH, Kiel DP, Lu MT, Raghu VK

PubMed · Aug 8 2025
Chronological age is an important component of medical risk scores and decision-making, but there is considerable variability in how individuals age. We recently published an open-source deep learning model that assesses biological age from chest radiographs (CXR-Age) and predicts all-cause and cardiovascular mortality better than chronological age. Here, we compare CXR-Age to two established epigenetic aging clocks (first-generation Horvath Age; second-generation DNAm PhenoAge) to test which is more strongly associated with cardiopulmonary disease and frailty. Our cohort consisted of 2,097 participants from the Project Baseline Health Study, a prospective cohort study of individuals from four US sites. We compared the associations between the aging clocks and measures of cardiopulmonary disease, frailty, and protein abundance collected at the participants' first annual visit, using linear regression models adjusted for common confounders. CXR-Age was associated with coronary calcium, cardiovascular risk factors, worsening pulmonary function, increased frailty, and plasma abundance of two proteins implicated in neuroinflammation and aging. Associations with DNAm PhenoAge were weaker for pulmonary function and, in middle-aged adults, for all metrics. We identified thirteen proteins associated with DNAm PhenoAge, one of which (CDH13) was also associated with CXR-Age. No associations were found with Horvath Age. These results suggest that CXR-Age may serve as a better metric of cardiopulmonary aging than epigenetic aging clocks, especially in midlife adults.
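The confounder-adjusted association testing can be sketched as an ordinary least-squares fit; the data, effect sizes, and confounder set (age, sex) below are hypothetical, shown only to illustrate what "adjusted for common confounders" buys:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500

# Hypothetical data: does an aging-clock estimate predict an outcome
# beyond chronological age and sex? All effect sizes are invented.
age = rng.uniform(40, 70, n)
sex = rng.integers(0, 2, n).astype(float)
clock = age + rng.normal(0, 3, n)                  # biological-age estimate
outcome = 0.5 * age + 2.0 * (clock - age) + rng.normal(0, 1, n)

# Design matrix: intercept + clock + confounders (age, sex); the clock's
# fitted coefficient is its association *after* adjustment.
X = np.column_stack([np.ones(n), clock, age, sex])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
clock_coef = beta[1]
```

Because chronological age is in the model, the clock's coefficient reflects only the excess-aging signal (clock minus age), which is the comparison the study makes across CXR-Age, Horvath Age, and DNAm PhenoAge.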

impuTMAE: Multi-modal Transformer with Masked Pre-training for Missing Modalities Imputation in Cancer Survival Prediction

Maria Boyko, Aleksandra Beliaeva, Dmitriy Kornilov, Alexander Bernstein, Maxim Sharaev

arXiv preprint · Aug 8 2025
The use of diverse modalities such as omics, medical images, and clinical data can not only improve the performance of prognostic models but also deepen our understanding of disease mechanisms and facilitate the development of novel treatment approaches. However, medical data are complex and often incomplete, with missing modalities, making their effective handling crucial for training multimodal models. We introduce impuTMAE, a novel transformer-based end-to-end approach with an efficient multimodal pre-training strategy. It learns inter- and intra-modal interactions while simultaneously imputing missing modalities by reconstructing masked patches. Our model is pre-trained on heterogeneous, incomplete data and fine-tuned for glioma survival prediction using the TCGA-GBM/LGG and BraTS datasets, integrating five modalities: genetic (DNAm, RNA-seq), imaging (MRI, WSI), and clinical data. By addressing missing data during pre-training and enabling efficient resource utilization, impuTMAE surpasses prior multimodal approaches, achieving state-of-the-art performance in glioma patient survival prediction. Our code is available at https://github.com/maryjis/mtcp
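The masked pre-training objective, scoring reconstruction only on masked patches of modalities that are actually present, can be sketched as follows; the shapes and mask ratio are invented, and the decoder is a stand-in:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy sample: 3 modalities x 4 patch features; modality 2 is missing for
# this sample (shapes and mask ratio are illustrative, not the paper's).
x = rng.random((3, 4))
present = np.array([True, True, False])
mask = rng.random((3, 4)) < 0.75            # patches hidden from the encoder

x_input = np.where(mask, 0.0, x)            # masked input the model sees
recon = x_input + 0.1 * rng.random((3, 4))  # stand-in for the decoder output

# Loss only on masked patches of *present* modalities: missing modalities
# have no target, so they are imputed but never scored.
scored = mask & present[:, None]
loss = float(((recon - x) ** 2)[scored].mean())
```

Excluding absent modalities from the loss is what lets the model train on heterogeneous, incomplete records instead of discarding them.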

Few-Shot Deployment of Pretrained MRI Transformers in Brain Imaging Tasks

Mengyu Li, Guoyao Shen, Chad W. Farris, Xin Zhang

arXiv preprint · Aug 7 2025
Machine learning using transformers has shown great potential in medical imaging, but its real-world applicability remains limited due to the scarcity of annotated data. In this study, we propose a practical framework for the few-shot deployment of pretrained MRI transformers in diverse brain imaging tasks. By utilizing the Masked Autoencoder (MAE) pretraining strategy on a large-scale, multi-cohort brain MRI dataset comprising over 31 million slices, we obtain highly transferable latent representations that generalize well across tasks and datasets. For high-level tasks such as classification, a frozen MAE encoder combined with a lightweight linear head achieves state-of-the-art accuracy in MRI sequence identification with minimal supervision. For low-level tasks such as segmentation, we propose MAE-FUnet, a hybrid architecture that fuses multiscale CNN features with pretrained MAE embeddings. This model consistently outperforms other strong baselines in both skull stripping and multi-class anatomical segmentation under data-limited conditions. With extensive quantitative and qualitative evaluations, our framework demonstrates efficiency, stability, and scalability, suggesting its suitability for low-resource clinical environments and broader neuroimaging applications.

Unsupervised learning for inverse problems in computed tomography

Laura Hellwege, Johann Christopher Engster, Moritz Schaar, Thorsten M. Buzug, Maik Stille

arXiv preprint · Aug 7 2025
This study presents an unsupervised deep learning approach for computed tomography (CT) image reconstruction, leveraging the inherent similarities between deep neural network training and conventional iterative reconstruction methods. By incorporating forward and backward projection layers within the deep learning framework, we demonstrate the feasibility of reconstructing images from projection data without relying on ground-truth images. Our method is evaluated on the two-dimensional 2DeteCT dataset, showcasing superior performance in terms of mean squared error (MSE) and structural similarity index (SSIM) compared to traditional filtered backprojection (FBP) and maximum likelihood (ML) reconstruction techniques. Additionally, our approach significantly reduces reconstruction time, making it a promising alternative for real-time medical imaging applications. Future work will focus on extending this methodology to three-dimensional reconstructions and enhancing the adaptability of the projection geometry.
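The core unsupervised idea, fitting the image by minimizing projection-domain error through forward- and backprojection operations with no ground-truth image, can be sketched with a toy linear system; the stand-in projector below is a random matrix, not a real CT geometry:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy linear CT: 16-pixel image, 48 measurements; a random matrix stands
# in for the forward projector (a real system would use Radon geometry).
n_pix, n_meas = 16, 48
A = rng.normal(size=(n_meas, n_pix))
x_true = rng.random(n_pix)
sino = A @ x_true                       # "measured" projection data

# Unsupervised reconstruction: gradient descent on projection-domain MSE,
# alternating forward projection and backprojection -- the ground-truth
# image x_true is never used by the update.
x = np.zeros(n_pix)
step = 1.0 / np.linalg.norm(A, 2) ** 2  # safe step size (1 / ||A||_2^2)
for _ in range(500):
    residual = A @ x - sino             # forward projection layer
    x -= step * (A.T @ residual)        # backprojection layer

mse = float(((x - x_true) ** 2).mean())
```

This is the same structure as classical iterative reconstruction; the paper's contribution is embedding these projection operators as layers inside a deep network so the whole pipeline trains without reference images.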