Deep Learning-Assisted Skeletal Muscle Radiation Attenuation at C3 Predicts Survival in Head and Neck Cancer

Barajas Ordonez, F., Xie, K., Ferreira, A., Siepmann, R., Chargi, N., Nebelung, S., Truhn, D., Berge, S., Bruners, P., Egger, J., Hölzle, F., Wirth, M., Kuhl, C., Puladi, B.

medRxiv preprint · Aug 21 2025
Background: Head and neck cancer (HNC) patients face an increased risk of malnutrition due to lifestyle, tumor localization, and treatment effects. While skeletal muscle area (SMA) and radiation attenuation (SM-RA) at the third lumbar vertebra (L3) are established prognostic markers, L3 is not routinely available in head and neck imaging. The prognostic value of SM-RA at the third cervical vertebra (C3) remains unclear. This study assesses whether SMA and SM-RA at C3 predict locoregional control (LRC) and overall survival (OS) in HNC. Methods: We analyzed 904 HNC cases with head and neck CT scans. A deep learning pipeline identified C3, and SMA/SM-RA were quantified via automated segmentation with manual verification. Cox proportional hazards models assessed associations with LRC and OS, adjusting for clinical factors. Results: Median SMA and SM-RA were 36.64 cm² (IQR: 30.12-42.44) and 50.77 HU (IQR: 43.04-57.39). In multivariate analysis, lower SMA (HR 1.62, 95% CI: 1.02-2.58, p = 0.04), lower SM-RA (HR 1.89, 95% CI: 1.30-2.79, p < 0.001), and advanced T stage (HR 1.50, 95% CI: 1.06-2.12, p = 0.02) were prognostic for LRC. OS predictors included advanced T stage (HR 2.17, 95% CI: 1.64-2.87, p < 0.001), age ≥70 years (HR 1.40, 95% CI: 1.00-1.96, p = 0.05), male sex (HR 1.64, 95% CI: 1.02-2.63, p = 0.04), and lower SM-RA (HR 2.15, 95% CI: 1.56-2.96, p < 0.001). Conclusion: Deep learning-assisted SM-RA assessment at C3 outperforms SMA in predicting LRC and OS in HNC, supporting its use as a routine biomarker and an alternative to L3.
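
As a rough illustration of the survival analysis described above (not the authors' code), the sketch below fits a Cox proportional hazards model for overall survival with dichotomised SM-RA and SMA plus clinical covariates using the lifelines library; the file name, column names, and median-split thresholds are all assumptions.

```python
# Illustrative Cox regression sketch; dataset path, column names, and the
# median-based dichotomisation are hypothetical, not taken from the study.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("hnc_cohort.csv")  # hypothetical cohort table

# Dichotomise the muscle metrics (the threshold choice shown here is illustrative)
df["low_smra"] = (df["smra_hu"] < df["smra_hu"].median()).astype(int)
df["low_sma"] = (df["sma_cm2"] < df["sma_cm2"].median()).astype(int)

cph = CoxPHFitter()
cph.fit(
    df[["os_months", "os_event", "low_smra", "low_sma",
        "advanced_t_stage", "age_ge_70", "male_sex"]],
    duration_col="os_months",
    event_col="os_event",
)
cph.print_summary()  # hazard ratios, 95% CIs, and p-values per covariate
```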

Automated Deep Learning Pipeline for Callosal Angle Quantification

Shirzadeh Barough, S., Bilgel, M., Ventura, C., Moghekar, A., Albert, M., Miller, M. I., Moghekar, A.

medRxiv preprint · Aug 21 2025
BACKGROUND AND PURPOSENormal pressure hydrocephalus (NPH) is a potentially treatable neurodegenerative disorder that remains underdiagnosed due to its clinical overlap with other conditions and the labor-intensive nature of manual imaging analyses. Imaging biomarkers, such as the callosal angle (CA), Evans Index (EI), and Disproportionately Enlarged Subarachnoid Space Hydrocephalus (DESH), play a crucial role in NPH diagnosis but are often limited by subjective interpretations. To address these challenges, we developed a fully automated and robust deep learning framework for measuring the CA directly from raw T1 MPRAGE and non-MPRAGE MRI scans. MATERIALS AND METHODSOur method integrates two complementary modules. First, a BrainSignsNET model is employed to accurately detect key anatomical landmarks, notably the anterior commissure (AC) and posterior commissure (PC). Preprocessed 3D MRI scans, reoriented to the Right Anterior Superior (RAS) system and resized to standardized cubes while preserving aspect ratios, serve as input for landmark localization. After detecting these landmarks, a coronal slice, perpendicular to the AC-PC line at the PC level, is extracted for subsequent analysis. Second, a UNet-based segmentation network, featuring a pretrained EfficientNetB0 encoder, generates multiclass masks of the lateral ventricles from the coronal slices which then used for calculation of the Callosal Angle. RESULTSTraining and internal validation were performed using datasets from the Baltimore Longitudinal Study of Aging (BLSA) and BIOCARD, while external validation utilized 216 clinical MRI scans from Johns Hopkins Bayview Hospital. Our framework achieved high concordance with manual measurements, demonstrating a strong correlation (r = 0.98, p < 0.001) and a mean absolute error (MAE) of 2.95 (SD 1.58) degrees. Moreover, error analysis confirmed that CA measurement performance was independent of patient age, gender, and EI, underscoring the broad applicability of this method. CONCLUSIONSThese results indicate that our fully automated CA measurement framework is a reliable and reproducible alternative to manual methods, outperforms reported interobserver variability in assessing the callosal angle, and offers significant potential to enhance early detection and diagnosis of NPH in both research and clinical settings.
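
The final step, turning a ventricle segmentation into a callosal angle, can be pictured with the minimal sketch below. It assumes a 2D coronal mask with labels 1 and 2 for the left and right lateral ventricles and measures the angle between lines fitted to their medial borders; this is one plausible reading of the measurement, not the paper's implementation.

```python
# Minimal callosal-angle sketch under stated assumptions: label values, border
# definition, and slice orientation are hypothetical, and depending on orientation
# the result may be the supplementary angle (a real pipeline would resolve that).
import numpy as np

def medial_border(mask, label, from_left):
    """Inner (medial) edge of one ventricle: one column index per mask row."""
    ys, xs = np.nonzero(mask == label)
    pts = [(y, xs[ys == y].max() if from_left else xs[ys == y].min())
           for y in np.unique(ys)]
    return np.array(pts, dtype=float)

def fit_direction(points):
    """Unit direction of a least-squares line through (row, column) border points."""
    slope, _ = np.polyfit(points[:, 0], points[:, 1], 1)
    v = np.array([1.0, slope])
    return v / np.linalg.norm(v)

def callosal_angle_deg(mask):
    v_left = fit_direction(medial_border(mask, 1, from_left=True))
    v_right = fit_direction(medial_border(mask, 2, from_left=False))
    cos_a = np.clip(np.dot(v_left, v_right), -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_a)))
```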

Automated biometry for assessing cephalopelvic disproportion in 3D 0.55T fetal MRI at term

Uus, A., Bansal, S., Gerek, Y., Waheed, H., Neves Silva, S., Aviles Verdera, J., Kyriakopoulou, V., Betti, L., Jaufuraully, S., Hajnal, J. V., Siasakos, D., David, A., Chandiramani, M., Hutter, J., Story, L., Rutherford, M.

medRxiv preprint · Aug 21 2025
Fetal MRI offers detailed three-dimensional visualisation of both fetal and maternal pelvic anatomy, allowing for assessment of the risk of cephalopelvic disproportion and obstructed labour. However, conventional measurements of fetal and pelvic proportions and their relative positioning are typically performed manually in 2D, making them time-consuming, subject to inter-observer variability, and rarely integrated into routine clinical workflows. In this work, we present the first fully automated pipeline for pelvic and fetal head biometry in T2-weighted fetal MRI at late gestation. The method employs deep learning-based localisation of anatomical landmarks in 3D reconstructed MRI images, followed by computation of 12 standard linear and circumference measurements commonly used in the assessment of cephalopelvic disproportion. Landmark detection is based on 3D UNet models within the MONAI framework, trained on 57 semi-manually annotated datasets. The full pipeline is quantitatively validated on 10 test cases. Furthermore, we demonstrate its clinical feasibility and relevance by applying it to 206 fetal MRI scans (36-40 weeks gestation) from the MiBirth study, which investigates prediction of mode of delivery using low-field MRI.
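
Once the landmarks are localised, the downstream measurements are simple geometry. The sketch below is our illustration, with hypothetical landmark names, coordinates, and voxel spacing: a linear distance in millimetres plus an ellipse-based circumference via Ramanujan's approximation.

```python
# Illustrative biometry sketch only; landmark names, coordinates, and spacing
# are invented, and the measurement pairing is an assumption, not the pipeline's.
import numpy as np

landmarks = {                     # hypothetical voxel-space landmark coordinates
    "parietal_left": (118, 96, 62),
    "parietal_right": (118, 96, 148),
    "occiput": (118, 40, 105),
    "frontal": (118, 160, 105),
}
spacing = (0.55, 0.55, 0.55)      # assumed reconstruction voxel size in mm

def distance_mm(p1, p2, spacing):
    """Euclidean distance between two voxel-space landmarks, in millimetres."""
    return float(np.linalg.norm((np.asarray(p1) - np.asarray(p2)) * np.asarray(spacing)))

def ellipse_circumference_mm(d1_mm, d2_mm):
    """Ramanujan approximation of an ellipse circumference from two orthogonal diameters."""
    a, b = d1_mm / 2.0, d2_mm / 2.0
    h = ((a - b) ** 2) / ((a + b) ** 2)
    return np.pi * (a + b) * (1.0 + 3.0 * h / (10.0 + np.sqrt(4.0 - 3.0 * h)))

bpd = distance_mm(landmarks["parietal_left"], landmarks["parietal_right"], spacing)
ofd = distance_mm(landmarks["occiput"], landmarks["frontal"], spacing)
print(bpd, ofd, ellipse_circumference_mm(bpd, ofd))
```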

Dynamic-Attentive Pooling Networks: A Hybrid Lightweight Deep Model for Lung Cancer Classification.

Ayivi W, Zhang X, Ativi WX, Sam F, Kouassi FAP

PubMed paper · Aug 21 2025
Lung cancer is one of the leading causes of cancer-related mortality worldwide. The diagnosis of this disease remains a challenge due to the subtle and ambiguous nature of early-stage symptoms and imaging findings. Deep learning approaches, specifically Convolutional Neural Networks (CNNs), have significantly advanced medical image analysis. However, conventional architectures such as ResNet50 rely on first-order pooling, which limits feature representation and class separation. This study aims to overcome these limitations in lung cancer classification by proposing a novel dynamic model named LungSE-SOP. The model combines Second-Order Pooling (SOP) and Squeeze-and-Excitation Networks (SENet) within a ResNet50 backbone to improve feature representation and class separation. A novel Dynamic Feature Enhancement (DFE) module is also introduced, which dynamically adjusts the flow of information through the SOP and SENet blocks based on learned importance scores. The model was trained on the publicly available IQ-OTH/NCCD lung cancer dataset. Performance was assessed using accuracy, precision, recall, F1-score, ROC curves, and confidence intervals. For multiclass tumor classification, the model achieved 98.6% accuracy for benign, 98.7% for malignant, and 99.9% for normal cases. Corresponding F1-scores were 99.2%, 99.8%, and 99.9%, respectively, reflecting the model's high precision and recall across all tumor types and its strong potential for clinical deployment.
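
To make the two building blocks concrete, the PyTorch sketch below shows a generic squeeze-and-excitation block and a simple covariance-based second-order pooling head of the kind the abstract describes. It is an assumption about their structure, not the LungSE-SOP implementation, and it omits the Dynamic Feature Enhancement module entirely.

```python
# Generic SE block and second-order (covariance) pooling sketch; dimensions and
# wiring are illustrative, not the paper's architecture.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (B, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))        # squeeze -> per-channel excitation weights
        return x * w[:, :, None, None]         # channel-wise re-weighting

def second_order_pool(x):                      # x: (B, C, H, W)
    b, c, h, w = x.shape
    feats = x.reshape(b, c, h * w)
    feats = feats - feats.mean(dim=2, keepdim=True)
    cov = feats @ feats.transpose(1, 2) / (h * w - 1)   # (B, C, C) covariance
    return cov.reshape(b, -1)                  # flattened second-order statistics

x = torch.randn(2, 2048, 7, 7)                 # e.g. a ResNet50 stage-5 feature map
pooled = second_order_pool(SEBlock(2048)(x))   # (2, 2048*2048); channels are usually reduced first
```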

Hessian-Based Lightweight Neural Network HessNet for State-of-the-Art Brain Vessel Segmentation on a Minimal Training Dataset

Alexandra Bernadotte, Nikita Elfimov, Mikhail Shutov, Ivan Menshikov

arXiv preprint · Aug 21 2025
Accurate segmentation of blood vessels in brain magnetic resonance angiography (MRA) is essential for successful surgical procedures, such as aneurysm repair or bypass surgery. Currently, annotation is primarily performed through manual segmentation or classical methods, such as the Frangi filter, which often lack sufficient accuracy. Neural networks have emerged as powerful tools for medical image segmentation, but their development depends on well-annotated training datasets. However, there is a notable lack of publicly available MRA datasets with detailed brain vessel annotations. To address this gap, we propose HessNet, a lightweight semi-supervised neural network that incorporates Hessian matrices for the 3D segmentation of complex tubular structures. HessNet has only 6,000 parameters, can run on a CPU, and significantly reduces the resource requirements for training. Its vessel segmentation accuracy on a minimal training dataset reaches state-of-the-art results. Using HessNet, we created a large, semi-manually annotated brain vessel dataset of 200 MRA images based on the IXI dataset. Annotation was performed by three experts under the supervision of three neurovascular surgeons after applying HessNet; this provides high segmentation accuracy and allows the experts to focus only on the most complex and important cases. The dataset is available at https://git.scinalytics.com/terilat/VesselDatasetPartly.
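
As one plausible reading of "incorporates Hessian matrices", the sketch below computes the six second-order Gaussian-derivative components of a 3D volume with SciPy; such components could serve as auxiliary input channels to a small segmentation network. The scale and channel layout are assumptions, not the released HessNet code.

```python
# Hedged sketch of Hessian feature extraction for a 3D MRA volume; the sigma,
# channel ordering, and use as network input are our assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_features(volume, sigma=1.0):
    """Return the six unique Hessian components of a 3D volume as a channel stack."""
    orders = [(2, 0, 0), (0, 2, 0), (0, 0, 2),   # Ixx, Iyy, Izz
              (1, 1, 0), (1, 0, 1), (0, 1, 1)]   # Ixy, Ixz, Iyz
    comps = [gaussian_filter(volume, sigma=sigma, order=o) for o in orders]
    return np.stack(comps, axis=0)               # shape (6, D, H, W)

vol = np.random.rand(64, 64, 64).astype(np.float32)  # placeholder MRA patch
feats = hessian_features(vol, sigma=1.2)
```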

TPA: Temporal Prompt Alignment for Fetal Congenital Heart Defect Classification

Darya Taratynova, Alya Almsouti, Beknur Kalmakhanbet, Numan Saeed, Mohammad Yaqub

arXiv preprint · Aug 21 2025
Congenital heart defect (CHD) detection in ultrasound videos is hindered by image noise and probe positioning variability. While automated methods can reduce operator dependence, current machine learning approaches often neglect temporal information, limit themselves to binary classification, and do not account for prediction calibration. We propose Temporal Prompt Alignment (TPA), a method that leverages a foundation image-text model and prompt-aware contrastive learning to classify fetal CHD in cardiac ultrasound videos. TPA extracts features from each frame of a video subclip with an image encoder, aggregates them with a trainable temporal extractor to capture heart motion, and aligns the video representation with class-specific text prompts via a margin-hinge contrastive loss. To enhance calibration for clinical reliability, we introduce a Conditional Variational Autoencoder Style Modulation (CVAESM) module, which learns a latent style vector to modulate embeddings and quantifies classification uncertainty. Evaluated on a private dataset for CHD detection and on a large public dataset, EchoNet-Dynamic, for systolic dysfunction, TPA achieves a state-of-the-art macro F1 score of 85.40% for CHD diagnosis while also reducing expected calibration error by 5.38% and adaptive ECE by 6.8%. On EchoNet-Dynamic's three-class task, it boosts macro F1 by 4.73% (from 53.89% to 58.62%). In summary, TPA integrates temporal modeling, prompt-aware contrastive learning, and uncertainty quantification into a single framework for fetal CHD classification in ultrasound videos.
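
The margin-hinge contrastive alignment can be sketched in PyTorch as below, assuming one pooled embedding per video subclip and one text embedding per class prompt; the margin value, cosine normalisation, and reduction are illustrative choices, not the paper's exact loss.

```python
# Schematic margin-hinge contrastive loss between video embeddings and class
# prompt embeddings; hyperparameters and tensor shapes are assumptions.
import torch
import torch.nn.functional as F

def margin_hinge_contrastive(video_emb, prompt_emb, labels, margin=0.2):
    # video_emb: (B, D) pooled clip features; prompt_emb: (K, D) class prompt features
    v = F.normalize(video_emb, dim=-1)
    p = F.normalize(prompt_emb, dim=-1)
    sim = v @ p.t()                                    # (B, K) cosine similarities
    pos = sim.gather(1, labels.view(-1, 1))            # similarity to the true class
    hinge = F.relu(margin + sim - pos)                 # hinge on every negative class
    mask = F.one_hot(labels, num_classes=sim.size(1)).bool()
    hinge = hinge.masked_fill(mask, 0.0)               # ignore the positive column
    return hinge.sum(dim=1).mean()

loss = margin_hinge_contrastive(torch.randn(4, 512), torch.randn(3, 512),
                                torch.tensor([0, 2, 1, 0]))
```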

Clinical and Economic Evaluation of a Real-Time Chest X-Ray Computer-Aided Detection System for Misplaced Endotracheal and Nasogastric Tubes and Pneumothorax in Emergency and Critical Care Settings: Protocol for a Cluster Randomized Controlled Trial.

Tsai CL, Chu TC, Wang CH, Chang WT, Tsai MS, Ku SC, Lin YH, Tai HC, Kuo SW, Wang KC, Chao A, Tang SC, Liu WL, Tsai MH, Wang TA, Chuang SL, Lee YC, Kuo LC, Chen CJ, Kao JH, Wang W, Huang CH

PubMed paper · Aug 20 2025
Advancements in artificial intelligence (AI) have driven substantial breakthroughs in computer-aided detection (CAD) for chest x-ray (CXR) imaging. The National Taiwan University Hospital research team previously developed an AI-based emergency CXR system (Capstone project), which led to the creation of a CXR module. This CXR module has an established model supported by extensive research and is ready for application in clinical trials without requiring additional model training. This study will use 3 submodules of the system: detection of misplaced endotracheal tubes, detection of misplaced nasogastric tubes, and identification of pneumothorax. This study aims to apply a real-time CXR CAD system in emergency and critical care settings to evaluate its clinical and economic benefits without requiring additional CXR examinations or altering standard care and procedures. The study will evaluate the impact of the CAD system on mortality reduction, postintubation complications, hospital stay duration, workload, and interpretation time, as well as conduct a cost-effectiveness comparison with standard care. This study adopts a pilot trial and cluster randomized controlled trial design, with random assignment conducted at the ward level. In the intervention group, units are granted access to AI diagnostic results, while the control group continues standard care practices. Consent will be obtained from attending physicians, residents, and advanced practice nurses in each participating ward. Once consent is secured, these health care providers in the intervention group will be authorized to use the CAD system. Intervention units will have access to AI-generated interpretations, whereas control units will maintain routine medical procedures without access to the AI diagnostic outputs. The study was funded in September 2024. Data collection is expected to last from January 2026 to December 2027. This study anticipates that the real-time CXR CAD system will automate the identification and detection of misplaced endotracheal and nasogastric tubes on CXRs, as well as assist clinicians in diagnosing pneumothorax. By reducing the workload of physicians, the system is expected to shorten the time required to detect tube misplacement and pneumothorax, decrease patient mortality and hospital stays, and ultimately lower health care costs. PRR1-10.2196/72928.
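
For intuition, ward-level cluster randomization of the kind described in the protocol can be sketched as below; the ward identifiers, 1:1 allocation ratio, and seed are purely illustrative and not taken from the study design.

```python
# Toy ward-level (cluster) randomization sketch; all identifiers are hypothetical.
import random

wards = ["ER-A", "ER-B", "ICU-1", "ICU-2", "ICU-3", "ICU-4"]  # hypothetical clusters
random.seed(20260101)                       # fixed seed so the allocation is reproducible
shuffled = random.sample(wards, k=len(wards))
half = len(shuffled) // 2
allocation = {w: ("intervention" if i < half else "control")
              for i, w in enumerate(shuffled)}
print(allocation)   # wards in the intervention arm gain access to the CAD output
```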

Deep Learning Model for Breast Shear Wave Elastography to Improve Breast Cancer Diagnosis (INSPiRED 006): An International, Multicenter Analysis.

Cai L, Pfob A, Barr RG, Duda V, Alwafai Z, Balleyguier C, Clevert DA, Fastner S, Gomez C, Goncalo M, Gruber I, Hahn M, Kapetas P, Nees J, Ohlinger R, Riedel F, Rutten M, Stieber A, Togawa R, Sidey-Gibbons C, Tozaki M, Wojcinski S, Heil J, Golatta M

PubMed paper · Aug 20 2025
Shear wave elastography (SWE) has been investigated as a complement to B-mode ultrasound for breast cancer diagnosis. Although multicenter trials suggest benefits for patients with Breast Imaging Reporting and Data System (BI-RADS) 4(a) breast masses, widespread adoption remains limited because of the absence of validated velocity thresholds. This study aims to develop and validate a deep learning (DL) model using SWE images (artificial intelligence [AI]-SWE) for BI-RADS 3 and 4 breast masses and compare its performance with that of human experts using B-mode ultrasound. We used data from an international, multicenter trial (ClinicalTrials.gov identifier: NCT02638935) evaluating SWE in women with BI-RADS 3 or 4 breast masses across 12 institutions in seven countries. Images from 11 sites were used to develop an EfficientNetB1-based DL model. An external validation was conducted using data from the 12th site. Another validation was performed using the latest SWE software from a separate institutional cohort. Performance metrics included sensitivity, specificity, false-positive reduction, and area under the receiver operating characteristic curve (AUROC). The development set included 924 patients (4,026 images); the external validation sets included 194 patients (562 images) and 176 patients (188 images, latest SWE software). AI-SWE achieved an AUROC of 0.94 (95% CI, 0.91 to 0.96) and 0.93 (95% CI, 0.88 to 0.98) in the two external validation sets. Compared with B-mode ultrasound, AI-SWE significantly reduced false-positive rates by 62.1% (20.4% [30/147] vs 53.8% [431/801]; P < .001) and 38.1% (33.3% [14/42] vs 53.8% [431/801]; P < .001), with comparable sensitivity (97.9% [46/47] and 97.8% [131/134] vs 98.1% [311/317]; P = .912 and P = .810). AI-SWE demonstrated accuracy comparable with human experts in malignancy detection while significantly reducing false-positive imaging findings (ie, unnecessary biopsies). Future studies should explore its integration into multimodal breast cancer diagnostics.
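
The headline metrics (AUROC, sensitivity, and false-positive rate among benign masses at a chosen operating point) can be reproduced on toy data with the sketch below; the labels, scores, and threshold are illustrative, not study data.

```python
# Toy evaluation sketch for a binary malignancy score; numbers are invented.
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([1, 0, 0, 1, 0, 1, 0, 0])                    # 1 = malignant
y_score = np.array([0.9, 0.2, 0.4, 0.8, 0.1, 0.7, 0.6, 0.3])   # model output

auroc = roc_auc_score(y_true, y_score)
threshold = 0.5                                  # operating point is a design choice
pred = y_score >= threshold
sensitivity = (pred & (y_true == 1)).sum() / (y_true == 1).sum()
false_positive_rate = (pred & (y_true == 0)).sum() / (y_true == 0).sum()
print(auroc, sensitivity, false_positive_rate)
```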

FedVGM: Enhancing Federated Learning Performance on Multi-Dataset Medical Images with XAI.

Tahosin MS, Sheakh MA, Alam MJ, Hassan MM, Bairagi AK, Abdulla S, Alshathri S, El-Shafai W

PubMed paper · Aug 20 2025
Advances in deep learning have transformed medical imaging, yet progress is hindered by data privacy regulations and fragmented datasets across institutions. To address these challenges, we propose FedVGM, a privacy-preserving federated learning framework for multi-modal medical image analysis. FedVGM integrates four imaging modalities, including brain MRI, breast ultrasound, chest X-ray, and lung CT, across 14 diagnostic classes without centralizing patient data. Using transfer learning and an ensemble of VGG16 and MobileNetV2, FedVGM achieves 97.7% ± 0.01 accuracy on the combined dataset and 91.9-99.1% across individual modalities. We evaluated three aggregation strategies and demonstrated median aggregation to be the most effective. To ensure clinical interpretability, we apply explainable AI techniques and validate results through performance metrics, statistical analysis, and k-fold cross-validation. FedVGM offers a robust, scalable solution for collaborative medical diagnostics, supporting clinical deployment while preserving data privacy.
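
The median aggregation strategy the authors found most effective can be sketched as a coordinate-wise median over client parameters, as below; treating each client update as a plain PyTorch state_dict is our assumption, and integer buffers would need separate handling in practice.

```python
# Coordinate-wise median aggregation sketch for federated parameter tensors.
import torch

def median_aggregate(client_state_dicts):
    """Element-wise median of each parameter tensor across client models."""
    return {
        k: torch.stack([sd[k].float() for sd in client_state_dicts]).median(dim=0).values
        for k in client_state_dicts[0]
    }

# Usage (hypothetical): global_model.load_state_dict(
#     median_aggregate([c.state_dict() for c in client_models]))
```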

A fully automated AI-based method for tumour detection and quantification on [¹⁸F]PSMA-1007 PET-CT images in prostate cancer.

Trägårdh E, Ulén J, Enqvist O, Larsson M, Valind K, Minarik D, Edenbrandt L

PubMed paper · Aug 20 2025
In this study, we further developed an artificial intelligence (AI)-based method for the detection and quantification of tumours in the prostate, lymph nodes and bone in prostate-specific membrane antigen (PSMA)-targeting positron emission tomography with computed tomography (PET-CT) images. A total of 1064 [¹⁸F]PSMA-1007 PET-CT scans were used (approximately twice as many as for our previous AI model), of which 120 were used as the test set. Suspected lesions were manually annotated and used as ground truth. A convolutional neural network was developed and trained. The sensitivity and positive predictive value (PPV) were calculated using two sets of manual segmentations as reference. Results were also compared to our previously developed AI method. The correlations between manual and AI-based calculations of total lesion volume (TLV) and total lesion uptake (TLU) were calculated. The sensitivities of the AI method were 85% for prostate tumour/recurrence, 91% for lymph node metastases and 61% for bone metastases (82%, 86% and 70% for manual readings and 66%, 88% and 71% for the old AI method). The PPVs of the AI method were 85%, 83% and 58%, respectively (63%, 86% and 39% for manual readings, and 69%, 70% and 39% for the old AI method). The correlations between manual and AI-based calculations of TLV and TLU ranged from r = 0.62 to r = 0.96. The performance of the newly developed and fully automated AI-based method for detecting and quantifying prostate tumour and suspected lymph node and bone metastases improved significantly, especially the PPV. The AI method is freely available to other researchers (www.recomia.org).
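
The reported evaluation reduces to lesion-level sensitivity and PPV plus correlation of manual versus AI-derived totals; the sketch below illustrates those computations on toy counts and volumes (all numbers are hypothetical, not study data).

```python
# Toy sensitivity/PPV and correlation sketch; counts and volumes are invented.
from scipy.stats import pearsonr

def sensitivity(true_positive, false_negative):
    return true_positive / (true_positive + false_negative)

def ppv(true_positive, false_positive):
    return true_positive / (true_positive + false_positive)

print(sensitivity(91, 9), ppv(91, 19))          # one hypothetical lesion category

manual_tlv = [12.1, 3.4, 0.8, 25.0, 7.6]        # hypothetical per-patient volumes (ml)
ai_tlv = [11.5, 3.9, 1.1, 23.2, 8.0]
r, p = pearsonr(manual_tlv, ai_tlv)
print(r, p)
```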