
Intratumoral heterogeneity score enhances invasiveness prediction in pulmonary ground-glass nodules via stacking ensemble machine learning.

Zuo Z, Zeng Y, Deng J, Lin S, Qi W, Fan X, Feng Y

pubmed · papers · Sep 26 2025
The preoperative differentiation of adenocarcinoma in situ (AIS), minimally invasive adenocarcinoma (MIA), and invasive adenocarcinoma (IAC) on computed tomography (CT) is crucial for guiding clinical management decisions. However, accurately classifying ground-glass nodules poses a significant challenge. Incorporating quantitative intratumoral heterogeneity scores may improve the accuracy of this ternary classification. In this multicenter retrospective study, we developed ternary classification models by leveraging insights from both base and stacking ensemble machine learning models, incorporating intratumoral heterogeneity scores along with clinical-radiological features to distinguish AIS, MIA, and IAC. The machine learning models were trained, and final model selection was based on maximizing the macro-average area under the curve (macro-AUC) in both the internal and external validation sets. Data from 802 patients from three centers were divided into a training set (n = 477) and an internal test set (n = 205) in a 7:3 ratio, with an additional external validation set comprising 120 patients. The stacking classifier exhibited superior performance relative to the other models, achieving macro-AUC values of 0.7850 and 0.7717 in the internal and external validation sets, respectively. Moreover, an interpretability analysis using Shapley Additive Explanations (SHAP) identified four key features for this ternary classification: intratumoral heterogeneity score, nodule size, nodule type, and age. The stacking classifier, identified as the optimal algorithm for integrating the intratumoral heterogeneity score with clinical-radiological features, effectively served as a ternary classification model for assessing the invasiveness of lung adenocarcinoma on chest CT images, aiding precise diagnosis and clinical decision-making for ground-glass nodules. Key points: the intratumoral heterogeneity score effectively assessed the invasiveness of lung adenocarcinoma; the stacking classifier outperformed other methods for this ternary classification task; and intratumoral heterogeneity score, nodule size, nodule type, and age predict invasiveness.
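For readers unfamiliar with stacking, a minimal sketch of how such an ensemble and its macro-AUC selection criterion can be wired up in scikit-learn follows. The base learners, feature list, and placeholder data are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# X: rows of [heterogeneity_score, nodule_size_mm, nodule_type, age] (placeholder data)
# y: 0 = AIS, 1 = MIA, 2 = IAC
X, y = np.random.rand(682, 4), np.random.randint(0, 3, 682)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=42)),
        ("svm", SVC(probability=True, random_state=42)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,  # out-of-fold base-model predictions feed the meta-learner
)
stack.fit(X_train, y_train)

# Macro-average one-vs-rest AUC, the selection criterion named in the abstract.
proba = stack.predict_proba(X_test)
print(f"macro-AUC: {roc_auc_score(y_test, proba, multi_class='ovr', average='macro'):.4f}")
```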

A novel open-source ultrasound dataset with deep learning benchmarks for spinal cord injury localization and anatomical segmentation.

Kumar A, Kotkar K, Jiang K, Bhimreddy M, Davidar D, Weber-Levine C, Krishnan S, Kerensky MJ, Liang R, Leadingham KK, Routkevitch D, Hersh AM, Ashayeri K, Tyler B, Suk I, Son J, Theodore N, Thakor N, Manbachi A

pubmed · papers · Sep 26 2025
While deep learning has catalyzed breakthroughs across numerous domains, its broader adoption in clinical settings is inhibited by the costly and time-intensive nature of data acquisition and annotation. To further facilitate medical machine learning, we present an ultrasound dataset of 10,223 brightness-mode (B-mode) images consisting of sagittal slices of porcine spinal cords (N = 25) before and after a contusion injury. We additionally benchmark several state-of-the-art object detection algorithms for localizing the site of injury and semantic segmentation models for labeling the anatomy, enabling comparison and the creation of task-specific architectures. Finally, we evaluate the zero-shot generalization capabilities of the segmentation models on human spinal cord ultrasound images to determine whether training on our porcine dataset is sufficient for accurately interpreting human data. Our results show that the YOLOv8 detection model outperforms all evaluated models for injury localization, achieving a mean Average Precision (mAP50-95) score of 0.606. Segmentation metrics indicate that the DeepLabv3 model achieves the highest accuracy on unseen porcine anatomy, with a mean Dice score of 0.587, while SAMed achieves the highest mean Dice score when generalizing to human anatomy (0.445). To the best of our knowledge, this is the largest annotated dataset of spinal cord ultrasound images made publicly available to researchers and medical professionals, as well as the first public report of object detection and segmentation architectures for assessing anatomical markers in the spinal cord for methodology development and clinical applications.
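The Dice score used for the segmentation benchmarks has a compact definition worth showing. A minimal sketch (not the authors' evaluation code) over binary masks:

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) over binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

# Mean Dice over a set of B-mode frames:
# scores = [dice_score(p, t) for p, t in zip(predictions, labels)]
# print(np.mean(scores))
```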

Segmental airway volume as a predictive indicator of postoperative extubation timing in patients with oral and maxillofacial space infections: a retrospective analysis.

Liu S, Shen H, Zhu B, Zhang X, Zhang X, Li W

pubmed · papers · Sep 26 2025
The objective of this study was to investigate the value of segmental airway volume in developing a predictive model to guide the timing of postoperative extubation in patients with oral and maxillofacial space infections (OMSIs). A retrospective cohort study was performed to analyse clinical data from 177 medical records, with a focus on key variables related to disease severity and treatment outcomes. The inclusion criteria were as follows: adherence to the OMSI diagnostic criteria (local tissue inflammation characterized by erythema, oedema, hyperthermia, and tenderness); compromised functions such as difficulty opening the mouth, swallowing, or breathing; purulent material confirmed by puncture or computed tomography (CT); and laboratory findings indicating an underlying infectious process. The data included age, sex, body mass index (BMI), blood test results, smoking history, history of alcohol abuse, the extent of mouth opening, the number of infected spaces, and the source of infection. DICOM files were imported into 3D Slicer for manual segmentation, followed by volume measurement of each airway segment. We observed statistically significant differences in age, neutrophil count, lymphocyte count, and C4 segment volume among patient subgroups stratified by extubation time. Regression analysis revealed that age and C4 segment volume were significantly correlated with extubation time. Additionally, machine learning models built on these variables yielded good evaluation metrics. Segmental airway volume shows promise as an indicator for predicting extubation time, and predictive models constructed using machine learning algorithms achieve good predictive performance and may facilitate clinical decision-making.
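A segment volume of the kind measured here reduces to voxel count times voxel volume once a label map exists. A minimal sketch, assuming a NIfTI label map exported from 3D Slicer; the filename and the C4 label value are hypothetical placeholders:

```python
import nibabel as nib
import numpy as np

seg = nib.load("airway_segmentation.nii.gz")          # exported label map (placeholder name)
data = seg.get_fdata()
voxel_vol_mm3 = np.prod(seg.header.get_zooms()[:3])   # mm^3 per voxel from the header

C4_LABEL = 4  # hypothetical label value assigned to the C4 airway segment
c4_volume_ml = (data == C4_LABEL).sum() * voxel_vol_mm3 / 1000.0
print(f"C4 segment volume: {c4_volume_ml:.2f} mL")
```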

Radiomics-based machine learning model integrating preoperative vertebral computed tomography and clinical features to predict cage subsidence after single-level anterior cervical discectomy and fusion with a zero-profile anchored spacer.

Zheng B, Yu P, Ma K, Zhu Z, Liang Y, Liu H

pubmed · papers · Sep 26 2025
To develop a machine-learning model that combines preoperative vertebral-body CT radiomics with clinical data to predict cage subsidence after single-level ACDF with a zero-profile anchored spacer (Zero-P). We retrospectively reviewed 253 patients (2016-2023). Subsidence was defined as ≥ 3 mm loss of fused-segment height at final follow-up. Patients were split 8:2 into a training set (n = 202; 39 subsidence) and an independent test set (n = 51; 14 subsidence). Vertebral bodies adjacent to the target level were segmented on preoperative CT, and high-throughput radiomic features were extracted with PyRadiomics. Features were z-score-normalized, then reduced by variance filtering, correlation filtering, and LASSO. Age, vertebral Hounsfield units (HU), and T1-slope were entered into a clinical model. Eight classifiers were tuned by cross-validation; performance was assessed by AUC and related metrics, with thresholds optimized on the training cohort. Patients with subsidence were older and had lower HU values and higher T1-slopes (all P < 0.05). LASSO retained 11 radiomic features. In the independent test set, the clinical model had limited discrimination (AUC 0.595). The radiomics model improved performance (AUC 0.775; sensitivity 100%; specificity 60%). The combined model performed best (AUC 0.813; sensitivity 80%; specificity 80%) and surpassed both single-source models (P < 0.05). A preoperative model integrating CT-based radiomic signatures with key clinical variables predicts cage subsidence after ACDF with good accuracy. This tool may facilitate individualized risk stratification and guide strategies (such as endplate protection, implant choice, and bone-quality optimization) to mitigate subsidence risk. Multicentre prospective validation is warranted.
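The feature-reduction chain (z-score normalization, variance and correlation filtering, then LASSO) is a standard radiomics recipe. A minimal sketch of one common implementation in scikit-learn; the thresholds and the placeholder feature matrix are assumptions, not the authors' settings:

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import VarianceThreshold
from sklearn.linear_model import LassoCV

# Placeholder: one row per patient, columns = PyRadiomics features.
X = pd.DataFrame(np.random.rand(253, 100), columns=[f"f{i}" for i in range(100)])
y = np.random.randint(0, 2, 253)  # 1 = subsidence (>= 3 mm height loss), 0 = none

X_scaled = pd.DataFrame(StandardScaler().fit_transform(X), columns=X.columns)

# Drop near-constant features, then one of each highly correlated pair.
keep = VarianceThreshold(threshold=0.01).fit(X_scaled).get_support()
X_red = X_scaled.loc[:, keep]
corr = X_red.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape), k=1).astype(bool))
X_red = X_red.drop(columns=[c for c in upper.columns if (upper[c] > 0.9).any()])

# LASSO keeps features with nonzero coefficients (11 survived in the paper).
lasso = LassoCV(cv=5, random_state=42).fit(X_red, y)
print(list(X_red.columns[lasso.coef_ != 0]))
```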

Deep learning-based cardiac computed tomography angiography left atrial segmentation and quantification in atrial fibrillation patients: a multi-model comparative study.

Feng L, Lu W, Liu J, Chen Z, Jin J, Qian N, Pan J, Wang L, Xiang J, Jiang J, Wang Y

pubmed · papers · Sep 26 2025
Quantitative assessment of left atrial volume (LAV) is an important factor in the study of the pathogenesis of atrial fibrillation. However, automated left atrial segmentation with quantitative assessment usually faces many challenges. The main objective of this study was to find the optimal left atrial segmentation model based on cardiac computed tomography angiography (CTA) and to perform quantitative LAV measurement. A multi-center study cohort containing 182 cardiac CTA scans from patients with atrial fibrillation was created, each case annotated by an expert cardiologist. Five recent state-of-the-art (SOTA) medical image segmentation models were then trained and validated on this left atrium dataset: DAResUNet, nnFormer, xLSTM-UNet, UNETR, and VNet. The optimal segmentation model was further used to assess agreement in LAV measurement. DAResUNet achieved the best performance in DSC (0.924 ± 0.023) and JI (0.859 ± 0.065) among all models, while VNet was the best performer in HD (12.457 ± 6.831) and ASD (1.034 ± 0.178). The Bland-Altman plot demonstrated strong agreement (mean bias −5.69 mL, 95% LoA −19 to 7.6 mL) between the model's automatic predictions and manual measurements. Deep learning models trained on this cohort of 182 CTA left atrial images achieved competitive results in left atrium segmentation. LAV assessment based on deep learning models may be useful as a biomarker of the onset of atrial fibrillation.
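The Bland-Altman statistics quoted above (mean bias and 95% limits of agreement) take a few lines to compute. A minimal sketch with simulated placeholder volumes standing in for the study's paired measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
manual_lav = rng.normal(120, 30, 182)                 # manual LAV in mL (placeholder)
auto_lav = manual_lav + rng.normal(-5.7, 6.8, 182)    # simulated model predictions

diff = auto_lav - manual_lav
bias = diff.mean()
sd = diff.std(ddof=1)
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd  # 95% limits of agreement
print(f"mean bias {bias:.2f} mL, 95% LoA [{loa_low:.1f}, {loa_high:.1f}] mL")
```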

Deep Learning-Based Cross-Anatomy CT Synthesis Using Adapted nnResU-Net with Anatomical Feature Prioritized Loss

Javier Sequeiro González, Arthur Longuefosse, Miguel Díaz Benito, Álvaro García Martín, Fabien Baldacci

arxiv · preprint · Sep 26 2025
We present a patch-based 3D nnUNet adaptation for MR-to-CT and CBCT-to-CT image translation using the multicenter SynthRAD2025 dataset, covering head and neck (HN), thorax (TH), and abdomen (AB) regions. Our approach leverages two main network configurations: a standard UNet and a residual UNet, both adapted from nnUNet for image synthesis. We introduce the Anatomical Feature-Prioritized (AFP) loss, which compares multilayer features extracted from a compact segmentation network trained on TotalSegmentator labels, enhancing reconstruction of clinically relevant structures. Input volumes were normalized per case using z-score normalization for MRI, and clipping plus dataset-level z-score normalization for CBCT and CT. Training used 3D patches tailored to each anatomical region without additional data augmentation. Models were trained for 1000 and 1500 epochs, with AFP fine-tuning performed for 500 epochs using a combined L1+AFP objective. During inference, overlapping patches were aggregated via mean averaging with a step size of 0.3, and postprocessing included reversing the z-score normalization. Both network configurations were applied across all regions, allowing a consistent model design while capturing local adaptations through residual learning and the AFP loss. Qualitative and quantitative evaluation revealed that residual networks combined with AFP yielded sharper reconstructions and improved anatomical fidelity, particularly for bone structures in MR-to-CT and lesions in CBCT-to-CT, while L1-only networks achieved slightly better intensity-based metrics. This methodology provides a stable solution for cross-modality medical image synthesis, demonstrating the effectiveness of combining the automatic nnUNet pipeline with residual learning and anatomically guided feature losses.
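The inference procedure (per-case z-score normalization, overlapping 3D patches with a step size of 0.3 of the patch size, mean aggregation) can be sketched generically. This is an assumption-laden illustration, not the authors' pipeline: `model` and the patch size are placeholders, and the volume is assumed at least as large as the patch.

```python
import numpy as np
import torch

def sliding_window_synthesis(volume, model, patch=(96, 96, 96), step_frac=0.3):
    vol = (volume - volume.mean()) / (volume.std() + 1e-8)  # per-case z-score
    out = np.zeros_like(vol, dtype=np.float32)
    count = np.zeros_like(vol, dtype=np.float32)
    steps = [max(1, int(p * step_frac)) for p in patch]
    starts = [list(range(0, d - p + 1, s))
              for d, p, s in zip(vol.shape, patch, steps)]
    for i in range(3):  # make sure the far border is covered
        last = vol.shape[i] - patch[i]
        if starts[i][-1] != last:
            starts[i].append(last)
    for z in starts[0]:
        for y in starts[1]:
            for x in starts[2]:
                crop = vol[z:z+patch[0], y:y+patch[1], x:x+patch[2]]
                with torch.no_grad():
                    pred = model(torch.from_numpy(crop)[None, None].float())
                out[z:z+patch[0], y:y+patch[1], x:x+patch[2]] += pred[0, 0].numpy()
                count[z:z+patch[0], y:y+patch[1], x:x+patch[2]] += 1.0
    # Mean over overlapping predictions; the reverse z-score step would follow.
    return out / np.maximum(count, 1.0)
```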

Learning KAN-based Implicit Neural Representations for Deformable Image Registration

Nikita Drozdov, Marat Zinovev, Dmitry Sorokin

arxiv · preprint · Sep 26 2025
Deformable image registration (DIR) is a cornerstone of medical image analysis, enabling spatial alignment for tasks like comparative studies and multi-modal fusion. While learning-based methods (e.g., CNNs, transformers) offer fast inference, they often require large training datasets and struggle to match the precision of classical iterative approaches on some organ types and imaging modalities. Implicit neural representations (INRs) have emerged as a promising alternative, parameterizing deformations as continuous mappings from coordinates to displacement vectors. However, this comes at the cost of requiring instance-specific optimization, making computational efficiency and seed-dependent learning stability critical factors for these methods. In this work, we propose KAN-IDIR and RandKAN-IDIR, the first integration of Kolmogorov-Arnold Networks (KANs) into INR-based deformable image registration. Our proposed randomized basis sampling strategy reduces the required number of basis functions in KAN while maintaining registration quality, thereby significantly lowering computational costs. We evaluated our approach on three diverse datasets (lung CT, brain MRI, cardiac MRI) and compared it with competing instance-specific learning-based approaches, dataset-trained deep learning models, and classical registration approaches. KAN-IDIR and RandKAN-IDIR achieved the highest accuracy among INR-based methods across all evaluated modalities and anatomies, with minimal computational overhead and superior learning stability across multiple random seeds. Additionally, we found that our RandKAN-IDIR model with randomized basis sampling slightly outperforms the variant with learnable basis function indices, while eliminating its additional training-time complexity.
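To make the instance-specific INR idea concrete, here is a minimal PyTorch sketch of a coordinate-to-displacement network optimized per image pair. A plain MLP stands in for the paper's KAN layers (which are not reproduced here); the architecture, loss, and hyperparameters are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DisplacementINR(nn.Module):
    """Maps normalized coordinates in [-1, 1]^3 to 3D displacement vectors."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.GELU(),
            nn.Linear(hidden, hidden), nn.GELU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, coords):          # coords: (N, 3)
        return self.net(coords)

def register(fixed, moving, steps=500, lr=1e-4):
    """fixed/moving: (1, 1, D, H, W) tensors. Returns the optimized INR."""
    inr = DisplacementINR()
    opt = torch.optim.Adam(inr.parameters(), lr=lr)
    d, h, w = fixed.shape[2:]
    grid = torch.stack(torch.meshgrid(
        torch.linspace(-1, 1, d), torch.linspace(-1, 1, h),
        torch.linspace(-1, 1, w), indexing="ij"), dim=-1)   # (D, H, W, 3)
    for _ in range(steps):
        disp = inr(grid.reshape(-1, 3)).reshape(1, d, h, w, 3)
        # grid_sample expects (x, y, z) ordering in the last dim, hence the flip.
        warped = F.grid_sample(moving, torch.flip(grid[None] + disp, dims=[-1]),
                               align_corners=True)
        loss = F.mse_loss(warped, fixed)  # similarity term; add regularizers as needed
        opt.zero_grad()
        loss.backward()
        opt.step()
    return inr
```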

RAU: Reference-based Anatomical Understanding with Vision Language Models

Yiwei Li, Yikang Liu, Jiaqi Guo, Lin Zhao, Zheyuan Zhang, Xiao Chen, Boris Mailhe, Ankush Mukherjee, Terrence Chen, Shanhui Sun

arxiv · preprint · Sep 26 2025
Anatomical understanding through deep learning is critical for automatic report generation, intra-operative navigation, and organ localization in medical imaging; however, its progress is constrained by the scarcity of expert-labeled data. A promising remedy is to leverage an annotated reference image to guide the interpretation of an unlabeled target. Although recent vision-language models (VLMs) exhibit non-trivial visual reasoning, their reference-based understanding and fine-grained localization remain limited. We introduce RAU, a framework for reference-based anatomical understanding with VLMs. We first show that a VLM learns to identify anatomical regions through relative spatial reasoning between reference and target images, trained on a moderately sized dataset. We validate this capability through visual question answering (VQA) and bounding box prediction. Next, we demonstrate that the VLM-derived spatial cues can be seamlessly integrated with the fine-grained segmentation capability of SAM2, enabling localization and pixel-level segmentation of small anatomical regions, such as vessel segments. Across two in-distribution and two out-of-distribution datasets, RAU consistently outperforms a SAM2 fine-tuning baseline using the same memory setup, yielding more accurate segmentations and more reliable localization. More importantly, its strong generalization ability makes it scalable to out-of-distribution datasets, a property crucial for medical image applications. To the best of our knowledge, RAU is the first to explore the capability of VLMs for reference-based identification, localization, and segmentation of anatomical structures in medical images. Its promising performance highlights the potential of VLM-driven approaches for anatomical understanding in automated clinical workflows.
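The handoff the abstract describes, a VLM-predicted region fed to SAM2 as a prompt, can be sketched with SAM2's box-prompt interface. This is a heavily hedged illustration: the checkpoint name is an assumption, and the hard-coded box stands in for the VLM's reference-based localization output (hypothetical here).

```python
import numpy as np
from sam2.sam2_image_predictor import SAM2ImagePredictor

predictor = SAM2ImagePredictor.from_pretrained("facebook/sam2-hiera-large")

image = np.zeros((512, 512, 3), dtype=np.uint8)  # placeholder target image (H, W, 3)
# Bounding box (x0, y0, x1, y1) that the VLM would produce from the annotated
# reference image; hard-coded here as a hypothetical stand-in.
box = np.array([200, 180, 260, 240])

predictor.set_image(image)
masks, scores, _ = predictor.predict(box=box, multimask_output=False)
mask = masks[0]  # pixel-level segmentation of the referenced anatomical structure
```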

MedIENet: medical image enhancement network based on conditional latent diffusion model.

Yuan W, Feng Y, Wen T, Luo G, Liang J, Sun Q, Liang S

pubmed · papers · Sep 26 2025
Deep learning necessitates a substantial amount of data, yet obtaining sufficient medical images is difficult due to concerns about patient privacy and high collection costs. To address this issue, we propose a conditional latent diffusion model-based medical image enhancement network, referred to as the Medical Image Enhancement Network (MedIENet). To meet the rigorous standards required for image generation in the medical imaging field, a multi-attention module is incorporated in the encoder of the denoising U-Net backbone. Additionally, Rotary Position Embedding (RoPE) is integrated into the self-attention module to effectively capture positional information, while cross-attention is utilised to integrate class information into the diffusion process. MedIENet is evaluated on three datasets: Chest CT-Scan images, Chest X-Ray Images (Pneumonia), and a tongue-image dataset. Compared to existing methods, MedIENet demonstrates superior performance in both fidelity and diversity of the generated images. Experimental results indicate that for downstream classification tasks using ResNet50, the Area Under the Receiver Operating Characteristic curve (AUROC) achieved with real data alone is 0.76 for the Chest CT-Scan images dataset, 0.87 for the Chest X-Ray Images (Pneumonia) dataset, and 0.78 for the tongue dataset. When using mixed data consisting of real and generated data, the AUROC improves to 0.82, 0.94, and 0.82, respectively, reflecting increases of approximately 6, 7, and 4 percentage points. These findings indicate that the images generated by MedIENet can enhance the performance of downstream classification tasks, providing an effective solution to the scarcity of medical image training data.
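RoPE, the positional mechanism the abstract adds to self-attention, rotates query/key feature pairs by position-dependent angles. A minimal sketch of the rotate-half variant; shapes, the base of 10000, and the application point are standard-practice assumptions, not details from the paper:

```python
import torch

def rope(x: torch.Tensor) -> torch.Tensor:
    """x: (batch, seq, dim) with even dim. Applies rotary position embedding."""
    b, n, d = x.shape
    half = d // 2
    freqs = 1.0 / (10000 ** (torch.arange(half, dtype=torch.float32) / half))
    angles = torch.arange(n, dtype=torch.float32)[:, None] * freqs[None, :]  # (n, half)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., :half], x[..., half:]
    # Rotate each (x1, x2) feature pair by its position-dependent angle.
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

# Typically applied to queries and keys before the attention dot product:
# q, k = rope(q), rope(k)
```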

COVID-19 Pneumonia Diagnosis Using Medical Images: Deep Learning-Based Transfer Learning Approach.

Dharmik A

pubmed · papers · Sep 26 2025
SARS-CoV-2, the causative agent of COVID-19, remains a global health concern due to its high transmissibility and evolving variants. Although vaccination efforts and therapeutic advancements have mitigated disease severity, emerging mutations continue to challenge diagnostics and containment strategies. As of mid-February 2025, global test positivity has risen to 11%, marking the highest level in over 6 months, despite widespread immunization efforts. Newer variants demonstrate enhanced host cell binding, increasing both infectivity and diagnostic complexity. This study aimed to evaluate the effectiveness of deep transfer learning in delivering a rapid, accurate, and mutation-resilient COVID-19 diagnosis from medical imaging, with a focus on scalability and accessibility. An automated detection system was developed using state-of-the-art convolutional neural networks, including VGG16 (Visual Geometry Group network-16 layers), ResNet50 (residual network-50 layers), ConvNeXtTiny (convolutional next-tiny), MobileNet (mobile network), NASNetMobile (neural architecture search network-mobile version), and DenseNet121 (densely connected convolutional network-121 layers), to detect COVID-19 from chest X-ray and computed tomography (CT) images. Among all the models evaluated, DenseNet121 emerged as the best-performing architecture for COVID-19 diagnosis using X-ray and CT images. It achieved an impressive accuracy of 98%, with a precision of 96.9%, a recall of 98.9%, an F1-score of 97.9%, and an area under the curve score of 99.8%, indicating a high degree of consistency and reliability in detecting both positive and negative cases. The confusion matrix showed minimal false positives and false negatives, underscoring the model's robustness in real-world diagnostic scenarios. Given its performance, DenseNet121 is a strong candidate for deployment in clinical settings and serves as a benchmark for future improvements in artificial intelligence-assisted diagnostic tools. The study results underscore the potential of artificial intelligence-powered diagnostics in supporting early detection and global pandemic response. With careful optimization, deep learning models can address critical gaps in testing, particularly in settings constrained by limited resources or emerging variants.
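The transfer-learning setup the study describes (an ImageNet-pretrained backbone with a new classification head) is straightforward to sketch in Keras. The head layers, input size, and hyperparameters below are assumptions for illustration, not the study's exact configuration:

```python
import tensorflow as tf

# ImageNet-pretrained DenseNet121 backbone, frozen for initial training.
base = tf.keras.applications.DenseNet121(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # COVID-19 vs. non-COVID
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auroc"),
                       tf.keras.metrics.Precision(),
                       tf.keras.metrics.Recall()])
# model.fit(train_ds, validation_data=val_ds, epochs=10)
# Optionally unfreeze the top of the backbone afterwards for fine-tuning.
```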