
A lung structure and function information-guided residual diffusion model for predicting idiopathic pulmonary fibrosis progression.

Jiang C, Xing X, Nan Y, Fang Y, Zhang S, Walsh S, Yang G, Shen D

PubMed · Jul 1 2025
Idiopathic Pulmonary Fibrosis (IPF) is a progressive lung disease that continuously scars and thickens lung tissue, leading to respiratory difficulties. Timely assessment of IPF progression is essential for developing treatment plans and improving patient survival rates. However, current clinical standards require multiple (usually two) CT scans at certain intervals to assess disease progression, which presents a dilemma: progression is identified only after it has already occurred. A feasible solution is to generate the follow-up CT image from the patient's initial CT image, enabling early prediction of IPF progression. To this end, we propose a lung structure and function information-guided residual diffusion model. Its key components are: (1) a 2.5D generation strategy that reduces the computational cost of generating 3D images with the diffusion model; (2) structural attention that mitigates the negative impact of spatial misalignment between the two CT images on generation performance; (3) residual diffusion that accelerates model training and inference while focusing on the differences between the two CT images (i.e., the lesion areas); and (4) a CLIP-based text extraction module that extracts lung function test information and uses it to guide the generation. Extensive experiments demonstrate that our method effectively predicts IPF progression and achieves superior generation performance compared to state-of-the-art methods.
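
To make the residual-diffusion idea concrete, here is a minimal, hedged PyTorch sketch: rather than denoising the full follow-up CT, the forward process corrupts only the residual between the two scans, so the model concentrates on the changed (lesion) regions. The network `eps_model`, the 2.5D slab shapes, and the channel-concatenation conditioning are illustrative assumptions, not the authors' implementation.

```python
import torch

def q_sample(residual, t, noise, alphas_cumprod):
    """Standard DDPM forward step, applied here to the residual at timestep t."""
    a = alphas_cumprod[t].sqrt().view(-1, 1, 1, 1)
    s = (1 - alphas_cumprod[t]).sqrt().view(-1, 1, 1, 1)
    return a * residual + s * noise

def training_step(eps_model, baseline, follow_up, alphas_cumprod):
    """One training step: learn to denoise the follow-up minus baseline residual."""
    residual = follow_up - baseline                        # lesion-focused target
    t = torch.randint(0, len(alphas_cumprod), (baseline.shape[0],))
    noise = torch.randn_like(residual)
    noisy = q_sample(residual, t, noise, alphas_cumprod)
    # Condition the denoiser on the baseline scan (channel concatenation is one
    # common choice; the paper's structural attention is not modeled here).
    pred = eps_model(torch.cat([noisy, baseline], dim=1), t)
    return torch.nn.functional.mse_loss(pred, noise)
```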

Deep learning-based auto-contouring of organs/structures-at-risk for pediatric upper abdominal radiotherapy.

Ding M, Maspero M, Littooij AS, van Grotel M, Fajardo RD, van Noesel MM, van den Heuvel-Eibrink MM, Janssens GO

PubMed · Jul 1 2025
This study aimed to develop a computed tomography (CT)-based multi-organ segmentation model for delineating organs-at-risk (OARs) in pediatric upper abdominal tumors and to evaluate its robustness across multiple datasets. In-house postoperative CTs from pediatric patients with renal tumors and neuroblastoma (n = 189) and a public dataset (n = 189) with CTs covering thoracoabdominal regions were used. Seventeen OARs were delineated: nine by clinicians (Type 1) and eight using TotalSegmentator (Type 2). Auto-segmentation models were trained on the in-house data (Model-PMC-UMCU) and on a combined dataset including the public data (Model-Combined). Performance was assessed with the Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95), and mean surface distance (MSD). Two clinicians rated clinical acceptability on a 5-point Likert scale across 15 patient contours. Model robustness was evaluated against sex, age, intravenous contrast, and tumor type. Model-PMC-UMCU achieved mean DSC values above 0.95 for five of nine OARs, while the spleen and heart ranged between 0.90 and 0.95. The stomach-bowel and pancreas exhibited DSC values below 0.90. Model-Combined demonstrated improved robustness across both datasets. Clinical evaluation revealed good usability, with both clinicians rating six of nine Type 1 OARs above four and six of eight Type 2 OARs above three. Significant performance differences were found only across age groups in both datasets, specifically in the left lung and pancreas, with the 0-2 age group showing the lowest performance. A multi-organ segmentation model was developed, showcasing enhanced robustness when trained on combined datasets. This model is suitable for various OARs and can be applied to multiple datasets in clinical settings.
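
For readers who want to reproduce the three reported metrics, a brief sketch using MONAI's metric classes is given below; the authors do not state their exact tooling, so treating MONAI as the implementation is an assumption, and distances are in voxel units unless a spacing argument is supplied.

```python
import torch
from monai.metrics import DiceMetric, HausdorffDistanceMetric, SurfaceDistanceMetric

dsc = DiceMetric(include_background=False, reduction="mean_batch")
hd95 = HausdorffDistanceMetric(include_background=False, percentile=95)
msd = SurfaceDistanceMetric(include_background=False, symmetric=True)

def evaluate(pred_onehot: torch.Tensor, gt_onehot: torch.Tensor) -> dict:
    """pred_onehot, gt_onehot: one-hot tensors of shape (batch, organs, D, H, W)."""
    return {
        "DSC": dsc(pred_onehot, gt_onehot).mean().item(),
        "HD95": hd95(pred_onehot, gt_onehot).mean().item(),
        "MSD": msd(pred_onehot, gt_onehot).mean().item(),
    }
```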

Automated vertebrae identification and segmentation with structural uncertainty analysis in longitudinal CT scans of patients with multiple myeloma.

Madzia-Madzou DK, Jak M, de Keizer B, Verlaan JJ, Minnema MC, Gilhuijs K

PubMed · Jul 1 2025
To optimize deep learning-based vertebrae segmentation in longitudinal CT scans of multiple myeloma patients using structural uncertainty analysis. Retrospective CT scans from 474 multiple myeloma patients were divided into a training cohort (179 patients, 349 scans, 2005-2011) and a test cohort (295 patients, 671 scans, 2012-2020). An enhanced segmentation pipeline was developed on the training cohort. It integrated vertebrae segmentation using an open-source deep learning method (Payer's) with a post-hoc structural uncertainty analysis that identified inconsistencies, automatically correcting them or flagging uncertain regions for human review. Segmentation quality was assessed through topology-based vertebral shape analysis. Metrics included the identification rate, longitudinal vertebral match rate, success rate, and series success rate, and were evaluated across age/sex subgroups. Statistical analysis used McNemar and Wilcoxon signed-rank tests, with p < 0.05 indicating significant improvement. Payer's method achieved an identification rate of 95.8% and a success rate of 86.7%. The proposed pipeline automatically improved these metrics to 98.8% and 96.0%, respectively (p < 0.001). Additionally, 3.6% of scans were marked for human inspection, increasing the success rate from 96.0% to 98.8% (p < 0.001). The vertebral match rate increased from 97.0% to 99.7% (p < 0.001), and the series success rate from 80.0% to 95.4% (p < 0.001). Subgroup analysis showed more consistent performance across age and sex groups. The proposed pipeline significantly outperforms Payer's method, enhancing segmentation accuracy and reducing longitudinal matching errors while minimizing evaluation workload. Its uncertainty analysis ensures robust performance, making it a valuable tool for longitudinal studies in multiple myeloma.
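
The McNemar test mentioned above compares paired binary outcomes, here per-scan success of the baseline method versus the enhanced pipeline on the same scans. A small illustrative sketch (not the authors' code) using statsmodels:

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

def compare_success(base_ok: np.ndarray, pipeline_ok: np.ndarray) -> float:
    """base_ok / pipeline_ok: boolean success flags, one entry per scan."""
    table = np.array([
        [np.sum(base_ok & pipeline_ok), np.sum(base_ok & ~pipeline_ok)],
        [np.sum(~base_ok & pipeline_ok), np.sum(~base_ok & ~pipeline_ok)],
    ])
    # Only the discordant off-diagonal counts drive the test statistic.
    return mcnemar(table, exact=True).pvalue
```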

Tumor grade-titude: XGBoost radiomics paves the way for RCC classification.

Ellmann S, von Rohr F, Komina S, Bayerl N, Amann K, Polifka I, Hartmann A, Sikic D, Wullich B, Uder M, Bäuerle T

PubMed · Jul 1 2025
This study aimed to develop and evaluate a non-invasive XGBoost-based machine learning model using radiomic features extracted from pre-treatment CT images to differentiate grade 4 renal cell carcinoma (RCC) from lower-grade tumours. A total of 102 RCC patients who underwent contrast-enhanced CT scans were included in the analysis. Radiomic features were extracted, and a two-step feature selection methodology was applied to identify the most relevant features for classification. The XGBoost model demonstrated high performance in both training (AUC = 0.87) and testing (AUC = 0.92) sets, with no significant difference between the two (p = 0.521). The model also exhibited high sensitivity, specificity, positive predictive value, and negative predictive value. The selected radiomic features captured both the distribution of intensity values and spatial relationships, which may provide valuable insights for personalized treatment decision-making. Our findings suggest that the XGBoost model has the potential to be integrated into clinical workflows to facilitate personalized adjuvant immunotherapy decision-making, ultimately improving patient outcomes. Further research is needed to validate the model in larger, multicentre cohorts and explore the potential of combining radiomic features with other clinical and molecular data.
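
A minimal sketch of the modelling step described above, assuming the radiomic features have already been extracted into a feature matrix; the placeholder data and hyperparameters are illustrative, not the study's settings.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Placeholder stand-ins for the study's 102 patients and selected features.
X, y = np.random.rand(102, 50), np.random.randint(0, 2, 102)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.05,
                      eval_metric="logloss")
model.fit(X_tr, y_tr)
print("test AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```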

Automated Finite Element Modeling of the Lumbar Spine: A Biomechanical and Clinical Approach to Spinal Load Distribution and Stress Analysis.

Ahmadi M, Zhang X, Lin M, Tang Y, Engeberg ED, Hashemi J, Vrionis FD

PubMed · Jun 30 2025
Biomechanical analysis of the lumbar spine is vital for understanding load distribution and stress patterns under physiological conditions. Traditional finite element analysis (FEA) relies on time-consuming manual segmentation and meshing, leading to long runtimes and inconsistent accuracy; automating this process improves efficiency and reproducibility. This study introduces an automated FEA methodology for lumbar spine biomechanics, integrating deep learning-based segmentation with computational modeling to streamline workflows from imaging to simulation. Medical imaging data were segmented using deep learning frameworks for vertebrae and intervertebral discs. Segmented structures were transformed into optimized surface meshes via Laplacian smoothing and decimation. Using the Gibbon library and FEBio, the FEA models incorporated cortical and cancellous bone, nucleus, annulus, cartilage, and ligaments. Ligament attachments used spherical coordinate-based segmentation; vertebral endplates were extracted via principal component analysis (PCA) for cartilage modeling. Simulations assessed stress, strain, and displacement under axial rotation, extension, flexion, and lateral bending. The automated pipeline cut model preparation time by 97.9%, from over 24 hours to 30 minutes and 49.48 seconds. Biomechanical responses aligned with experimental and traditional FEA data, showing high posterior-element loads in extension and flexion, consistent ligament forces, and disc deformations. The approach enhanced reproducibility with minimal manual input. This automated methodology provides an efficient, accurate framework for lumbar spine biomechanics, eliminating manual segmentation challenges. It supports clinical diagnostics, implant design, and rehabilitation, advancing computational and patient-specific spinal studies. Rapid simulations enhance implant optimization and early detection of degenerative spinal issues, improving personalized treatment and research.
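
As one illustration of the PCA step named above, endplate points can be isolated by projecting the vertebral surface onto a principal axis and keeping the extreme slices; the axis choice and quantile threshold below are assumptions for illustration, not the authors' parameters.

```python
import numpy as np

def endplate_points(vertices: np.ndarray, frac: float = 0.15):
    """vertices: (N, 3) vertebral-body surface points. Returns (superior, inferior)."""
    centered = vertices - vertices.mean(axis=0)
    # SVD of the centered cloud gives the principal axes; here we assume the
    # smallest-variance axis approximates the cranio-caudal direction of the
    # squat, disc-like vertebral body.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    proj = centered @ vt[-1]
    lo, hi = np.quantile(proj, [frac, 1 - frac])
    return vertices[proj >= hi], vertices[proj <= lo]
```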

Using a large language model for post-deployment monitoring of FDA approved AI: pulmonary embolism detection use case.

Sorin V, Korfiatis P, Bratt AK, Leiner T, Wald C, Butler C, Cook CJ, Kline TL, Collins JD

PubMed · Jun 30 2025
Artificial intelligence (AI) is increasingly integrated into clinical workflows, but the performance of AI in production can diverge from initial evaluations, and post-deployment monitoring (PDM) remains a challenging component of ongoing quality assurance once AI is deployed in clinical production. We aimed to develop and evaluate a PDM framework that uses large language models (LLMs) for free-text classification of radiology reports, combined with human oversight, and we demonstrate its application to monitoring a commercially vended pulmonary embolism (PE) detection AI (CVPED). We retrospectively analyzed 11,999 CT pulmonary angiography (CTPA) studies performed between 04/30/2023 and 06/17/2024. Ground truth was determined by combining LLM-based radiology-report classification with the CVPED outputs, with human review of discrepancies. We simulated a daily monitoring framework to track discrepancies between CVPED and the LLM. Drift was defined as the discrepancy rate exceeding a fixed 95% confidence interval (CI) for seven consecutive days; the CI and the optimal retrospective assessment period were determined from a stable dataset with consistent performance. We simulated drift by systematically altering CVPED or LLM sensitivity and specificity, and we modeled an approach to detect data shifts. We incorporated a human-in-the-loop selective-alerting framework for continuous prospective evaluation and to investigate the potential for incremental detection. Of 11,999 CTPAs, 1,285 (10.7%) had PE. Overall, 373 (3.1%) had discrepant classifications between CVPED and the LLM. Among 111 CVPED-positive, LLM-negative cases, 29 would have triggered an alert because the radiologist did not interact with CVPED; of those, 24 were CVPED false positives, one was an LLM false negative, and the framework ultimately identified four true alerts for incremental PE cases. The optimal retrospective assessment period for drift detection was two months. A 2-3% decline in model specificity caused a 2-3-fold increase in discrepancies, while a 10% drop in sensitivity was required to produce a similar effect. For example, a 2.5% drop in LLM specificity led to a 1.7-fold increase in CVPED-negative, LLM-positive discrepancies, which would have taken 22 days to detect using the proposed framework. A PDM framework combining LLM-based free-text classification with a human-in-the-loop alerting system can continuously track an image-based AI's performance, alert for performance drift, and provide incremental clinical value.
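
A minimal sketch of the drift rule described above: flag drift when the daily discrepancy rate stays above the upper bound of a 95% CI, estimated from a stable reference period, for seven consecutive days. The normal-approximation CI here is an assumption; the paper does not specify its construction.

```python
import numpy as np

def drift_days(daily_rates: np.ndarray, ref_rates: np.ndarray,
               run_length: int = 7) -> list:
    """Return indices of days on which a run of `run_length` days above the CI completes."""
    upper = ref_rates.mean() + 1.96 * ref_rates.std(ddof=1)
    flagged, run = [], 0
    for day, rate in enumerate(daily_rates):
        run = run + 1 if rate > upper else 0
        if run >= run_length:
            flagged.append(day)
    return flagged
```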

Statistical Toolkit for Analysis of Radiotherapy DICOM Data.

Kinz M, Molodowitch C, Killoran J, Hesser JW, Zygmanski P

PubMed · Jun 30 2025
Background: Radiotherapy (RT) has become increasingly sophisticated, necessitating advanced tools for analyzing extensive treatment data in hospital databases. Such analyses can enhance future treatments, particularly through Knowledge-Based Planning, and aid in developing new treatment modalities like convergent kV RT. Purpose: The objective is to develop automated software tools for large-scale retrospective analysis of over 10,000 MeV x-ray radiotherapy plans. This aims to identify trends and references in plans delivered at our institution across all treatment sites, focusing on: (A) Planning-Target-Volume, Clinical-Target-Volume, Gross-Tumor-Volume, and Organ-At-Risk (PTV/CTV/GTV/OAR) topology, morphology, and dosimetry, and (B) RT plan efficiency and complexity. Methods: The software tools are coded in Python. Topological metrics are evaluated using principal component analysis, including center of mass, volume, size, and depth. Morphology is quantified using Hounsfield units, while dose distribution is characterized by conformity and homogeneity indexes. The total dose within the target versus the body is defined as the Dose Balance Index. Results: The primary outcome of this study is the toolkit and an analysis of our database. For example, the mean minimum and maximum PTV depths are about 2.5 ± 2.3 cm and 9 ± 3 cm, respectively. Conclusions: This study provides a statistical basis for RT plans and the necessary tools to generate them. It aids in selecting plans for knowledge-based models and deep-learning networks. The site-specific volume and depth results help identify the limitations and opportunities of current and future treatment modalities, in our case convergent kV RT. The compiled statistics and tools are versatile for training, quality assurance, comparing plans from different periods or institutions, and establishing guidelines. The toolkit is publicly available at https://github.com/m-kinz/STAR.
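
For reference, the two dose indexes named in the Methods are commonly defined as below (the Paddick conformity index and the ICRU-83-style homogeneity index); the toolkit's exact definitions may differ, so consult the linked repository.

```python
import numpy as np

def paddick_ci(dose: np.ndarray, ptv_mask: np.ndarray, rx: float) -> float:
    """Paddick CI = TV_PIV^2 / (TV * PIV) on a voxelized dose grid and PTV mask."""
    piv = dose >= rx                     # prescription isodose volume
    tv_piv = np.sum(piv & ptv_mask)      # target volume covered by the Rx dose
    return tv_piv**2 / (ptv_mask.sum() * piv.sum())

def homogeneity_index(dose: np.ndarray, ptv_mask: np.ndarray) -> float:
    """HI = (D2% - D98%) / D50% over the PTV voxel doses."""
    d = dose[ptv_mask]
    d2, d50, d98 = np.percentile(d, [98, 50, 2])
    return (d2 - d98) / d50
```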

Enhanced abdominal multi-organ segmentation with 3D UNet and UNet++ deep neural networks utilizing the MONAI framework.

Tejashwini PS, Thriveni J, Venugopal KR

PubMed · Jun 30 2025
Accurate segmentation of abdominal organs is a primary requirement for medical analysis and treatment planning. In this study, we propose an approach based on 3D UNet and UNet++ architectures implemented in the MONAI framework to address challenges arising from anatomical variability, complex organ shapes, and noise in CT/MRI scans. The models analyze volumetric data in three dimensions, make use of skip and dense connections, and optimize parameters using Secretary Bird Optimization (SBO), which together improve feature extraction and boundary delineation across multi-organ tissue sets. The models' performance was evaluated on multiple datasets, from Pancreas-CT to Liver-CT and BTCV. On the Pancreas-CT dataset, 3D UNet achieved a DSC of 94.54%, while 3D UNet++ achieved a slightly higher DSC of 95.62%. Both models performed well on the Liver-CT dataset, with 3D UNet achieving a DSC of 95.67% and 3D UNet++ a DSC of 97.36%. On the BTCV dataset, both models had DSC values ranging from 93.42% to 95.31%. These results demonstrate the robustness and efficiency of the presented models for clinical applications and medical research in multi-organ segmentation. This study validates the proposed architectures, underscoring their accuracy in medical imaging and creating avenues for scalable solutions to complex abdominal-imaging tasks.
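
A minimal MONAI sketch of the 3D UNet backbone described above is given below; the channel sizes, the loss, and the 14-class BTCV-style label count are illustrative, and the SBO hyperparameter search is out of scope here.

```python
import torch
from monai.networks.nets import UNet
from monai.losses import DiceCELoss

net = UNet(spatial_dims=3, in_channels=1, out_channels=14,   # 13 organs + background
           channels=(16, 32, 64, 128, 256), strides=(2, 2, 2, 2),
           num_res_units=2)
loss_fn = DiceCELoss(to_onehot_y=True, softmax=True)

x = torch.randn(1, 1, 96, 96, 96)                 # one CT patch
y = torch.randint(0, 14, (1, 1, 96, 96, 96))      # voxel-wise organ labels
loss = loss_fn(net(x), y)                         # combined Dice + cross-entropy
```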

In-silico CT simulations of deep learning generated heterogeneous phantoms.

Salinas CS, Magudia K, Sangal A, Ren L, Segars PW

PubMed · Jun 30 2025
Current virtual imaging phantoms primarily emphasize geometric accuracy of anatomical structures. However, to enhance realism, it is also important to incorporate intra-organ detail. Because biological tissues are heterogeneous in composition, virtual phantoms should reflect this by including realistic intra-organ texture and material variation. We propose training two 3D Double U-Net conditional generative adversarial networks (3D DUC-GAN) to generate sixteen unique textures that encompass organs found within the torso. The model was trained on 378 CT image-segmentation pairs taken from a publicly available dataset, with 18 additional pairs reserved for testing. Textured phantoms were generated and imaged using DukeSim, a virtual CT simulation platform. Results showed that the deep learning model was able to synthesize realistic heterogeneous phantoms from a set of homogeneous phantoms. These phantoms were compared with original CT scans and had a mean absolute difference of 46.15 ± 1.06 HU. The structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR) were 0.86 ± 0.004 and 28.62 ± 0.14, respectively. The maximum mean discrepancy between the generated and actual distributions was 0.0016. These metrics marked improvements of 27%, 5.9%, 6.2%, and 28%, respectively, compared to current homogeneous texture methods. The generated phantoms that underwent a virtual CT scan had a closer visual resemblance to the true CT scan compared to the previous method. The resulting heterogeneous phantoms offer a significant step toward more realistic in silico trials, enabling enhanced simulation of imaging procedures with greater fidelity to true anatomical variation.
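
A hedged sketch of the three image-fidelity metrics reported above (mean absolute difference in HU, SSIM, and PSNR), computed here with scikit-image; the authors' exact settings, such as the data range, are assumptions.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def phantom_fidelity(generated_hu: np.ndarray, reference_hu: np.ndarray) -> dict:
    """Compare a generated phantom volume against its reference CT, both in HU."""
    rng = float(reference_hu.max() - reference_hu.min())
    return {
        "MAD_HU": float(np.mean(np.abs(generated_hu - reference_hu))),
        "SSIM": structural_similarity(reference_hu, generated_hu, data_range=rng),
        "PSNR_dB": peak_signal_noise_ratio(reference_hu, generated_hu, data_range=rng),
    }
```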

Genetically Optimized Modular Neural Networks for Precision Lung Cancer Diagnosis

Agrawal, V. L., Agrawal, T.

medRxiv preprint · Jun 30 2025
Lung cancer remains one of the leading causes of cancer mortality, and while low-dose CT screening reduces mortality, radiological detection is challenging due to the increasing shortage of radiologists. Artificial intelligence can significantly improve the procedure and decrease the overall workload of the healthcare department. Building upon existing work applying genetic algorithms, this study aims to create a novel algorithm for lung cancer diagnosis with utmost precision. We included a total of 156 CT scans of patients divided into two databases, followed by feature extraction using image statistics, histograms, and 2D transforms (FFT, DCT, WHT). Optimal feature vectors were formed and organized into Excel-based knowledge bases. Genetically trained classifiers (MLP, GFF-NN, MNN, and SVM) were then optimized by experimenting with different combinations of parameters, activation functions, and data-partitioning percentages. Evaluation metrics included classification accuracy, mean squared error (MSE), area under the receiver operating characteristic (ROC) curve, and computational efficiency. Computer simulations demonstrated that the MNN (Topology II) classifier, specifically when trained with FFT coefficients and a momentum learning rule, consistently achieved 100% average classification accuracy on the cross-validation dataset for both Database I and Database II, outperforming MLP-based classifiers. This genetically optimized and trained MNN (Topology II) classifier is therefore recommended as the optimal solution for lung cancer diagnosis from CT scan images.
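
As an illustration of the FFT-coefficient feature extraction named above, a short sketch is given below; the number of retained coefficients and the preprocessing are assumptions, not the authors' settings.

```python
import numpy as np

def fft_features(ct_slice: np.ndarray, k: int = 16) -> np.ndarray:
    """Return magnitudes of the k x k lowest-frequency 2D FFT coefficients."""
    spectrum = np.fft.fftshift(np.fft.fft2(ct_slice))  # center the low frequencies
    cy, cx = spectrum.shape[0] // 2, spectrum.shape[1] // 2
    block = spectrum[cy - k // 2:cy + k // 2, cx - k // 2:cx + k // 2]
    return np.abs(block).ravel()                       # feature vector of length k*k
```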