
PixelPrint 4D: A 3D Printing Method of Fabricating Patient-Specific Deformable CT Phantoms for Respiratory Motion Applications.

Im JY, Micah N, Perkins AE, Mei K, Geagan M, Roshkovan L, Noël PB

PubMed, Oct 1 2025
Respiratory motion poses a significant challenge for clinical workflows in diagnostic imaging and radiation therapy. Many technologies such as motion artifact reduction and tumor tracking have been developed to compensate for its effect. To assess these technologies, respiratory motion phantoms (RMPs) are required as preclinical testing environments, for instance, in computed tomography (CT). However, current CT RMPs are highly simplified and do not exhibit realistic tissue structures or deformation patterns. With the rise of more complex motion compensation technologies such as deep learning-based algorithms, there is a need for more realistic RMPs. This work introduces PixelPrint 4D, a 3D printing method for fabricating lifelike, patient-specific deformable lung phantoms for CT imaging. A 4DCT dataset of a lung cancer patient was acquired. The volumetric image data of the right lung at end inhalation was converted into 3D printer instructions using the previously developed PixelPrint software. A flexible 3D printing material was used to replicate variable densities voxel-by-voxel within the phantom. The accuracy of the phantom was assessed by acquiring CT scans of the phantom at rest, and under various levels of compression. These phantom images were then compiled into a pseudo-4DCT dataset and compared to the reference patient 4DCT images. Metrics used to assess the phantom structural accuracy included mean attenuation errors, 2-sample 2-sided Kolmogorov-Smirnov (KS) test on histograms, and structural similarity index (SSIM). The phantom deformation properties were assessed by calculating displacement errors of the tumor and throughout the full lung volume, attenuation change errors, and Jacobian errors, as well as the relationship between Jacobian and attenuation changes. The phantom closely replicated patient lung structures, textures, and attenuation profiles.
SSIM was measured as 0.93 between the patient and phantom lung, suggesting a high level of structural accuracy. Furthermore, it exhibited realistic nonrigid deformation patterns. The mean tumor motion errors in the phantom were ≤0.7 ± 0.6 mm in each orthogonal direction. Finally, the relationship between attenuation and local volume changes in the phantom had a strong correlation with that of the patient, with analysis of covariance yielding P = 0.83 and f = 0.04, suggesting no significant difference between the phantom and patient. PixelPrint 4D facilitates the creation of highly realistic RMPs, exceeding the capabilities of existing models to provide enhanced testing environments for a wide range of emerging CT technologies.
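The SSIM figure reported above compares patient and phantom image arrays. As a hedged illustration of how such a comparison works (the study likely used a windowed SSIM implementation; this sketch uses the simpler single-window form of the standard SSIM formula on synthetic arrays standing in for CT slices):

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    # Single-window SSIM: luminance, contrast, and structure terms combined,
    # computed over the whole image rather than a sliding Gaussian window.
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
patient = rng.random((64, 64))                               # stand-in "patient" slice
phantom = patient + 0.05 * rng.standard_normal((64, 64))     # "phantom" approximating it

print(round(global_ssim(patient, patient), 3))  # identical images give SSIM = 1.0
print(round(global_ssim(patient, phantom), 3))  # close-but-noisy copy gives SSIM < 1.0
```

Production pipelines typically use a windowed implementation (e.g. `skimage.metrics.structural_similarity`), which averages local SSIM values; the global form above is only a minimal sketch of the metric's definition.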

Current and novel approaches for critical care management of aneurysmal subarachnoid hemorrhage.

Zoumprouli A, Carden R, Bilotta F

PubMed, Oct 1 2025
This review highlights recent advancements and evidence-based approaches in the critical care management of aneurysmal subarachnoid hemorrhage (aSAH), focusing on developments from the past 18 months. It addresses key challenges [rebleeding prevention, delayed cerebral ischemia (DCI), hydrocephalus, transfusion strategies, and temperature management], emphasizing multidisciplinary care and personalized treatment. Recent studies underscore the importance of systolic blood pressure control (<160 mmHg) to reduce rebleeding risk before aneurysm securing. Novel prognostic tools, including the modified 5-item frailty index and quantitative imaging software, show promise in improving outcome prediction. Prophylactic lumbar drainage may reduce DCI and improve neurological outcomes, while milrinone and computed tomography perfusion-guided therapies are being explored for vasospasm management. Transfusion strategies suggest a hemoglobin threshold of 9 g/dl may optimize outcomes. Temperature management remains contentious, but consensus recommends maintaining normothermia (36.0-37.5 °C) with continuous monitoring. Advances in aSAH care emphasize precision medicine, leveraging technology [e.g., artificial intelligence (AI), quantitative imaging], and multidisciplinary collaboration. Key unresolved questions warrant multicenter trials to validate optimal blood pressure, transfusion, and temperature targets alongside emerging therapies for DCI.

Intelligent extraction of CT image landmarks for improving cam-type femoroacetabular impingement assessment.

Tayyebinezhad S, Fatehi M, Arabalibeik H, Ghadiri H

PubMed, Oct 1 2025
Femoroacetabular impingement (FAI) with cam-type morphology is a common hip disorder that can result in groin pain and eventually osteoarthritis. The preoperative assessment is based on parameters obtained from X-ray or computed tomography (CT) scans, namely the alpha angle (AA) and femoral head-neck offset (FHNO). The goal of our study was to develop a computer-aided detection (CAD) system that automatically selects the hip region and measures diagnostic parameters from CT scans, overcoming the tedious and time-consuming process of subjectively selecting CT image slices to obtain these parameters. A total of 271 ordinary abdominopelvic CT examinations were collected retrospectively from two hospitals between 2018 and 2022, each equipped with a distinct CT scanner. First, a convolutional neural network (CNN) was designed to select hip-region slices among abdominopelvic CT scan image series. This CNN was trained using 80 CT scans divided into training, validation, and testing groups of 50%, 20%, and 30%, respectively. Second, the most appropriate oblique slice passing through the femoral head-neck complex was selected, and AA and FHNO landmarks were calculated using image-processing algorithms. As ground truth, the best oblique slices and their related parameters were selected and measured manually for each hip. CT hip-region selection using the CNN yielded 99.34% accuracy. Pearson correlation coefficients between manual and automatic parameter measurements were 0.964 for AA and 0.856 for FHNO. The results of this study are promising for the future development of a CAD software application for screening CT scans that may aid physicians in assessing FAI. Question: Femoroacetabular impingement is a common, underdiagnosed hip disorder requiring time-consuming image-based measurements. Can AI improve the efficiency and consistency of its radiologic assessment?
Findings: Automated slice selection and landmark detection using a hybrid AI method improved measurement efficiency and accuracy, with minimal bias confirmed through Bland-Altman analysis. Clinical relevance: An AI-based method enables faster, more consistent evaluation of cam-type femoroacetabular impingement in routine CT images, supporting earlier identification and reducing dependency on operator experience in clinical workflows.
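The agreement statistics this study relies on (Pearson correlation between manual and automatic measurements, Bland-Altman bias) are straightforward to compute from paired values. A minimal sketch on hypothetical alpha-angle measurements — the numbers below are illustrative stand-ins, not the study's data:

```python
import numpy as np

# Hypothetical paired alpha-angle measurements in degrees: manual vs. automated
manual = np.array([48.0, 55.2, 62.1, 70.5, 58.3, 66.0])
auto   = np.array([47.1, 56.0, 61.5, 71.2, 59.0, 65.1])

r = np.corrcoef(manual, auto)[0, 1]   # Pearson correlation coefficient
bias = np.mean(auto - manual)         # Bland-Altman mean bias (automated minus manual)
limits = (bias - 1.96 * np.std(auto - manual, ddof=1),
          bias + 1.96 * np.std(auto - manual, ddof=1))  # 95% limits of agreement

print(round(r, 3), round(bias, 2), tuple(round(v, 2) for v in limits))
```

A high r with a near-zero bias and narrow limits of agreement is the pattern the abstract describes as "minimal bias confirmed through Bland-Altman analysis."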

Machine learning combined with CT-based radiomics predicts the prognosis of oesophageal squamous cell carcinoma.

Liu M, Lu R, Wang B, Fan J, Wang Y, Zhu J, Luo J

PubMed, Oct 1 2025
This retrospective study aims to develop a machine learning model integrating preoperative CT radiomics and clinicopathological data to predict 3-year recurrence and recurrence patterns in postoperative oesophageal squamous cell carcinoma. Tumour regions were segmented using 3D-Slicer, and radiomic features were extracted via Python. LASSO regression selected prognostic features for model integration. Clinicopathological data included tumour length, lymph node positivity, differentiation grade, and neurovascular infiltration. A machine learning model was then established by combining the screened imaging features with the clinicopathological data, and model performance was validated. A nomogram was constructed for survival prediction, and risk stratification was carried out using the predictions of the machine learning model and the nomogram. Survival analysis was performed for stage-based patient subgroups across risk stratifications to identify cohorts benefiting from adjuvant therapy. Patients were randomly divided at a 7:3 ratio into training (368 patients) and validation (158 patients) cohorts. LASSO regression selected 6 features for recurrence prediction and 9 for recurrence pattern prediction. Among 526 patients (mean age 63; 427 males), the model achieved high accuracy in predicting recurrence (training cohort AUC: 0.826 [logistic regression]/0.820 [SVM]; validation cohort: 0.830/0.825) and recurrence patterns (training: 0.801/0.799; validation: 0.806/0.798). Risk stratification based on the machine learning model and nomogram predictions revealed that adjuvant therapy significantly improved disease-free survival in stage II-III patients with predicted recurrence and low survival (HR 0.372, 95% CI: 0.206-0.669; p < 0.001). Machine learning models exhibit excellent performance in predicting recurrence after surgery for oesophageal squamous cell carcinoma.
Radiomic features of contrast-enhanced CT imaging can predict the prognosis of patients with oesophageal squamous cell carcinoma, which in turn can help clinicians stratify risk and identify patient populations likely to benefit from adjuvant therapy, thereby aiding medical decision-making. Current research lacks prognostic models for oesophageal squamous cell carcinoma. The prognostic prediction model we have developed achieves high accuracy by combining radiomic features and clinicopathological data. This model supports risk stratification of patients and clinical decision-making through its predictive outcomes.
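The LASSO step described above shrinks uninformative radiomic coefficients to exactly zero, leaving a sparse prognostic feature set. A hedged sketch of that selection on synthetic data (assuming a scikit-learn-style workflow; the feature matrix, outcome, and `alpha` below are illustrative, not the study's):

```python
import numpy as np
from sklearn.linear_model import Lasso

# Synthetic stand-in: 100 patients x 20 radiomic features; only features 0, 3, 7
# actually drive the (continuous) outcome standing in for recurrence risk.
rng = np.random.default_rng(42)
X = rng.standard_normal((100, 20))
y = 1.5 * X[:, 0] - 2.0 * X[:, 3] + 0.8 * X[:, 7] + 0.1 * rng.standard_normal(100)

lasso = Lasso(alpha=0.1).fit(X, y)
selected = np.flatnonzero(lasso.coef_)  # indices of features LASSO retained
print(selected)
```

In practice `alpha` is chosen by cross-validation (e.g. `LassoCV`), and the retained features are then passed to the downstream classifier, here reportedly logistic regression and an SVM.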

Automated contouring of gross tumor volume lymph nodes in lung cancer by deep learning.

Huang Y, Yuan X, Xu L, Jian J, Gong C, Zhang Y, Zheng W

PubMed, Sep 30 2025
The precise contouring of gross tumor volume lymph nodes (GTVnd) is an essential step in clinical target volume delineation. This study aims to propose and evaluate a deep learning model for segmenting GTVnd specifically in lung cancer, representing one of the pioneering investigations into automated GTVnd segmentation for this disease. Ninety computed tomography (CT) scans of patients with stage III-IV small cell lung cancer (SCLC) were collected, of which 75 were assembled into a training dataset and 15 were used as a testing dataset. A new segmentation model, termed ECENet, was constructed to enable the automatic and accurate delineation of the GTVnd in lung cancer. This model integrates a contextual cue enhancement module and an edge-guided feature enhancement decoder. The contextual cue enhancement module enforces the consistency of the contextual cues encoded in the deepest feature, and the edge-guided feature enhancement decoder produces edge-aware and edge-preserving segmentation predictions. The model was quantitatively evaluated using the three-dimensional Dice Similarity Coefficient (3D DSC) and the 95th-percentile Hausdorff Distance (95HD). Additionally, predicted treatment plans derived from auto-contoured GTVnd were compared with established clinical plans. ECENet achieved a mean 3D DSC of 0.72 ± 0.09 and a 95HD of 6.39 ± 4.59 mm, a significant improvement over UNet (DSC 0.46 ± 0.19, 95HD 12.24 ± 13.36 mm) and nnUNet (DSC 0.52 ± 0.18, 95HD 9.92 ± 6.49 mm). Its performance was intermediate between mid-level physicians (DSC 0.81 ± 0.06) and junior physicians (DSC 0.68 ± 0.10).
The dosimetric analysis demonstrated excellent agreement between predicted and clinical plans, with average relative deviations of <0.17% for PTV D2/D50/D98, <3.5% for lung V30/V20/V10/V5/Dmean, and <6.1% for heart V40/V30/Dmean. Furthermore, the TCP (66.99% ± 0.55 vs. 66.88% ± 0.45) and NTCP (3.13% ± 1.33 vs. 3.25% ± 1.42) analyses revealed strong concordance between predicted and clinical outcomes, confirming the clinical applicability of the proposed method. The proposed model achieved automatic delineation of the GTVnd in the thoracic region in lung cancer and showed distinct advantages, making it a potential choice for automatic GTVnd delineation, particularly for junior radiation oncologists.
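The two segmentation metrics used throughout this abstract, 3D DSC and 95HD, are easy to state precisely. A minimal sketch on toy binary masks — note the Hausdorff variant here is a mask-based approximation using distance transforms, whereas evaluation toolkits typically measure surface-to-surface distances:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def dice(a, b):
    # 3D Dice similarity coefficient between two binary masks
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hd95(a, b):
    # 95th-percentile symmetric Hausdorff distance in voxel units.
    # Mask-based approximation: distance of every voxel of one mask
    # to the nearest voxel of the other, pooled symmetrically.
    da = distance_transform_edt(~a)
    db = distance_transform_edt(~b)
    return np.percentile(np.concatenate([da[b], db[a]]), 95)

gt = np.zeros((32, 32, 32), dtype=bool)
gt[8:24, 8:24, 8:24] = True                 # 16-voxel cube as "ground truth"
pred = np.zeros_like(gt)
pred[9:25, 8:24, 8:24] = True               # same cube shifted by one voxel

print(round(dice(gt, pred), 4), hd95(gt, pred))  # high Dice, 1-voxel 95HD
```

To report 95HD in millimetres, as the study does, the voxel-unit distances would be scaled by the scan's physical voxel spacing.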

Artificial Intelligence in Low-Dose Computed Tomography Screening of the Chest: Past, Present, and Future.

Yip R, Jirapatnakul A, Avila R, Gutierrez JG, Naghavi M, Yankelevitz DF, Henschke CI

PubMed, Sep 30 2025
The integration of artificial intelligence (AI) with low-dose computed tomography (LDCT) has the potential to transform lung cancer screening into a comprehensive approach to early detection of multiple diseases. Building on over 3 decades of research and global implementation by the International Early Lung Cancer Action Program (I-ELCAP), this paper reviews the development and clinical integration of AI for interpreting LDCT scans. We describe the historical milestones in AI-assisted lung nodule detection, emphysema quantification, and cardiovascular risk assessment using visual and quantitative imaging features. We also discuss challenges related to image acquisition variability, ground truth curation, and clinical integration, with a particular focus on the design and implementation of the open-source IELCAP-AIRS system and the ScreeningPLUS infrastructure, which enable AI training, validation, and deployment in real-world screening environments. AI algorithms for rule-out decisions, nodule tracking, and disease quantification have the potential to reduce radiologist workload and advance precision screening. With the ability to evaluate multiple diseases from a single LDCT scan, AI-enabled screening offers a powerful, scalable tool for improving population health. Ongoing collaboration, standardized protocols, and large annotated datasets are critical to advancing the future of integrated, AI-driven preventive care.

A phase-aware Cross-Scale U-MAMba with uncertainty-aware segmentation and Switch Atrous Bifovea EfficientNetB7 classification of kidney lesion subtype.

Rmr SS, Mb S, R D, M T, P V

PubMed, Sep 30 2025
Kidney lesion subtype identification is essential for precise diagnosis and personalized treatment planning. However, achieving reliable classification remains challenging due to factors such as inter-patient anatomical variability, incomplete multi-phase CT acquisitions, and ill-defined or overlapping lesion boundaries. In addition, genetic and ethnic morphological variations introduce inconsistent imaging patterns, reducing the generalizability of conventional deep learning models. To address these challenges, we introduce a unified framework called Phase-aware Cross-Scale U-MAMba and Switch Atrous Bifovea EfficientNet B7 (PCU-SABENet), which integrates multi-phase reconstruction, fine-grained lesion segmentation, and robust subtype classification. The PhaseGAN-3D synthesizes missing CT phases using binary mask-guided inter-phase priors, enabling complete four-phase reconstruction even under partial acquisition conditions. The PCU segmentation module combines Contextual Attention Blocks, Cross-Scale Skip Connections, and uncertainty-aware pseudo-labeling to delineate lesion boundaries with high anatomical fidelity. These enhancements help mitigate low contrast and intra-class ambiguity. For classification, SABENet employs Switch Atrous Convolution for multi-scale receptive field adaptation, Hierarchical Tree Pooling for structure-aware abstraction, and Bi-Fovea Self-Attention to emphasize fine lesion cues and global morphology. This configuration is particularly effective in addressing morphological diversity across patient populations. Experimental results show that the proposed model achieves state-of-the-art performance, with 99.3% classification accuracy, 94.8% Dice similarity, 89.3% IoU, 98.8% precision, 99.2% recall, a phase-consistency score of 0.94, and a subtype confidence deviation of 0.08. 
Moreover, the model generalizes well on external datasets (TCIA) with 98.6% accuracy and maintains efficient computational performance, requiring only 0.138 GFLOPs and 8.2 ms inference time. These outcomes confirm the model's robustness in phase-incomplete settings and its adaptability to diverse patient cohorts. The PCU-SABENet framework sets a new standard in kidney lesion subtype analysis, combining segmentation precision with clinically actionable classification, thus offering a powerful tool for enhancing diagnostic accuracy and decision-making in real-world renal cancer management.

Centiloid values from deep learning-based CT parcellation: a valid alternative to FreeSurfer.

Yoon YJ, Seo S, Lee S, Lim H, Choo K, Kim D, Han H, So M, Kang H, Kang S, Kim D, Lee YG, Shin D, Jeon TJ, Yun M

PubMed, Sep 30 2025
Amyloid PET/CT is essential for quantifying amyloid-beta (Aβ) deposition in Alzheimer's disease (AD), with the Centiloid (CL) scale standardizing measurements across imaging centers. However, MRI-based CL pipelines face challenges: high cost, contraindications, and patient burden. To address these challenges, we developed a deep learning-based CT parcellation pipeline calibrated to the standard CL scale using CT images from PET/CT scans and evaluated its performance relative to standard pipelines. A total of 306 participants (23 young controls [YCs] and 283 patients) underwent 18F-florbetaben (FBB) PET/CT and MRI. Based on visual assessment, 207 patients were classified as Aβ-positive and 76 as Aβ-negative. PET images were processed using the CT parcellation pipeline and compared to FreeSurfer (FS) and standard pipelines. Agreement was assessed via regression analyses. Effect size, variance, and ROC analyses were used to compare pipelines and determine the optimal CL threshold relative to visual Aβ assessment. The CT parcellation showed high concordance with FS and provided reliable CL quantification (R² = 0.99). Both pipelines demonstrated similar variance in YCs and effect sizes between YCs and ADCI. ROC analyses confirmed comparable accuracy and similar CL thresholds, supporting CT parcellation as a viable MRI-free alternative. Our findings indicate that the CT parcellation pipeline achieves a level of accuracy similar to FS in CL quantification, demonstrating its reliability as an MRI-free alternative. In PET/CT, CT and PET are acquired sequentially within the same session on a shared bed and headrest, which helps maintain consistent positioning and adequate spatial alignment, reducing registration errors and supporting more reliable and precise quantification.
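The Centiloid scale itself is a linear rescaling of a tracer's standardized uptake value ratio (SUVr) anchored so that the young-control mean maps to 0 and a typical-AD anchor maps to 100. A minimal sketch of that mapping — the anchor SUVr values below are hypothetical, chosen only to make the arithmetic transparent:

```python
def to_centiloid(suvr, suvr_yc_mean, suvr_ad_mean):
    # Linear Centiloid mapping: young-control mean -> 0 CL, AD anchor mean -> 100 CL
    return 100.0 * (suvr - suvr_yc_mean) / (suvr_ad_mean - suvr_yc_mean)

# Hypothetical anchor values for illustration only (real anchors come from the
# published calibration cohorts for each tracer/pipeline pair)
yc_mean, ad_mean = 1.0, 2.0

print(to_centiloid(1.0, yc_mean, ad_mean))  # young-control mean -> 0.0 CL
print(to_centiloid(1.5, yc_mean, ad_mean))  # midway uptake -> 50.0 CL
print(to_centiloid(2.0, yc_mean, ad_mean))  # AD anchor -> 100.0 CL
```

Calibrating a new pipeline, as this study did for CT parcellation, amounts to showing that its SUVr values regress linearly onto the standard pipeline's (here R² = 0.99) so the same scaling applies.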

Artificial Intelligence Model for Imaging-Based Extranodal Extension Detection and Outcome Prediction in Human Papillomavirus-Positive Oropharyngeal Cancer.

Dayan GS, Hénique G, Bahig H, Nelson K, Brodeur C, Christopoulos A, Filion E, Nguyen-Tan PF, O'Sullivan B, Ayad T, Bissada E, Tabet P, Guertin L, Desilets A, Kadoury S, Letourneau-Guillon L

PubMed, Sep 30 2025
Although not included in the eighth edition of the American Joint Committee on Cancer Staging System, there is growing evidence suggesting that imaging-based extranodal extension (iENE) is associated with worse outcomes in HPV-associated oropharyngeal carcinoma (OPC). Key challenges with iENE include the lack of standardized criteria, reliance on radiological expertise, and interreader variability. To develop an artificial intelligence (AI)-driven pipeline for lymph node segmentation and iENE classification using pretreatment computed tomography (CT) scans, and to evaluate its association with oncologic outcomes in HPV-positive OPC. This was a single-center cohort study conducted at a tertiary oncology center in Montreal, Canada, of adult patients with HPV-positive cN+ OPC treated with up-front (chemo)radiotherapy from January 2009 to January 2020. Participants were followed up until January 2024. Data analysis was performed from March 2024 to April 2025. Pretreatment planning CT scans along with lymph node gross tumor volume segmentations performed by expert radiation oncologists were extracted. For lymph node segmentation, an nnU-Net model was developed. For iENE classification, radiomic and deep learning feature extraction methods were compared. iENE classification accuracy was assessed against 2 expert neuroradiologist evaluations using area under the receiver operating characteristic curve (AUC). Subsequently, the association of AI-predicted iENE with oncologic outcomes, i.e., overall survival (OS), recurrence-free survival (RFS), distant control (DC), and locoregional control (LRC), was assessed. Among 397 patients (mean [SD] age, 62.3 [9.1] years; 80 females [20.2%] and 317 males [79.8%]), AI-iENE classification using radiomics achieved an AUC of 0.81. Patients with AI-predicted iENE had worse 3-year OS (83.8% vs 96.8%), RFS (80.7% vs 93.7%), and DC (84.3% vs 97.1%), but similar LRC.
AI-iENE had significantly higher concordance indices than radiologist-assessed iENE for OS (0.64 vs 0.55), RFS (0.67 vs 0.60), and DC (0.79 vs 0.68). In multivariable analysis, AI-iENE remained independently associated with OS (adjusted hazard ratio [aHR], 2.82; 95% CI, 1.21-6.57), RFS (aHR, 4.20; 95% CI, 1.93-9.11), and DC (aHR, 12.33; 95% CI, 4.15-36.67), adjusting for age, tumor category, node category, and number of lymph nodes. This single-center cohort study found that an AI-driven pipeline can successfully automate lymph node segmentation and iENE classification from pretreatment CT scans in HPV-associated OPC. Predicted iENE was independently associated with worse oncologic outcomes. External validation is required to assess generalizability and the potential for implementation in institutions without specialized imaging expertise.
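The concordance indices compared above (e.g. 0.64 vs 0.55 for OS) are Harrell's C-index: the fraction of comparable patient pairs in which the model assigns the higher risk to the patient who experiences the event earlier. A self-contained sketch on hypothetical survival data — the times, event flags, and scores below are illustrative only:

```python
def concordance_index(times, events, risk_scores):
    # Harrell's C-index: among pairs where subject i has an observed event
    # before subject j's follow-up time, count how often i's risk is higher
    # (ties in risk count as half-concordant).
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable

# Hypothetical follow-up times (years), event indicators, and model risk scores
times  = [1.0, 2.0, 3.0, 4.0, 5.0]
events = [1,   1,   0,   1,   0]
risks  = [0.9, 0.3, 0.8, 0.4, 0.1]

print(concordance_index(times, events, risks))  # 6 of 8 comparable pairs concordant
```

A C-index of 0.5 corresponds to random ordering and 1.0 to perfect risk ranking, which is why gains such as 0.68 to 0.79 for distant control are meaningful.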

3D Convolutional Neural Network for Predicting Clinical Outcome from Coronary Computed Tomography Angiography in Patients with Suspected Coronary Artery Disease.

Stambollxhiu E, Freißmuth L, Moser LJ, Adolf R, Will A, Hendrich E, Bressem K, Hadamitzky M

PubMed, Sep 30 2025
This study aims to develop and assess an optimized three-dimensional convolutional neural network model (3D CNN) for predicting major cardiac events from coronary computed tomography angiography (CCTA) images in patients with suspected coronary artery disease. Patients undergoing CCTA with suspected coronary artery disease (CAD) were retrospectively included in this single-center study and split into training and test sets. The endpoint was defined as a composite of all-cause death, myocardial infarction, unstable angina, or revascularization events. Cardiovascular risk assessment relied on the Morise score and the extent of CAD (eoCAD). An optimized 3D CNN mimicking the DenseNet architecture was trained on CCTA images to predict the clinical endpoints; the data were not annotated for the presence of coronary plaque. A total of 5562 patients were assigned to the training group (66.4% male, median age 61.1 ± 11.2) and 714 to the test group (69.3% male, 61.5 ± 11.4). Over a 7.2-year follow-up, the composite endpoint occurred in 760 training group and 83 test group patients. In the test cohort, the CNN achieved an AUC of 0.872 ± 0.020 for predicting the composite endpoint. The predictive performance improved in a stepwise manner: from an AUC of 0.652 ± 0.031 using the Morise score alone, to 0.901 ± 0.016 when adding eoCAD, and finally to 0.920 ± 0.015 when combining the Morise score, eoCAD, and CNN (p < 0.001 and p = 0.012, respectively). Deep learning-based analysis of CCTA images improves prognostic risk stratification when combined with clinical and imaging risk factors in patients with suspected CAD.
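The stepwise AUC comparison above (clinical score alone vs. clinical score plus CNN output) rests on the standard probabilistic reading of AUC: the chance that a randomly chosen event patient is scored higher than a randomly chosen event-free patient. A minimal sketch via the Mann-Whitney formulation, on hypothetical scores illustrating an incremental gain:

```python
def auc(labels, scores):
    # AUC via the Mann-Whitney U statistic: P(score_pos > score_neg),
    # counting ties between a positive and a negative as half a win.
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical risk scores for 6 patients (1 = composite endpoint occurred)
labels        = [0, 0, 0, 1, 1, 1]
clinical_only = [0.2, 0.5, 0.4, 0.3, 0.6, 0.7]   # imperfect ranking
with_cnn      = [0.2, 0.3, 0.4, 0.5, 0.6, 0.7]   # CNN output fixes the misranked case

print(round(auc(labels, clinical_only), 3), auc(labels, with_cnn))
```

Whether such an AUC improvement is statistically significant is then tested on the real cohort, e.g. with DeLong's method for correlated ROC curves, as the reported p-values suggest was done here.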