Page 41 of 45442 results

Real-world Evaluation of Computer-aided Pulmonary Nodule Detection Software Sensitivity and False Positive Rate.

El Alam R, Jhala K, Hammer MM

pubmed · logopapers · May 12 2025
To evaluate the false positive rate (FPR) of nodule detection software in real-world use. A total of 250 nonenhanced chest computed tomography (CT) examinations were randomly selected from an academic institution and submitted to the ClearRead nodule detection system (Riverain Technologies). Detected findings were reviewed by a thoracic imaging fellow. Nodules were classified as true nodules, lymph nodes, or other findings (branching opacity, vessel, mucus plug, etc.), and the FPR was recorded and compared with the FPR initially published in the literature. True diagnosis was based on pathology or follow-up stability. For cases with malignant nodules, we recorded whether the malignancy was detected by the clinical radiology report (produced without software assistance) and/or by ClearRead. Twenty-one CTs were excluded for lack of thin-slice images, leaving 229 CTs. ClearRead reported a total of 594 findings, of which 362 (61%) were true nodules and 232 (39%) were other findings. Of the true nodules, 297 were solid, of which 79 (27%) were intrapulmonary lymph nodes. The mean number of findings identified by ClearRead per scan was 2.59. ClearRead's mean FPR was 1.36, greater than the published rate of 0.58 (P<0.0001). If true lung nodules <6 mm are also counted as false positives, the FPR rises to 2.19. A malignant nodule was present in 30 scans; ClearRead identified it in 26 (87%) and the clinical report in 28 (93%) (P=0.32). In real-world use, ClearRead had a much higher FPR than initially reported, but sensitivity for malignant nodule detection similar to that of unassisted radiologists.
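The reported rates can be reproduced from the counts given in the abstract; a minimal sketch, assuming (as the 1.36 figure implies) that "false positives" means non-nodule findings plus intrapulmonary lymph nodes:

```python
# Back-of-envelope reproduction of the abstract's reported rates.
# All counts are taken directly from the abstract.
scans = 229
findings = 594
true_nodules = 362
other_findings = findings - true_nodules      # 232 branching opacities, vessels, mucus plugs, etc.
lymph_nodes = 79                              # intrapulmonary lymph nodes among the solid nodules

mean_findings_per_scan = findings / scans     # reported as 2.59
fpr = (other_findings + lymph_nodes) / scans  # reported as 1.36

print(round(mean_findings_per_scan, 2))  # 2.59
print(round(fpr, 2))                     # 1.36
```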

Inference-specific learning for improved medical image segmentation.

Chen Y, Liu S, Li M, Han B, Xing L

pubmed · logopapers · May 12 2025
Deep learning networks map input data to output predictions by fitting network parameters using training data. However, applying a trained network to new, unseen inference data resembles an interpolation process, which may lead to inaccurate predictions if the training and inference data distributions differ significantly. This study aims to improve the prediction accuracy of deep learning networks on the inference case by bridging the gap between training and inference data. We propose an inference-specific learning strategy to enhance the network learning process without modifying the network structure. By aligning training data to closely match the specific inference data, we generate an inference-specific training dataset, enhancing network optimization around the inference data point for more accurate predictions. Taking medical image auto-segmentation as an example, we develop an inference-specific auto-segmentation framework consisting of initial segmentation learning, inference-specific training data deformation, and inference-specific segmentation refinement. The framework is evaluated on public abdominal, head-neck, and pancreas CT segmentation datasets comprising 30, 42, and 210 cases, respectively. Experimental results show that our method improves the organ-averaged mean Dice by 6.2% (p-value = 0.001), 1.5% (p-value = 0.003), and 3.7% (p-value < 0.001) on the three datasets, respectively, with a more notable increase for difficult-to-segment organs (such as a 21.7% increase for the gallbladder [p-value = 0.004]). By incorporating organ mask-based weak supervision into the training data alignment learning, the inference-specific auto-segmentation accuracy is generally improved compared with image intensity-based alignment. In addition, a moving-average estimate of the inference organ mask during the learning process strengthens both the robustness and the accuracy of the final inference segmentation.
By leveraging inference data during training, the proposed inference-specific learning strategy consistently improves auto-segmentation accuracy and holds the potential to be broadly applied for enhanced deep learning decision-making.
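The core idea — biasing training toward data that resemble the specific inference case — can be illustrated on a toy regression task. This is a hedged sketch of the general principle only, not the paper's deformation-based implementation:

```python
import numpy as np

# Toy illustration of inference-specific learning: instead of fitting one
# model to all training data uniformly, re-weight training samples by their
# similarity to the specific inference input before fitting, so the model
# is most accurate near that point.
xs = np.linspace(-2.0, 2.0, 41)
ys = xs ** 2                         # stand-in nonlinear "ground truth" task

def fit_linear(x, t, w):
    """Weighted least-squares fit of t ~ a*x + b."""
    A = np.stack([x, np.ones_like(x)], axis=1)
    W = np.diag(w)
    coef, *_ = np.linalg.lstsq(W @ A, W @ t, rcond=None)
    return coef

x_inf = 1.8                          # the specific inference point
uniform = fit_linear(xs, ys, np.ones_like(xs))
similarity = np.exp(-((xs - x_inf) ** 2) / 0.2)   # weight by closeness to x_inf
specific = fit_linear(xs, ys, similarity)

def predict(coef, x):
    return coef[0] * x + coef[1]

err_uniform = abs(predict(uniform, x_inf) - x_inf ** 2)
err_specific = abs(predict(specific, x_inf) - x_inf ** 2)
print(err_specific < err_uniform)    # the inference-specific fit is closer at x_inf
```

The uniform fit compromises across the whole distribution, while the inference-specific fit is accurate exactly where the prediction is needed — the same trade-off the paper exploits with deformable alignment instead of sample weighting.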

Prognostic Value Of Deep Learning Based RCA PCAT and Plaque Volume Beyond CT-FFR In Patients With Stent Implantation.

Huang Z, Tang R, Du X, Ding Y, Yang Z, Cao B, Li M, Wang X, Wang W, Li Z, Xiao J, Wang X

pubmed · logopapers · May 12 2025
The study aims to investigate the prognostic value of deep learning-based pericoronary adipose tissue attenuation (PCAT) and plaque volume beyond coronary computed tomography angiography (CTA)-derived fractional flow reserve (CT-FFR) in patients with percutaneous coronary intervention (PCI). A total of 183 patients with PCI who underwent coronary CTA were included in this retrospective study. Imaging assessment included PCAT, plaque volume, and CT-FFR, performed on an artificial intelligence (AI)-assisted workstation. Kaplan-Meier survival curve analysis and multivariate Cox regression were used to estimate major adverse cardiovascular events (MACE), including non-fatal myocardial infarction (MI), stroke, and mortality. In total, 22 (12%) MACE occurred during a median follow-up period of 38.0 months (34.6-54.6 months). Kaplan-Meier analysis revealed that right coronary artery (RCA) PCAT (p = 0.007) and plaque volume (p = 0.008) were significantly associated with increased MACE. Multivariable Cox regression indicated that RCA PCAT (hazard ratio (HR): 2.94, 95% CI: 1.15-7.50, p = 0.025) and plaque volume (HR: 3.91, 95% CI: 1.20-12.75, p = 0.024) were independent predictors of MACE after adjustment for clinical risk factors. However, CT-FFR was not independently associated with MACE in multivariable Cox regression (p = 0.271). Deep learning-based RCA PCAT and plaque volume derived from coronary CTA were more strongly associated with MACE than CT-FFR in patients with PCI.
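The internal consistency of such Cox results can be checked by back-calculating the p-value from the reported hazard ratio and its 95% CI (a standard log-scale calculation, using the RCA PCAT numbers above):

```python
import math

# Back-calculate a two-sided p-value from a reported hazard ratio and its
# 95% CI. Numbers are the abstract's RCA PCAT result: HR 2.94, 95% CI 1.15-7.50.
hr, ci_lo, ci_hi = 2.94, 1.15, 7.50
beta = math.log(hr)                               # log hazard ratio
se = (math.log(ci_hi) - math.log(ci_lo)) / (2 * 1.96)  # CI width on the log scale
z = beta / se
p = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))   # two-sided normal p-value
print(round(p, 3))  # ~0.024, consistent with the reported p = 0.025
```

The small gap from the reported 0.025 is explained by rounding in the published HR and CI.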

Fully volumetric body composition analysis for prognostic overall survival stratification in melanoma patients.

Borys K, Lodde G, Livingstone E, Weishaupt C, Römer C, Künnemann MD, Helfen A, Zimmer L, Galetzka W, Haubold J, Friedrich CM, Umutlu L, Heindel W, Schadendorf D, Hosch R, Nensa F

pubmed · logopapers · May 12 2025
Accurate assessment of expected survival in melanoma patients is crucial for treatment decisions. This study explores deep learning-based body composition analysis to predict overall survival (OS) using baseline computed tomography (CT) scans and to identify fully volumetric, prognostic body composition features. A deep learning network segmented baseline abdomen and thorax CTs from a cohort of 495 patients. The Sarcopenia Index (SI), Myosteatosis Fat Index (MFI), and Visceral Fat Index (VFI) were derived and statistically assessed as prognostic factors for OS. External validation was performed with 428 patients. SI was significantly associated with OS on both CT regions: abdomen (P ≤ 0.0001, HR: 0.36) and thorax (P ≤ 0.0001, HR: 0.27), with lower SI associated with prolonged survival. MFI was also associated with OS on abdomen (P ≤ 0.0001, HR: 1.16) and thorax CTs (P ≤ 0.0001, HR: 1.08), where higher MFI was linked to worse outcomes. Lastly, VFI was associated with OS on abdomen CTs (P ≤ 0.001, HR: 1.90), with higher VFI linked to poor outcomes. External validation replicated these results. SI, MFI, and VFI showed substantial potential as prognostic factors for OS in malignant melanoma patients. This approach leverages existing CT scans without additional procedural or financial burden, highlighting the seamless integration of deep learning-based body composition analysis into standard oncologic staging routines.

AI-based volumetric six-tissue body composition quantification from CT cardiac attenuation scans for mortality prediction: a multicentre study.

Yi J, Marcinkiewicz AM, Shanbhag A, Miller RJH, Geers J, Zhang W, Killekar A, Manral N, Lemley M, Buchwald M, Kwiecinski J, Zhou J, Kavanagh PB, Liang JX, Builoff V, Ruddy TD, Einstein AJ, Feher A, Miller EJ, Sinusas AJ, Berman DS, Dey D, Slomka PJ

pubmed · logopapers · May 12 2025
CT attenuation correction (CTAC) scans are routinely obtained during cardiac perfusion imaging but are currently used only for attenuation correction and visual calcium estimation. We aimed to develop a novel artificial intelligence (AI)-based approach to obtain volumetric measurements of chest body composition from CTAC scans and to evaluate these measures for all-cause mortality risk stratification. We applied AI-based segmentation and image-processing techniques to CTAC scans from a large international image-based registry at four sites (Yale University, University of Calgary, Columbia University, and University of Ottawa) to define the chest rib cage and multiple tissues. Volumetric measures of bone, skeletal muscle, subcutaneous adipose tissue, intramuscular adipose tissue (IMAT), visceral adipose tissue (VAT), and epicardial adipose tissue (EAT) were quantified between automatically identified T5 and T11 vertebrae. The independent prognostic value of volumetric attenuation and indexed volumes was evaluated for predicting all-cause mortality, adjusting for established risk factors and 18 other body composition measures via Cox regression models and Kaplan-Meier curves. The end-to-end processing time was less than 2 min per scan with no user interaction. Between 2009 and 2021, we included 11 305 participants from four sites participating in the REFINE SPECT registry, who underwent single-photon emission computed tomography cardiac scans. After excluding patients who had incomplete T5-T11 scan coverage, missing clinical data, or who had been used for EAT model training, the final study group comprised 9918 patients. 5451 (55%) of 9918 participants were male and 4467 (45%) were female. Median follow-up time was 2·48 years (IQR 1·46-3·65), during which 610 (6%) patients died.
High VAT, EAT, and IMAT attenuation were associated with an increased all-cause mortality risk (adjusted hazard ratio 2·39, 95% CI 1·92-2·96; p<0·0001, 1·55, 1·26-1·90; p<0·0001, and 1·30, 1·06-1·60; p=0·012, respectively). Patients with high bone attenuation were at reduced risk of death (0·77, 0·62-0·95; p=0·016). Likewise, high skeletal muscle volume index was associated with a reduced risk of death (0·56, 0·44-0·71; p<0·0001). CTAC scans obtained routinely during cardiac perfusion imaging contain important volumetric body composition biomarkers that can be automatically measured and offer important additional prognostic value. The National Heart, Lung, and Blood Institute, National Institutes of Health.

Two-Stage Automatic Liver Classification System Based on Deep Learning Approach Using CT Images.

Kılıç R, Yalçın A, Alper F, Oral EA, Ozbek IY

pubmed · logopapers · May 12 2025
Alveolar echinococcosis (AE) is a parasitic disease caused by Echinococcus multilocularis, for which early detection is crucial for effective treatment. This study introduces a novel method for the early diagnosis of liver diseases by differentiating between tumor, AE, and healthy cases using non-contrast CT images, which are widely accessible and eliminate the risks associated with contrast agents. The proposed approach integrates an automatic liver region detection method based on an R-CNN, followed by a CNN-based classification framework. A dataset comprising over 27,000 thorax-abdominal images from 233 patients, including 8206 images with liver tissue, was constructed and used to evaluate the proposed method. The experimental results demonstrate the importance of the two-stage classification approach. In the 2-class problem (healthy vs. non-healthy), an accuracy of 0.936 (95% CI: 0.925-0.947) was obtained; in the 3-class problem (AE, tumor, and healthy), the accuracy was 0.863 (95% CI: 0.847-0.879). These results highlight the potential of the proposed framework as a fully automatic approach for liver classification without the use of contrast agents. Furthermore, the framework demonstrates competitive performance compared with other state-of-the-art techniques, suggesting its applicability in clinical practice.
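Confidence intervals of this form are typically the normal-approximation (Wald) interval on the accuracy. A minimal sketch — the test-set size `n` here is hypothetical, since the abstract does not state the exact count behind its intervals:

```python
import math

# Normal-approximation (Wald) 95% CI for a classification accuracy.
# n = 2000 is a hypothetical test-set size chosen for illustration only;
# it happens to reproduce an interval close to the abstract's 0.925-0.947.
def wald_ci(acc, n, z=1.96):
    half = z * math.sqrt(acc * (1 - acc) / n)
    return acc - half, acc + half

lo, hi = wald_ci(0.936, 2000)
print(round(lo, 3), round(hi, 3))  # 0.925 0.947
```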

Automatic CTA analysis for blood vessels and aneurysm features extraction in EVAR planning.

Robbi E, Ravanelli D, Allievi S, Raunig I, Bonvini S, Passerini A, Trianni A

pubmed · logopapers · May 12 2025
Endovascular Aneurysm Repair (EVAR) is a minimally invasive procedure crucial for treating abdominal aortic aneurysms (AAA), where precise pre-operative planning is essential. Current clinical methods rely on manual measurements, which are time-consuming and prone to errors. Although AI solutions are increasingly being developed to automate aspects of these processes, most existing approaches primarily focus on computing volumes and diameters, falling short of delivering a fully automated pre-operative analysis. This work presents BRAVE (Blood Vessels Recognition and Aneurysms Visualization Enhancement), the first comprehensive AI-driven solution for vascular segmentation and AAA analysis using pre-operative CTA scans. BRAVE offers exhaustive segmentation, identifying both the primary abdominal aorta and secondary vessels, often overlooked by existing methods, providing a complete view of the vascular structure. The pipeline performs advanced volumetric analysis of the aneurysm sac, quantifying thrombotic tissue and calcifications, and automatically identifies the proximal and distal sealing zones, critical for successful EVAR procedures. BRAVE enables fully automated processing, reducing manual intervention and improving clinical workflow efficiency. Trained on a multi-center open-access dataset, it demonstrates generalizability across different CTA protocols and patient populations, ensuring robustness in diverse clinical settings. This solution saves time, ensures precision, and standardizes the process, enhancing vascular surgeons' decision-making.

A systematic review and meta-analysis of the utility of quantitative, imaging-based approaches to predict radiation-induced toxicity in lung cancer patients.

Tong D, Midroni J, Avison K, Alnassar S, Chen D, Parsa R, Yariv O, Liu Z, Ye XY, Hope A, Wong P, Raman S

pubmed · logopapers · May 11 2025
To conduct a systematic review and meta-analysis of the performance of radiomics, dosiomics, and machine learning in predicting toxicity from thoracic radiotherapy. An electronic database search was conducted and dual-screened by independent authors to identify eligible studies for systematic review and meta-analysis. Data were extracted, and study quality was assessed using TRIPOD for machine learning studies, RQS for radiomics, and RoB for dosiomics. 10,703 studies were identified, and 5252 entered screening. 106 studies including 23,373 patients were eligible for systematic review. The primary toxicity predicted was radiation pneumonitis (81 studies), followed by esophagitis (12) and lymphopenia (4). Forty-two studies of radiation pneumonitis were eligible for meta-analysis, with a pooled area under the curve (AUC) of 0.82 (95% CI 0.79-0.85). Studies using machine learning had the best performance, with classical and deep learning models performing similarly. There is a trend toward improved model performance with year of publication. Study quality varied among the three categories, with dosiomic studies scoring the highest. Publication bias was not observed. The majority of existing literature using radiomics, dosiomics, and machine learning has focused on radiation pneumonitis prediction. Future research should focus on toxicity prediction for other organs at risk and on the adoption of these models into clinical practice.
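Pooling per-study estimates such as AUCs is conventionally done by inverse-variance weighting. A minimal fixed-effect sketch — the studies and variances below are invented for illustration, only the method mirrors what a meta-analysis like this one performs:

```python
import math

# Fixed-effect inverse-variance pooling of per-study estimates (e.g. AUCs).
# Each study is weighted by the inverse of its variance, so precise studies
# dominate the pooled estimate.
def pool(estimates, variances):
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))                 # pooled standard error
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

aucs = [0.80, 0.85, 0.78]          # hypothetical per-study AUCs
variances = [0.001, 0.002, 0.004]  # hypothetical per-study variances
pooled, ci = pool(aucs, variances)
print(round(pooled, 3))            # 0.811
```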

Learning-based multi-material CBCT image reconstruction with ultra-slow kV switching.

Ma C, Zhu J, Zhang X, Cui H, Tan Y, Guo J, Zheng H, Liang D, Su T, Sun Y, Ge Y

pubmed · logopapers · May 11 2025
Objective: The purpose of this study is to perform multiple (≥3) material decomposition with a deep learning method for spectral cone-beam CT (CBCT) imaging based on ultra-slow kV switching. Approach: A novel deep neural network called SkV-Net is developed to reconstruct multiple material density images from the ultra-sparse spectral CBCT projections acquired using the ultra-slow kV switching technique. SkV-Net has a U-Net backbone, and a multi-head axial attention module is adopted to enlarge the perceptual field. It takes the CT images reconstructed from each kV as input and outputs the basis material images automatically based on their energy-dependent attenuation characteristics. Numerical simulations and experimental studies were carried out to evaluate the performance of this new approach. Main results: SkV-Net is able to generate four material density images (fat, muscle, bone, and iodine) from five spans of kV-switched spectral projections. Physical experiments show that the decomposition errors of iodine and CaCl2 are less than 6%, indicating the high precision of this approach in distinguishing materials. Significance: SkV-Net provides a promising multi-material decomposition approach for spectral CBCT imaging systems implemented with the ultra-slow kV switching scheme.
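Per voxel, multi-material decomposition reduces to a linear system: the measured attenuation at each kV is a weighted sum of basis material densities. SkV-Net learns this mapping; the classical least-squares baseline it replaces looks like the following sketch (the attenuation coefficients are made-up placeholders, not real tabulated values):

```python
import numpy as np

# Classical least-squares material decomposition for one voxel.
# Rows: 5 kV settings; columns: mass attenuation of fat, muscle, bone, iodine.
# These coefficients are illustrative only.
M = np.array([
    [0.20, 0.22, 0.48, 2.10],
    [0.19, 0.21, 0.40, 1.60],
    [0.18, 0.20, 0.34, 1.20],
    [0.17, 0.19, 0.30, 0.95],
    [0.17, 0.18, 0.27, 0.80],
])
rho_true = np.array([0.9, 1.0, 0.4, 0.02])    # basis material densities in the voxel
mu = M @ rho_true                              # simulated noise-free measurements

# Recover the densities from the measurements.
rho_hat, *_ = np.linalg.lstsq(M, mu, rcond=None)
print(np.allclose(rho_hat, rho_true))          # True in the noise-free case
```

With real, noisy, ultra-sparse projections this per-voxel inversion becomes ill-conditioned, which is the gap the learned SkV-Net mapping is designed to close.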

Promptable segmentation of CT lung lesions based on improved U-Net and Segment Anything model (SAM).

Yan W, Xu Y, Yan S

pubmed · logopapers · May 11 2025
Background: Computed tomography (CT) is widely used in the clinical diagnosis of lung diseases. Automatic segmentation of lesions in CT images aids the development of intelligent lung disease diagnosis. Objective: This study aims to address imprecise segmentation in CT images caused by the blurred detail features of lesions, which are easily confused with surrounding tissues. Methods: We propose a promptable segmentation method based on an improved U-Net and the Segment Anything Model (SAM) to improve the segmentation accuracy of lung lesions in CT images. The improved U-Net incorporates a multi-scale attention module based on the ECA (Efficient Channel Attention) channel attention mechanism to improve recognition of detailed features at lesion edges, and a promptable clipping module to incorporate physicians' prior knowledge into the model and reduce background interference. SAM has a strong ability to recognize lesions, pulmonary atelectasis, and organs; we combine the two to improve overall segmentation performance. Results: On the LUNA16 dataset and a lung CT dataset provided by Shanghai Chest Hospital, the proposed method achieves Dice coefficients of 80.12% and 92.06% and positive predictive values of 81.25% and 91.91%, superior to most existing mainstream segmentation methods. Conclusion: The proposed method improves the segmentation accuracy of lung lesions in CT images, enhances the automation level of existing computer-aided diagnostic systems, and provides more effective assistance to radiologists in clinical practice.
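The two metrics reported above have standard definitions on binary masks; a minimal sketch:

```python
import numpy as np

# Dice coefficient and positive predictive value (PPV) for binary
# segmentation masks: Dice = 2|P∩G| / (|P|+|G|), PPV = TP / (TP+FP).
def dice(pred, gt):
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def ppv(pred, gt):
    tp = np.logical_and(pred, gt).sum()
    return tp / pred.sum()

# Tiny 2x3 example masks.
pred = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
gt   = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
print(round(dice(pred, gt), 3), round(ppv(pred, gt), 3))  # 0.667 0.667
```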