
Detection, Classification, and Segmentation of Rib Fractures From CT Data Using Deep Learning Models: A Review of Literature and Pooled Analysis.

Den Hengst S, Borren N, Van Lieshout EMM, Doornberg JN, Van Walsum T, Wijffels MME, Verhofstad MHJ

pubmed paper · May 23, 2025
Trauma-induced rib fractures are common injuries. The gold standard for diagnosing rib fractures is computed tomography (CT), but its sensitivity in the acute setting is low, and interpreting CT slices is labor-intensive. This has led to the development of new diagnostic approaches leveraging deep learning (DL) models. This systematic review and pooled analysis aimed to compare the performance of DL models in the detection, segmentation, and classification of rib fractures based on CT scans. A literature search was performed across multiple databases for studies describing DL models that detect, segment, or classify rib fractures from CT data. Reported performance metrics included sensitivity, false-positive rate, F1-score, precision, accuracy, and mean average precision. A meta-analysis was performed on the sensitivity scores to compare the DL models with clinicians. Of the 323 identified records, 25 were included. Twenty-one studies reported on detection, four on segmentation, and ten on classification; twenty had adequate data for meta-analysis. Gold-standard labels were provided by clinicians (radiologists and orthopedic surgeons). For detecting rib fractures, DL models had a higher sensitivity (86.7%; 95% CI: 82.6%-90.2%) than clinicians (75.4%; 95% CI: 68.1%-82.1%). In classification, the sensitivity of DL models for displaced rib fractures (97.3%; 95% CI: 95.6%-98.5%) was significantly better than that of clinicians (88.2%; 95% CI: 84.8%-91.3%). DL models achieved promising results for rib fracture detection and classification. Given their higher sensitivities than clinicians for detecting and for classifying displaced rib fractures, future work should focus on implementing DL models in daily clinical practice. Level III; systematic review and pooled analysis.
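For readers unfamiliar with how such pooled sensitivities are produced: per-study estimates are typically combined on the logit scale with inverse-variance weights under a random-effects model. The sketch below illustrates the DerSimonian-Laird approach in Python; the true-positive/false-negative counts are illustrative placeholders, not data from this review.

```python
import numpy as np
from scipy.special import expit, logit
from scipy.stats import norm

def pool_sensitivity(tp, fn):
    """DerSimonian-Laird random-effects pooling of sensitivities on the logit scale."""
    tp, fn = np.asarray(tp, float), np.asarray(fn, float)
    sens = (tp + 0.5) / (tp + fn + 1.0)              # continuity-corrected per-study sensitivity
    y = logit(sens)                                  # logit transform stabilizes pooling
    v = 1.0 / (tp + 0.5) + 1.0 / (fn + 0.5)          # approximate variance of logit(sens)
    w = 1.0 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)               # Cochran's Q heterogeneity statistic
    tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1.0 / (v + tau2)                        # random-effects weights
    y_re = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    z = norm.ppf(0.975)
    return expit(y_re), expit(y_re - z * se), expit(y_re + z * se)

# Illustrative true-positive / false-negative counts (not from the review)
est, lo, hi = pool_sensitivity(tp=[85, 120, 64], fn=[12, 20, 9])
print(f"pooled sensitivity {est:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```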

COVID-19CT+: A public dataset of CT images for COVID-19 retrospective analysis.

Sun Y, Du T, Wang B, Rahaman MM, Wang X, Huang X, Jiang T, Grzegorzek M, Sun H, Xu J, Li C

pubmed paper · May 23, 2025
Background and objective: COVID-19 is considered the biggest global health disaster of the 21st century, with an enormous impact on the world. Methods: This paper publishes a publicly available dataset of CT images of multiple types of pneumonia (COVID-19CT+). Specifically, the dataset contains 409,619 CT images of 1333 patients, with subset-A containing 312 community-acquired pneumonia cases and subset-B containing 1021 COVID-19 cases. To demonstrate that classification methods from different periods perform differently on COVID-19CT+, we selected 13 classical machine learning classifiers and 5 deep learning classifiers to test the image classification task. Results: Two sets of experiments were conducted using traditional machine learning and deep learning methods: the first classifies COVID-19 versus COVID-19 white lung disease within subset-B, and the second classifies community-acquired pneumonia in subset-A versus COVID-19 in subset-B. In the first set of experiments, the accuracy of traditional machine learning reached a maximum of 97.3% and a minimum of only 62.6%, while deep learning algorithms reached a maximum of 97.9% and a minimum of 85.7%. In the second set, traditional machine learning reached a high of 94.6% accuracy and a low of 56.8%, while deep learning reached a high of 91.9% and a low of 86.3%. Conclusions: COVID-19CT+ covers a large number of CT images of patients with COVID-19 and community-acquired pneumonia and is one of the largest datasets available. We expect this dataset to attract more researchers to explore new automated diagnostic algorithms that improve the diagnostic accuracy and efficiency of COVID-19.
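A benchmark like the one described, spanning many classical classifiers, is commonly run by looping scikit-learn estimators over a shared feature matrix. The sketch below illustrates that pattern under the assumption of precomputed per-image features; the feature matrix and labels are random stand-ins, not COVID-19CT+ data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical feature matrix (e.g., texture features per CT slice) and binary labels
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))
y = rng.integers(0, 2, size=500)

classifiers = {
    "SVM (RBF)": SVC(kernel="rbf"),
    "Logistic regression": LogisticRegression(max_iter=1000),
    "Random forest": RandomForestClassifier(n_estimators=200),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
}

for name, clf in classifiers.items():
    pipe = make_pipeline(StandardScaler(), clf)      # scale features before fitting
    acc = cross_val_score(pipe, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: mean CV accuracy {acc:.3f}")
```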

Generalizable AI approach for detecting projection type and left-right reversal in chest X-rays.

Ohta Y, Katayama Y, Ichida T, Utsunomiya A, Ishida T

pubmed paper · May 23, 2025
The verification of chest X-ray images involves several checkpoints, including orientation and reversal. To address the challenges of manual verification, this study developed an artificial intelligence (AI)-based system using a deep convolutional neural network (DCNN) to automatically verify the consistency between the imaging direction and examination orders. The system classified chest X-ray images into four categories: anteroposterior (AP), posteroanterior (PA), flipped AP, and flipped PA. To evaluate the impact of internal and external datasets on classification accuracy, the DCNN was trained on multiple publicly available chest X-ray datasets and tested on both internal and external data. The results demonstrated that the DCNN accurately classified imaging directions and detected image reversal; however, classification accuracy was strongly influenced by the training dataset. When trained exclusively on NIH data, the network achieved an accuracy of 98.9% on the same dataset, but this dropped to 87.8% when evaluated on PADChest data. When trained on a mixed dataset, accuracy improved to 96.4%, yet decreased to 76.0% when tested on the external COVID-CXNet dataset. Further, using Grad-CAM, we visualized the network's decision-making process, highlighting areas of influence such as the cardiac silhouette and arm positioning, depending on the imaging direction. This study thus demonstrated the potential of AI to automate the verification of imaging direction and positioning in chest X-rays; however, the network must be fine-tuned to local data characteristics to achieve optimal performance.
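Grad-CAM, as used here for visualization, weights the final convolutional feature maps by the spatially pooled gradients of the target class score. A minimal PyTorch sketch follows; the ResNet-18 backbone and the four-class head are assumptions for illustration, not necessarily the authors' architecture.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(num_classes=4).eval()   # assumed classes: AP, PA, flipped AP, flipped PA
feats, grads = {}, {}

def fwd_hook(module, inputs, output):
    feats["v"] = output
    output.register_hook(lambda g: grads.update(v=g))  # capture feature-map gradients

model.layer4.register_forward_hook(fwd_hook)           # last conv block of ResNet-18

x = torch.randn(1, 3, 224, 224)                        # placeholder chest X-ray tensor
score = model(x)[0].max()                              # score of the predicted class
score.backward()

w = grads["v"].mean(dim=(2, 3), keepdim=True)          # global-average-pool the gradients
cam = F.relu((w * feats["v"]).sum(dim=1))              # weighted sum of feature maps + ReLU
cam = F.interpolate(cam[None], size=(224, 224), mode="bilinear")[0, 0]
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to a [0, 1] heatmap
```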

Predictive value of machine learning for PD-L1 expression in NSCLC: a systematic review and meta-analysis.

Zheng T, Li X, Zhou L, Jin J

pubmed paper · May 22, 2025
As machine learning (ML) continuously develops in cancer diagnosis and treatment, some researchers have attempted to use ML to predict the expression of programmed death ligand-1 (PD-L1) in non-small cell lung cancer (NSCLC). However, there is a lack of systematic evidence on the effectiveness of ML. We conducted a thorough search across Embase, PubMed, the Cochrane Library, and Web of Science from inception to December 14th, 2023. A systematic review and meta-analysis was conducted to assess the value of ML for predicting PD-L1 expression in NSCLC. In total, 30 studies with 12,898 NSCLC patients were included. The thresholds of PD-L1 expression level were <1%, 1-49%, and ≥50%. In the validation set, in the binary classification for PD-L1 ≥1%, the pooled C-index was 0.646 (95% CI: 0.587-0.705), 0.799 (95% CI: 0.782-0.817), 0.806 (95% CI: 0.753-0.858), and 0.800 (95% CI: 0.717-0.883) for the clinical feature-, radiomics-, radiomics + clinical feature-, and pathomics-based ML models, respectively; in the binary classification for PD-L1 ≥50%, the pooled C-index was 0.649 (95% CI: 0.553-0.744), 0.771 (95% CI: 0.728-0.814), and 0.826 (95% CI: 0.783-0.869) for the clinical feature-, radiomics-, and radiomics + clinical feature-based ML models, respectively. At present, radiomics- and pathomics-based ML methods are both applied to predict PD-L1 expression in NSCLC, and both achieve satisfactory accuracy. In particular, the radiomics-based ML method appears to have wider clinical applicability as a non-invasive diagnostic tool. Both radiomics and pathomics serve as processing methods for medical images. In the future, we expect to develop medical image-based deep learning (DL) methods for intelligently predicting PD-L1 expression.

Deep Learning-Based Multimodal Feature Interaction-Guided Fusion: Enhancing the Evaluation of EGFR in Advanced Lung Adenocarcinoma.

Xu J, Feng B, Chen X, Wu F, Liu Y, Yu Z, Lu S, Duan X, Chen X, Li K, Zhang W, Dai X

pubmed paper · May 22, 2025
The aim of this study is to develop a deep learning-based multimodal feature interaction-guided fusion (DL-MFIF) framework that integrates macroscopic information from computed tomography (CT) images with microscopic information from whole-slide images (WSIs) to predict epidermal growth factor receptor (EGFR) mutations of primary lung adenocarcinoma in patients with advanced-stage disease. Data from 396 patients with lung adenocarcinoma across two medical institutions were analyzed. The data from 243 cases were divided into a training set (n=145) and an internal validation set (n=98) in a 6:4 ratio, and data from an additional 153 cases from another medical institution were included as an external validation set. All cases included CT scan images and WSIs. To integrate multimodal information, we developed the DL-MFIF framework, which leverages deep learning techniques to capture the interactions between radiomic macrofeatures derived from CT images and microfeatures obtained from WSIs. Compared to other classification models, the DL-MFIF model achieved significantly higher area under the curve (AUC) values. Specifically, the model outperformed others on both the internal validation set (AUC=0.856, accuracy=0.750) and the external validation set (AUC=0.817, accuracy=0.708). Decision curve analysis (DCA) demonstrated that the model provided superior net benefits (range 0.15-0.87). DeLong's test on the external validation set confirmed the statistical significance of the results (P<0.05). The DL-MFIF model demonstrated excellent performance in evaluating EGFR mutation status in patients with advanced lung adenocarcinoma. This model effectively aids radiologists in accurately classifying EGFR mutations in patients with primary lung adenocarcinoma, thereby improving treatment outcomes for this population.
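Feature-interaction fusion between two modalities is often realized as bidirectional cross-attention over each modality's token embeddings. The following PyTorch sketch illustrates that general pattern; the token counts, dimensions, and bidirectional design are illustrative assumptions, not the published DL-MFIF architecture.

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Bidirectional cross-attention between CT-derived and WSI-derived feature tokens."""

    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.ct_to_wsi = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.wsi_to_ct = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Sequential(nn.LayerNorm(2 * dim), nn.Linear(2 * dim, 1))

    def forward(self, ct_tokens, wsi_tokens):
        # Each modality queries the other, so interactions flow in both directions.
        ct_ctx, _ = self.ct_to_wsi(ct_tokens, wsi_tokens, wsi_tokens)
        wsi_ctx, _ = self.wsi_to_ct(wsi_tokens, ct_tokens, ct_tokens)
        fused = torch.cat([ct_ctx.mean(dim=1), wsi_ctx.mean(dim=1)], dim=-1)
        return self.head(fused)  # logit for EGFR mutant vs. wild type

# Hypothetical token sets: 32 radiomic tokens from CT, 64 patch tokens from a WSI
model = CrossModalFusion()
logit = model(torch.randn(2, 32, 256), torch.randn(2, 64, 256))
print(logit.shape)  # torch.Size([2, 1])
```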

Influence of content-based image retrieval on the accuracy and inter-reader agreement of usual interstitial pneumonia CT pattern classification.

Park S, Hwang HJ, Yun J, Chae EJ, Choe J, Lee SM, Lee HN, Shin SY, Park H, Jeong H, Kim MJ, Lee JH, Jo KW, Baek S, Seo JB

pubmed paper · May 22, 2025
To investigate whether content-based image retrieval (CBIR) of similar chest CT images can help usual interstitial pneumonia (UIP) CT pattern classifications among readers with varying levels of experience. This retrospective study included patients who underwent high-resolution chest CT between 2013 and 2015 for the initial workup for fibrosing interstitial lung disease. UIP classifications were assigned to CT images by three thoracic radiologists, which served as the ground truth. One hundred patients were selected as queries. The CBIR retrieved the top three similar CT images with UIP classifications using a deep learning algorithm. The diagnostic accuracies and inter-reader agreement of nine readers before and after CBIR were evaluated. Of 587 patients (mean age, 63 years; 356 men), 100 query cases (26 UIP patterns, 26 probable UIP patterns, 5 indeterminate for UIP, and 43 alternative diagnoses) were selected. After CBIR, the mean accuracy (61.3% to 67.1%; p = 0.011) and inter-reader agreement (Fleiss Kappa, 0.400 to 0.476; p = 0.003) were slightly improved. The accuracies of the radiologist group for all CT patterns except indeterminate for UIP increased after CBIR; however, they did not reach statistical significance. The resident and pulmonologist groups demonstrated mixed results: accuracy decreased for UIP pattern, increased for alternative diagnosis, and varied for others. CBIR slightly improved diagnostic accuracy and inter-reader agreement in UIP pattern classifications. However, its impact varied depending on the readers' level of experience, suggesting that the current CBIR system may be beneficial when used to complement the interpretations of experienced readers.
Question: CT pattern classification is important for the standardized assessment and management of idiopathic pulmonary fibrosis, but requires radiologic expertise and shows inter-reader variability.
Findings: CBIR slightly improved diagnostic accuracy and inter-reader agreement for UIP CT pattern classifications overall.
Clinical relevance: The proposed CBIR system may guide consistent work-up and treatment strategies by enhancing accuracy and inter-reader agreement in UIP CT pattern classifications by experienced readers whose expertise and experience can effectively interact with CBIR results.
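The retrieval step in a CBIR system of this kind usually embeds each CT study with a deep network and ranks archived cases by cosine similarity to the query embedding. A minimal sketch, with hypothetical 128-dimensional embeddings standing in for the deep features:

```python
import numpy as np

def retrieve_top_k(query_emb: np.ndarray, db_embs: np.ndarray, k: int = 3) -> np.ndarray:
    """Return indices of the k database cases most similar to the query embedding."""
    # L2-normalize so the dot product equals cosine similarity
    q = query_emb / np.linalg.norm(query_emb)
    db = db_embs / np.linalg.norm(db_embs, axis=1, keepdims=True)
    sims = db @ q
    return np.argsort(sims)[::-1][:k]

# Hypothetical 128-D embeddings for 587 archived cases and one query case
rng = np.random.default_rng(42)
database = rng.normal(size=(587, 128))
query = rng.normal(size=128)
print(retrieve_top_k(query, database))  # indices of the 3 most similar cases
```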

Enhancing Boundary Accuracy in Semantic Segmentation of Chest X-Ray Images Using Gaussian Process Regression.

Aljaddouh B, Malathi D

pubmed paper · May 22, 2025
This research aims to enhance lung segmentation in chest X-rays by addressing boundary distortions in anatomical structures, refining segmentation boundaries, and improving the morphological shape of segmented objects. The proposed approach combines the K-segment principal curve with Gaussian Process Regression (GPR) to refine segmentation boundaries, evaluated on lung X-ray datasets at varying resolutions. Several state-of-the-art models, including U-Net, SegNet, and TransUnet, were also assessed for comparison. The model employed a custom kernel for GPR, combining a Radial Basis Function (RBF) with a cosine similarity term. Effectiveness was evaluated using the Dice Coefficient (DC) and Jaccard Index (JC) for segmentation accuracy, along with Average Symmetric Surface Distance (ASSD) and Hausdorff Distance (HD) for boundary alignment. The proposed method achieved superior segmentation performance, particularly at the highest resolution (1024×1024 pixels), with a DC of 95.7% for the left lung and 94.1% for the right lung. Among the compared models, TransUnet outperformed the others across both the semantic segmentation and boundary refinement stages, showing significant improvements in DC, JC, ASSD, and HD. The results indicate that the proposed boundary refinement approach effectively improves the segmentation quality of lung X-rays, excelling at refining well-defined structures and achieving superior boundary alignment, showcasing its potential for clinical applications. However, limitations exist when dealing with irregular or unpredictable shapes, suggesting areas for future enhancement.
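One way to realize the described boundary refinement is to parametrize the contour and regress its coordinates with a GPR whose kernel combines an RBF term with a second similarity term. In the scikit-learn sketch below, a periodic ExpSineSquared kernel stands in for the paper's cosine-similarity term (since a lung contour is a closed curve), so the kernel composition and all parameters are assumptions for illustration only.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ExpSineSquared, WhiteKernel

# Hypothetical noisy boundary points of a closed lung contour, parametrized by angle t
t = np.linspace(0, 2 * np.pi, 80, endpoint=False)[:, None]
x = 100 * np.cos(t).ravel() + np.random.default_rng(0).normal(0, 3, 80)
y = 60 * np.sin(t).ravel() + np.random.default_rng(1).normal(0, 3, 80)

# RBF for local smoothness + periodic term so the curve closes; WhiteKernel absorbs noise.
kernel = (RBF(length_scale=1.0)
          + ExpSineSquared(length_scale=1.0, periodicity=2 * np.pi)
          + WhiteKernel(noise_level=1.0))

gpr_x = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, x)
gpr_y = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, y)

# Evaluate the refined, smooth boundary on a dense parameter grid
t_fine = np.linspace(0, 2 * np.pi, 400)[:, None]
boundary = np.stack([gpr_x.predict(t_fine), gpr_y.predict(t_fine)], axis=1)
print(boundary.shape)  # (400, 2) smoothed contour points
```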

CT-Agent: A Multimodal-LLM Agent for 3D CT Radiology Question Answering

Yuren Mao, Wenyi Xu, Yuyang Qin, Yunjun Gao

arxiv preprint · May 22, 2025
Computed Tomography (CT) produces 3D volumetric medical data that can be viewed as hundreds of cross-sectional images (a.k.a. slices), providing detailed anatomical information for diagnosis. For radiologists, creating CT radiology reports is time-consuming and error-prone. A visual question answering (VQA) system that can answer radiologists' questions about anatomical regions on a CT scan, and even automatically generate a radiology report, is urgently needed. However, existing VQA systems cannot adequately handle the CT radiology question answering (CTQA) task for two reasons: (1) anatomic complexity makes CT images difficult to understand, and (2) spatial relationships across hundreds of slices are difficult to capture. To address these issues, this paper proposes CT-Agent, a multimodal agentic framework for CTQA. CT-Agent adopts anatomically independent tools to break down the anatomic complexity; furthermore, it efficiently captures cross-slice spatial relationships with a global-local token compression strategy. Experimental results on two 3D chest CT datasets, CT-RATE and RadGenome-ChestCT, verify the superior performance of CT-Agent.
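The global-local token compression strategy can be pictured as keeping full-resolution patch tokens only for slices relevant to the question while mean-pooling every other slice into a single token. The snippet below is a speculative PyTorch illustration of that pattern, not the CT-Agent implementation.

```python
import torch

def compress_tokens(slice_tokens: torch.Tensor, keep: torch.Tensor) -> torch.Tensor:
    """Keep full patch tokens for selected slices; mean-pool the rest to one token each.

    slice_tokens: (num_slices, patches_per_slice, dim)
    keep:         boolean mask (num_slices,) marking slices relevant to the question
    """
    local = slice_tokens[keep].flatten(0, 1)     # full detail for relevant slices
    global_ = slice_tokens[~keep].mean(dim=1)    # one pooled token per remaining slice
    return torch.cat([local, global_], dim=0)    # compressed token sequence

tokens = torch.randn(300, 196, 768)              # e.g., 300 slices, 14x14 patches each
mask = torch.zeros(300, dtype=torch.bool)
mask[120:130] = True                             # suppose 10 slices match the query
print(compress_tokens(tokens, mask).shape)       # (10*196 + 290, 768)
```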

Leveraging deep learning-based kernel conversion for more precise airway quantification on CT.

Choe J, Yun J, Kim MJ, Oh YJ, Bae S, Yu D, Seo JB, Lee SM, Lee HY

pubmed paper · May 22, 2025
To evaluate the variability of fully automated airway quantitative CT (QCT) measures caused by different kernels and the effect of kernel conversion. This retrospective study included 96 patients who underwent non-enhanced chest CT at two centers. CT scans were reconstructed using four kernels (medium soft, medium sharp, sharp, very sharp) from three vendors. Kernel conversion targeting the medium soft kernel as reference was applied to sharp kernel images. Fully automated airway quantification was performed before and after conversion. The effects of kernel type and conversion on airway quantification were evaluated using analysis of variance, paired t-tests, and concordance correlation coefficient (CCC). Airway QCT measures (e.g., Pi10, wall thickness, wall area percentage, lumen diameter) decreased with sharper kernels (all, p < 0.001), with varying degrees of variability across variables and vendors. Kernel conversion substantially reduced variability between medium soft and sharp kernel images for vendors A (pooled CCC: 0.59 vs. 0.92) and B (0.40 vs. 0.91) and lung-dedicated sharp kernels of vendor C (0.26 vs. 0.71). However, it was ineffective for non-lung-dedicated sharp kernels of vendor C (0.81 vs. 0.43) and showed limited improvement in variability of QCT measures at the subsegmental level. Consistent airway segmentation and identical anatomic labeling improved subsegmental airway variability in theoretical tests. Deep learning-based kernel conversion reduced the measurement variability of airway QCT across various kernels and vendors but was less effective for non-lung-dedicated kernels and subsegmental airways. Consistent airway segmentation and precise anatomic labeling can further enhance reproducibility for reliable automated quantification.
Question: How do different CT reconstruction kernels affect the measurement variability of automated airway measurements, and can deep learning-based kernel conversion reduce this variability?
Findings: Kernel conversion improved measurement consistency across vendors for lung-dedicated kernels, but showed limited effectiveness for non-lung-dedicated kernels and subsegmental airways.
Clinical relevance: Understanding kernel-related variability in airway quantification and mitigating it through deep learning enables standardized analysis, but further refinements are needed for robust airway segmentation, particularly for improving measurement variability in subsegmental airways and specific kernels.
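The concordance correlation coefficient (CCC) used to quantify agreement here is Lin's coefficient, which penalizes both poor correlation and systematic bias between paired measurements. A minimal sketch, with made-up paired airway measurements:

```python
import numpy as np

def lins_ccc(a: np.ndarray, b: np.ndarray) -> float:
    """Lin's concordance correlation coefficient between paired measurements."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    cov = np.mean((a - a.mean()) * (b - b.mean()))   # population covariance
    return 2 * cov / (a.var() + b.var() + (a.mean() - b.mean()) ** 2)

# Made-up Pi10 values measured on soft-kernel vs. converted sharp-kernel images
soft = np.array([3.61, 3.72, 3.55, 3.80, 3.68])
converted = np.array([3.59, 3.75, 3.52, 3.83, 3.70])
print(f"CCC = {lins_ccc(soft, converted):.3f}")
```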

Deep Learning Image Reconstruction (DLIR) Algorithm to Maintain High Image Quality and Diagnostic Accuracy in Quadruple-low CT Angiography of Children with Pulmonary Sequestration: A Case Control Study.

Li H, Zhang Y, Hua S, Sun R, Zhang Y, Yang Z, Peng Y, Sun J

pubmed paper · May 22, 2025
CT angiography (CTA) is a commonly used clinical examination to detect abnormal arteries and diagnose pulmonary sequestration (PS). Reducing the radiation dose, contrast medium dosage, and injection pressure of CTA, especially in children, has long been an important research topic, but little of this work has been validated against pathology. The current study aimed to evaluate the diagnostic accuracy of quadruple-low CTA (4L-CTA: low tube voltage, radiation dose, contrast medium, and injection flow rate) with deep learning image reconstruction (DLIR) in children with PS, in comparison with routine-protocol CTA using adaptive statistical iterative reconstruction-V (ASIR-V). Fifty-three patients (1.50±1.36 years) with suspected PS were enrolled to undergo chest 4L-CTA at 70 kVp tube voltage, with a radiation dose of 0.90 mGy in volumetric CT dose index (CTDIvol) and a contrast medium dose of 0.8 ml/kg injected over 16 s; images were reconstructed using DLIR. Another 53 patients (1.25±1.02 years) scanned with a routine-dose protocol were used for comparison, with images reconstructed using ASIR-V. The contrast-to-noise ratio (CNR) and edge-rise distance (ERD) of the aorta were calculated. Subjective overall image quality and artery visualization were evaluated on a 5-point scale (5, excellent; 3, acceptable). All patients underwent surgery after CT, and the sensitivity and specificity for diagnosing PS were calculated. 4L-CTA reduced radiation dose by 51%, contrast dose by 47%, injection flow rate by 44%, and injection pressure by 44% compared to routine CTA (all p<0.05). Both groups had satisfactory subjective image quality and achieved 100% sensitivity and specificity for diagnosing PS. 4L-CTA had a reduced CNR (by 27%, p<0.05) but an ERD, which reflects spatial resolution, similar to that of routine CTA (p>0.05), and it revealed small arteries with a diameter of 0.8 mm. DLIR enables 4L-CTA in children with PS, with significant radiation and contrast dose reductions while maintaining image quality, visualization of small arteries, and high diagnostic accuracy.
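The reported image-quality metrics are simple to compute: CNR divides the ROI-to-background contrast by the background noise, and ERD measures the distance over which an edge profile rises (taken below as the 10%-90% rise, one common convention; the paper's exact definition may differ). A sketch with hypothetical values:

```python
import numpy as np

def cnr(roi_mean: float, bg_mean: float, bg_sd: float) -> float:
    """Contrast-to-noise ratio: ROI/background contrast divided by background noise."""
    return (roi_mean - bg_mean) / bg_sd

def edge_rise_distance(profile: np.ndarray, spacing_mm: float) -> float:
    """10%-90% rise distance across an edge profile sampled at `spacing_mm` per pixel."""
    lo, hi = profile.min(), profile.max()
    t10, t90 = lo + 0.1 * (hi - lo), lo + 0.9 * (hi - lo)
    i10 = np.argmax(profile >= t10)      # first pixel crossing the 10% level
    i90 = np.argmax(profile >= t90)      # first pixel crossing the 90% level
    return (i90 - i10) * spacing_mm

# Hypothetical values: enhanced aorta vs. muscle background, and a sampled edge profile
print(f"CNR = {cnr(roi_mean=350, bg_mean=60, bg_sd=15):.1f}")
profile = np.array([60, 62, 70, 120, 230, 320, 345, 350, 351])
print(f"ERD = {edge_rise_distance(profile, spacing_mm=0.35):.2f} mm")
```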