Page 9 of 66652 results

Annotation-efficient deep learning detection and measurement of mediastinal lymph nodes in CT.

Olesinski A, Lederman R, Azraq Y, Sosna J, Joskowicz L

pubmed · Sep 13 2025
Manual detection and measurement of structures in volumetric scans is routine in clinical practice but is time-consuming and subject to observer variability. Automatic deep learning-based solutions are effective but require a large dataset of manual annotations by experts. We present a novel annotation-efficient semi-supervised deep learning method for automatic detection, segmentation, and measurement of the short axis length (SAL) of mediastinal lymph nodes (LNs) in contrast-enhanced CT (ceCT) scans. Our semi-supervised method combines the precision of expert annotations with the quantity advantages of pseudolabeled data. It uses an ensemble of 3D nnU-Net models trained on a few expert-annotated scans to generate pseudolabels on a large dataset of unannotated scans. The pseudolabels are then filtered to remove false positive LNs by excluding LNs outside the mediastinum and LNs overlapping with other anatomical structures. Finally, a single 3D nnU-Net model is trained using the filtered pseudolabels. Our method optimizes the ratio of annotated to unannotated dataset sizes to achieve the desired performance, thus reducing manual annotation effort. Experimental studies on three chest ceCT datasets with a total of 268 annotated scans (1817 LNs), of which 134 scans were used for testing and the remainder for ensemble training in batches of 17, 34, 67, and 134 scans, as well as 710 unannotated scans, show that the semi-supervised models' recall improvements were 11-24% (0.72-0.87) while maintaining comparable precision levels. The best model achieved mean SAL differences of 1.65 ± 0.92 mm for normal LNs and 4.25 ± 4.98 mm for enlarged LNs, both within observer variability. Our semi-supervised method requires one-fourth to one-eighth as many annotations to achieve performance comparable to supervised models trained on the same dataset for the automatic measurement of mediastinal LNs in chest ceCT.
Using pseudolabels with anatomical filtering may be an effective way to overcome the challenges of developing AI-based solutions in radiology.
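The anatomical filtering step can be sketched as a simple per-candidate rule. This is an illustrative reconstruction, not the authors' implementation: the mask names and the inclusion/overlap thresholds are assumptions.

```python
import numpy as np

def filter_pseudolabel(ln_mask, mediastinum_mask, organ_mask,
                       min_inside=0.5, max_overlap=0.1):
    """Keep a pseudolabeled lymph-node candidate only if it lies mostly
    inside the mediastinum and barely overlaps other labeled anatomy.
    Inputs are boolean volumes of equal shape; thresholds are assumptions."""
    vol = ln_mask.sum()
    if vol == 0:
        return False
    inside = (ln_mask & mediastinum_mask).sum() / vol
    overlap = (ln_mask & organ_mask).sum() / vol
    return bool(inside >= min_inside and overlap <= max_overlap)
```

In practice each candidate would be a connected component of the ensemble's pseudolabel volume; components failing the rule are dropped before the single nnU-Net is retrained.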

Association of artificial intelligence-screened interstitial lung disease with radiation pneumonitis in locally advanced non-small cell lung cancer.

Bacon H, McNeil N, Patel T, Welch M, Ye XY, Bezjak A, Lok BH, Raman S, Giuliani M, Cho BCJ, Sun A, Lindsay P, Liu G, Kandel S, McIntosh C, Tadic T, Hope A

pubmed · Sep 13 2025
Interstitial lung disease (ILD) has been correlated with an increased risk for radiation pneumonitis (RP) following lung SBRT, but the degree to which locally advanced NSCLC (LA-NSCLC) patients are affected has yet to be quantified. An algorithm to identify patients at high risk for RP may help clinicians mitigate risk. All LA-NSCLC patients treated with definitive radiotherapy at our institution from 2006 to 2021 were retrospectively assessed. A convolutional neural network was previously developed to identify patients with radiographic ILD using planning computed tomography (CT) images. All screen-positive (AI-ILD+) patients were reviewed by a thoracic radiologist to identify true radiographic ILD (r-ILD). The association between the algorithm output, clinical and dosimetric variables, and the outcomes of grade ≥ 3 RP and mortality was assessed using univariate (UVA) and multivariable (MVA) logistic regression, and Kaplan-Meier survival analysis. 698 patients were included in the analysis. Grade (G) 0-5 RP was reported in 51%, 27%, 17%, 4.4%, 0.14% and 0.57% of patients, respectively. Overall, 23% of patients were classified as AI-ILD+. On MVA, only AI-ILD status (OR 2.15, p = 0.03) and AI-ILD score (OR 35.27, p < 0.01) were significant predictors of G3+ RP. Median OS was 3.6 years in AI-ILD- patients and 2.3 years in AI-ILD+ patients (NS). Patients with r-ILD had significantly higher rates of severe toxicities, with G3+ RP in 25% and G5 RP in 7%. r-ILD was associated with an increased risk for G3+ RP on MVA (OR 5.42, p < 0.01). Our AI-ILD algorithm detects patients with significantly increased risk for G3+ RP.

Multi-pathology Chest X-ray Classification with Rejection Mechanisms

Yehudit Aperstein, Amit Tzahar, Alon Gottlib, Tal Verber, Ravit Shagan Damti, Alexander Apartsin

arxiv preprint · Sep 12 2025
Overconfidence in deep learning models poses a significant risk in high-stakes medical imaging tasks, particularly in multi-label classification of chest X-rays, where multiple co-occurring pathologies must be detected simultaneously. This study introduces an uncertainty-aware framework for chest X-ray diagnosis based on a DenseNet-121 backbone, enhanced with two selective prediction mechanisms: entropy-based rejection and confidence interval-based rejection. Both methods enable the model to abstain from uncertain predictions, improving reliability by deferring ambiguous cases to clinical experts. A quantile-based calibration procedure is employed to tune rejection thresholds using either global or class-specific strategies. Experiments conducted on three large public datasets (PadChest, NIH ChestX-ray14, and MIMIC-CXR) demonstrate that selective rejection improves the trade-off between diagnostic accuracy and coverage, with entropy-based rejection yielding the highest average AUC across all pathologies. These results support the integration of selective prediction into AI-assisted diagnostic workflows, providing a practical step toward safer, uncertainty-aware deployment of deep learning in clinical settings.
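Entropy-based rejection with quantile calibration can be sketched as follows. The multi-label Bernoulli entropy and the coverage parameter are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def entropy(probs, eps=1e-12):
    # Per-case uncertainty: Bernoulli entropy per label, summed over labels.
    p = np.clip(probs, eps, 1 - eps)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p)).sum(axis=-1)

def calibrate_threshold(calib_probs, coverage=0.9):
    # Quantile-based global threshold: keep the `coverage` fraction of
    # least-uncertain calibration cases, defer the rest.
    return np.quantile(entropy(calib_probs), coverage)

def predict_with_rejection(probs, threshold):
    accept = entropy(probs) <= threshold       # abstain where False
    preds = (probs >= 0.5).astype(int)
    return preds, accept
```

A class-specific variant would calibrate one threshold per pathology instead of a single global quantile.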

Risk prediction for lung cancer screening: a systematic review and meta-regression

Rezaeianzadeh, R., Leung, C., Kim, S. J., Choy, K., Johnson, K. M., Kirby, M., Lam, S., Smith, B. M., Sadatsafavi, M.

medrxiv preprint · Sep 12 2025
Background: Lung cancer (LC) is the leading cause of cancer mortality, often diagnosed at advanced stages. Screening reduces mortality in high-risk individuals, but its efficiency can improve with pre- and post-screening risk stratification. With recent LC screening guideline updates in Europe and the US, numerous novel risk prediction models have emerged since the last systematic review of such models. We reviewed risk-based models for selecting candidates for CT screening, and post-CT stratification. Methods: We systematically reviewed Embase and MEDLINE (2020-2024), identifying studies proposing new LC risk models for screening selection or nodule classification. Data extraction included study design, population, model type, risk horizon, and internal/external validation metrics. In addition, we performed an exploratory meta-regression of AUCs to assess whether sample size, model class, validation type, and biomarker use were associated with discrimination. Results: Of 1987 records, 68 were included: 41 models were for screening selection (20 without biomarkers, 21 with), and 27 for nodule classification. Regression-based models predominated, though machine learning and deep learning approaches were increasingly common. Discrimination ranged from moderate (AUC ≈ 0.70) to excellent (>0.90), with biomarker and imaging-enhanced models often outperforming traditional ones. Model calibration was inconsistently reported, and fewer than half underwent external validation. Meta-regression suggested that, among pre-screening models, larger sample sizes were modestly associated with higher AUC. Conclusion: 75 models had been identified prior to 2020; we found 68 models since. This reflects growing interest in personalized LC screening. While many demonstrate strong discrimination, inconsistent calibration and limited external validation hinder clinical adoption.
Future efforts should prioritize improving existing models rather than developing new ones, along with transparent evaluation, cost-effectiveness analysis, and real-world implementation.
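The kind of meta-regression described, relating study-level AUCs to sample size and biomarker use, can be sketched with weighted least squares. The covariates and weighting are illustrative assumptions, not the review's exact specification:

```python
import numpy as np

def meta_regression(auc, n, biomarker, weights=None):
    """Weighted least squares of study-level AUCs on log sample size and
    a biomarker indicator (covariates chosen for illustration only)."""
    X = np.column_stack([np.ones_like(auc), np.log(n), biomarker])
    w = np.ones_like(auc) if weights is None else np.asarray(weights)
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(sw[:, None] * X, sw * auc, rcond=None)
    return beta  # intercept, log-sample-size slope, biomarker effect
```

In a real analysis the weights would typically be inverse variances of the AUC estimates, so larger, more precise studies count for more.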

Machine-learning model for differentiating round pneumonia and primary lung cancer using CT-based radiomic analysis.

Genç H, Yildirim M

pubmed · Sep 12 2025
Round pneumonia is a benign lung condition that can radiologically mimic primary lung cancer, making diagnosis challenging. Accurately distinguishing between these diseases is critical to avoid unnecessary invasive procedures. This study aims to distinguish round pneumonia from primary lung cancer by developing machine-learning models based on radiomic features extracted from computed tomography (CT) images. This retrospective observational study included 24 patients diagnosed with round pneumonia and 24 with histopathologically confirmed primary lung cancer. The lesions were manually segmented on the CT images by 2 radiologists. In total, 107 radiomic features were extracted from each case. Feature selection was performed using an information-gain algorithm to identify the 5 most relevant features. Seven machine-learning classifiers (Naïve Bayes, support vector machine, Random Forest, Decision Tree, Neural Network, Logistic Regression, and k-NN) were trained and validated. The model performance was evaluated using AUC, classification accuracy, sensitivity, and specificity. The Naïve Bayes, support vector machine, and Random Forest models achieved perfect classification performance on the entire dataset (AUC = 1.000). After feature selection, the Naïve Bayes model maintained a high performance with an AUC of 1.000, accuracy of 0.979, sensitivity of 0.958, and specificity of 1.000. Machine-learning models using CT-based radiomics features can effectively differentiate round pneumonia from primary lung cancer. These models offer a promising noninvasive tool to aid in radiological diagnosis and reduce diagnostic uncertainty.
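Information-gain feature ranking of the sort used here can be sketched for a single continuous radiomic feature split at a threshold. This is a generic sketch, not the study's exact selection pipeline:

```python
import numpy as np

def entropy_of(labels):
    # Shannon entropy (bits) of a discrete label array.
    if labels.size == 0:
        return 0.0
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def information_gain(feature, labels, threshold):
    # Gain from splitting one continuous feature at `threshold`:
    # class entropy before the split minus size-weighted entropy after.
    left, right = labels[feature <= threshold], labels[feature > threshold]
    h_split = (left.size * entropy_of(left)
               + right.size * entropy_of(right)) / labels.size
    return entropy_of(labels) - h_split
```

Ranking the 107 features by their best-split gain and keeping the top 5 would mirror the selection step described in the abstract.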

Automatic approach for B-lines detection in lung ultrasound images using You Only Look Once algorithm.

Bottino A, Botrugno C, Casciaro E, Conversano F, Lay-Ekuakille A, Lombardi FA, Morello R, Pisani P, Vetrugno L, Casciaro S

pubmed · Sep 11 2025
B-lines are among the key artifact signs observed in Lung Ultrasound (LUS), playing a critical role in differentiating pulmonary diseases and assessing overall lung condition. However, their accurate detection and quantification can be time-consuming and technically challenging, especially for less experienced operators. This study aims to evaluate the performance of a YOLO (You Only Look Once)-based algorithm for the automated detection of B-lines, offering a novel tool to support clinical decision-making. The proposed approach is designed to improve the efficiency and consistency of LUS interpretation, particularly for non-expert practitioners, and to enhance its utility in guiding respiratory management. In this observational agreement study, 644 images from an anonymized internal database and a clinical online database were evaluated. After a quality selection step, 386 images from 46 patients remained available for analysis. Ground truth was established by a blinded expert sonographer who identified B-lines within rectangular Regions Of Interest (ROIs) on each frame. Algorithm performance was assessed through Precision, Recall and F1 Score, whereas weighted kappa (kw) statistics were employed to quantify the agreement between the YOLO-based algorithm and the expert operator. The algorithm achieved a precision of 0.92 (95% CI 0.89-0.94), recall of 0.81 (95% CI 0.77-0.85), and F1-score of 0.86 (95% CI 0.83-0.88). The weighted kappa was 0.68 (95% CI 0.64-0.72), indicating substantial agreement between the algorithm and expert annotations. The proposed algorithm demonstrates the potential to significantly enhance diagnostic support by accurately detecting B-lines in LUS images.
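The weighted kappa used to quantify algorithm-expert agreement can be computed from paired per-frame categories (e.g. B-line counts binned into classes). A minimal sketch with linear (or quadratic) disagreement weights:

```python
import numpy as np

def weighted_kappa(a, b, n_cat, weight="linear"):
    """Weighted Cohen's kappa between two raters' category labels."""
    O = np.zeros((n_cat, n_cat))
    for i, j in zip(a, b):                      # observed joint distribution
        O[i, j] += 1
    O /= O.sum()
    E = np.outer(O.sum(axis=1), O.sum(axis=0))  # expected under independence
    idx = np.arange(n_cat)
    W = np.abs(idx[:, None] - idx[None, :]).astype(float)
    if weight == "quadratic":
        W = W ** 2
    return 1.0 - (W * O).sum() / (W * E).sum()
```

A value of 0.68, as reported, falls in the conventional "substantial agreement" band (0.61-0.80).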

SqueezeViX-Net with SOAE: A Prevailing Deep Learning Framework for Accurate Pneumonia Classification using X-Ray and CT Imaging Modalities.

Kavitha N, Anand B

pubmed · Sep 11 2025
Pneumonia is a dangerous respiratory illness that leads to severe health problems, and increased mortality, when not diagnosed properly, particularly among at-risk populations. Appropriate treatment requires correct identification of the pneumonia type together with a swift and accurate diagnosis. This paper presents SqueezeViX-Net, a deep learning framework specifically designed for pneumonia classification. The model benefits from a Self-Optimized Adaptive Enhancement (SOAE) method, which makes programmed changes to the dropout rate during the training process. This adaptive dropout adjustment mechanism improves model suitability and stability. SqueezeViX-Net was evaluated on extensive X-ray and CT image collections derived from publicly accessible Kaggle repositories. It outperformed various established deep learning architectures, including DenseNet-121, ResNet-152V2, and EfficientNet-B7, achieving higher accuracy, precision, recall, and F1-score. The model was validated on a range of pneumonia datasets comprising both CT and X-ray images, demonstrating its ability to handle modality variations. SqueezeViX-Net integrates SOAE into an advanced framework for the specific identification of pneumonia in clinical use. Its dynamic learning capabilities and high precision give it excellent diagnostic potential for medical staff, contributing to improved patient treatment outcomes.
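SOAE's exact update rule is not given in the abstract; a hypothetical adaptive dropout schedule that raises the rate when the train-validation gap widens illustrates the general idea of programmed dropout changes during training:

```python
def adapt_dropout(rate, train_loss, val_loss, step=0.05, lo=0.0, hi=0.7):
    """Hypothetical adaptive rule in the spirit of SOAE: raise dropout when
    the model overfits (validation loss well above training loss), relax it
    otherwise. Gap thresholds, step size, and bounds are all assumptions."""
    gap = val_loss - train_loss
    if gap > 0.1:          # overfitting: regularize harder
        rate += step
    elif gap < 0.02:       # little or no gap: ease regularization
        rate -= step
    return min(hi, max(lo, rate))
```

Such a rule would be called once per epoch, with the returned rate written back into the network's dropout layers.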

Enhanced U-Net with Attention Mechanisms for Improved Feature Representation in Lung Nodule Segmentation.

Aung TMM, Khan AA

pubmed · Sep 11 2025
Accurate segmentation of small and irregular pulmonary nodules remains a significant challenge in lung cancer diagnosis, particularly in complex imaging backgrounds. Traditional U-Net models often struggle to capture long-range dependencies and integrate multi-scale features, limiting their effectiveness in addressing these challenges. To overcome these limitations, this study proposes an enhanced U-Net hybrid model that integrates multiple attention mechanisms to enhance feature representation and improve the precision of segmentation outcomes. The assessment of the proposed model was conducted using the LUNA16 dataset, which contains annotated CT scans of pulmonary nodules. Multiple attention mechanisms, including Spatial Attention (SA), Dilated Efficient Channel Attention (Dilated ECA), Convolutional Block Attention Module (CBAM), and Squeeze-and-Excitation (SE) Block, were integrated into a U-Net backbone. These modules were strategically combined to enhance both local and global feature representations. The model's architecture and training procedures were designed to address the challenges of segmenting small and irregular pulmonary nodules. The proposed model achieved a Dice similarity coefficient of 84.30%, significantly outperforming the baseline U-Net model. This result demonstrates improved accuracy in segmenting small and irregular pulmonary nodules. The integration of multiple attention mechanisms significantly enhances the model's ability to capture both local and global features, addressing key limitations of traditional U-Net architectures. SA preserves spatial features for small nodules, while Dilated ECA captures long-range dependencies. CBAM and SE further refine feature representations. Together, these modules improve segmentation performance in complex imaging backgrounds. 
A potential limitation is that performance may still be constrained in cases with extreme anatomical variability or low-contrast lesions, suggesting directions for future research. The enhanced U-Net hybrid model outperforms the traditional U-Net, effectively addressing challenges in segmenting small and irregular pulmonary nodules within complex imaging backgrounds.
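Of the attention modules listed, the Squeeze-and-Excitation block is the simplest to sketch. A NumPy version on a single (C, H, W) feature map, with hypothetical gating weights w1 and w2 (the shapes are assumptions; real implementations learn them):

```python
import numpy as np

def squeeze_excite(feat, w1, w2):
    """Squeeze-and-Excitation on a (C, H, W) feature map: global-average
    pool, two-layer gating MLP (ReLU then sigmoid), channel-wise rescale.
    w1: (r, C) reduction weights, w2: (C, r) expansion weights."""
    s = feat.mean(axis=(1, 2))                 # squeeze: (C,) channel stats
    z = np.maximum(w1 @ s, 0.0)                # excitation, hidden layer
    g = 1.0 / (1.0 + np.exp(-(w2 @ z)))        # per-channel gate in (0, 1)
    return feat * g[:, None, None]             # reweight channels
```

The gate suppresses uninformative channels and amplifies relevant ones, which is how SE (and, with spatial maps, CBAM) refines the U-Net's feature representations.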

Clinical evaluation of motion robust reconstruction using deep learning in lung CT.

Kuwajima S, Oura D

pubmed · Sep 10 2025
In lung CT imaging, motion artifacts caused by cardiac motion and respiration are common. Recently, CLEAR Motion, a deep learning-based reconstruction method that applies motion correction technology, has been developed. This study aims to quantitatively evaluate the clinical usefulness of CLEAR Motion. A total of 129 lung CT scans were analyzed, and the heart rate, height, weight, and BMI of all patients were obtained from medical records. Images with and without CLEAR Motion were reconstructed, and quantitative evaluation was performed using the variance of the Laplacian (VL) and PSNR. The difference in VL (DVL) between the two reconstruction methods was used to evaluate in which part of the lung field (upper, middle, or lower) CLEAR Motion is most effective. To evaluate the effect of motion correction based on patient characteristics, the correlation between body mass index (BMI), heart rate, and DVL was determined. Visual assessment of motion artifacts was performed using paired comparisons by 9 radiological technologists. With the exception of one case, VL was higher with CLEAR Motion. Almost all cases (110 cases) showed a large DVL in the lower part. BMI showed a positive correlation with DVL (r = 0.55, p < 0.05), while no differences in DVL were observed based on heart rate. The average PSNR was 35.8 ± 0.92 dB. Visual assessments indicated that CLEAR Motion was preferred in most cases, with an average preference score of 0.96 (p < 0.05). CLEAR Motion yields lung CT images with fewer motion artifacts.
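The variance-of-Laplacian sharpness metric used in this evaluation can be sketched with a 4-neighbour Laplacian on a 2D slice; this is a minimal NumPy version, not the study's exact implementation:

```python
import numpy as np

def variance_of_laplacian(img):
    """Sharpness proxy: variance of the 4-neighbour Laplacian response on a
    2D slice. Higher values indicate sharper, less motion-blurred edges."""
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())
```

The study's DVL would then be the VL of the CLEAR Motion reconstruction minus the VL of the conventional reconstruction on matched slices.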

Non-invasive prediction of invasive lung adenocarcinoma and high-risk histopathological characteristics in resectable early-stage adenocarcinoma by [18F]FDG PET/CT radiomics-based machine learning models: a prospective cohort study.

Cao X, Lv Z, Li Y, Li M, Hu Y, Liang M, Deng J, Tan X, Wang S, Geng W, Xu J, Luo P, Zhou M, Xiao W, Guo M, Liu J, Huang Q, Hu S, Sun Y, Lan X, Jin Y

pubmed · Sep 10 2025
Precise preoperative discrimination of invasive lung adenocarcinoma (IA) from preinvasive lesions (adenocarcinoma in situ [AIS]/minimally invasive adenocarcinoma [MIA]) and prediction of high-risk histopathological features are critical for optimizing resection strategies in early-stage lung adenocarcinoma (LUAD). In this multicenter study, 813 LUAD patients (tumors ≤3 cm) formed the training cohort. A total of 1,709 radiomic features were extracted from the PET/CT images. Feature selection was performed using the max-relevance and min-redundancy (mRMR) algorithm and least absolute shrinkage and selection operator (LASSO). Hybrid machine learning models integrating [18F]FDG PET/CT radiomics and clinical-radiological features were developed using H2O.ai AutoML. Models were validated in a prospective internal cohort (N = 256, 2021-2022) and an external multicenter cohort (N = 418). Performance was assessed via AUC, calibration, decision curve analysis (DCA) and survival assessment. The hybrid model achieved AUCs of 0.93 (95% CI: 0.90-0.96) for distinguishing IA from AIS/MIA (internal test) and 0.92 (0.90-0.95) in external testing. For predicting high-risk histopathological features (grade-III, lymphatic/pleural/vascular/nerve invasion, STAS), AUCs were 0.82 (0.77-0.88) and 0.85 (0.81-0.89) in the internal/external sets. DCA confirmed superior net benefit over the CT-only model. The model stratified progression-free (P = 0.002) and overall survival (P = 0.017) in the TCIA cohort. PET/CT radiomics-based models enable accurate non-invasive prediction of invasiveness and high-risk pathology in early-stage LUAD, guiding optimal surgical resection.
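The AUCs reported for the hybrid model can be computed from raw prediction scores via the rank (Mann-Whitney U) formulation; a minimal sketch that ignores tied scores:

```python
import numpy as np

def auc_score(y_true, scores):
    """AUC via the rank (Mann-Whitney U) formulation; ties in `scores`
    are not handled, for simplicity."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)     # 1-based ranks
    pos = y_true == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

This is the probability that a randomly chosen IA case scores higher than a randomly chosen AIS/MIA case, which is what an AUC of 0.93 asserts.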
