
Computed Tomography Radiomics-based Combined Model for Predicting Thymoma Risk Subgroups: A Multicenter Retrospective Study.

Liu Y, Luo C, Wu Y, Zhou S, Ruan G, Li H, Chen W, Lin Y, Liu L, Quan T, He X

PubMed · Jun 1, 2025
Accurately distinguishing the histological subtypes and risk categories of thymomas is difficult. To differentiate the histologic risk categories of thymomas, we developed a combined radiomics model based on non-enhanced and contrast-enhanced computed tomography (CT) radiomics, clinical, and semantic features. In total, 360 patients with pathologically confirmed thymomas who underwent CT examination were retrospectively recruited from three centers. Patients were classified using improved pathological classification criteria as low-risk (LRT: types A and AB) or high-risk (HRT: types B1, B2, and B3). The training and external validation sets comprised 274 (centers 1 and 2) and 86 (center 3) patients, respectively. A clinical-semantic model was built using clinical and semantic variables. Radiomics features were filtered using intraclass correlation coefficients, correlation analysis, and univariate logistic regression. An optimal radiomics model (Rad_score) was constructed using an AutoML algorithm, and a combined model was constructed by integrating Rad_score with the clinical and semantic features. The predictive and clinical performances of the models were evaluated using receiver operating characteristic/calibration curve analyses and decision-curve analysis, respectively. The radiomics and combined models (area under the curve: training set, 0.867 and 0.884; external validation set, 0.792 and 0.766, respectively) outperformed the clinical-semantic model. The combined model had higher accuracy than the radiomics model in the entire cohort (0.79 vs. 0.78, p < 0.001). The venous-phase original_firstorder_median feature had the highest relative importance among features in the radiomics model. Radiomics and combined radiomics models may serve as noninvasive discrimination tools to differentiate thymoma risk classifications.
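
For readers who want to prototype the feature-filtering step described above, here is a minimal sketch, assuming a pandas table of radiomics features and binary LRT/HRT labels; the thresholds are illustrative and the ICC step (which requires repeated segmentations) is omitted, so this is not the authors' code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def filter_radiomics(X: pd.DataFrame, y: np.ndarray,
                     corr_thresh: float = 0.9, p_thresh: float = 0.05) -> list:
    # 1) Correlation pruning: drop one feature from each highly correlated pair.
    corr = X.corr().abs()
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    kept = [c for c in X.columns if not (upper[c] > corr_thresh).any()]

    # 2) Univariate logistic regression: keep features associated with HRT status.
    selected = []
    for c in kept:
        fit = sm.Logit(y, sm.add_constant(X[c])).fit(disp=0)
        if fit.pvalues[c] < p_thresh:
            selected.append(c)
    return selected

# Toy example with synthetic "features"; real inputs would come from a
# radiomics extractor run on the CT segmentations.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(60, 5)),
                 columns=[f"firstorder_{i}" for i in range(5)])
y = (X["firstorder_0"] + rng.normal(size=60) > 0).astype(int).to_numpy()
print(filter_radiomics(X, y))
```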

Prediction of Malignancy and Pathological Types of Solid Lung Nodules on CT Scans Using a Volumetric SWIN Transformer.

Chen H, Wen Y, Wu W, Zhang Y, Pan X, Guan Y, Qin D

PubMed · Jun 1, 2025
Lung adenocarcinoma and squamous cell carcinoma are the two most common pathological subtypes of lung cancer. Accurate diagnosis and pathological subtyping are crucial for lung cancer treatment. Solitary solid lung nodules with lobulation and spiculation signs are often indicative of lung cancer; however, in some cases, postoperative pathology reveals benign solid lung nodules. Accurately identifying solid lung nodules with lobulation and spiculation signs before surgery is therefore critical; however, traditional diagnostic imaging is prone to misdiagnosis, and studies on artificial intelligence-assisted diagnosis are few. We therefore introduce a volumetric SWIN Transformer-based method: a multi-scale, multi-task, and highly interpretable model for distinguishing between benign solid lung nodules with lobulation and spiculation signs, lung adenocarcinomas, and lung squamous cell carcinomas. Effectiveness was improved by using 3-dimensional (3D) computed tomography (CT) volumes instead of conventional 2-dimensional (2D) images to incorporate as much information as possible. The model was trained on 352 of the 441 CT image sequences and validated on the rest. The experimental results showed that our model accurately differentiates between benign lung nodules with lobulation and spiculation signs, lung adenocarcinoma, and squamous cell carcinoma. On the test set, it achieved an accuracy of 0.9888, precision of 0.9892, recall of 0.9888, and an F1-score of 0.9888, and we provide class activation mapping (CAM) visualizations of the 3D model for interpretability. Consequently, our method could serve as a preoperative tool to assist in accurately diagnosing solitary solid lung nodules with lobulation and spiculation signs and provide a basis for developing appropriate clinical diagnosis and treatment plans.
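
As a rough illustration of the volumetric approach, the sketch below uses torchvision's video Swin-T as a stand-in for the paper's architecture; the patch size, single-channel handling, and three-way head are assumptions, not the authors' configuration.

```python
import torch
from torchvision.models.video import swin3d_t

# Three classes: benign nodule, adenocarcinoma, squamous cell carcinoma.
model = swin3d_t(weights=None, num_classes=3)

# A CT patch around the nodule: (batch, channels, depth, height, width).
ct_patch = torch.randn(2, 1, 32, 96, 96)
x = ct_patch.repeat(1, 3, 1, 1, 1)   # replicate the channel; the video model expects 3
logits = model(x)                    # -> shape (2, 3)
probs = logits.softmax(dim=1)
```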

Foundational Segmentation Models and Clinical Data Mining Enable Accurate Computer Vision for Lung Cancer.

Swinburne NC, Jackson CB, Pagano AM, Stember JN, Schefflein J, Marinelli B, Panyam PK, Autz A, Chopra MS, Holodny AI, Ginsberg MS

PubMed · Jun 1, 2025
This study aims to assess the effectiveness of integrating the Segment Anything Model (SAM) and its variant MedSAM into the automated mining, object detection, and segmentation (MODS) methodology for developing robust lung cancer detection and segmentation models without post hoc labeling of training images. In a retrospective analysis, 10,000 chest computed tomography scans from patients with lung cancer were mined. Line measurement annotations were converted to bounding boxes, excluding boxes < 1 cm or > 7 cm. The You Only Look Once (YOLO) object detection architecture was used for teacher-student learning to label unannotated lesions on the training images. Subsequently, a final tumor detection model was trained and employed with SAM and MedSAM for tumor segmentation. Model performance was assessed on a manually annotated test dataset, with additional evaluations conducted on an external lung cancer dataset before and after detection-model fine-tuning. Bootstrap resampling was used to calculate 95% confidence intervals. Data mining yielded 10,789 line annotations, resulting in 5,403 training boxes. The baseline detection model achieved an internal F1 score of 0.847, improving to 0.860 after self-labeling. Tumor segmentation using the final detection model attained internal Dice similarity coefficients (DSCs) of 0.842 (SAM) and 0.822 (MedSAM). After fine-tuning, external validation showed an F1 score of 0.832 and DSCs of 0.802 (SAM) and 0.804 (MedSAM). Integrating foundational segmentation models into the MODS framework yields high-performing lung cancer detection and segmentation models using only mined clinical data. Both SAM and MedSAM hold promise as foundational segmentation models for radiology images.
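
The mining step (line measurements to size-filtered boxes) is simple enough to sketch; the annotation format, padding fraction, and helper below are hypothetical, with only the 1-7 cm exclusion rule taken from the abstract.

```python
import math

def line_to_box(p1, p2, spacing_mm, pad_frac=0.1):
    """Convert a line measurement (two pixel endpoints) into a padded
    bounding box, or return None if the lesion falls outside 1-7 cm."""
    length_cm = math.dist(p1, p2) * spacing_mm / 10.0
    if not (1.0 <= length_cm <= 7.0):
        return None                         # the abstract's size filter
    x0, x1 = sorted((p1[0], p2[0]))
    y0, y1 = sorted((p1[1], p2[1]))
    pad = pad_frac * max(x1 - x0, y1 - y0)  # small margin around the measurement
    return (x0 - pad, y0 - pad, x1 + pad, y1 + pad)

box = line_to_box((120, 88), (161, 120), spacing_mm=0.7)  # ~3.6 cm lesion
```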

Integrating VAI-Assisted Quantified CXRs and Multimodal Data to Assess the Risk of Mortality.

Chen YC, Fang WH, Lin CS, Tsai DJ, Hsiang CW, Chang CK, Ko KH, Huang GS, Lee YT, Lin C

PubMed · Jun 1, 2025
To address the unmet need for a widely available examination for mortality prediction, this study developed a foundation visual artificial intelligence (VAI) model to enhance mortality risk stratification using chest X-rays (CXRs). The VAI employed deep learning to extract CXR features and a Cox proportional hazards model to generate a hazard score ("CXR-risk"). We retrospectively collected CXRs from patients who visited the outpatient department and the physical examination center, and then reviewed mortality and morbidity outcomes from electronic medical records. The dataset comprised 41,945, 10,492, 31,707, and 4,441 patients in the training, validation, internal test, and external test sets, respectively. Over a median follow-up of 3.2 (IQR, 1.2-6.1) years in both the internal and external test sets, "CXR-risk" demonstrated C-indexes of 0.859 (95% confidence interval (CI), 0.851-0.867) and 0.870 (95% CI, 0.844-0.896), respectively. Patients with a high "CXR-risk" (above the 85th percentile) had a significantly higher risk of mortality than those with a low risk (below the 50th percentile). Adding clinical and laboratory data and radiographic reports further improved predictive accuracy, yielding C-indexes of 0.888 and 0.900. The VAI can provide accurate predictions of mortality and morbidity outcomes from just a single CXR, and it can complement other risk prediction indicators to help physicians assess patient risk more effectively.
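
The hazard-score step can be sketched with lifelines; the feature columns below are placeholders for whatever the deep CXR backbone produces, and the numbers are toy values.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical table: one row per patient, deep CXR features plus follow-up.
df = pd.DataFrame({
    "feat_0": [0.12, -0.50, 0.90, 0.30, -1.10, 0.45],
    "feat_1": [1.10,  0.20, -0.70, 0.00,  0.60, -0.30],
    "years":  [3.2, 5.0, 1.1, 6.1, 2.4, 4.3],   # follow-up time
    "died":   [0, 1, 1, 0, 1, 0],               # event indicator
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="died")
cxr_risk = cph.predict_partial_hazard(df)   # per-patient hazard ("CXR-risk")
print(cph.concordance_index_)               # C-index on the fitted data
```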

Developing approaches to incorporate donor-lung computed tomography images into machine learning models to predict severe primary graft dysfunction after lung transplantation.

Ma W, Oh I, Luo Y, Kumar S, Gupta A, Lai AM, Puri V, Kreisel D, Gelman AE, Nava R, Witt CA, Byers DE, Halverson L, Vazquez-Guillamet R, Payne PRO, Sotiras A, Lu H, Niazi K, Gurcan MN, Hachem RR, Michelson AP

PubMed · Jun 1, 2025
Primary graft dysfunction (PGD) is a common complication after lung transplantation associated with poor outcomes. Although risk factors have been identified, the complex interactions between clinical variables affecting PGD risk are not well understood, which can complicate decisions about donor-lung acceptance. Previously, we developed a machine learning model to predict grade 3 PGD using donor and recipient electronic health record data, but it lacked granular information from donor-lung computed tomography (CT) scans, which are routinely assessed during offer review. In this study, we used a gated approach to determine optimal methods for analyzing donor-lung CT scans among patients receiving first-time, bilateral lung transplants at a single center over 10 years. We assessed 4 computer vision approaches and fused the best with electronic health record data at 3 points in the machine learning process. A total of 160 patients had donor-lung CT scans for analysis. The best imaging-only approach employed a 3D ResNet model, yielding median (interquartile range) areas under the receiver operating characteristic and precision-recall curves of 0.63 (0.49-0.72) and 0.48 (0.35-0.60), respectively. Combining imaging with clinical data using late fusion provided the highest performance, with median areas under the receiver operating characteristic and precision-recall curves of 0.74 (0.59-0.85) and 0.61 (0.47-0.72), respectively.
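
Late fusion, the best-performing strategy here, amounts to combining the two models' output probabilities; the sketch below uses a simple weighted average with made-up numbers, since the study's exact fusion rule is not given in the abstract.

```python
import numpy as np

def late_fusion(p_imaging: np.ndarray, p_clinical: np.ndarray,
                w: float = 0.5) -> np.ndarray:
    """Weighted average of per-patient probabilities from two models."""
    return w * p_imaging + (1.0 - w) * p_clinical

p_img = np.array([0.71, 0.15, 0.40])   # 3D ResNet PGD probabilities (assumed)
p_ehr = np.array([0.55, 0.30, 0.62])   # EHR-model probabilities (assumed)
p_pgd = late_fusion(p_img, p_ehr)      # fused grade-3 PGD risk
```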

Retaking assessment system based on the inspiratory state of chest X-ray image.

Matsubara N, Teramoto A, Takei M, Kitoh Y, Kawakami S

PubMed · Jun 1, 2025
When chest X-rays are taken, the patient is encouraged to reach maximum inspiration and the radiological technologist exposes the image at the appropriate moment. If the image is not taken at maximum inspiration, it must be retaken. However, judgments of whether a retake is necessary vary between operators. We therefore reasoned that this variation could be reduced by developing a retake assessment system that evaluates whether a retake is necessary using a convolutional neural network (CNN). Training the CNN requires input chest X-ray images and corresponding labels indicating whether a retake is necessary; a static chest X-ray image alone, however, does not reveal whether inspiration was sufficient (no retake needed) or insufficient (retake required). We therefore generated input images and labels from dynamic digital radiography (DDR) and trained on those. Verification using 18 dynamic chest X-ray cases (5,400 images) and 48 actual chest X-ray cases (96 images) showed that the VGG16-based architecture achieved an assessment accuracy of 82.3% even on actual chest X-ray images. If deployed in hospitals, the proposed method could therefore reduce the variability in judgment between operators.
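
A VGG16-based retake classifier of the kind described can be assembled in a few lines with torchvision; the input size, grayscale handling, and two-class head are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

model = vgg16(weights=None)
model.classifier[6] = nn.Linear(4096, 2)   # replace the 1000-way head: retake / no retake

cxr = torch.randn(1, 1, 224, 224)          # one chest X-ray frame
logits = model(cxr.repeat(1, 3, 1, 1))     # VGG16 expects 3 input channels
retake_prob = logits.softmax(dim=1)[0, 1]  # probability that a retake is needed
```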

DKCN-Net: Deep Kronecker convolutional neural network-based lung disease detection with federated learning.

Meda A, Nelson L, Jagdish M

PubMed · Jun 1, 2025
In the healthcare field, lung disease detection techniques based on deep learning (DL) are widely used. However, achieving high stability while maintaining privacy remains a challenge. To address this, this research employs Federated Learning (FL), enabling models to be trained without sharing patient data with unauthorized parties and preserving privacy in the local models. The study introduces the Deep Kronecker Convolutional Neural Network (DKCN-Net) for lung disease detection. Input computed tomography (CT) images are sourced from the LIDC-IDRI database and denoised using an Adaptive Gaussian Filter (AGF). Lung lobe and nodule segmentation is then performed using Deep Fuzzy Clustering (DFC) and a 3-Dimensional Fully Convolutional Neural Network (3D-FCN). During feature extraction, statistical features, Convolutional Neural Network (CNN) features, and Gray-Level Co-Occurrence Matrix (GLCM) features are obtained. Lung diseases are then detected using DKCN-Net, which combines a Deep Kronecker Neural Network (DKN) and a Parallel Convolutional Neural Network (PCNN). DKCN-Net achieves an accuracy of 92.18%, a loss of 7.82%, a Mean Squared Error (MSE) of 0.858, a True Positive Rate (TPR) of 92.99%, and a True Negative Rate (TNR) of 92.19%, with a processing time of 50 s per timestamp.
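
Of the feature families named above, the GLCM step is easy to illustrate with scikit-image; the distances, angles, and property set are illustrative choices, not the paper's exact configuration.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Stand-in for an 8-bit CT slice (a real pipeline would window and rescale HU).
slice_8bit = (np.random.rand(64, 64) * 255).astype(np.uint8)

glcm = graycomatrix(slice_8bit, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
```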

IM-LTS: An Integrated Model for Lung Tumor Segmentation using Neural Networks and IoMT.

J J, Haw SC, Palanichamy N, Ng KW, Thillaigovindhan SK

PubMed · Jun 1, 2025
In recent years, Internet of Medical Things (IoMT) and Deep Learning (DL) techniques have been broadly used in medical data processing and decision-making. Lung tumours, among the most dangerous medical conditions, require early diagnosis with a high precision rate. To that end, this work develops an Integrated Model (IM-LTS) for Lung Tumor Segmentation using Neural Networks (NN) and the Internet of Medical Things (IoMT). The model integrates two architectures, MobileNetV2 and U-NET, for classifying the input lung data. The input CT lung images are pre-processed using Z-score normalization, and the semantic features of the lung images are extracted based on texture, intensity, and shape to inform the training network.
• The transfer learning technique is incorporated: the pre-trained NN is used as an encoder for the U-NET model for segmentation, and a Support Vector Machine classifies the input lung data as benign or malignant.
• Results are measured using specificity, sensitivity, precision, accuracy, and F-score on data from benchmark datasets. Compared with existing lung tumor segmentation and classification models, the proposed model provides better results and supports earlier disease diagnosis.
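
Two of the named steps, Z-score normalization and the SVM classifier, are sketched below under the assumption of a generic feature matrix; the MobileNetV2/U-NET stage that would actually produce those features is elided.

```python
import numpy as np
from sklearn.svm import SVC

def z_score(volume: np.ndarray) -> np.ndarray:
    """Z-score normalization of a CT volume."""
    return (volume - volume.mean()) / (volume.std() + 1e-8)

X_feats = np.random.rand(20, 16)        # placeholder texture/intensity/shape features
y = np.random.randint(0, 2, size=20)    # 0 = benign, 1 = malignant (toy labels)
clf = SVC(kernel="rbf").fit(X_feats, y)
pred = clf.predict(X_feats[:3])
```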

Polygenic risk scores for rheumatoid arthritis and idiopathic pulmonary fibrosis and associations with RA, interstitial lung abnormalities, and quantitative interstitial abnormalities among smokers.

McDermott GC, Moll M, Cho MH, Hayashi K, Juge PA, Doyle TJ, Paudel ML, Kinney GL, Kronzer VL, Kim JS, O'Keeffe LA, Davis NA, Bernstein EJ, Dellaripa PF, Regan EA, Hunninghake GM, Silverman EK, Ash SY, San Jose Estepar R, Washko GR, Sparks JA

PubMed · Jun 1, 2025
Genome-wide association studies (GWAS) facilitate construction of polygenic risk scores (PRSs) for rheumatoid arthritis (RA) and idiopathic pulmonary fibrosis (IPF). We investigated associations of RA and IPF PRSs with RA and high-resolution chest computed tomography (HRCT) parenchymal lung abnormalities. Participants in COPDGene, a prospective multicenter cohort of current/former smokers, had chest HRCT at study enrollment. Using genome-wide genotyping, RA and IPF PRSs were constructed from GWAS summary statistics. HRCT imaging underwent visual inspection for interstitial lung abnormalities (ILA) and quantitative CT (QCT) analysis using a machine-learning algorithm that quantified the percentage of normal lung, interstitial abnormalities, and emphysema. RA was identified through self-report and DMARD use. We investigated associations of RA and IPF PRSs with RA, ILA, and QCT features using multivariable logistic and linear regression. We analyzed 9,230 COPDGene participants (mean age 59.6 years, 46.4% female, 67.2% non-Hispanic White, 32.8% Black/African American). In non-Hispanic White participants, the RA PRS was associated with RA diagnosis (OR 1.32 per unit, 95% CI 1.18-1.49) but not with ILA or QCT features. Among non-Hispanic White participants, the IPF PRS was associated with ILA (OR 1.88 per unit, 95% CI 1.52-2.32) and quantitative interstitial abnormalities (adjusted β = +0.50% per unit, p = 7.3 × 10⁻⁸) but not with RA. There were no statistically significant associations among Black/African American participants. RA and IPF PRSs were associated with their intended phenotypes among non-Hispanic White participants but performed poorly among Black/African American participants. PRSs may have future application in risk stratification for RA diagnosis among patients with ILD, or for ILD among patients with RA.
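
For readers unfamiliar with PRSs, the construction is essentially a weighted sum of risk-allele dosages using GWAS effect sizes; the SNP count and weights below are toy values, not those used in the study.

```python
import numpy as np

betas = np.array([0.12, -0.05, 0.30, 0.08])   # GWAS effect sizes (log odds ratios)
dosages = np.array([[0, 1, 2, 1],             # per-person risk-allele dosages
                    [2, 0, 1, 0]])            # (0, 1, or 2 copies per SNP)

prs = dosages @ betas                         # raw score per person
prs_z = (prs - prs.mean()) / prs.std()        # standardized, i.e. the "per unit" scale
```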

A scoping review on the integration of artificial intelligence in point-of-care ultrasound: Current clinical applications.

Kim J, Maranna S, Watson C, Parange N

PubMed · Jun 1, 2025
Artificial intelligence (AI) is used increasingly in point-of-care ultrasound (POCUS). However, the true role, utility, advantages, and limitations of AI tools in POCUS remain poorly understood. This study aimed to conduct a scoping review of the current literature on AI in POCUS to identify (1) how AI is being applied in POCUS, and (2) how AI in POCUS could be utilized in clinical settings. The review followed the JBI scoping review methodology. A search was conducted in Medline, Embase, Emcare, Scopus, Web of Science, Google Scholar, and AI POCUS manufacturer websites. Selection criteria, evidence screening, and selection were performed in Covidence. Data extraction and analysis were performed in Microsoft Excel by the primary investigator and confirmed by the secondary investigators. Thirty-three papers were included. AI POCUS of the cardiopulmonary region was the most prominent in the literature. AI was most frequently used to automatically measure biometry from POCUS images, and AI POCUS was most often used in acute settings, although novel applications in non-acute and low-resource settings were also explored. AI had the potential to increase POCUS accessibility and usability and to expedite care and management, and it showed reasonably high diagnostic accuracy in limited applications such as measuring left ventricular ejection fraction, the inferior vena cava collapsibility index, and the left ventricular outflow tract velocity time integral, and identifying B-lines of the lung. However, AI could not interpret poor-quality images, underperformed compared with standard-of-care diagnostic methods, and was less effective in patients with specific disease states, such as severe illnesses that limit POCUS image acquisition. This review uncovered the applications of AI in POCUS and the advantages and limitations of AI POCUS in different clinical settings. Future research must first establish the diagnostic accuracy of AI POCUS tools and explore their clinical utility through clinical trials.
