
Intelligent diagnosis model for chest X-ray images diseases based on convolutional neural network.

Yang S, Wu Y

PubMed · Jul 2 2025
To address misdiagnosis caused by feature coupling in multi-label medical image classification, this study introduces a chest X-ray pathology reasoning method that combines hierarchical attention convolutional networks with a multi-label decoupling loss function, aiming to improve the precise identification of complex lesions and clinical disease prediction accuracy. An adaptive dilated convolution module with 3 × 3 deformable kernels dynamically captures multi-scale lesion morphology. A channel-space dual-path attention mechanism enables precise feature selection for lung field partitioning and lesion localization. Cross-scale skip connections fuse shallow texture with deep semantic information, enhancing microlesion detection. A KL-divergence-constrained contrastive loss function decouples the 14 pathological feature representations via orthogonal regularization, effectively resolving multi-label coupling. Experiments on ChestX-ray14 show a weighted F1-score of 0.97, a Hamming loss of 0.086, and AUC values exceeding 0.94 for all pathologies. The method provides a reliable tool for multi-disease collaborative diagnosis.
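For readers who want to experiment with the decoupling idea described above, the following is a minimal PyTorch sketch of an orthogonality penalty plus a KL-divergence consistency term; the function names, tensor shapes, and the use of softmax are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a decoupling penalty for multi-label features;
# names and shapes are assumptions for illustration only.
import torch
import torch.nn.functional as F

def decoupling_loss(class_embeddings: torch.Tensor) -> torch.Tensor:
    """class_embeddings: (num_classes, dim) -- one feature vector per pathology.

    Penalizes off-diagonal cosine similarity so the 14 pathology
    representations stay close to mutually orthogonal.
    """
    z = F.normalize(class_embeddings, dim=1)          # unit-norm rows
    gram = z @ z.t()                                  # pairwise similarities
    eye = torch.eye(gram.size(0), device=gram.device)
    return ((gram - eye) ** 2).mean()                 # drive off-diagonals to 0

def kl_consistency(p_logits: torch.Tensor, q_logits: torch.Tensor) -> torch.Tensor:
    """KL divergence between two predicted label distributions, e.g. to
    constrain two augmented views of the same radiograph to agree."""
    p_log = F.log_softmax(p_logits, dim=1)
    q = F.softmax(q_logits, dim=1)
    return F.kl_div(p_log, q, reduction="batchmean")
```

In practice such terms would be added, with tuned weights, to the standard multi-label classification loss (e.g., binary cross-entropy over the 14 labels).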

Multimodal Generative Artificial Intelligence Model for Creating Radiology Reports for Chest Radiographs in Patients Undergoing Tuberculosis Screening.

Hong EK, Kim HW, Song OK, Lee KC, Kim DK, Cho JB, Kim J, Lee S, Bae W, Roh B

PubMed · Jul 2 2025
Background: Chest radiographs play a crucial role in tuberculosis screening in high-prevalence regions, although widespread radiographic screening requires expertise that may be unavailable in settings with limited medical resources. Objectives: To evaluate a multimodal generative artificial intelligence (AI) model for detecting tuberculosis-associated abnormalities on chest radiography in patients undergoing tuberculosis screening. Methods: This retrospective study evaluated 800 chest radiographs obtained from two public datasets originating from tuberculosis screening programs. A generative AI model was used to create free-text reports for the radiographs. AI-generated reports were classified in terms of presence versus absence and laterality of tuberculosis-related abnormalities. Two radiologists independently reviewed the radiographs for tuberculosis presence and laterality in separate sessions, without and with use of AI-generated reports, and recorded whether they would accept each report without modification. Two additional radiologists reviewed radiographs and clinical readings from the datasets to determine the reference standard. Results: By the reference standard, 422 of 800 radiographs were positive for tuberculosis-related abnormalities. For detection of tuberculosis-related abnormalities, sensitivity, specificity, and accuracy were 95.2%, 86.7%, and 90.8% for AI-generated reports; 93.1%, 93.6%, and 93.4% for reader 1 without AI-generated reports; 93.1%, 95.0%, and 94.1% for reader 1 with AI-generated reports; 95.8%, 87.2%, and 91.3% for reader 2 without AI-generated reports; and 95.8%, 91.5%, and 93.5% for reader 2 with AI-generated reports. Accuracy was significantly lower for AI-generated reports than for both readers alone (p<.001) and did not differ significantly with versus without AI-generated reports for either reader (reader 1: p=.47; reader 2: p=.47). Localization performance was significantly lower (p<.001) for AI-generated reports (63.3%) than for reader 1 (79.9%) and reader 2 (77.9%) without AI-generated reports and did not change significantly for either reader with AI-generated reports (reader 1: 78.7%, p=.71; reader 2: 81.5%, p=.23). Among normal and abnormal radiographs, reader 1 accepted 91.7% and 52.4% of AI-generated reports, respectively, while reader 2 accepted 83.2% and 37.0%. Conclusion: While AI-generated reports may augment radiologists' diagnostic assessments, the current model requires human oversight given its inferior standalone performance. Clinical Impact: The generative AI model could have potential application in aiding tuberculosis screening programs in medically underserved regions, although technical improvements are still required.
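As a quick reference for the reader-study metrics quoted above, here is a small self-contained sketch of how sensitivity, specificity, and accuracy are computed from binary labels; the data layout (1 = tuberculosis-related abnormality) is an assumption for illustration.

```python
# Minimal sketch of the screening metrics reported above, assuming paired
# lists of binary ground-truth and predicted labels.
def screening_metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn)          # recall on abnormal cases
    specificity = tn / (tn + fp)          # recall on normal cases
    accuracy = (tp + tn) / len(y_true)
    return sensitivity, specificity, accuracy
```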

A deep learning-based computed tomography reading system for the diagnosis of lung cancer associated with cystic airspaces.

Hu Z, Zhang X, Yang J, Zhang B, Chen H, Shen W, Li H, Zhou Y, Zhang J, Qiu K, Xie Z, Xu G, Tan J, Pang C

PubMed · Jul 2 2025
To propose a deep learning model and explore its performance in the auxiliary diagnosis of lung cancer associated with cystic airspaces (LCCA) on computed tomography (CT) images. This retrospective analysis incorporated a total of 342 CT series, comprising 272 series from patients diagnosed with LCCA and 70 series from patients with pulmonary bulla. A deep learning model named LungSSFNet, developed on the basis of nnU-Net, was used by experienced thoracic surgeons for image recognition and segmentation. The dataset was divided into a training set (245 series), a validation set (62 series), and a test set (35 series). The performance of LungSSFNet was compared with that of other models, including UNet, M2Snet, TANet, MADGNet, and nnU-Net, to evaluate its effectiveness in recognizing and segmenting LCCA and pulmonary bulla. LungSSFNet achieved an intersection over union of 81.05% and a Dice similarity coefficient of 75.15% for LCCA, and 93.03% and 92.04%, respectively, for pulmonary bulla, outperforming many existing models in segmentation tasks. Additionally, it attained an accuracy of 96.77%, a precision of 100%, and a sensitivity of 96.15%. LungSSFNet substantially improved the diagnosis of early-stage LCCA and is potentially valuable for auxiliary clinical decision-making. The LungSSFNet code is available at https://github.com/zx0412/LungSSFNet.
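The two overlap metrics reported for LungSSFNet can be reproduced for any binary segmentation output with a few lines of NumPy; this sketch assumes boolean (or 0/1) mask arrays and is not taken from the authors' repository.

```python
import numpy as np

# Hedged sketch: intersection over union and Dice coefficient for binary
# segmentation masks; argument names are illustrative assumptions.
def iou_and_dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7):
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    iou = inter / (union + eps)
    dice = 2 * inter / (pred.sum() + target.sum() + eps)
    return float(iou), float(dice)
```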

Multichannel deep learning prediction of major pathological response after neoadjuvant immunochemotherapy in lung cancer: a multicenter diagnostic study.

Geng Z, Li K, Mei P, Gong Z, Yan R, Huang Y, Zhang C, Zhao B, Lu M, Yang R, Wu G, Ye G, Liao Y

PubMed · Jul 2 2025
This study aimed to develop a pretreatment CT-based multichannel predictor integrating deep learning features encoded by Transformer models for the preoperative diagnosis of major pathological response (MPR) in non-small cell lung cancer (NSCLC) patients receiving neoadjuvant immunochemotherapy. This multicenter diagnostic study retrospectively included 332 NSCLC patients from four centers. Pretreatment computed tomography images were preprocessed and segmented into region-of-interest cubes for radiomics modeling. These cubes were cropped into four groups of two-dimensional image modules. A GoogLeNet architecture was trained independently on each group within a multichannel framework, with gradient-weighted class activation mapping and SHapley Additive exPlanations values used for visualization. Deep learning features were extracted and fused across the four image groups using a Transformer fusion model. After model training, performance was evaluated via the area under the curve (AUC), sensitivity, specificity, F1 score, confusion matrices, calibration curves, decision curve analysis, integrated discrimination improvement, net reclassification improvement, and the DeLong test. The dataset was allocated into training (n = 172, Center 1), internal validation (n = 44, Center 1), and external test (n = 116, Centers 2-4) cohorts. Four optimal deep learning models and the best Transformer fusion model were developed. In the external test cohort, the traditional radiomics model exhibited an AUC of 0.736 [95% confidence interval (CI): 0.645-0.826]. The optimal deep learning imaging module showed a superior AUC of 0.855 (95% CI: 0.777-0.934). The fusion model, named Transformer_GoogLeNet, further improved classification accuracy (AUC = 0.924, 95% CI: 0.875-0.973). Fusing multichannel deep learning features with a Transformer encoder can accurately diagnose whether NSCLC patients receiving neoadjuvant immunochemotherapy will achieve MPR. These findings may support improved surgical planning and contribute to better treatment outcomes through more accurate preoperative assessment.
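As a rough illustration of the fusion step described above, the sketch below feeds one feature token per image group into a small Transformer encoder; the feature dimension (1024, matching GoogLeNet's final pooled features), head count, and layer count are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

# Illustrative sketch (not the authors' code) of fusing features from four
# 2D image groups with a Transformer encoder, as the abstract describes.
class TransformerFusion(nn.Module):
    def __init__(self, feat_dim: int = 1024, n_heads: int = 8, n_layers: int = 2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=feat_dim, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(feat_dim, 1)   # MPR vs. non-MPR logit

    def forward(self, channel_feats: torch.Tensor) -> torch.Tensor:
        # channel_feats: (batch, 4, feat_dim) -- one token per image group,
        # e.g. pooled features from four GoogLeNet branches.
        fused = self.encoder(channel_feats)
        return self.head(fused.mean(dim=1)).squeeze(-1)
```

A batch of inputs shaped (batch, 4, 1024) would yield one MPR logit per patient.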

Data-efficient generalization of AI transformers for noise reduction in ultra-fast lung PET scans.

Wang J, Zhang X, Miao Y, Xue S, Zhang Y, Shi K, Guo R, Li B, Zheng G

PubMed · Jul 1 2025
Respiratory motion during PET acquisition may produce lesion blurring. Ultra-fast 20-second breath-hold (U2BH) PET reduces respiratory motion artifacts, but the shortened scanning time increases statistical noise and may affect diagnostic quality. This study aims to denoise U2BH PET images using a deep learning (DL)-based method. The study was conducted on two datasets collected from five scanners: the first dataset included 1272 retrospectively collected full-time PET scans, while the second contained 46 prospectively collected U2BH scans and the corresponding full-time PET/CT images. A robust and data-efficient DL method called mask vision transformer (Mask-ViT) was proposed which, after being fine-tuned on a limited amount of training data from a target scanner, was directly applied to unseen testing data from new scanners. The performance of Mask-ViT was compared with state-of-the-art DL methods, including U-Net and C-Gan, taking the full-time PET images as the reference. Statistical analysis of image quality metrics was carried out with the Wilcoxon signed-rank test. For clinical evaluation, two readers scored image quality on a 5-point scale (5 = excellent) and provided a binary assessment of diagnostic quality. The U2BH PET images denoised by Mask-ViT showed statistically significant improvement over U-Net and C-Gan on image quality metrics (p < 0.05). In the clinical evaluation, Mask-ViT exhibited lesion detection accuracies of 91.3%, 90.4%, and 91.7% when evaluated on three different scanners. Mask-ViT can effectively enhance the quality of U2BH PET images in a data-efficient generalization setup, and the denoised images meet clinical diagnostic requirements for lesion detectability.
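The paired statistical comparison mentioned above is straightforward to run with SciPy; the sketch below uses placeholder PSNR values purely to show the call, not data from the study.

```python
import numpy as np
from scipy.stats import wilcoxon

# Sketch of a Wilcoxon signed-rank test on a paired image-quality metric
# (e.g. per-patient PSNR) for two denoising methods; values are placeholders.
psnr_mask_vit = np.array([34.1, 35.0, 33.8, 36.2, 34.9])
psnr_unet     = np.array([33.2, 34.1, 33.5, 35.0, 34.0])
stat, p_value = wilcoxon(psnr_mask_vit, psnr_unet)
print(f"Wilcoxon statistic={stat:.1f}, p={p_value:.4f}")
```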

The Chest X-Ray: The Ship Has Sailed, But Has It?

Iacovino JR

PubMed · Jul 1 2025
In the past, the chest X-ray (CXR) was a traditional age and amount requirement used to assess potential mortality risk in life insurance applicants. It fell out of favor due to inconvenience to the applicant, cost, and lack of protective value. With the advent of deep learning techniques, can CXR results, as a requirement, now add value to underwriting risk analysis?

CZT-based photon-counting-detector CT with deep-learning reconstruction: image quality and diagnostic confidence for lung tumor assessment.

Sasaki T, Kuno H, Nomura K, Muramatsu Y, Aokage K, Samejima J, Taki T, Goto E, Wakabayashi M, Furuya H, Taguchi H, Kobayashi T

PubMed · Jul 1 2025
This is a preliminary analysis of one of the secondary endpoints in a prospective study cohort. The aim of this study is to assess the image quality and diagnostic confidence for lung cancer of CT images generated using cadmium-zinc-telluride (CZT)-based photon-counting-detector CT (PCD-CT), comparing these super-high-resolution (SHR) images with conventional normal-resolution (NR) CT images. Twenty-five patients (median age 75 years, interquartile range 66-78 years; 18 men and 7 women) with 29 lung nodules overall (including two patients with 4 and 2 nodules, respectively) were enrolled to undergo PCD-CT. Three types of images were reconstructed: a 512 × 512 matrix with adaptive iterative dose reduction 3D (AIDR 3D) as the NR-AIDR3D image, a 1024 × 1024 matrix with AIDR 3D as the SHR-AIDR3D image, and a 1024 × 1024 matrix with deep-learning reconstruction (DLR) as the SHR-DLR image. For qualitative analysis, two radiologists evaluated the matched reconstructed series twice (NR-AIDR3D vs. SHR-AIDR3D and SHR-AIDR3D vs. SHR-DLR) and scored the presence of imaging findings, such as spiculation, lobulation, and the appearance of ground-glass opacity or air bronchogram, as well as image quality and diagnostic confidence, using a 5-point Likert scale. For quantitative analysis, contrast-to-noise ratios (CNRs) of the three image types were compared. In the qualitative analysis, compared to NR-AIDR3D, SHR-AIDR3D yielded higher image quality and diagnostic confidence, except for image noise (all P < 0.01). In comparison with SHR-AIDR3D, SHR-DLR yielded higher image quality and diagnostic confidence (all P < 0.01). In the quantitative analysis, CNRs in the NR-AIDR3D and SHR-DLR groups were higher than those in the SHR-AIDR3D group (P = 0.003 and < 0.001, respectively). In PCD-CT, SHR-DLR images provided the highest image quality and diagnostic confidence for lung tumor evaluation, followed by SHR-AIDR3D and NR-AIDR3D images. DLR demonstrated superior noise reduction compared with the other reconstruction methods.
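The abstract does not state which CNR definition was used, so the following is a hedged sketch of one common variant (lesion-background mean difference divided by the background standard deviation); the ROI arrays and formula are assumptions, not the study's measurement protocol.

```python
import numpy as np

# Sketch of a common contrast-to-noise-ratio definition for comparing
# reconstructions; ROI arrays hold pixel values from lesion and background.
def cnr(lesion_roi: np.ndarray, background_roi: np.ndarray) -> float:
    signal = lesion_roi.mean() - background_roi.mean()
    noise = background_roi.std()
    return float(abs(signal) / noise)
```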

Evaluating a large language model's accuracy in chest X-ray interpretation for acute thoracic conditions.

Ostrovsky AM

PubMed · Jul 1 2025
The rapid advancement of artificial intelligence (AI) has great potential to impact healthcare. Chest X-rays are essential for diagnosing acute thoracic conditions in the emergency department (ED), but interpretation delays due to radiologist availability can impact clinical decision-making. AI models, including deep learning algorithms, have been explored for diagnostic support, but the potential of large language models (LLMs) in emergency radiology remains largely unexamined. This study assessed ChatGPT's feasibility in interpreting chest X-rays for acute thoracic conditions commonly encountered in the ED. A subset of 1400 images from the NIH Chest X-ray dataset was analyzed, representing seven pathology categories: Atelectasis, Effusion, Emphysema, Pneumothorax, Pneumonia, Mass, and No Finding. ChatGPT 4.0, utilizing the "X-Ray Interpreter" add-on, was evaluated for its diagnostic performance across these categories. ChatGPT demonstrated high performance in identifying normal chest X-rays, with a sensitivity of 98.9%, specificity of 93.9%, and accuracy of 94.7%. However, performance varied across pathologies. The best results were observed in diagnosing pneumonia (sensitivity 76.2%, specificity 93.7%) and pneumothorax (sensitivity 77.4%, specificity 89.1%), while performance for atelectasis and emphysema was lower. ChatGPT shows potential as a supplementary tool for differentiating normal from abnormal chest X-rays, with promising results for certain pathologies such as pneumonia; however, its diagnostic accuracy for more subtle conditions requires improvement. Further research integrating ChatGPT with specialized image recognition models could enhance its performance, offering new possibilities in medical imaging and education.
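To make the per-pathology figures above concrete, here is an illustrative one-vs-rest evaluation over the seven categories; the category names mirror the abstract, but the metric code is a sketch, not the study's pipeline, and it assumes integer-coded labels with every category present in the ground truth.

```python
import numpy as np

# Illustrative one-vs-rest sensitivity/specificity per pathology category.
CATEGORIES = ["Atelectasis", "Effusion", "Emphysema", "Pneumothorax",
              "Pneumonia", "Mass", "No Finding"]

def per_category_metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    results = {}
    for i, name in enumerate(CATEGORIES):
        t, p = (y_true == i), (y_pred == i)
        tp = int(np.sum(t & p)); fn = int(np.sum(t & ~p))
        tn = int(np.sum(~t & ~p)); fp = int(np.sum(~t & p))
        results[name] = {"sensitivity": tp / (tp + fn),
                         "specificity": tn / (tn + fp)}
    return results
```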

A vision transformer-convolutional neural network framework for decision-transparent dual-energy X-ray absorptiometry recommendations using chest low-dose CT.

Kuo DP, Chen YC, Cheng SJ, Hsieh KL, Li YT, Kuo PC, Chang YC, Chen CY

PubMed · Jul 1 2025
This study introduces an ensemble framework that integrates Vision Transformer (ViT) and convolutional neural network (CNN) models to leverage their complementary strengths, generating visualized, decision-transparent recommendations for dual-energy X-ray absorptiometry (DXA) scans from chest low-dose computed tomography (LDCT). The framework was developed using data from 321 individuals and validated with an independent test cohort of 186 individuals. It addresses two classification tasks: (1) distinguishing normal from abnormal bone mineral density (BMD) and (2) differentiating osteoporosis from non-osteoporosis. Three field-of-view (FOV) settings were analyzed to assess their impact on model performance: fitFOV (entire vertebra), halfFOV (vertebral body only), and largeFOV (fitFOV + 20%). Model predictions were weighted and combined to enhance classification accuracy, and visualizations were generated to improve decision transparency. DXA scans were recommended for individuals classified as having abnormal BMD or osteoporosis. The ensemble framework significantly outperformed the individual models in both classification tasks (McNemar test, p < 0.001). In the development cohort, it achieved 91.6% accuracy for task 1 with largeFOV (area under the receiver operating characteristic curve [AUROC]: 0.97) and 86.0% accuracy for task 2 with fitFOV (AUROC: 0.94). In the test cohort, it demonstrated 86.6% accuracy for task 1 (AUROC: 0.93) and 76.9% accuracy for task 2 (AUROC: 0.99). DXA recommendation accuracy was 91.6% and 87.1% in the development and test cohorts, respectively, with notably high accuracy for osteoporosis detection (98.7% and 100%). This combined ViT-CNN framework effectively assesses bone status from LDCT images, particularly when utilizing the fitFOV and largeFOV settings. By visualizing classification confidence and vertebral abnormalities, the proposed framework enhances decision transparency and supports clinicians in making informed DXA recommendations following opportunistic osteoporosis screening.
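The weighted combination of ViT and CNN predictions described above can be sketched in a few lines; the equal weights and 0.5 threshold below are placeholders, since the paper's actual weighting scheme is not specified here.

```python
import numpy as np

# Minimal sketch of weighting and combining ViT and CNN predictions;
# the 0.5/0.5 weights and threshold are illustrative placeholders.
def ensemble_probability(p_vit: np.ndarray, p_cnn: np.ndarray,
                         w_vit: float = 0.5, w_cnn: float = 0.5) -> np.ndarray:
    """p_vit, p_cnn: per-case probabilities of abnormal BMD / osteoporosis."""
    return w_vit * p_vit + w_cnn * p_cnn

def recommend_dxa(p_ensemble: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    # Recommend a DXA scan when the combined probability crosses the threshold.
    return p_ensemble >= threshold
```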