Page 49 of 66652 results

Automatic Segmentation of Ultrasound-Guided Transverse Thoracic Plane Block Using Convolutional Neural Networks.

Liu W, Ma X, Han X, Yu J, Zhang B, Liu L, Liu Y, Chu F, Liu Y, Wei S, Li B, Tang Z, Jiang J, Wang Q

PubMed · Jun 6, 2025
Ultrasound-guided transverse thoracic plane (TTP) block has been shown to be highly effective in relieving postoperative pain in a variety of surgeries involving the anterior chest wall. Accurate identification of the target structure on ultrasound images is key to the successful implementation of TTP block. Nevertheless, the complexity of anatomical structures in the targeted blockade area, coupled with the potential for adverse clinical incidents, presents considerable challenges, particularly for less experienced anesthesiologists. This study applied deep learning methods to TTP block and developed a deep learning model to achieve real-time region segmentation in ultrasound, assisting doctors in the accurate identification of the target nerve. Using 2329 images from 155 patients, we successfully segmented key structures associated with TTP areas and nerve blocks, including the transversus thoracis muscle, lungs, and bones. The Intersection over Union (IoU) scores were 0.7272, 0.9736, and 0.8244, respectively; recall was 0.8305, 0.9896, and 0.9336; and Dice coefficients reached 0.8421, 0.9866, and 0.9037, with accuracy surpassing 97% in identifying the high-risk lung region. The real-time segmentation frame rate of the model for ultrasound video was as high as 42.7 fps, meeting the requirements of performing nerve blocks under real-time ultrasound guidance in clinical practice. This study introduces TTP-Unet, a deep learning model specifically designed for TTP block, capable of automatically identifying crucial anatomical structures within ultrasound images of TTP block, thereby offering a practicable solution to reduce the clinical difficulty of the TTP block technique.
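The IoU, recall, and Dice values reported above follow the standard overlap definitions. As a rough illustration (this is not the paper's TTP-Unet code; the function and sample masks are hypothetical), they can be computed from flattened binary segmentation masks like this:

```python
# Illustrative sketch of the overlap metrics used in the study (IoU,
# recall, Dice), computed from flattened binary masks. Function and
# variable names are hypothetical, not from the paper's code.

def overlap_metrics(pred, truth):
    """pred, truth: equal-length sequences of 0/1 pixel labels."""
    tp = sum(p and t for p, t in zip(pred, truth))          # true positives
    fp = sum(p and not t for p, t in zip(pred, truth))      # false positives
    fn = sum(t and not p for p, t in zip(pred, truth))      # false negatives
    denom = tp + fp + fn
    iou = tp / denom if denom else 1.0
    recall = tp / (tp + fn) if (tp + fn) else 1.0
    dice = 2 * tp / (2 * tp + fp + fn) if denom else 1.0
    return iou, recall, dice

pred = [1, 1, 1, 0, 0, 1]   # toy predicted mask
truth = [1, 1, 0, 0, 1, 1]  # toy ground-truth mask
iou, recall, dice = overlap_metrics(pred, truth)
# IoU = 3/5 = 0.6, recall = 3/4 = 0.75, Dice = 6/8 = 0.75
```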

Preoperative Prognosis Prediction for Pathological Stage IA Lung Adenocarcinoma: 3D-Based Consolidation Tumor Ratio is Superior to 2D-Based Consolidation Tumor Ratio.

Zhao L, Dong H, Chen Y, Wu F, Han C, Kuang P, Guan X, Xu X

PubMed · Jun 5, 2025
The two-dimensional computed tomography measurement of the consolidation tumor ratio (2D-CTR) has limitations in the prognostic evaluation of early-stage lung adenocarcinoma: the measurement is subject to inter-observer variability and lacks spatial information, which undermines its reliability as a prognostic tool. This study aims to investigate the value of the three-dimensional volume-based CTR (3D-CTR) in preoperative prognosis prediction for pathological Stage IA lung adenocarcinoma, and compare its predictive performance with that of 2D-CTR. A retrospective cohort of 980 patients with pathological Stage IA lung adenocarcinoma who underwent surgery was included. Preoperative thin-section CT images were processed using artificial intelligence (AI) software for 3D segmentation. Tumor solid component volume was quantified using different density thresholds (-300 to -150 HU, in 50 HU intervals), and 3D-CTR was calculated. The optimal threshold associated with prognosis was selected using multivariate Cox regression. The predictive performance of 3D-CTR and 2D-CTR for recurrence-free survival (RFS) post-surgery was compared using receiver operating characteristic (ROC) curves, and the best cutoff value was determined. The integrated discrimination improvement (IDI) was utilized to assess the enhancement in predictive efficacy of 3D-CTR relative to 2D-CTR. Among traditional preoperative factors, 2D-CTR (cutoff value 0.54, HR=1.044, P=0.001) and carcinoembryonic antigen (CEA) were identified as independent prognostic factors for RFS. In 3D analysis, -150 HU was determined as the optimal threshold for distinguishing solid components from ground-glass opacity (GGO) components. The corresponding 3D-CTR (cutoff value 0.41, HR=1.033, P<0.001) was an independent risk factor for RFS. The predictive performance of 3D-CTR was significantly superior to that of 2D-CTR (AUC: 0.867 vs. 
0.840, P=0.006), with a substantial enhancement in predictive capacity, as evidenced by an IDI of 0.038 (95% CI: 0.021-0.055, P<0.001). Kaplan-Meier analysis revealed that the 5-year RFS rate for the 3D-CTR >0.41 group was significantly lower than that of the ≤0.41 group (68.5% vs. 96.7%, P<0.001). The 3D-CTR based on a -150 HU density threshold provides a more accurate prediction of postoperative recurrence risk in pathological Stage IA lung adenocarcinoma, demonstrating superior performance compared to traditional 2D-CTR.
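As a hedged sketch of what the 3D-CTR computation involves (this is not the study's AI segmentation software; the -150 HU threshold and 0.41 cutoff come from the abstract, while the function name and sample voxel values are purely illustrative):

```python
# Hypothetical sketch: 3D consolidation tumor ratio (3D-CTR) from the
# HU values of voxels inside a segmented tumor, using the abstract's
# -150 HU solid-component threshold and 0.41 risk cutoff.

SOLID_HU_THRESHOLD = -150   # voxels at or above this HU count as solid
RISK_CUTOFF = 0.41          # 3D-CTR above this implies higher recurrence risk

def ctr_3d(tumor_hu_voxels, threshold=SOLID_HU_THRESHOLD):
    """Fraction of tumor voxels whose attenuation is >= threshold."""
    solid = sum(1 for hu in tumor_hu_voxels if hu >= threshold)
    return solid / len(tumor_hu_voxels)

voxels = [-600, -400, -200, -120, -50, 30]   # mixed GGO and solid voxels
ratio = ctr_3d(voxels)            # 3 of 6 voxels >= -150 HU -> 0.5
high_risk = ratio > RISK_CUTOFF   # True under the abstract's cutoff
```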

Performance analysis of large language models in multi-disease detection from chest computed tomography reports: a comparative study: Experimental Research.

Luo P, Fan C, Li A, Jiang T, Jiang A, Qi C, Gan W, Zhu L, Mou W, Zeng D, Tang B, Xiao M, Chu G, Liang Z, Shen J, Liu Z, Wei T, Cheng Q, Lin A, Chen X

PubMed · Jun 5, 2025
Computed Tomography (CT) is widely acknowledged as the gold standard for diagnosing thoracic diseases. However, the accuracy of interpretation significantly depends on radiologists' expertise. Large Language Models (LLMs) have shown considerable promise in various medical applications, particularly in radiology. This study aims to assess the performance of leading LLMs in analyzing unstructured chest CT reports and to examine how different questioning methodologies and fine-tuning strategies influence their effectiveness in enhancing chest CT diagnosis. This retrospective analysis evaluated 13,489 chest CT reports encompassing 13 common thoracic conditions across pulmonary, cardiovascular, pleural, and upper abdominal systems. Five LLMs (Claude-3.5-Sonnet, GPT-4, GPT-3.5-Turbo, Gemini-Pro, Qwen-Max) were assessed using dual questioning methodologies: multiple-choice and open-ended. Radiologist-curated datasets underwent rigorous preprocessing, including RadLex terminology standardization, multi-step diagnostic validation, and exclusion of ambiguous cases. Model performance was quantified via Subjective Answer Accuracy Rate (SAAR), Reference Answer Accuracy Rate (RAAR), and Area Under the Receiver Operating Characteristic (ROC) Curve analysis. GPT-3.5-Turbo underwent fine-tuning (100 iterations with one training epoch) on 200 high-performing cases to enhance diagnostic precision for initially misclassified conditions. GPT-4 demonstrated superior performance with the highest RAAR of 75.1% in multiple-choice questioning, followed by Qwen-Max (66.0%) and Claude-3.5 (63.5%), significantly outperforming GPT-3.5-Turbo (41.8%) and Gemini-Pro (40.8%) across the entire patient cohort. Multiple-choice questioning consistently improved both RAAR and SAAR for all models compared to open-ended questioning, with RAAR consistently surpassing SAAR. Model performance demonstrated notable variations across different diseases and organ conditions. 
Notably, fine-tuning substantially enhanced the performance of GPT-3.5-Turbo, which initially exhibited suboptimal results in most scenarios. This study demonstrated that general-purpose LLMs can effectively interpret chest CT reports, with performance varying significantly across models depending on the questioning methodology and fine-tuning approaches employed. For surgical practice, these findings provided evidence-based guidance for selecting appropriate AI tools to enhance preoperative planning, particularly for thoracic procedures. The integration of optimized LLMs into surgical workflows may improve decision-making efficiency, risk stratification, and diagnostic speed, potentially contributing to better surgical outcomes through more accurate preoperative assessment.
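The abstract describes RAAR as accuracy measured against reference answers. A minimal illustrative scorer (assuming exact-match grading of multiple-choice letters, which is an assumption on our part, not a detail the paper specifies) might look like:

```python
# Hypothetical sketch of a Reference Answer Accuracy Rate (RAAR) scorer:
# exact-match grading of model answers against radiologist-curated
# reference answers. The grading rule is an assumption, not the paper's.

def raar(model_answers, reference_answers):
    """Fraction of answers that exactly match the reference labels."""
    correct = sum(m == r for m, r in zip(model_answers, reference_answers))
    return correct / len(reference_answers)

# e.g. a model answering 3 of 4 multiple-choice items correctly
score = raar(["A", "C", "B", "D"], ["A", "C", "B", "A"])  # 0.75
```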

Analysis of Research Hotspots and Development Trends in the Diagnosis of Lung Diseases Using Low-Dose CT Based on Bibliometrics.

Liu X, Chen X, Jiang Y, Chen Y, Zhang D, Fan L

PubMed · Jun 5, 2025
Among lung diseases, lung cancer is one of the main threats to global health. Low-Dose Computed Tomography (LDCT) provides significant benefits for lung cancer screening but also brings new diagnostic challenges that require close attention. By searching the Web of Science Core Collection, we selected articles and reviews published in English between 2005 and June 2024 on topics such as "low-dose", "CT image", and "lung". These publications were analyzed with bibliometric methods, and CiteSpace software was used to explore cooperation between countries, collaborative relationships between authors, highly cited literature, and the distribution of keywords, revealing the research hotspots and trends in this field. The number of LDCT research articles showed continuous growth between 2019 and 2022. The United States is at the forefront of research in this field, with a centrality of 0.31; China has also rapidly conducted research, with a centrality of 0.26. The authors' co-occurrence map shows that research teams in this field are highly cooperative and that their research questions are closely related. The analysis of highly cited literature and keywords confirmed the significant advantages of LDCT in lung cancer screening, which can help reduce lung cancer mortality and improve prognosis. "Lung cancer" and "CT" have remained high-frequency keywords, while "image quality" and "low dose CT" have become new hot keywords, indicating that LDCT combined with deep learning techniques has become a focus of early lung cancer research. The study revealed that advancements in CT technology have driven in-depth research from application challenges to image processing, with the research trajectory evolving from technical improvements to health risk assessments and subsequently to AI-assisted diagnosis. Currently, the research focus has shifted toward integrating deep learning with LDCT technology to address complex diagnostic challenges.
The study also presents global research trends and geographical distributions of LDCT technology, along with the influence of key research institutions and authors. This comprehensive analysis aims to promote the development and application of LDCT technology in pulmonary disease diagnosis and to enhance diagnostic accuracy and patient-management efficiency. Future work will focus on LDCT reconstruction algorithms that balance image noise and radiation dose, while AI-assisted multimodal imaging supports remote diagnosis and personalized health management by providing dynamic analysis, risk assessment, and follow-up recommendations for early diagnosis.

Are presentations of thoracic CT performed on admission to the ICU associated with mortality at day-90 in COVID-19 related ARDS?

Le Corre A, Maamar A, Lederlin M, Terzi N, Tadié JM, Gacouin A

PubMed · Jun 5, 2025
Computed tomography (CT) analysis of lung morphology has significantly advanced our understanding of acute respiratory distress syndrome (ARDS). During the Coronavirus Disease 2019 (COVID-19) pandemic, CT imaging was widely utilized to evaluate lung injury and was suggested as a tool for predicting patient outcomes. However, data specifically focused on patients with ARDS admitted to intensive care units (ICUs) remain limited. This retrospective study analyzed patients admitted to ICUs between March 2020 and November 2022 with moderate to severe COVID-19 ARDS. All CT scans performed within 48 h of ICU admission were independently reviewed by three experts. Lung injury severity was quantified using the CT Severity Score (CT-SS; range 0-25). Patients were categorized as having severe disease (CT-SS ≥ 18) or non-severe disease (CT-SS < 18). The primary outcome was all-cause mortality at 90 days. Secondary outcomes included ICU mortality and medical complications during the ICU stay. Additionally, we evaluated a computer-assisted CT-score assessment using artificial intelligence software (CT Pneumonia Analysis<sup>®</sup>, SIEMENS Healthcare) to explore the feasibility of automated measurement and routine implementation. A total of 215 patients with moderate to severe COVID-19 ARDS were included. The median CT-SS at admission was 18/25 [interquartile range, 15-21]. Among them, 120 patients (56%) had a severe CT-SS (≥ 18), while 95 patients (44%) had a non-severe CT-SS (< 18). The 90-day mortality rates were 20.8% for the severe group and 15.8% for the non-severe group (p = 0.35). No significant association was observed between CT-SS severity and patient outcomes. In patients with moderate to severe COVID-19 ARDS, systematic CT assessment of lung parenchymal injury was not a reliable predictor of 90-day mortality or ICU-related complications.
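The study's severity grouping can be sketched as follows (a minimal illustration; the function name is hypothetical, while the 0-25 score range and the ≥18 cutoff are taken from the abstract):

```python
# Illustrative sketch of the study's CT Severity Score (CT-SS)
# dichotomization: scores range 0-25, and CT-SS >= 18 is "severe".
# The function name is hypothetical.

def severity_group(ct_ss, cutoff=18):
    """Classify a CT-SS value into the study's two groups."""
    if not 0 <= ct_ss <= 25:
        raise ValueError("CT-SS is defined on the range 0-25")
    return "severe" if ct_ss >= cutoff else "non-severe"

group = severity_group(18)   # "severe" (the median score in the cohort)
```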

Enhancing image quality in fast neutron-based range verification of proton therapy using a deep learning-based prior in LM-MAP-EM reconstruction.

Setterdahl LM, Skjerdal K, Ratliff HN, Ytre-Hauge KS, Lionheart WRB, Holman S, Pettersen HES, Blangiardi F, Lathouwers D, Meric I

PubMed · Jun 5, 2025
This study investigates the use of list-mode (LM) maximum a posteriori (MAP) expectation maximization (EM) incorporating prior information predicted by a convolutional neural network for image reconstruction in fast neutron (FN)-based proton therapy range verification. Approach. A conditional generative adversarial network (pix2pix) was trained on progressively noisier data, where detector resolution effects were introduced gradually to simulate realistic conditions. FN data were generated using Monte Carlo simulations of an 85 MeV proton pencil beam in a computed tomography (CT)-based lung cancer patient model, with range shifts emulating weight gain and loss. The network was trained to estimate the expected two-dimensional (2D) ground truth FN production distribution from simple back-projection images. Performance was evaluated using mean squared error (MSE), structural similarity index (SSIM), and the correlation between shifts in predicted distributions and true range shifts. Main results. Our results show that pix2pix performs well on noise-free data but suffers from significant degradation when detector resolution effects are introduced. Among the LM-MAP-EM approaches tested, incorporating a mean prior estimate into the reconstruction process improved performance, with LM-MAP-EM using a mean prior estimate outperforming naïve LM maximum likelihood EM (LM-MLEM) and conventional LM-MAP-EM with a smoothing quadratic energy function in terms of SSIM. Significance. Findings suggest that deep learning techniques can enhance iterative reconstruction for range verification in proton therapy. However, the effectiveness of the model is highly dependent on data quality, limiting its robustness in high-noise scenarios.
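For orientation, the baseline these MAP-EM variants build on is the standard EM update. A toy sinogram-mode MLEM sketch follows (an illustration under strong assumptions: the paper uses list-mode MAP-EM with a learned prior, which this tiny example does not implement):

```python
# Toy MLEM sketch (sinogram mode, pure Python). A is a small system
# matrix (rows = detector bins, cols = image pixels), y the measured
# counts. Iterates the classic multiplicative EM update:
#   x_j <- x_j / s_j * sum_i A_ij * y_i / (A x)_i

def mlem(A, y, n_iter=50):
    n_pix = len(A[0])
    x = [1.0] * n_pix                              # uniform initial image
    sens = [sum(A[i][j] for i in range(len(A)))    # sensitivity s_j
            for j in range(n_pix)]
    for _ in range(n_iter):
        # forward projection (A x)_i
        proj = [sum(A[i][j] * x[j] for j in range(n_pix))
                for i in range(len(A))]
        # back-project the measurement/model ratio and update
        for j in range(n_pix):
            back = sum(A[i][j] * y[i] / proj[i]
                       for i in range(len(A)) if proj[i] > 0)
            x[j] *= back / sens[j]
    return x

# With an identity system matrix the update recovers the counts exactly
image = mlem([[1, 0], [0, 1]], [3, 5], n_iter=5)   # [3.0, 5.0]
```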

Association between age and lung cancer risk: evidence from lung lobar radiomics.

Li Y, Lin C, Cui L, Huang C, Shi L, Huang S, Yu Y, Zhou X, Zhou Q, Chen K, Shi L

PubMed · Jun 5, 2025
Previous studies have highlighted the prominent role of age in lung cancer risk, with signs of lung aging visible in computed tomography (CT) imaging. This study aims to characterize lung aging using quantitative radiomic features extracted from five delineated lung lobes and explore how age contributes to lung cancer development through these features. We analyzed baseline CT scans from the Wenling lung cancer screening cohort, consisting of 29,810 participants. Deep learning-based segmentation method was used to delineate lung lobes. A total of 1,470 features were extracted from each lobe. The minimum redundancy maximum relevance algorithm was applied to identify the top 10 age-related radiomic features among 13,137 never smokers. Multiple regression analyses were used to adjust for confounders in the association of age, lung lobar radiomic features, and lung cancer. Linear, Cox proportional hazards, and parametric accelerated failure time models were applied as appropriate. Mediation analyses were conducted to evaluate whether lobar radiomic features mediate the relationship between age and lung cancer risk. Age was significantly associated with an increased lung cancer risk, particularly among current smokers (hazard ratio = 1.07, P = 2.81 × 10<sup>- 13</sup>). Age-related radiomic features exhibited distinct effects across lung lobes. Specifically, the first order mean (mean attenuation value) filtered by wavelet in the right upper lobe increased with age (β = 0.019, P = 2.41 × 10<sup>- 276</sup>), whereas it decreased in the right lower lobe (β = -0.028, P = 7.83 × 10<sup>- 277</sup>). Three features, namely wavelet_HL_firstorder_Mean of the right upper lobe, wavelet_LH_firstorder_Mean of the right lower lobe, and original_shape_MinorAxisLength of the left upper lobe, were independently associated with lung cancer risk at Bonferroni-adjusted P value. 
Mediation analyses revealed that density and shape features partially mediated the relationship between age and lung cancer risk while a suppression effect was observed in the wavelet first order mean of right upper lobe. The study reveals lobe-specific heterogeneity in lung aging patterns through radiomics and their associations with lung cancer risk. These findings may contribute to identify new approaches for early intervention in lung cancer related to aging. Not applicable.

High-definition motion-resolved MRI using 3D radial kooshball acquisition and deep learning spatial-temporal 4D reconstruction.

Murray V, Wu C, Otazo R

PubMed · Jun 5, 2025
Objective: To develop motion-resolved volumetric MRI with 1.1 mm isotropic resolution and scan times <5 minutes using a combination of 3D radial kooshball acquisition and spatial-temporal deep learning 4D reconstruction for free-breathing high-definition lung MRI. Approach: Free-breathing lung MRI was conducted on eight healthy volunteers and ten patients with lung tumors on a 3T MRI scanner using a 3D radial kooshball sequence with half-spoke (ultrashort echo time, UTE, TE=0.12 ms) and full-spoke (T1-weighted, TE=1.55 ms) acquisitions. Data were motion-sorted using amplitude binning on a respiratory motion signal. Two high-definition Movienet (HD-Movienet) deep learning models were proposed to reconstruct 3D radial kooshball data: slice-by-slice reconstruction in the coronal orientation using 2D convolutional kernels (2D-based HD-Movienet) and reconstruction on blocks of eight coronal slices using 3D convolutional kernels (3D-based HD-Movienet). Two applications were considered: (a) anatomical imaging at expiration and inspiration with four motion states and a scan time of 2 minutes, and (b) dynamic motion imaging with 10 motion states and a scan time of 4 minutes. Training was performed using XD-GRASP 4D images reconstructed from 4.5-minute and 6.5-minute acquisitions as references. Main Results: 2D-based HD-Movienet achieved a reconstruction time of <6 seconds, significantly faster than the iterative XD-GRASP reconstruction (>10 minutes with GPU optimization), while maintaining image quality comparable to XD-GRASP with two extra minutes of scan time. 3D-based HD-Movienet improved reconstruction quality at the expense of longer reconstruction times (<11 seconds). Significance: HD-Movienet demonstrates the feasibility of motion-resolved 4D MRI with isotropic 1.1 mm resolution and scan times of only 2 minutes for four motion states and 4 minutes for 10 motion states, marking a significant advancement in clinical free-breathing lung MRI.

Computed tomography-based radiomics model for predicting station 4 lymph node metastasis in non-small cell lung cancer.

Kang Y, Li M, Xing X, Qian K, Liu H, Qi Y, Liu Y, Cui Y, Zhang H

PubMed · Jun 4, 2025
This study aimed to develop and validate machine learning models for preoperative identification of metastasis to station 4 mediastinal lymph nodes (MLNM) in non-small cell lung cancer (NSCLC) patients at pathological N0-N2 (pN0-pN2) stage, thereby enhancing the precision of clinical decision-making. We included a total of 356 NSCLC patients at pN0-pN2 stage, divided into training (n = 207), internal test (n = 90), and independent test (n = 59) sets. Station 4 mediastinal lymph node (LN) regions of interest (ROIs) were semi-automatically segmented on venous-phase computed tomography (CT) images for radiomics feature extraction. Least absolute shrinkage and selection operator (LASSO) regression was used to select features with non-zero coefficients. Four machine learning algorithms-decision tree (DT), logistic regression (LR), random forest (RF), and support vector machine (SVM)-were employed to construct radiomics models. Clinical predictors were identified through univariate and multivariate logistic regression and subsequently integrated with radiomics features to develop combined models. Model performance was evaluated using receiver operating characteristic (ROC) analysis, calibration curves, decision curve analysis (DCA), and DeLong's test. Of 1721 radiomics features, eight were selected using LASSO regression. The RF-based combined model exhibited the strongest discriminative power, with an area under the curve (AUC) of 0.934 for the training set and 0.889 for the internal test set. The calibration curve and DCA further indicated the superior performance of the RF-based combined model. The independent test set further verified the model's robustness. The combined model based on RF, integrating radiomics and clinical features, effectively and non-invasively identifies metastasis to the station 4 mediastinal LNs in NSCLC patients at pN0-pN2 stage.
This model serves as an effective auxiliary tool for clinical decision-making and has the potential to optimize treatment strategies and improve prognostic assessment for pN0-pN2 patients. Not applicable.
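The AUC figures cited above can be understood through the rank-based definition of ROC AUC: the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative case. A minimal sketch (illustrative only, not the study's analysis code):

```python
# Illustrative ROC AUC via the rank-sum (Mann-Whitney) identity:
# the fraction of (positive, negative) score pairs where the positive
# case outranks the negative one, counting ties as half a win.

def roc_auc(scores, labels):
    """scores: model outputs; labels: 1 = metastasis, 0 = no metastasis."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# perfect separation -> AUC 1.0; chance-level ordering -> AUC 0.5
auc_perfect = roc_auc([0.9, 0.3, 0.6, 0.2], [1, 0, 1, 0])  # 1.0
auc_chance = roc_auc([0.9, 0.3, 0.6, 0.2], [1, 0, 0, 1])   # 0.5
```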

ReXVQA: A Large-scale Visual Question Answering Benchmark for Generalist Chest X-ray Understanding

Ankit Pal, Jung-Oh Lee, Xiaoman Zhang, Malaikannan Sankarasubbu, Seunghyeon Roh, Won Jung Kim, Meesun Lee, Pranav Rajpurkar

arXiv preprint · Jun 4, 2025
We present ReXVQA, the largest and most comprehensive benchmark for visual question answering (VQA) in chest radiology, comprising approximately 696,000 questions paired with 160,000 chest X-ray studies across training, validation, and test sets. Unlike prior efforts that rely heavily on template-based queries, ReXVQA introduces a diverse and clinically authentic task suite reflecting five core radiological reasoning skills: presence assessment, location analysis, negation detection, differential diagnosis, and geometric reasoning. We evaluate eight state-of-the-art multimodal large language models, including MedGemma-4B-it, Qwen2.5-VL, Janus-Pro-7B, and Eagle2-9B. The best-performing model (MedGemma) achieves 83.24% overall accuracy. To bridge the gap between AI performance and clinical expertise, we conducted a comprehensive human reader study involving 3 radiology residents on 200 randomly sampled cases. Our evaluation demonstrates that MedGemma achieved superior performance (83.84% accuracy) compared to human readers (best radiology resident: 77.27%), representing a significant milestone where AI performance exceeds expert human evaluation on chest X-ray interpretation. The reader study reveals distinct performance patterns between AI models and human experts, with strong inter-reader agreement among radiologists but more variable agreement between human readers and AI models. ReXVQA establishes a new standard for evaluating generalist radiological AI systems, offering public leaderboards, fine-grained evaluation splits, structured explanations, and category-level breakdowns. This benchmark lays the foundation for next-generation AI systems capable of mimicking expert-level clinical reasoning beyond narrow pathology classification. Our dataset will be open-sourced at https://huggingface.co/datasets/rajpurkarlab/ReXVQA
