Can intraoperative improvement of radial endobronchial ultrasound imaging enhance the diagnostic yield in peripheral pulmonary lesions?

Nishida K, Ito T, Iwano S, Okachi S, Nakamura S, Chrétien B, Chen-Yoshikawa TF, Ishii M

PubMed · May 26, 2025
Data regarding the diagnostic efficacy of radial endobronchial ultrasound (R-EBUS) findings obtained via transbronchial needle aspiration (TBNA)/biopsy (TBB) with endobronchial ultrasonography with a guide sheath (EBUS-GS) for peripheral pulmonary lesions (PPLs) are lacking. We evaluated whether intraoperative probe repositioning improves R-EBUS imaging and affects the diagnostic yield and safety of EBUS-guided sampling for PPLs. We retrospectively studied 363 patients with PPLs who underwent TBNA/TBB (83 lesions) or TBB (280 lesions) using EBUS-GS. Based on the R-EBUS findings before and after these procedures, patients were categorized into three groups: the improved R-EBUS image group (n = 52), the unimproved R-EBUS image group (n = 69), and the initial within-lesion group (n = 242). The impact of improved R-EBUS findings on diagnostic yield and complications was assessed using multivariable logistic regression, adjusting for lesion size, lesion location, and the presence of a bronchus leading to the lesion on CT. A separate exploratory random-forest model with SHAP analysis was used to explore factors associated with successful repositioning in lesions not initially "within." The diagnostic yield in the improved R-EBUS group was significantly higher than that in the unimproved R-EBUS group (76.9% vs. 46.4%, p = 0.001). The regression model revealed that improvement in intraoperative R-EBUS findings was associated with a high diagnostic yield (odds ratio 3.55; 95% confidence interval, 1.57-8.06; p = 0.002). Machine learning analysis indicated that inner lesion location and radiographic visibility were the most influential predictors of successful repositioning. Complication rates were similar across all groups (total complications: 5.8% vs. 4.3% vs. 6.2%, p = 0.943). Improved R-EBUS findings during TBNA/TBB or TBB with EBUS-GS were associated with a high diagnostic yield without an increase in complications, even when the initial R-EBUS findings were inadequate. This suggests that repeated intraoperative probe repositioning can safely improve diagnostic outcomes.
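
The two analyses described here follow a standard pattern. A minimal sketch in Python, assuming a pandas DataFrame with hypothetical column names (the study's actual variable coding is not public):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import shap
from sklearn.ensemble import RandomForestClassifier

df = pd.read_csv("rebus_cohort.csv")  # hypothetical cohort file

# Multivariable logistic regression: diagnostic success vs. improved R-EBUS
# image, adjusted for lesion size, lesion location, and the CT bronchus sign.
X = sm.add_constant(df[["improved_rebus", "lesion_size_mm",
                        "inner_location", "bronchus_sign"]])
fit = sm.Logit(df["diagnostic_success"], X).fit()
print(np.exp(fit.params))      # odds ratios (the paper reports ~3.55)
print(np.exp(fit.conf_int()))  # 95% confidence intervals

# Exploratory random forest + SHAP on lesions not initially "within":
# which factors predict successful probe repositioning?
sub = df[df["initial_within"] == 0]
feats = sub[["inner_location", "radiographic_visibility",
             "lesion_size_mm", "bronchus_sign"]]
rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(feats, sub["repositioning_success"])
shap_values = shap.TreeExplainer(rf).shap_values(feats)
# Output format differs across shap versions (list per class vs. 3D array).
vals = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]
shap.summary_plot(vals, feats)  # global feature-importance view
```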

Predicting Surgical Versus Nonsurgical Management of Acute Isolated Distal Radius Fractures in Patients Under Age 60 Using a Convolutional Neural Network.

Hsu D, Persitz J, Noori A, Zhang H, Mashouri P, Shah R, Chan A, Madani A, Paul R

PubMed · May 26, 2025
Distal radius fractures (DRFs) represent up to 20% of fractures seen in the emergency department. Delays to surgery of more than 14 days are associated with poorer functional outcomes and increased health care utilization and costs. At our institution, the average time to surgery is more than 19 days because of the separation of surgical and nonsurgical care pathways and a lengthy referral process. To address this challenge, we aimed to create a convolutional neural network (CNN) capable of automating DRF x-ray analysis and triage. We hypothesized that this model would accurately predict, from radiographic input alone, whether an acute isolated DRF in a patient under the age of 60 years would be treated surgically or nonsurgically at our institution. We included 163 patients under the age of 60 years who presented to the emergency department between 2018 and 2023 with an acute isolated DRF and were referred for clinical follow-up. Radiographs taken within 4 weeks of injury were collected in posterior-anterior and lateral views and preprocessed for model training. The surgeons' decision to treat surgically or nonsurgically at our institution was the reference standard for assessing the model's prediction accuracy. We included 723 posterior-anterior and lateral radiographic pairs (385 surgical and 338 nonsurgical) for model training. The best-performing model (seven CNN layers, one fully connected layer, an image input size of 256 × 256 pixels, and a 1.5× weighting for volarly displaced fractures) achieved 88% accuracy and 100% sensitivity, with a true positive rate of 100%, true negative rate of 72.7%, false positive rate of 27.3%, and false negative rate of 0%. After training on institution-specific indications, a CNN-based algorithm can predict with 88% accuracy whether an acute isolated DRF in a patient under the age of 60 years will be treated surgically or nonsurgically. By promptly identifying patients who would benefit from expedited surgical treatment pathways, this model can reduce referral times.
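
A minimal PyTorch sketch of the best-performing configuration (seven convolutional layers, one fully connected layer, 256 × 256 input, 1.5× loss weighting for volarly displaced fractures); channel widths and other details are assumptions, since the abstract does not specify them:

```python
import torch
import torch.nn as nn

class DRFNet(nn.Module):
    def __init__(self):
        super().__init__()
        chans = [1, 16, 32, 64, 64, 128, 128, 256]  # assumed channel widths
        blocks = []
        for c_in, c_out in zip(chans[:-1], chans[1:]):
            blocks += [nn.Conv2d(c_in, c_out, 3, padding=1),
                       nn.ReLU(inplace=True),
                       nn.MaxPool2d(2)]              # 256 -> 2 after 7 pools
        self.features = nn.Sequential(*blocks)       # seven conv layers
        self.fc = nn.Linear(256 * 2 * 2, 1)          # one FC layer, one logit

    def forward(self, x):                            # x: (B, 1, 256, 256)
        return self.fc(self.features(x).flatten(1))

bce = nn.BCEWithLogitsLoss(reduction="none")         # keep per-sample losses

def weighted_loss(logits, labels, volar_displaced):
    # 1.5x weight for volarly displaced fractures, as the abstract describes.
    w = 1.0 + 0.5 * volar_displaced.float()
    return (bce(logits.squeeze(1), labels.float()) * w).mean()
```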

Advancements in Medical Image Classification through Fine-Tuning Natural Domain Foundation Models

Mobina Mansoori, Sajjad Shahabodini, Farnoush Bayatmakou, Jamshid Abouei, Konstantinos N. Plataniotis, Arash Mohammadi

arXiv preprint · May 26, 2025
Foundation models are large-scale models pre-trained on massive datasets that can perform a wide range of tasks, and they have shown consistently improved results as new methods are introduced. It is crucial to analyze how these trends affect the medical field and determine whether these advancements can drive meaningful change there. This study investigates the application of recent state-of-the-art foundation models (DINOv2, MAE, VMamba, CoCa, SAM2, and AIMv2) to medical image classification. We explore their effectiveness on datasets including CBIS-DDSM for mammography, ISIC2019 for skin lesions, APTOS2019 for diabetic retinopathy, and CheXpert for chest radiographs. By fine-tuning these models and evaluating their configurations, we aim to understand the potential of these advancements in medical image classification. The results indicate that these advanced models significantly enhance classification outcomes, demonstrating robust performance despite limited labeled data. AIMv2, DINOv2, and SAM2 outperformed the others, demonstrating that progress in natural-domain training has positively impacted the medical domain and improved classification outcomes. Our code is publicly available at: https://github.com/sajjad-sh33/Medical-Transfer-Learning.
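
As one illustration of the fine-tuning protocol, a sketch using the publicly released DINOv2 backbone with a fresh linear head; the learning rate, head design, and input handling here are assumptions, not the authors' settings (their actual code is at the repository above):

```python
import torch
import torch.nn as nn

# Pre-trained DINOv2 ViT-B/14 backbone from the official hub release.
backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vitb14")
head = nn.Linear(backbone.embed_dim, 2)   # new task head, e.g. 2 classes

params = list(backbone.parameters()) + list(head.parameters())
optimizer = torch.optim.AdamW(params, lr=1e-5)  # small LR for full fine-tuning
loss_fn = nn.CrossEntropyLoss()

def train_step(images, labels):
    # images: (B, 3, 224, 224), normalized with ImageNet statistics
    feats = backbone(images)              # (B, embed_dim) CLS embedding
    loss = loss_fn(head(feats), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```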

Deep Learning for Pneumonia Diagnosis: A Custom CNN Approach with Superior Performance on Chest Radiographs

Mehta, A., Vyas, M.

medRxiv preprint · May 26, 2025
Pneumonia is a major global health issue causing serious illness and death, underscoring the need for rapid and precise identification and treatment. Despite advances in imaging technology, radiologists' manual reading of chest X-rays remains the basic method for pneumonia detection, delaying both diagnosis and treatment. This study proposes an automated pneumonia detection method using deep learning. The approach employs a custom convolutional neural network (CNN) trained on pneumonia-positive and pneumonia-negative cases from several healthcare providers. Various pre-processing steps were applied to the chest radiographs to improve data integrity and efficiency before model training. In a comparison with VGG19, ResNet50, InceptionV3, DenseNet201, and MobileNetV3, our custom CNN model proved the most efficient at balancing accuracy, recall, and parameter complexity, achieving 96.5% accuracy and a 96.6% F1 score. This study contributes to the development of an automated and reliable pneumonia detection system, which could improve patient outcomes and increase healthcare efficiency. The full project is available here.
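
The comparison protocol described above reduces to evaluating each candidate model on the same held-out radiograph split. A minimal sketch, with the model constructors and data loader left as placeholders for whatever the study actually used:

```python
import torch
from sklearn.metrics import accuracy_score, f1_score

@torch.no_grad()
def evaluate(model, loader, device="cpu"):
    # Run a binary pneumonia classifier over a test loader and score it.
    model.eval().to(device)
    preds, labels = [], []
    for x, y in loader:
        logits = model(x.to(device))        # (B, 2) class logits
        preds += logits.argmax(1).cpu().tolist()
        labels += y.tolist()
    return accuracy_score(labels, preds), f1_score(labels, preds)

# Hypothetical usage, one call per model in the comparison:
# for name, model in {"custom_cnn": custom_cnn, "resnet50": resnet50}.items():
#     acc, f1 = evaluate(model, test_loader)
#     print(f"{name}: accuracy={acc:.3f}, F1={f1:.3f}")
```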

Improving brain tumor diagnosis: A self-calibrated 1D residual network with random forest integration.

Sumithra A, Prathap PMJ, Karthikeyan A, Dhanasekaran S

PubMed · May 26, 2025
Medical specialists need to perform precise MRI analysis for accurate diagnosis of brain tumors. Current research has developed multiple artificial intelligence (AI) techniques to automate brain tumor identification. However, existing approaches often depend on singular datasets, limiting their generalization across diverse clinical scenarios. This research introduces SCR-1DResNet, a new diagnostic tool for brain tumor detection that incorporates a self-calibrated Random Forest along with a one-dimensional residual network. The pipeline begins with MRI image acquisition from multiple Kaggle datasets, followed by stepwise preprocessing: noise removal, image enhancement, resizing, normalization, and skull stripping. A WaveSegNet model then extracts important tumor attributes at multiple scales. A Random Forest classifier and a one-dimensional residual network are combined into the SCR-1DResNet model through self-calibration optimization to improve prediction reliability. Tests show the proposed system achieves 98.50% precision, 98.80% accuracy, and 97.80% recall. The SCR-1DResNet model demonstrates superior diagnostic capability and faster performance, showing strong promise for clinical decision support systems and improved neurological and oncological patient care.
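
A hedged sketch of the hybrid design: a small 1D residual block over extracted feature vectors plus a probability-calibrated random forest. The abstract does not define "self-calibration", so sklearn's calibration wrapper stands in for it here as an assumption:

```python
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier
from sklearn.calibration import CalibratedClassifierCV

class ResBlock1D(nn.Module):
    # One residual block over 1D feature sequences (e.g., WaveSegNet outputs).
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv1d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv1d(channels, channels, 3, padding=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(x + self.conv2(self.act(self.conv1(x))))  # skip connection

# Random forest with cross-validated probability calibration.
rf = CalibratedClassifierCV(RandomForestClassifier(n_estimators=300), cv=5)
# rf.fit(train_features, train_labels)
# A simple fusion would average the calibrated RF and ResNet probabilities:
# ensemble_prob = 0.5 * rf.predict_proba(x) + 0.5 * resnet_prob
```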

MobNas ensembled model for breast cancer prediction.

Shahzad T, Saqib SM, Mazhar T, Iqbal M, Almogren A, Ghadi YY, Saeed MM, Hamam H

PubMed · May 25, 2025
Breast cancer poses an immense threat to humankind, creating a need for a way to diagnose this devastating disease early, accurately, and simply. While substantial progress has been made in developing machine learning, deep learning, and transfer learning models, issues with diagnostic accuracy and minimizing diagnostic errors persist. This paper introduces MobNAS, a model that combines MobileNetV2 and NASNetLarge to classify breast cancer images as benign, malignant, or normal. The study employs a multi-class classification design and uses a publicly available dataset comprising 1,578 ultrasound images (891 benign, 421 malignant, and 266 normal). MobileNetV2 runs well on devices with less computational capability than NASNetLarge requires, which enhances the model's applicability and effectiveness in other tasks. On the breast cancer image dataset, the proposed MobNAS model achieved 97% accuracy, a Mean Absolute Error (MAE) of 0.05, and a Matthews Correlation Coefficient (MCC) of 95%. These findings show that MobNAS can enhance diagnostic accuracy and reduce existing shortcomings in breast cancer detection.
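
A minimal sketch of this kind of two-branch ensemble with stock Keras backbones, assuming softmax-level averaging; the paper's exact fusion strategy is not stated in the abstract:

```python
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import MobileNetV2, NASNetLarge

def build_branch(base):
    # New classification head on a frozen ImageNet backbone.
    base.trainable = False
    x = layers.GlobalAveragePooling2D()(base.output)
    out = layers.Dense(3, activation="softmax")(x)  # benign/malignant/normal
    return Model(base.input, out)

mobile = build_branch(MobileNetV2(include_top=False, weights="imagenet",
                                  input_shape=(224, 224, 3)))
nasnet = build_branch(NASNetLarge(include_top=False, weights="imagenet",
                                  input_shape=(331, 331, 3)))

# Soft-voting ensemble: resize each ultrasound image to both input sizes,
# then average the two probability vectors.
# probs = 0.5 * mobile.predict(x224) + 0.5 * nasnet.predict(x331)
# MCC: sklearn.metrics.matthews_corrcoef(y_true, probs.argmax(1))
```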

Improving Medical Reasoning with Curriculum-Aware Reinforcement Learning

Shaohao Rui, Kaitao Chen, Weijie Ma, Xiaosong Wang

arXiv preprint · May 25, 2025
Recent advances in reinforcement learning with verifiable, rule-based rewards have greatly enhanced the reasoning capabilities and out-of-distribution generalization of VLMs/LLMs, obviating the need for manually crafted reasoning chains. Despite these promising developments in the general domain, their translation to medical imaging remains limited. Current medical reinforcement fine-tuning (RFT) methods predominantly focus on close-ended VQA, thereby restricting the model's ability to engage in world knowledge retrieval and flexible task adaptation. More critically, these methods fall short of addressing the critical clinical demand for open-ended, reasoning-intensive decision-making. To bridge this gap, we introduce MedCCO, the first multimodal reinforcement learning framework tailored for medical VQA that unifies close-ended and open-ended data within a curriculum-driven RFT paradigm. Specifically, MedCCO is initially fine-tuned on a diverse set of close-ended medical VQA tasks to establish domain-grounded reasoning capabilities, and is then progressively adapted to open-ended tasks to foster deeper knowledge enhancement and clinical interpretability. We validate MedCCO across eight challenging medical VQA benchmarks spanning both close-ended and open-ended settings. Experimental results show that MedCCO consistently enhances performance and generalization, achieving an 11.4% accuracy gain across three in-domain tasks and a 5.7% improvement on five out-of-domain benchmarks. These findings highlight the promise of curriculum-guided RL in advancing robust, clinically relevant reasoning in medical multimodal language models.
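
The two core ingredients, a verifiable rule-based reward and a close-to-open curriculum, can be sketched compactly. MedCCO's actual reward rules and schedule are not given in the abstract, so both functions below are illustrative assumptions:

```python
import re

def close_ended_reward(model_answer: str, gold: str) -> float:
    # Exact-match reward after light normalization: verifiable by rule,
    # no learned reward model needed.
    norm = lambda s: re.sub(r"[^a-z0-9]", "", s.lower())
    return 1.0 if norm(model_answer) == norm(gold) else 0.0

def curriculum_phase(step: int, switch_step: int = 10_000) -> str:
    # Phase 1: close-ended VQA to establish domain-grounded reasoning;
    # phase 2: open-ended VQA for deeper knowledge and interpretability.
    return "close_ended" if step < switch_step else "open_ended"
```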

CardioCoT: Hierarchical Reasoning for Multimodal Survival Analysis

Shaohao Rui, Haoyang Su, Jinyi Xiang, Lian-Ming Wu, Xiaosong Wang

arXiv preprint · May 25, 2025
Accurate prediction of major adverse cardiovascular event (MACE) recurrence risk in acute myocardial infarction patients, based on postoperative cardiac MRI and the associated clinical notes, is crucial for precision treatment and personalized intervention. Existing methods focus primarily on risk stratification while overlooking the need for robust intermediate reasoning and model interpretability in clinical practice. Moreover, end-to-end risk prediction with an LLM/VLM faces significant challenges due to data limitations and modeling complexity. To bridge this gap, we propose CardioCoT, a novel two-stage hierarchical reasoning-enhanced survival analysis framework designed to improve both model interpretability and predictive performance. In the first stage, an evidence-augmented self-refinement mechanism guides LLM/VLMs in generating robust hierarchical reasoning trajectories based on associated radiological findings. In the second stage, the reasoning trajectories are integrated with imaging data for risk model training and prediction. CardioCoT demonstrates superior performance in MACE recurrence risk prediction while providing interpretable reasoning processes, offering valuable insights for clinical decision-making.
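
For the second stage, one plausible instantiation is a proportional-hazards model over imaging-derived features joined with scores distilled from the reasoning trajectories. The toy data and column names below are hypothetical, since CardioCoT's actual risk model is not detailed in the abstract:

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical per-patient table: follow-up time, MACE event indicator,
# an imaging feature, and a risk score distilled from the LLM reasoning.
df = pd.DataFrame({
    "time_to_mace_days": [120, 400, 87, 365, 210, 95, 330, 150],
    "mace_event":        [1, 0, 1, 0, 1, 0, 0, 1],
    "lvef_imaging":      [0.38, 0.55, 0.31, 0.49, 0.42, 0.35, 0.58, 0.40],
    "reasoning_risk":    [0.8, 0.2, 0.9, 0.6, 0.5, 0.5, 0.1, 0.7],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_to_mace_days", event_col="mace_event")
cph.print_summary()  # hazard ratios for each covariate
```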

[Clinical value of medical imaging artificial intelligence in the diagnosis and treatment of peritoneal metastasis in gastrointestinal cancers].

Fang MJ, Dong D, Tian J

PubMed · May 25, 2025
Peritoneal metastasis is a key factor in the poor prognosis of patients with advanced gastrointestinal cancer. Traditional radiological diagnosis faces challenges such as insufficient sensitivity. Through technologies like radiomics and deep learning, artificial intelligence can deeply analyze tumor heterogeneity and microenvironment features in medical images, revealing markers of peritoneal metastasis and constructing high-precision predictive models. These technologies have demonstrated advantages in tasks such as predicting peritoneal metastasis, assessing the risk of peritoneal recurrence, and identifying small metastatic foci during surgery. This paper summarizes representative progress and application prospects of medical imaging artificial intelligence in the diagnosis and treatment of peritoneal metastasis, and discusses potential development directions such as multimodal data fusion and large models. The integration of medical imaging artificial intelligence with clinical practice is expected to advance personalized and precision medicine in the diagnosis and treatment of peritoneal metastasis in gastrointestinal cancers.