Page 14 of 3313307 results

MIND: A Noise-Adaptive Denoising Framework for Medical Images Integrating Multi-Scale Transformer

Tao Tang, Chengxu Yang

arxiv logopreprintAug 11 2025
The central role of medical images in disease diagnosis means their quality directly affects the accuracy of clinical judgment. However, owing to factors such as low-dose scanning, equipment limitations, and imaging artifacts, medical images are often degraded by non-uniform noise, which seriously hampers structure recognition and lesion detection. This paper proposes a medical image adaptive denoising model (MI-ND) that integrates multi-scale convolutional and Transformer architectures, introduces a noise level estimator (NLE) and a noise-adaptive attention module (NAAB), and realizes noise-perception-driven channel-spatial attention regulation and cross-modal feature fusion. Systematic testing was carried out on multimodal public datasets. Experiments show that this method significantly outperforms comparative methods on image quality metrics such as PSNR, SSIM, and LPIPS, and improves the F1 score and ROC-AUC in downstream diagnostic tasks, demonstrating strong practical value and potential for adoption. The model offers outstanding benefits in structural recovery, diagnostic sensitivity, and cross-modal robustness, and provides an effective solution for medical image enhancement and AI-assisted diagnosis and treatment.
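The noise-adaptive idea, a noise-level estimate gating channel attention, can be sketched numerically. Everything below (the MAD-of-Laplacian estimator, the channel-energy softmax, the `temperature` parameter) is an illustrative assumption, not the MI-ND implementation:

```python
import numpy as np

def estimate_noise_level(img):
    # Crude stand-in for the paper's learned noise level estimator (NLE):
    # median absolute deviation of the discrete Laplacian response,
    # scaled by 0.6745 to approximate the noise sigma.
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return float(np.median(np.abs(lap)) / 0.6745)

def noise_adaptive_weights(feats, sigma, temperature=1.0):
    # feats: (C, H, W) feature maps. Channels whose activation energy sits
    # above the estimated noise floor get larger softmax attention weights.
    energy = feats.reshape(feats.shape[0], -1).std(axis=1)
    logits = (energy - sigma) / temperature
    w = np.exp(logits - logits.max())
    return w / w.sum()

rng = np.random.default_rng(0)
img = rng.normal(0.5, 0.1, (64, 64))      # noisy "image"
feats = rng.normal(0, 1, (8, 64, 64))     # pretend backbone features
sigma = estimate_noise_level(img)
w = noise_adaptive_weights(feats, sigma)  # one weight per channel
```

In a real network the weights would multiply the feature maps channel-wise before fusion; here they simply form a valid attention distribution.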

Using Machine Learning to Improve the Contrast-Enhanced Ultrasound Liver Imaging Reporting and Data System Diagnosis of Hepatocellular Carcinoma in Indeterminate Liver Nodules.

Hoopes JR, Lyshchik A, Xiao TS, Berzigotti A, Fetzer DT, Forsberg F, Sidhu PS, Wessner CE, Wilson SR, Keith SW

pubmed logopapersAug 11 2025
Liver cancer ranks among the most lethal cancers. Hepatocellular carcinoma (HCC) is the most common type of primary liver cancer, and better diagnostic tools are needed for patients at risk. The aim was to develop a machine learning algorithm that enhances the sensitivity and specificity of the Contrast-Enhanced Ultrasound Liver Imaging Reporting and Data System (CEUS LI-RADS) in classifying indeterminate at-risk liver nodules (LR-M, LR-3, LR-4) as HCC or non-HCC. Our study includes patients at risk for HCC with untreated indeterminate focal liver observations detected on US or contrast-enhanced CT or MRI performed as part of their clinical standard of care from January 2018 to November 2022. Recursive partitioning was used to improve HCC diagnosis in indeterminate at-risk nodules. Demographics, blood biomarkers, and CEUS imaging features were evaluated as candidate predictors for classifying nodules as HCC or non-HCC. We evaluated 244 indeterminate liver nodules from 224 patients (mean age 62.9 y); 73.2% of patients (164/224) were male. The algorithm was trained on a random two-thirds partition of 163 liver nodules and, in the independent one-third test partition of 81 liver nodules, correctly reclassified more than half of the HCC liver nodules previously categorized as indeterminate, achieving a sensitivity of 56.3% (95% CI: 42.0%, 70.2%) and specificity of 93.9% (95% CI: 84.4%, 100.0%). Machine learning was applied in this multicenter, multinational study of CEUS LI-RADS indeterminate at-risk liver nodules and correctly diagnosed HCC in more than half of the HCC nodules.
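For reference, the two metrics reported for the 81-nodule test partition, sensitivity and specificity, reduce to simple counts over the confusion matrix; the label vectors below are invented for illustration:

```python
def sens_spec(y_true, y_pred):
    # Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]   # 1 = HCC, 0 = non-HCC (invented)
y_pred = [1, 1, 0, 0, 0, 0, 0, 0, 0, 1]
sens, spec = sens_spec(y_true, y_pred)    # 0.5 and 5/6 on this toy data
```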

Machine learning models for the prediction of preclinical coal workers' pneumoconiosis: integrating CT radiomics and occupational health surveillance records.

Ma Y, Cui F, Yao Y, Shen F, Qin H, Li B, Wang Y

pubmed logopapersAug 11 2025
This study aims to integrate CT imaging with occupational health surveillance data to construct a multimodal model for preclinical coal workers' pneumoconiosis (CWP) identification and individualized risk evaluation. CT images and occupational health surveillance data were retrospectively collected from 874 coal workers: 228 Stage I and 4 Stage II pneumoconiosis patients, along with 600 healthy workers and 42 subcategory 0/1 coal workers. First, YOLOX was employed for automated 3D lung extraction, from which radiomics features were computed. Second, two feature selection algorithms were applied to select critical features from both the CT radiomics and occupational health data. Third, three distinct feature sets were constructed for model training: CT radiomics features, occupational health data, and their multimodal integration. Finally, five machine learning models were implemented to predict the preclinical stage of CWP. Model performance was evaluated using the receiver operating characteristic (ROC) curve, accuracy, sensitivity, and specificity. SHapley Additive exPlanations (SHAP) values were calculated to determine each feature's contribution to the predictions of the best-performing model. The YOLOX-based lung extraction demonstrated robust performance, achieving an Average Precision (AP) of 0.98. Eight CT radiomic features and four occupational health surveillance features were selected for the multimodal model; the optimal occupational health feature subset included length of service. Among the five machine learning algorithms evaluated, the Decision Tree-based multimodal model showed superior predictive capacity on the test set of 142 samples, with an AUC of 0.94 (95% CI 0.88-0.99), accuracy of 0.95, specificity of 1.00, and Youden's index of 0.83. SHAP analysis indicated that total protein results, original shape Flatness, and diagnostics Image original Mean were the most influential contributors.
Our study demonstrated that the multimodal model has strong predictive capability for the preclinical stage of CWP, achieved by integrating CT radiomic features with occupational health data.
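Youden's index (J = sensitivity + specificity - 1), reported above for the Decision Tree model, is also the usual criterion for picking an operating threshold on a continuous score. A small sketch with invented scores and labels:

```python
def best_youden_threshold(scores, labels):
    # Sweep every observed score as a candidate threshold and keep the one
    # maximizing Youden's J = sensitivity + specificity - 1.
    best_j, best_thr = -1.0, None
    for thr in sorted(set(scores)):
        tp = sum(s >= thr and y == 1 for s, y in zip(scores, labels))
        fn = sum(s < thr and y == 1 for s, y in zip(scores, labels))
        tn = sum(s < thr and y == 0 for s, y in zip(scores, labels))
        fp = sum(s >= thr and y == 0 for s, y in zip(scores, labels))
        j = tp / (tp + fn) + tn / (tn + fp) - 1.0
        if j > best_j:
            best_j, best_thr = j, thr
    return best_j, best_thr

scores = [0.1, 0.2, 0.35, 0.4, 0.6, 0.7, 0.8, 0.9]   # invented model scores
labels = [0, 0, 0, 1, 0, 1, 1, 1]
j, thr = best_youden_threshold(scores, labels)        # J = 0.75 at thr = 0.4
```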

Automated Prediction of Bone Volume Removed in Mastoidectomy.

Nagururu NV, Ishida H, Ding AS, Ishii M, Unberath M, Taylor RH, Munawar A, Sahu M, Creighton FX

pubmed logopapersAug 11 2025
The bone volume drilled by surgeons during mastoidectomy is determined by the need to localize position, optimize the view, and reach the surgical endpoint while avoiding critical structures. Predicting the volume of bone to be removed before an operation can significantly enhance surgical training by providing precise, patient-specific guidance and can enable the development of more effective computer-assisted and robotic surgical interventions. Study design: single-institution, cross-sectional. Setting: virtual reality (VR) mastoidectomy simulation. We developed a deep learning pipeline to automate the prediction of bone volume removed during mastoidectomy using data from virtual reality mastoidectomy simulations. The dataset included 15 deidentified temporal bone computed tomography scans. The network was evaluated using fivefold cross-validation, comparing predicted and actual bone removal with metrics such as the Dice score (DSC) and Hausdorff distance (HD). Our method achieved a median DSC of 0.775 (interquartile range [IQR]: 0.725-0.810) and a median HD of 0.492 mm (IQR: 0.298-0.757 mm). Predictions reached the mastoidectomy endpoint of visualizing the horizontal canal and incus in 80% (12/15) of temporal bones. Qualitative analysis indicated that predictions typically produced realistic mastoidectomy endpoints, though some cases showed excessive or insufficient bone removal, particularly at the temporal bone cortex and tegmen mastoideum. This study establishes a foundational step in using deep learning to predict bone volume removal during mastoidectomy. The results indicate that learning-based methods can reasonably approximate the surgical endpoint of mastoidectomy. Further refinement with larger, more diverse datasets and improved model architectures will be essential for enhancing prediction accuracy.
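The Dice score (DSC) used for evaluation compares the predicted and actual drilled-voxel masks; a minimal version on synthetic binary volumes:

```python
import numpy as np

def dice_score(pred, truth):
    # DSC = 2 * |A intersect B| / (|A| + |B|) over binary voxel masks.
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = int(pred.sum()) + int(truth.sum())
    inter = int(np.logical_and(pred, truth).sum())
    return 2.0 * inter / denom if denom else 1.0

a = np.zeros((4, 4, 4), dtype=bool); a[:2] = True   # 32 "drilled" voxels
b = np.zeros((4, 4, 4), dtype=bool); b[1:3] = True  # 32 voxels, 16 shared
d = dice_score(a, b)                                # 2*16 / (32+32) = 0.5
```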

Artificial Intelligence-Driven Body Composition Analysis Enhances Chemotherapy Toxicity Prediction in Colorectal Cancer.

Liu YZ, Su PF, Tai AS, Shen MR, Tsai YS

pubmed logopapersAug 11 2025
Body surface area (BSA)-based chemotherapy dosing remains standard despite its limitations in predicting toxicity. Variations in body composition, particularly skeletal muscle and adipose tissue, influence drug metabolism and toxicity risk. This study aims to investigate the mediating role of body composition in the relationship between BSA-based dosing and dose-limiting toxicities (DLTs) in colorectal cancer patients receiving oxaliplatin-based chemotherapy. We retrospectively analyzed 483 stage III colorectal cancer patients treated at National Cheng Kung University Hospital (2013-2021). An artificial intelligence (AI)-driven algorithm quantified skeletal muscle and adipose tissue compartments from lumbar 3 (L3) vertebral-level computed tomography (CT) scans. Mediation analysis evaluated body composition's role in chemotherapy-related toxicities. Among the cohort, 18.2% (n = 88) experienced DLTs. While BSA alone was not significantly associated with DLTs (OR = 0.473, p = 0.376), increased intramuscular adipose tissue (IMAT) significantly predicted higher DLT risk (OR = 1.047, p = 0.038), whereas skeletal muscle area was protective. Mediation analysis confirmed that IMAT partially mediated the relationship between BSA and DLTs (indirect effect: 0.05, p = 0.040), highlighting adipose infiltration's role in chemotherapy toxicity. BSA-based dosing inadequately accounts for interindividual variations in chemotherapy tolerance. AI-assisted body composition analysis provides a precision oncology framework for identifying high-risk patients and optimizing chemotherapy regimens. Prospective validation is warranted to integrate body composition into routine clinical decision-making.
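The product-of-coefficients form of mediation analysis (indirect effect = a·b, where a is the exposure-to-mediator path and b the mediator-to-outcome path adjusted for exposure) can be sketched on simulated data. Note the study's outcome (DLT) is binary, so this linear version is purely illustrative, and `x`, `m`, `y` are stand-ins for BSA, IMAT, and toxicity:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)                                 # exposure (e.g. BSA)
m = 0.5 * x + rng.normal(scale=0.5, size=n)            # mediator (e.g. IMAT)
y = 0.8 * m + 0.1 * x + rng.normal(scale=0.5, size=n)  # outcome (toxicity)

def ols(design, target):
    # Ordinary least squares via numpy's lstsq; returns the coefficients.
    beta, *_ = np.linalg.lstsq(design, target, rcond=None)
    return beta

a = ols(np.column_stack([np.ones(n), x]), m)[1]        # X -> M path
b = ols(np.column_stack([np.ones(n), x, m]), y)[2]     # M -> Y path given X
indirect = a * b    # estimated indirect (mediated) effect; true value is 0.4
```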

Enhancing Reliability of Medical Image Diagnosis through Top-rank Learning with Rejection Module

Xiaotong Ji, Ryoma Bise, Seiichi Uchida

arxiv logopreprintAug 11 2025
In medical image processing, accurate diagnosis is of paramount importance. Machine learning techniques, particularly top-rank learning, show significant promise by focusing on the most crucial instances. However, challenges arise from noisy labels and class-ambiguous instances, which can severely hinder the top-rank objective because they may be erroneously placed among the top-ranked instances. To address these challenges, we propose a novel approach that enhances top-rank learning by integrating a rejection module. Co-optimized with the top-rank loss, this module identifies and mitigates the impact of outliers that hinder training effectiveness. The rejection module functions as an additional branch, assessing instances with a rejection function that measures their deviation from the norm. Through experimental validation on a medical dataset, our methodology demonstrates its efficacy in detecting and mitigating outliers, improving the reliability and accuracy of medical image diagnoses.
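A toy illustration of why rejection helps the top-rank objective: a single mislabeled negative can zero out the fraction of positives ranked above all negatives. The z-score filter below is a crude stand-in for the paper's learned rejection branch, and all scores are invented:

```python
from statistics import mean, pstdev

def topranked_fraction(pos_scores, neg_scores):
    # The quantity top-rank learning maximizes: the fraction of positives
    # ranked strictly above the highest-scoring negative.
    top_neg = max(neg_scores)
    return sum(s > top_neg for s in pos_scores) / len(pos_scores)

def reject_outliers(scores, z=1.5):
    # Simple deviation-from-the-norm filter (the paper uses a learned
    # rejection function; this is only a sketch, and z is a tuning choice).
    mu, sd = mean(scores), pstdev(scores)
    return [s for s in scores if abs(s - mu) <= z * sd]

pos = [0.9, 0.8, 0.7, 0.6]
neg = [0.1, 0.2, 0.3, 0.95]                            # 0.95: noisy-label outlier
before = topranked_fraction(pos, neg)                  # 0.0 -- outlier dominates
after = topranked_fraction(pos, reject_outliers(neg))  # 1.0 after rejection
```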

ChatRadio-Valuer: A Chat Large Language Model for Generalizable Radiology Impression Generation on Multi-institution and Multi-system Data.

Zhong T, Zhao W, Zhang Y, Pan Y, Dong P, Jiang Z, Jiang H, Zhou Y, Kui X, Shang Y, Zhao L, Yang L, Wei Y, Li Z, Zhang J, Yang L, Chen H, Zhao H, Liu Y, Zhu N, Li Y, Wang Y, Yao J, Wang J, Zeng Y, He L, Zheng C, Zhang Z, Li M, Liu Z, Dai H, Wu Z, Zhang L, Zhang S, Cai X, Hu X, Zhao S, Jiang X, Zhang X, Liu W, Li X, Zhu D, Guo L, Shen D, Han J, Liu T, Liu J, Zhang T

pubmed logopapersAug 11 2025
Achieving clinical-level performance and widespread deployment for radiology impression generation poses a major challenge for conventional artificial intelligence models tailored to specific diseases and organs. With the increasing accessibility of radiology reports and advances in modern general AI techniques, the emergence and potential of deployable radiology AI have been bolstered. Here, we present ChatRadio-Valuer, the first general radiology diagnosis large language model designed for localized deployment within hospitals and approaching clinical use for multi-institution and multi-system diseases. ChatRadio-Valuer achieved 15 state-of-the-art results across five human systems and six institutions on clinical-level events (n=332,673) through rigorous, full-spectrum assessment, including engineering metrics, clinical validation, and efficiency evaluation. Notably, it exceeded OpenAI's GPT-3.5 and GPT-4 models, achieving superior performance in comprehensive disease diagnosis compared with the average level of radiology experts. ChatRadio-Valuer also supports zero-shot transfer learning, greatly boosting its effectiveness as a radiology assistant, while adhering to privacy standards and remaining readily usable for large-scale patient populations. Our findings suggest that the development of localized LLMs will become an imperative avenue for hospital applications.

Outcome Prediction in Pediatric Traumatic Brain Injury Utilizing Social Determinants of Health and Machine Learning Methods.

Kaliaev A, Vejdani-Jahromi M, Gunawan A, Qureshi M, Setty BN, Farris C, Takahashi C, AbdalKader M, Mian A

pubmed logopapersAug 11 2025
Considerable socioeconomic disparities exist among pediatric traumatic brain injury (TBI) patients. This study aims to analyze the effects of social determinants of health on head injury outcomes and to create a novel machine-learning algorithm (MLA) that incorporates socioeconomic factors to predict the likelihood of a positive or negative trauma-related finding on head computed tomography (CT). A cohort of blunt trauma patients under age 15 who presented to the largest safety net hospital in New England between January 2006 and December 2013 (n=211) was included in this study. Patient socioeconomic data such as race, language, household income, and insurance type were collected alongside other parameters like Injury Severity Score (ISS), age, sex, and mechanism of injury. Multivariable analysis was performed to identify significant factors in predicting a positive head CT outcome. The cohort was split into 80% training (168 samples) and 20% testing (43 samples) datasets using stratified sampling. Twenty-two multi-parametric MLAs were trained with 5-fold cross-validation and hyperparameter tuning via GridSearchCV, and top-performing models were evaluated on the test dataset. Significant factors associated with pediatric head CT outcome included ISS, age, and insurance type (p<0.05). The age of subjects with a clinically relevant trauma-related head CT finding (median = 1.8 years) was significantly different from the age of patients without such findings (median = 9.1 years). These predictors were used to train the machine learning models. With ISS, the Fine Gaussian SVM achieved the highest test AUC (0.923), with accuracy=0.837, sensitivity=0.647, and specificity=0.962. The Coarse Tree yielded accuracy=0.837, AUC=0.837, sensitivity=0.824, and specificity=0.846. Without ISS, the Narrow Neural Network performed best with accuracy=0.837, AUC=0.857, sensitivity=0.765, and specificity=0.885.
Key predictors of clinically relevant head CT findings in pediatric TBI include ISS, age, and social determinants of health, with children under 5 at higher risk. A novel Fine Gaussian SVM model outperformed the other MLAs, offering high accuracy in predicting outcomes. This tool shows promise for improving clinical decisions while minimizing radiation exposure in children. TBI = Traumatic Brain Injury; ISS = Injury Severity Score; MLA = Machine Learning Algorithm; CT = Computed Tomography; AUC = Area Under the Curve.
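The 80/20 stratified split described above can be sketched in a few lines; `labels` below is a made-up class vector with the kind of imbalance a positive-CT outcome implies:

```python
import random
from collections import defaultdict

def stratified_split(labels, test_frac=0.2, seed=0):
    # Split indices so each class keeps (roughly) the same proportion in
    # both partitions; returns (train_idx, test_idx).
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for i, y in enumerate(labels):
        by_class[y].append(i)
    train, test = [], []
    for idxs in by_class.values():
        rng.shuffle(idxs)
        k = round(len(idxs) * test_frac)
        test.extend(idxs[:k])
        train.extend(idxs[k:])
    return sorted(train), sorted(test)

labels = [1] * 40 + [0] * 160              # 20% positive head CTs (invented)
train_idx, test_idx = stratified_split(labels)
```

In practice `sklearn.model_selection.train_test_split` with `stratify=labels` does the same job; the explicit version just shows the per-class sampling.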

Generative Artificial Intelligence to Automate Cerebral Perfusion Mapping in Acute Ischemic Stroke from Non-contrast Head Computed Tomography Images: Pilot Study.

Primiano NJ, Changa AR, Kohli S, Greenspan H, Cahan N, Kummer BR

pubmed logopapersAug 11 2025
Acute ischemic stroke (AIS) is a leading cause of death and long-term disability worldwide, where rapid reperfusion remains critical for salvaging brain tissue. Although CT perfusion (CTP) imaging provides essential hemodynamic information, its limitations, including extended processing times, additional radiation exposure, and variable software outputs, can delay treatment. In contrast, non-contrast head CT (NCHCT) is ubiquitously available in acute stroke settings. This study explores a generative artificial intelligence approach to predict key perfusion parameters (relative cerebral blood flow [rCBF] and time-to-maximum [Tmax]) directly from NCHCT, potentially streamlining stroke imaging workflows and expanding access to critical perfusion data. We retrospectively identified patients evaluated for AIS who underwent NCHCT, CT angiography, and CTP. Ground truth perfusion maps (rCBF and Tmax) were extracted from VIZ.ai post-processed CTP studies. A modified pix2pix-turbo generative adversarial network (GAN) was developed to translate co-registered NCHCT images into corresponding perfusion maps. The network was trained on paired NCHCT-CTP data, with training, validation, and testing splits of 80%:10%:10%. Performance was assessed on the test set using quantitative metrics including the structural similarity index measure (SSIM), peak signal-to-noise ratio (PSNR), and Fréchet inception distance (FID). Of 120 patients, studies from the 99 patients meeting our inclusion and exclusion criteria formed the primary cohort (mean age 73.3 ± 13.5 years; 46.5% female). Cerebral occlusions were predominantly in the middle cerebral artery. GAN-generated Tmax maps achieved an SSIM of 0.827, PSNR of 16.99, and FID of 62.21, while the rCBF maps demonstrated comparable performance (SSIM 0.79, PSNR 16.38, FID 59.58). These results indicate that the model approximates ground truth perfusion maps to a moderate degree and successfully captures key cerebral hemodynamic features.
Our findings demonstrate the feasibility of generating functional perfusion maps directly from widely available NCHCT images using a modified GAN. This cross-modality approach may serve as a valuable adjunct in AIS evaluation, particularly in resource-limited settings or when traditional CTP provides limited diagnostic information. Future studies with larger, multicenter datasets and further model refinements are warranted to enhance clinical accuracy and utility.
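Of the three image-similarity metrics reported, PSNR is the simplest to state: 10·log10(MAX²/MSE). A minimal version follows (SSIM and FID are considerably more involved and omitted here):

```python
import numpy as np

def psnr(ref, gen, data_range=1.0):
    # Peak signal-to-noise ratio in dB between a reference image and a
    # generated one, for intensities spanning `data_range`.
    mse = np.mean((ref.astype(np.float64) - gen.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return float(10.0 * np.log10(data_range ** 2 / mse))

ref = np.zeros((8, 8))
gen = ref + 0.1                  # uniform error of 0.1 -> MSE = 0.01
p = psnr(ref, gen)               # 10 * log10(1 / 0.01) = 20.0 dB
```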

Enhanced MRI brain tumor detection using deep learning in conjunction with explainable AI SHAP based diverse and multi feature analysis.

Rahman A, Hayat M, Iqbal N, Alarfaj FK, Alkhalaf S, Alturise F

pubmed logopapersAug 11 2025
Recent innovations in medical imaging have markedly improved brain tumor identification, surpassing conventional diagnostic approaches that suffer from low resolution, radiation exposure, and limited contrast. Magnetic Resonance Imaging (MRI) is pivotal for precise and accurate tumor characterization owing to its high-resolution, non-invasive nature. This study investigates the synergy among multiple feature representation schemes, such as Local Binary Patterns (LBP), Gabor filters, the Discrete Wavelet Transform, the Fast Fourier Transform, Convolutional Neural Networks (CNN), and the Gray-Level Run Length Matrix, alongside five learning algorithms: k-nearest neighbor, random forest, support vector classifier (SVC), probabilistic neural network (PNN), and CNN. Empirical findings indicate that LBP in conjunction with SVC and CNN achieved high specificity and accuracy, rendering it a promising method for MRI-based tumor diagnosis. To further investigate the contribution of LBP, chi-square and p-value statistical tests were used to confirm the significant impact of the LBP feature space on brain tumor identification. In addition, SHAP analysis was used to identify the most important features in classification. On a small dataset, CNN obtained 97.8% accuracy while SVC yielded 98.06%. In a subsequent analysis, a large benchmark dataset was also used to evaluate the learning algorithms and investigate the generalization power of the proposed model. CNN achieved the highest accuracy of 98.9%, followed by SVC at 96.7%. These results highlight CNN's effectiveness in automated, high-precision tumor diagnosis, an achievement attributable to MRI-based feature extraction combining high-resolution, non-invasive imaging with CNN's powerful analytical abilities. CNN demonstrates superiority in medical imaging owing to its ability to learn intricate spatial patterns and generalize effectively.
This combination enhances the accuracy, speed, and consistency of brain tumor detection, ultimately leading to better patient outcomes and more efficient healthcare delivery. https://github.com/asifrahman557/BrainTumorDetection
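Of the feature schemes compared, LBP is compact enough to sketch directly: each interior pixel becomes an 8-bit code, one bit per neighbour at or above the centre value. This basic (non-uniform, non-rotation-invariant) variant is an illustration, not the study's exact implementation:

```python
import numpy as np

def lbp_3x3(img):
    # Basic 8-neighbour local binary pattern over a 2D array; returns the
    # (H-2, W-2) array of 8-bit codes for the interior pixels.
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(int) << bit
    return code

peak = np.array([[0., 0., 0.],
                 [0., 5., 0.],
                 [0., 0., 0.]])
codes = lbp_3x3(peak)    # [[0]] -- no neighbour reaches the centre value
```

Histograms of these codes over an image (or region) form the LBP feature vector that would feed the SVC; `skimage.feature.local_binary_pattern` provides tuned variants.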