
Zhang J, Ji Z, Zhao C, Huang M, Li M, Zhang H

PubMed · Dec 8 2025
Endoscopic imaging is vital in Minimally Invasive Surgery (MIS), but its utility is often compromised by specular reflections that obscure important details and hinder diagnostic accuracy. Existing methods to address these reflections face limitations, particularly those relying on color-based thresholding, and deep learning remains underutilized for highlight detection. To tackle these challenges, we propose the Specular Detection Median Filtering Fusion Network (SDMFFN), a novel framework designed to detect and remove specular reflections in endoscopic images. SDMFFN employs a two-stage process: detection and removal. In the detection phase, we utilize an enhanced Specular Transformer Unet (S-TransUnet) model that integrates Atrous Spatial Pyramid Pooling (ASPP), an Information Bottleneck (IB), and a Convolutional Block Attention Module (CBAM) to optimize multi-scale feature extraction and achieve accurate highlight detection. In the removal phase, we apply an improved median-filtering scheme to smooth reflective areas and integrate color information for natural restoration. Experimental results show that the proposed SDMFFN outperforms other methods. Our method improves visual clarity and diagnostic precision, ultimately enhancing surgical outcomes and reducing the risk of misdiagnosis by delivering high-quality, reflection-free endoscopic images. The robust performance of SDMFFN suggests its adaptability to other medical imaging modalities, paving the way for broader clinical and research applications in robotic surgery, diagnostic endoscopy, and telemedicine. To promote further progress in this area, we will make the code publicly available at: https://github.com/jize123457/SDMFFN.
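The removal stage described above builds on median filtering of detected highlight regions. As a rough, hypothetical baseline (the threshold, kernel size, and function names here are illustrative, not taken from the paper's code), the classical threshold-plus-median-filter approach that SDMFFN improves upon can be sketched as:

```python
import numpy as np

def remove_specular_highlights(img, thresh=0.9, ksize=3):
    """Flag near-saturated pixels as specular, then replace each flagged
    pixel with the median of its local neighborhood."""
    mask = img >= thresh                    # crude brightness-threshold detection
    pad = ksize // 2
    padded = np.pad(img, pad, mode="reflect")
    out = img.copy()
    for y, x in zip(*np.nonzero(mask)):
        out[y, x] = np.median(padded[y:y + ksize, x:x + ksize])
    return out, mask

# toy grayscale frame with a single specular pixel
img = np.full((5, 5), 0.4)
img[2, 2] = 1.0
restored, mask = remove_specular_highlights(img)
```

SDMFFN replaces the brightness threshold with a learned Transformer-based detector and fuses color information during inpainting; the sketch only shows why a plain threshold-plus-median pipeline is the natural baseline.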

Wu X, Zhang X, Xiao Z, Hu L, Higashita R, Liu J

PubMed · Dec 8 2025
Efficient convolutional neural network (CNN) architecture design has attracted growing research interest. However, existing designs typically apply a single receptive field (RF), small asymmetric RFs, or pyramid RFs to learn different feature representations, and still encounter two significant challenges in medical image classification tasks: i) they have limitations in efficiently capturing diverse lesion characteristics, e.g., tiny, coordinated, small, and salient, each of which plays a unique role in the classification result, especially in imbalanced medical image classification; ii) the predictions generated by such CNNs are often unfair/biased, bringing high risk when they are employed in real-world medical diagnosis settings. To tackle these issues, we develop a new concept, Expert-Like Reparameterization of Heterogeneous Pyramid Receptive Fields (ERoHPRF), to simultaneously boost medical image classification performance and fairness. This concept mimics the multi-expert consultation mode by applying a well-designed heterogeneous pyramid RF bag to effectively capture lesion characteristics of varying significance via convolution operations with multiple heterogeneous kernel sizes. Additionally, ERoHPRF introduces an expert-like structural reparameterization technique to merge its parameters via a two-stage strategy, ensuring computation cost and inference speed competitive with a single RF. To demonstrate the effectiveness and generalization ability of ERoHPRF, we incorporate it into mainstream efficient CNN architectures. Extensive experiments show that our proposed ERoHPRF maintains a better trade-off than state-of-the-art methods in terms of medical image classification, fairness, and computation overhead. The code of this paper is available at https://github.com/XiaoLing12138/Expert-Like-Reparameterization-of-Heterogeneous-Pyramid-Receptive-Fields.
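The structural reparameterization mentioned in this abstract relies on a standard identity: parallel convolution branches with different kernel sizes can be folded into a single branch by zero-padding the smaller kernels to the largest size and summing, because convolution is linear in the kernel. A minimal numpy sketch verifying that identity (single-channel, hypothetical kernels; not the paper's actual ERoHPRF implementation):

```python
import numpy as np

def conv2d(x, k):
    """Plain 'valid' cross-correlation, as CNN layers compute it."""
    kh, kw = k.shape
    H, W = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    return np.array([[np.sum(x[i:i + kh, j:j + kw] * k) for j in range(W)]
                     for i in range(H)])

rng = np.random.default_rng(0)
x = rng.standard_normal((6, 6))
k3 = rng.standard_normal((3, 3))   # hypothetical "large-RF" branch kernel
k1 = rng.standard_normal((1, 1))   # hypothetical "small-RF" branch kernel

# Training-time form: two parallel branches, outputs summed.
x_pad = np.pad(x, 1)               # pad so both branch outputs align spatially
two_branch = conv2d(x_pad, k3) + conv2d(x, k1)

# Inference-time form: fold the 1x1 kernel into the center of the 3x3
# kernel -- one merged convolution, identical output, lower cost.
k_merged = k3.copy()
k_merged[1, 1] += k1[0, 0]
merged = conv2d(x_pad, k_merged)
```

This equivalence is what lets the heterogeneous RF bag keep single-RF inference cost after the merge.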

Wang F, Wang Z, Li Y, Lyu J, Qin C, Wang S, Guo K, Sun M, Huang M, Zhang H, Tanzer M, Li Q, Chen X, Huang J, Wu Y, Zhang H, Hamedani KA, Lyu Y, Sun L, Li Q, He T, Lan L, Yao Q, Xu Z, Xin B, Metaxas DN, Razizadeh N, Nabavi S, Yiasemis G, Teuwen J, Zhang Z, Wang S, Zhang C, Ennis DB, Xue Z, Hu C, Xu R, Oksuz I, Lyu D, Huang Y, Guo X, Hao R, Patel JH, Cai G, Chen B, Zhang Y, Hua S, Chen Z, Dou Q, Zhuang X, Tao Q, Bai W, Qin J, Wang H, Prieto C, Markl M, Young A, Li H, Hu X, Wu L, Qu X, Yang G, Wang C

PubMed · Dec 8 2025
Cardiovascular health is vital to human well-being, and cardiac magnetic resonance (CMR) imaging is considered the clinical reference standard for diagnosing cardiovascular disease. However, its adoption is hindered by long scan times, complex contrasts, and inconsistent quality. While deep learning methods perform well on specific CMR imaging sequences, they often fail to generalize across modalities and sampling schemes. The lack of benchmarks for high-quality, fast CMR image reconstruction further limits technology comparison and adoption. The CMRxRecon2024 challenge, attracting over 200 teams from 18 countries, addressed these issues with two tasks: generalization to unseen modalities and robustness to diverse undersampling patterns. We introduced the largest public multi-modality CMR raw dataset, an open benchmarking platform, and shared code. Analysis of the best-performing solutions revealed that prompt-based adaptation and enhanced physics-driven consistency enabled strong cross-scenario performance. These findings establish principles for generalizable reconstruction models and advance clinically translatable AI in cardiovascular imaging.

Yeghaian M, Trebeschi S, Herrero-Huertas M, Ferradás FJM, Bos P, van Alphen MJA, van Gerven MAJ, Beets-Tan RGH, Bodalal Z, van der Velden LA

PubMed · Dec 8 2025
Accurate prediction of treatment outcomes is crucial for personalized treatment in head and neck squamous cell carcinoma (HNSCC). Beyond one-year survival, assessing long-term enteral nutrition dependence is essential for optimizing patient counseling and resource allocation. This preliminary study aimed to predict one-year survival and feeding tube dependence in surgically treated HNSCC patients using classical machine learning. This proof-of-principle retrospective study included 558 surgically treated HNSCC patients. Baseline clinical data, routine blood markers, and MRI-based radiomic features were collected before treatment. Additional postsurgical treatments within one year were also recorded. Random forest classifiers were trained to predict one-year survival and feeding tube dependence. Model explainability was assessed using SHapley Additive exPlanations (SHAP) values. Using tenfold stratified cross-validation, clinical data showed the highest predictive performance for survival (AUC = 0.75 ± 0.10; p < 0.001). Blood (AUC = 0.67 ± 0.17; p = 0.001) and imaging (AUC = 0.68 ± 0.16; p = 0.26) showed moderate performance, and multimodal integration did not improve predictions (AUC = 0.68 ± 0.16; p = 0.38). For feeding tube dependence, all modalities had low predictive power (AUC ≤ 0.66; p > 0.05); however, postsurgical treatment information outperformed all other modalities (AUC = 0.67 ± 0.07; p = 0.002), while having the lowest predictive value for survival (AUC = 0.57 ± 0.11; p = 0.08). Clinical data appeared to be the strongest predictor of one-year survival in surgically treated HNSCC, although overall predictive performance was moderate. Postsurgical treatment information played a key role in predicting feeding tube dependence. While multimodal integration did not enhance overall model performance, it showed modest gains for weaker individual modalities, suggesting potential complementarity that warrants further investigation.

Tang C, Jolicoeur BW, Rice J, Doctor CA, Yardim ZS, Rivera-Rivera LA, Eisenmenger LB, Johnson KM

PubMed · Dec 8 2025
To develop accelerated 3D phase contrast (PC) MRI using jointly learned wave encoding and reconstruction. Pseudo-fully sampled neurovascular 4D flow data (N = 40) and a simulation framework were used to learn phase encoding locations, wave readout parameters, and a model-based reconstruction network (MoDL) for a rapid 3D PC scan (2.25 min). Parameters were also learned for an otherwise identical scan without wave encoding. Prospective scans with and without wave sampling, a time-matched 3D radial scan, and a reference 3D radial scan (5.65 min) were conducted in a flow phantom and 12 healthy participants. Flow rate, pixel-wise velocity, and variability of maximum velocity ($\sigma_{v_{max}}$) were compared. In the phantom, learned wave scans provided accurate flow rates compared to flow probe values (0.170 ± 0.002 vs. 0.17, 0.152 ± 0.003 vs. 0.15, 1.838 ± 0.044 vs. 1.83 L/min) and showed high correlation with the reference scan (slope = 0.97, R<sup>2</sup> = 0.99). In vivo, learned wave scans demonstrated reduced aliasing and blurring, and better small-vessel conspicuity compared to scans without wave sampling and time-matched 3D radial scans. The internal carotid artery (ICA) flow rate coefficient of variation (CV) and intraclass correlation coefficient (ICC) for learned wave scans were similar to reference 3D radial scans (CV = 6.569, ICC = 0.927; reference CV = 6.553, ICC = 0.910). Learned wave sampling demonstrated similar or lower $\sigma_{v_{max}}$ in the middle cerebral artery (MCA), basilar artery (BA), superior sagittal sinus (SSS), and most ICA segments than the longer reference scan.
This work demonstrates the feasibility, improved image quality, and accurate flow measurements of learned wave sampling and MoDL reconstruction for 3D PC MRI.
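The coefficient of variation reported for ICA flow rates is a standard repeatability metric: the sample standard deviation divided by the mean, expressed as a percentage. A minimal sketch with hypothetical repeated flow-rate measurements (the values are illustrative, not data from the study):

```python
import numpy as np

def coefficient_of_variation(x):
    """CV (%) of repeated measurements: sample SD over the mean."""
    x = np.asarray(x, dtype=float)
    return 100.0 * np.std(x, ddof=1) / np.mean(x)

# hypothetical repeated ICA flow-rate measurements (mL/s)
flows = [4.0, 4.2, 3.8, 4.1, 3.9]
cv = coefficient_of_variation(flows)
```

A lower CV across repeated scans indicates better measurement repeatability, which is how the learned wave and reference radial scans are compared above.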

Bhatia H, Bhatia A, Singh A, Saini S, Sodhi KS

PubMed · Dec 8 2025
Artificial intelligence (AI) is increasingly utilized across many aspects of the radiology department. With an ever-increasing burden on the healthcare system, particularly in emergency units, the need to incorporate AI into patient triage and workflow optimization cannot be overstated. Machine learning (ML)-based algorithms form the core of AI-based software, aiding healthcare professionals at nearly every step in delivering appropriate patient care. Within radiology, AI-based algorithms have proven exceptionally useful in assisting radiologists and technicians with image acquisition. From accurate clinical referrals to scheduling computed tomography/magnetic resonance imaging scan appointments, from ensuring the lowest radiation exposure to offering timely follow-up reminders, ML-based software has revolutionized modern image acquisition, especially in pediatric radiology. Although the implementation of these algorithms is swift, several technical challenges and the limited availability of pediatric datasets preclude their widespread use. Multimodal pediatric datasets, which combine imaging, genomics, and clinical data, can help comprehensive AI triage models evolve toward greater adaptability and integration, resulting in enhanced efficiency, reduced turnaround times, and improved patient outcomes in pediatric radiology departments. In this article, we highlight and review the utility of AI and ML-based algorithms in efficiently aiding triage and streamlining workflow in the pediatric radiology section, thereby ensuring an overall improvement in departmental workflow.

van Poppel LM, de Vries L, Mojtahedi M, Kappelhof M, Olthuis SGH, van Oostenbrugge R, van Zwam WH, Jan van Doormaal P, Beenen LFM, Roos YBWEM, Majoie CBLM, Marquering HA, Emmer BJ

PubMed · Dec 8 2025
To compare machine learning models using different combinations of clinical and imaging variables for classifying ischemic stroke patients as having an onset-to-imaging (OTI) time within or beyond 4.5 h. We analyzed 993 patients with known OTI time from the MR CLEAN Registry and LATE trial. Data were split into training and test sets (80:20). We developed models using various combinations of variables to classify OTI time, including clinical-radiological information and variables automatically extracted from segmented ischemic regions on non-contrast CT, such as net water uptake (NWU), lesion volume, and radiomics features. Performance was assessed using the area under the receiver operating characteristic curve (AUC). Of 993 patients, 199 (20%) presented beyond 4.5 h. The model including only clinical-radiological scores and the one including only NWU each achieved an AUC of 0.65. Performance was higher for models that included NWU combined with lesion volume or clinical-radiological scores (AUCs ranging from 0.70 to 0.75). Radiomics-based models achieved the highest performance with AUCs of 0.81, significantly outperforming NWU-based models. Key predictors for identifying patients beyond 4.5 h included homogeneous lesion textures in both core and hypoperfused areas, smaller hypoperfused area volumes, higher core NWU, and lower baseline NIHSS scores. We found that radiomics-based models outperform models including NWU measurements for classifying stroke OTI time in this endovascular therapy population. The superior performance suggests that texture, shape, and intensity patterns of ischemic lesions may capture more information about lesion age than single metrics like NWU. External validation in broader stroke populations is needed to establish clinical utility.
Question: Which combinations of clinical and CT-derived variables enable the most accurate classification of stroke onset time within versus beyond 4.5 h using machine learning?
Findings: Models using radiomics features achieved superior accuracy (AUC 0.81) compared to models using net water uptake measurements (AUC 0.65) for onset time classification.
Clinical relevance: Automated CT-based radiomics models enable accurate stroke onset time classification without advanced imaging, potentially expanding treatment options for patients with unknown symptom onset times in centers lacking MRI capabilities.
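Net water uptake (NWU) is commonly quantified as the relative hypodensity of the ischemic region compared with a mirrored contralateral region on non-contrast CT. A minimal sketch of that formula with hypothetical Hounsfield-unit values (the function name and numbers are illustrative, not data or code from the study):

```python
import numpy as np

def net_water_uptake(ischemic_hu, normal_hu):
    """NWU (%) from mean CT densities of the ischemic lesion and its
    mirrored contralateral region: relative hypodensity of the lesion
    is interpreted as water uptake, which grows with lesion age."""
    return (1.0 - np.mean(ischemic_hu) / np.mean(normal_hu)) * 100.0

lesion = np.array([28.0, 30.0, 29.0])   # hypothetical lesion HU values
mirror = np.array([36.0, 37.0, 35.0])   # hypothetical contralateral HU values
nwu = net_water_uptake(lesion, mirror)
```

Because NWU reduces each lesion to a single scalar, it discards the texture and shape information that the radiomics models above exploit, which is one plausible reading of their performance gap.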

Langkilde F, Gren M, Wallström J, Kuczera S, Maier SE

PubMed · Dec 8 2025
The goal of this study was to curate a prostate MRI dataset from a screening population and to train and evaluate a deep-learning segmentation method on the same data. An artificial intelligence (AI) system, based on a deep-learning segmentation model (the nnU-Net method), was trained and evaluated with MRI data from a prostate cancer screening population (G2-trial). The goal of the AI was to detect clinically significant prostate cancer (csPC), defined as International Society of Urological Pathology (ISUP) grade 2 or higher. The AI system was compared to the performance of radiologists using PI-RADS v2 evaluation metrics. Histopathology was used as the reference standard in the dataset. To better verify negative cases, 288 men were subjected to systematic biopsies regardless of MRI findings, and all men had at least 3 years of follow-up. A total of 1354 MRI examinations in 1254 men with a median age of 58 years (range 50-63 years) were randomly divided into a training set (1086 examinations) and a test set (268 examinations). The resulting area under the receiver operating characteristic curve (AUROC) was 0.83 (95% CI 0.73-0.92) for the AI system, which, however, showed significantly lower specificity at matched sensitivity levels compared to radiologists. A prostate MRI dataset from a screening population with histological confirmation was curated and evaluated with AI. The neural network trained and tested on this data produced lower specificities than the radiologists.
Question: Does an AI system trained in a screening cohort perform as well as radiologists?
Findings: An AI trained on screening data achieved an AUROC of 0.83 (95% CI 0.73-0.92) with lower specificity at the same sensitivity levels as radiologists.
Clinical relevance: An AI system trained in a screening population has lower specificity than radiologists using PI-RADS v2.
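The AUROC reported above can be read through the Mann-Whitney identity: AUROC equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one, with ties counting 0.5. A minimal numpy sketch with hypothetical suspicion scores (not the study's data):

```python
import numpy as np

def auroc(scores_pos, scores_neg):
    """Mann-Whitney form of AUROC: probability a random positive case
    scores above a random negative case (ties count as 0.5)."""
    s_pos = np.asarray(scores_pos, dtype=float)[:, None]
    s_neg = np.asarray(scores_neg, dtype=float)[None, :]
    return float(np.mean((s_pos > s_neg) + 0.5 * (s_pos == s_neg)))

# hypothetical AI suspicion scores for csPC-positive vs. negative exams
pos = [0.9, 0.8, 0.7, 0.6]
neg = [0.5, 0.65, 0.3]
auc = auroc(pos, neg)
```

This ranking-based view also explains why two systems can match on AUROC yet differ in specificity at a fixed sensitivity, as the AI and radiologists do here: AUROC summarizes the whole curve, not any single operating point.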

Lee J, Jang DH, Jeon YJ, Kim YJ, Ahn H, Choi WS, Kang BK, Yoon YE, Lee DK, Oh J

PubMed · Dec 8 2025
Urinary stones are among the most common emergency conditions; as a stone traverses the ureter, urine flow is obstructed, resulting in hydronephrosis and severe pain. However, vessel wall calcifications and phleboliths are frequently observed in the abdominal and pelvic regions, and distinguishing them from urinary stones can be challenging. This study was performed to implement deep learning techniques, specifically the UROAID (UROlithiasis AssIsted Diagnosis system) model, to detect urinary stones within the urinary tract. Noncontrast abdominopelvic computed tomography (CT) scans performed on adult patients at the emergency departments of two tertiary academic hospitals were collected. The ROI Extraction and KUB Segmentation algorithms were modified versions of Uro-UNETR. The 3D labelling map and 3D stone classification were individual outputs that were then merged with the results from the Urinary System Estimation module in the UROAID detection module. In total, CT scans of 6659 patients were included in the study. An accuracy of 0.9585 and an F1 score of 0.9605 were achieved using an ensemble model combined with a stone classification module that we also proposed to further improve performance. The detection rates of UROAID by stone location were 99.0% for stones in the kidney, 99.1% for the proximal ureter, 98.0% for the middle ureter, 96.4% for the distal ureter, and 91.3% for the urinary bladder. This study designed UROAID, an ensemble model of a segmentation-based stone detection module and a stone classification module, to follow the process by which a radiologist accurately diagnoses urinary stones.

Zhao D, Kong X, Yang K, Wan J, Liu Z, Pan F, Sun P, Zheng C, Yang L

PubMed · Dec 8 2025
To investigate the impact of deep learning reconstruction (DLR) on the image quality of diffusion-weighted imaging (DWI) of the liver and its ability to differentiate benign from malignant focal liver lesions (FLLs). Consecutive patients with suspected liver disease who underwent liver MRI between January and May 2025 were included. All patients received conventional DWI (DWI<sub>C</sub>) and an accelerated DWI with deep learning reconstruction (DWI<sub>DLR</sub>), in which acquisition time was prospectively halved by reducing signal averages. Image quality was compared qualitatively using Likert scores (e.g., lesion conspicuity, overall quality) and quantitatively by measuring the signal-to-noise ratio of the liver (SNR<sub>Liver</sub>) and lesion (SNR<sub>Lesion</sub>), contrast-to-noise ratio (CNR), and edge rise distance (ERD). Apparent diffusion coefficient (ADC) values and diagnostic performance for differentiating benign from malignant FLLs were assessed. A total of 193 patients (128 males, 65 females; age range, 23-81 years) were included. For quantitative assessment, DWI<sub>DLR</sub> demonstrated higher SNR<sub>Liver</sub>, SNR<sub>Lesion</sub>, and CNR, and a shorter ERD (all p < 0.05). For qualitative assessment, DWI<sub>DLR</sub> showed improved lesion conspicuity, liver edge sharpness, and overall image quality (all p < 0.01), with no significant difference in artifacts (p = 0.08). ADC values were lower with DWI<sub>DLR</sub> for both benign and malignant FLLs (p < 0.001). In differentiating benign from malignant lesions, DWI<sub>DLR</sub> achieved better diagnostic performance (AUC: 0.921 vs. 0.904, p < 0.05). Deep learning-enhanced DWI enables a 50% reduction in acquisition time while simultaneously improving liver MRI image quality and diagnostic performance in differentiating benign from malignant FLLs.
This study demonstrates that deep learning-based reconstruction enables faster, higher-quality liver MRI with improved diagnostic accuracy for focal liver lesions, supporting its integration into routine radiological practice. Diffusion-weighted liver MRI commonly suffers from limited image quality and efficiency. Deep learning reconstruction substantially improves liver MRI quality while enabling significantly shorter acquisition times. Improved lesion differentiation enables more accurate clinical diagnosis of liver lesions.
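The SNR and CNR metrics used in the quantitative comparison are conventionally computed from ROI statistics: ROI signal (or the lesion-to-liver signal difference) divided by the noise standard deviation. Exact ROI placement and noise estimation vary between studies, so this is a generic sketch with hypothetical values rather than the paper's specific measurement protocol:

```python
def snr(roi_mean, noise_sd):
    """One common SNR definition: mean ROI signal over background noise SD."""
    return roi_mean / noise_sd

def cnr(lesion_mean, liver_mean, noise_sd):
    """CNR: absolute lesion-to-liver signal difference over noise SD."""
    return abs(lesion_mean - liver_mean) / noise_sd

# hypothetical ROI measurements from a DWI image
snr_liver = snr(200.0, 10.0)
cnr_lesion = cnr(300.0, 200.0, 10.0)
```

Halving signal averages normally lowers SNR by roughly a factor of sqrt(2), which is why the reported simultaneous gain in SNR and CNR under DLR is the study's central quantitative claim.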
