
The Role of Machine Learning to Detect Occult Neck Lymph Node Metastases in Early-Stage (T1-T2/N0) Oral Cavity Carcinomas.

Troise S, Ugga L, Esposito M, Positano M, Elefante A, Capasso S, Cuocolo R, Merola R, Committeri U, Abbate V, Bonavolontà P, Nocini R, Dell'Aversana Orabona G

PubMed · May 19, 2025
Oral cavity carcinomas (OCCs) represent roughly 50% of all head and neck cancers. The risk of occult neck metastases for early-stage OCCs ranges from 15% to 35%, hence the need to develop tools that can support the detection of these neck metastases. Machine learning and radiomic features are emerging as effective tools in this field. Thus, the aim of this study is to demonstrate the effectiveness of radiomic features to predict the risk of occult neck metastases in early-stage (T1-T2/N0) OCCs. Retrospective study. A single-institution analysis (Maxillo-facial Surgery Unit, University of Naples Federico II). A retrospective analysis was conducted on 75 patients surgically treated for early-stage OCC. For all patients, data regarding TNM staging, in particular pN status after histopathological examination, were obtained, and radiomic features were extracted from MRI. Fifty-six patients were confirmed pN0 after surgery, while 19 were pN+. The radiomic features, analysed by a machine-learning algorithm, preoperatively discriminated occult neck metastases with a sensitivity of 78%, specificity of 83%, an AUC of 86%, accuracy of 80%, and a positive predictive value (PPV) of 63%. Our results seem to confirm that radiomic features, analysed with machine learning methods, are effective tools in detecting occult neck metastases in early-stage OCCs. The clinical relevance of this study is that radiomics could be used routinely as a preoperative tool to support diagnosis and to help surgeons in the surgical decision-making process, particularly regarding surgical indications for neck lymph node treatment.
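As a sanity check on how the reported rates fit together, the sketch below recomputes sensitivity, specificity, PPV, and accuracy from confusion counts. The counts are hypothetical, chosen only to be consistent with the cohort sizes (19 pN+, 56 pN0) and to roughly reproduce the reported percentages; the paper does not publish a raw confusion matrix.

```python
def binary_metrics(tp, fp, tn, fn):
    """Screening metrics of the kind reported in the study, from confusion counts."""
    sensitivity = tp / (tp + fn)                    # detected pN+ among true pN+
    specificity = tn / (tn + fp)                    # correct calls among true pN0
    ppv = tp / (tp + fp)                            # precision of a pN+ prediction
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, ppv, accuracy

# Hypothetical counts: 19 true pN+ necks (15 caught, 4 missed) and
# 56 true pN0 necks (47 correct, 9 false alarms).
sens, spec, ppv, acc = binary_metrics(tp=15, fp=9, tn=47, fn=4)
```

With these counts, sensitivity is 15/19 (~79%) and PPV is 15/24 (~63%), in line with the abstract's figures.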

Expert-Like Reparameterization of Heterogeneous Pyramid Receptive Fields in Efficient CNNs for Fair Medical Image Classification

Xiao Wu, Xiaoqing Zhang, Zunjie Xiao, Lingxi Hu, Risa Higashita, Jiang Liu

arXiv preprint · May 19, 2025
Efficient convolutional neural network (CNN) architecture designs have attracted growing research interest. However, they usually apply a single receptive field (RF), small asymmetric RFs, or pyramid RFs to learn feature representations, and still encounter two significant challenges in medical image classification tasks: 1) they have limitations in efficiently capturing diverse lesion characteristics (e.g., tiny, coordinated, small, and salient), each of which plays a unique role in the results, especially in imbalanced medical image classification; 2) the predictions generated by such CNNs are often unfair/biased, posing a high risk when they are deployed in real-world medical diagnosis settings. To tackle these issues, we develop a new concept, Expert-Like Reparameterization of Heterogeneous Pyramid Receptive Fields (ERoHPRF), to simultaneously boost medical image classification performance and fairness. This concept aims to mimic the multi-expert consultation mode by applying well-designed heterogeneous pyramid RF bags to capture different lesion characteristics effectively via convolution operations with multiple heterogeneous kernel sizes. Additionally, ERoHPRF introduces an expert-like structural reparameterization technique that merges its parameters with a two-stage strategy, ensuring computation cost and inference speed competitive with a single RF. To demonstrate the effectiveness and generalization ability of ERoHPRF, we incorporate it into mainstream efficient CNN architectures. Extensive experiments show that our method maintains a better trade-off than state-of-the-art methods in terms of medical image classification, fairness, and computation overhead. The code for this paper will be released soon.
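The reparameterization idea rests on the linearity of convolution: parallel branches with different kernel sizes produce the same output as one branch whose kernel is the sum of the centre-zero-padded kernels. A minimal single-channel NumPy sketch of the principle (not the authors' two-stage, multi-channel implementation):

```python
import numpy as np

def pad_kernel(k, size):
    """Zero-pad a smaller square kernel to (size, size), centred."""
    kh, kw = k.shape
    out = np.zeros((size, size), dtype=k.dtype)
    top, left = (size - kh) // 2, (size - kw) // 2
    out[top:top + kh, left:left + kw] = k
    return out

def xcorr_valid(img, k):
    """Plain 'valid' 2D cross-correlation."""
    kh, kw = k.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * k).sum()
    return out

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))
k5, k3, k1 = (rng.standard_normal((n, n)) for n in (5, 3, 1))

# Multi-branch output: each "expert" RF sees the image with matching padding
# so all branches produce 8x8 maps.
multi = (xcorr_valid(np.pad(img, 2), k5)
         + xcorr_valid(np.pad(img, 1), k3)
         + xcorr_valid(img, k1))

# Reparameterized single-RF output: centre-pad every kernel to 5x5 and sum.
merged = pad_kernel(k5, 5) + pad_kernel(k3, 5) + pad_kernel(k1, 5)
single = xcorr_valid(np.pad(img, 2), merged)

equivalent = np.allclose(multi, single)   # the merged kernel reproduces all branches
```

At inference time only the merged kernel is kept, which is why the cost stays that of a single RF.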

Longitudinal Validation of a Deep Learning Index for Aortic Stenosis Progression

Park, J., Kim, J., Yoon, Y. E., Jeon, J., Lee, S.-A., Choi, H.-M., Hwang, I.-C., Cho, G.-Y., Chang, H.-J., Park, J.-H.

medRxiv preprint · May 19, 2025
Aims: Aortic stenosis (AS) is a progressive disease requiring timely monitoring and intervention. While transthoracic echocardiography (TTE) remains the diagnostic standard, deep learning (DL)-based approaches offer potential for improved disease tracking. This study examined the longitudinal changes in a previously developed DL-derived index for the AS continuum (DLi-ASc) and assessed its value in predicting progression to severe AS. Methods and Results: We retrospectively analysed 2,373 patients (7,371 TTEs) from two tertiary hospitals. DLi-ASc (scaled 0-100), derived from parasternal long- and/or short-axis views, was tracked longitudinally. DLi-ASc increased in parallel with worsening AS stages (p for trend <0.001) and showed strong correlations with AV maximal velocity (Vmax) (Pearson correlation coefficient [PCC] = 0.69, p<0.001) and mean pressure gradient (mPG) (PCC = 0.66, p<0.001). Higher baseline DLi-ASc was associated with a faster AS progression rate (p for trend <0.001). Additionally, the annualised change in DLi-ASc, estimated using linear mixed-effect models, correlated strongly with the annualised progression of AV Vmax (PCC = 0.71, p<0.001) and mPG (PCC = 0.68, p<0.001). In Fine-Gray competing risk models, baseline DLi-ASc independently predicted progression to severe AS, even after adjustment for AV Vmax or mPG (hazard ratio per 10-point increase = 2.38 and 2.80, respectively). Conclusion: DLi-ASc increased in parallel with AS progression and independently predicted severe AS progression. These findings support its role as a non-invasive imaging-based digital marker for longitudinal AS monitoring and risk stratification.
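The annualised change in the abstract comes from linear mixed-effects models fitted across all patients; the single-patient analogue is an ordinary least-squares slope of the index over follow-up time. A sketch with hypothetical data:

```python
import numpy as np

# Hypothetical follow-up for one patient: years since baseline and the
# DLi-ASc value (0-100 scale) measured at each TTE.
years = np.array([0.0, 1.1, 2.0, 3.2])
dli_asc = np.array([42.0, 47.5, 52.0, 58.5])

# Annualised change as the least-squares slope (points per year). The study
# pooled all patients with mixed-effects models; this is the per-subject analogue.
slope, intercept = np.polyfit(years, dli_asc, 1)
```

For this patient the index rises by roughly five points per year, the kind of per-patient trajectory the mixed-effects model summarises across the cohort.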

Non-invasive CT based multiregional radiomics for predicting pathologic complete response to preoperative neoadjuvant chemoimmunotherapy in non-small cell lung cancer.

Fan S, Xie J, Zheng S, Wang J, Zhang B, Zhang Z, Wang S, Cui Y, Liu J, Zheng X, Ye Z, Cui X, Yue D

PubMed · May 19, 2025
This study aims to develop and validate a multiregional radiomics model to predict pathological complete response (pCR) to neoadjuvant chemoimmunotherapy in non-small cell lung cancer (NSCLC), and further evaluate the performance of the model in specific subgroups (N2 stage and anti-PD-1/PD-L1). 216 patients with NSCLC who underwent neoadjuvant chemoimmunotherapy followed by surgical intervention were included and randomly assigned to training and validation sets. From pre-treatment baseline CT, one intratumoral (T) and two peritumoral regions (P<sub>3</sub>: 0-3 mm; P<sub>6</sub>: 0-6 mm) were extracted. Five radiomics models were developed using machine learning algorithms to predict pCR, utilizing selected features from intratumoral (T), peritumoral (P<sub>3</sub>, P<sub>6</sub>), and combined intra- and peritumoral regions (T + P<sub>3</sub>, T + P<sub>6</sub>). Additionally, the predictive efficacy of the optimal model was specifically assessed for patients in the N2 stage and anti-PD-1/PD-L1 subgroups. A total of 51.4% (111/216) of patients exhibited pCR following neoadjuvant chemoimmunotherapy. Multivariable analysis identified that only the T + P<sub>3</sub> radiomics signature served as an independent predictor of pCR (P < 0.001). The multiregional radiomics model (T + P<sub>3</sub>) exhibited superior predictive performance for pCR, achieving an area under the curve (AUC) of 0.75 in the validation cohort. Furthermore, this multiregional model maintained robust predictive accuracy in both the N2 stage and anti-PD-1/PD-L1 subgroups, with AUCs of 0.829 and 0.833, respectively. The proposed multiregional radiomics model showed potential in predicting pCR in NSCLC after neoadjuvant chemoimmunotherapy and demonstrated good predictive performance in specific subgroups. This capability may assist clinicians in identifying suitable candidates for neoadjuvant chemoimmunotherapy and promote the advancement of precision therapy.
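Peritumoral regions such as P3 and P6 are typically obtained by morphologically dilating the tumour mask and subtracting the tumour itself. A toy 2D sketch, assuming 1 mm isotropic pixels and 4-connected dilation (the study's actual region definitions and spacing handling may differ):

```python
import numpy as np

def dilate(mask, iters):
    """4-connected binary dilation via shifts. Assumes the mask stays away
    from the image border, so np.roll wrap-around is harmless."""
    m = mask.copy()
    for _ in range(iters):
        m = (m | np.roll(m, 1, 0) | np.roll(m, -1, 0)
               | np.roll(m, 1, 1) | np.roll(m, -1, 1))
    return m

# Toy 2D slice with a square "tumour"; assume 1 mm isotropic pixels.
tumour = np.zeros((32, 32), dtype=bool)
tumour[12:20, 12:20] = True

p3 = dilate(tumour, 3) & ~tumour    # 0-3 mm peritumoral ring
p6 = dilate(tumour, 6) & ~tumour    # 0-6 mm peritumoral ring
t_p3 = tumour | p3                  # combined intra- + peritumoral region (T + P3)
```

With real data the same idea applies in 3D, usually with a spacing-aware dilation (e.g., thresholding a distance transform expressed in millimetres).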

An overview of artificial intelligence and machine learning in shoulder surgery.

Cho SH, Kim YS

PubMed · May 19, 2025
Machine learning (ML), a subset of artificial intelligence (AI), utilizes advanced algorithms to learn patterns from data, enabling accurate predictions and decision-making without explicit programming. In orthopedic surgery, ML is transforming clinical practice, particularly in shoulder arthroplasty and rotator cuff tear (RCT) management. This review explores the fundamental paradigms of ML, including supervised, unsupervised, and reinforcement learning, alongside key algorithms such as XGBoost, neural networks, and generative adversarial networks. In shoulder arthroplasty, ML accurately predicts postoperative outcomes, complications, and implant selection, facilitating personalized surgical planning and cost optimization. Predictive models, including ensemble learning methods, achieve over 90% accuracy in forecasting complications, while neural networks enhance surgical precision through AI-assisted navigation. In RCT treatment, ML enhances diagnostic accuracy using deep learning models on magnetic resonance imaging and ultrasound, achieving area under the curve values exceeding 0.90. ML models also predict tear reparability with 85% accuracy and postoperative functional outcomes, including range of motion and patient-reported outcomes. Despite remarkable advancements, challenges such as data variability, model interpretability, and integration into clinical workflows persist. Future directions involve federated learning for robust model generalization and explainable AI to enhance transparency. ML continues to revolutionize orthopedic care by providing data-driven, personalized treatment strategies and optimizing surgical outcomes.

Feasibility study of a general model for synthetic CT generation in MRI-guided extracranial radiotherapy.

Hsu SH, Han Z, Hu YH, Ferguson D, van Dams R, Mak RH, Leeman JE, Sudhyadhom A

PubMed · May 19, 2025
This study aims to investigate the feasibility of a single general model to synthesize CT images across body sites, thorax, abdomen, and pelvis, to support treatment planning for MRI-only radiotherapy. A total of 157 patients who received MRI-guided radiation therapy in the thorax, abdomen, and pelvis on a 0.35T MRIdian Linac were included. A subset of 122 cases was used for model training and the remaining 35 cases were used for model validation. All patient datasets included a semi-paired CT-simulation image and a 0.35T MR image acquired using TrueFISP. A conditional generative adversarial network with a multi-planar method was used to generate synthetic CT images from 0.35T MR images. The effect of preprocessing methods (with and without bias field corrections) on the quality of synthetic CT was evaluated and found to be insignificant. The general models trained on all cases performed comparably to the site-specific models trained on individual body sites. For all models, the peak signal-to-noise ratios ranged from 31.7 to 34.9 and the structural similarity index measures ranged from 0.9547 to 0.9758. For the datasets with bias field corrections, the mean absolute errors in HU (general model versus site-specific model) were 49.7 ± 9.4 versus 49.5 ± 8.9, 48.7 ± 7.6 versus 43.0 ± 7.8, and 32.8 ± 5.5 versus 31.8 ± 5.3 for the thorax, abdomen, and pelvis, respectively. When comparing plans between synthetic CTs and ground truth CTs, the dosimetric difference was on average less than 0.5% (0.2 Gy) for target coverage and less than 2.1% (0.4 Gy) for organ-at-risk metrics for all body sites with either the general or site-specific models. Synthetic CT plans showed good agreement with mean gamma pass rates of >94% and >99% for 1%/1 mm and 2%/2 mm, respectively. This study has demonstrated the feasibility of using a general model for multiple body sites and the potential of using synthetic CT to support an MRI-guided radiotherapy workflow.
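Image-similarity figures of the kind reported here (mean absolute error in HU, PSNR) can be computed directly from paired volumes. A sketch with simulated data; the HU dynamic range used to normalise PSNR is an assumption, not the paper's normalisation:

```python
import numpy as np

rng = np.random.default_rng(1)
ct = rng.uniform(-1000, 1500, size=(64, 64))     # stand-in ground-truth CT (HU)
sct = ct + rng.normal(0.0, 30.0, size=ct.shape)  # stand-in synthetic CT

mae_hu = np.abs(sct - ct).mean()                 # mean absolute error in HU

# PSNR requires a reference dynamic range; 2500 HU (the span of this toy data)
# is assumed here.
data_range = 2500.0
mse = np.mean((sct - ct) ** 2)
psnr = 10.0 * np.log10(data_range**2 / mse)
```

With ~30 HU of added noise, MAE lands near 24 HU and PSNR near 38 dB on this toy data, on the same order as the paper's reported ranges.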

Accuracy of segment anything model for classification of vascular stenosis in digital subtraction angiography.

Navasardyan V, Katz M, Goertz L, Zohranyan V, Navasardyan H, Shahzadi I, Kröger JR, Borggrefe J

PubMed · May 19, 2025
This retrospective study evaluates the diagnostic performance of an optimized comprehensive multi-stage framework based on the Segment Anything Model (SAM), which we named Dr-SAM, for detecting and grading vascular stenosis in the abdominal aorta and iliac arteries using digital subtraction angiography (DSA). A total of 100 DSA examinations were conducted on 100 patients. The infrarenal abdominal aorta (AAI), common iliac arteries (CIA), and external iliac arteries (EIA) were independently evaluated by two experienced radiologists using a standardized 5-point grading scale. Dr-SAM analyzed the same DSA images, and its assessments were compared with the average stenosis grading provided by the radiologists. Diagnostic accuracy was evaluated using Cohen's kappa, specificity, sensitivity, and Wilcoxon signed-rank tests. Interobserver agreement between radiologists, which established the reference standard, was strong (Cohen's kappa: CIA right = 0.95, CIA left = 0.94, EIA right = 0.98, EIA left = 0.98, AAI = 0.79). Dr-SAM showed high agreement with radiologist consensus for CIA (κ = 0.93 right, 0.91 left), moderate agreement for EIA (κ = 0.79 right, 0.76 left), and fair agreement for AAI (κ = 0.70). Dr-SAM demonstrated excellent specificity (up to 1.0) and robust sensitivity (0.67-0.83). Wilcoxon tests revealed no significant differences between Dr-SAM and radiologist grading (p > 0.05). Dr-SAM proved to be an accurate and efficient tool for vascular assessment, with the potential to streamline diagnostic workflows and reduce variability in stenosis grading. Its ability to deliver rapid and consistent evaluations may contribute to earlier detection of disease and the optimization of treatment strategies. Further studies are needed to confirm these findings in prospective settings and to enhance its capabilities, particularly in the detection of occlusions.
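Cohen's kappa, the agreement statistic used throughout this study, compares observed agreement against the agreement expected by chance from the raters' marginal grade distributions. A self-contained sketch with hypothetical 5-point stenosis grades:

```python
import numpy as np

def cohen_kappa(a, b, n_classes):
    """Unweighted Cohen's kappa between two raters' categorical grades."""
    cm = np.zeros((n_classes, n_classes))
    for i, j in zip(a, b):
        cm[i, j] += 1                         # joint grade frequencies
    cm /= cm.sum()
    po = np.trace(cm)                         # observed agreement
    pe = cm.sum(axis=1) @ cm.sum(axis=0)      # chance agreement from marginals
    return (po - pe) / (1 - pe)

# Hypothetical 5-point grades (0-4) from a radiologist and the Dr-SAM framework:
rad =    [0, 0, 1, 2, 2, 3, 4, 1, 0, 2]
dr_sam = [0, 0, 1, 2, 3, 3, 4, 1, 0, 2]
kappa = cohen_kappa(rad, dr_sam, n_classes=5)
```

Here nine of ten grades match, giving a kappa around 0.87, i.e. in the "strong agreement" band the study reports for the common iliac arteries.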

Learning Wavelet-Sparse FDK for 3D Cone-Beam CT Reconstruction

Yipeng Sun, Linda-Sophie Schneider, Chengze Ye, Mingxuan Gu, Siyuan Mei, Siming Bayer, Andreas Maier

arXiv preprint · May 19, 2025
Cone-Beam Computed Tomography (CBCT) is essential in medical imaging, and the Feldkamp-Davis-Kress (FDK) algorithm is a popular choice for reconstruction due to its efficiency. However, FDK is susceptible to noise and artifacts. While recent deep learning methods offer improved image quality, they often increase computational complexity and lack the interpretability of traditional methods. In this paper, we introduce an enhanced FDK-based neural network that maintains the classical algorithm's interpretability by selectively integrating trainable elements into the cosine weighting and filtering stages. Recognizing the challenge of a large parameter space inherent in 3D CBCT data, we leverage wavelet transformations to create sparse representations of the cosine weights and filters. This strategic sparsification reduces the parameter count by $93.75\%$ without compromising performance, accelerates convergence, and importantly, maintains the inference computational cost equivalent to the classical FDK algorithm. Our method not only ensures volumetric consistency and boosts robustness to noise, but is also designed for straightforward integration into existing CT reconstruction pipelines. This presents a pragmatic enhancement that can benefit clinical applications, particularly in environments with computational limitations.
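Two ingredients of the method can be sketched in a few lines: the classical FDK cosine pre-weighting that the trainable weights are initialised from, and the wavelet sparsification that discards most coefficients of such smooth functions with negligible reconstruction error. A Haar-based NumPy sketch; the paper's wavelet family, threshold, and scanner geometry are not specified here, so all values are illustrative:

```python
import numpy as np

def haar_fwd(x):
    """Multi-level 1D Haar DWT (length must be a power of two)."""
    out = []
    while x.size > 1:
        out.append((x[0::2] - x[1::2]) / np.sqrt(2))   # detail coefficients
        x = (x[0::2] + x[1::2]) / np.sqrt(2)           # approximation
    out.append(x)
    return out

def haar_inv(coeffs):
    x = coeffs[-1]
    for d in reversed(coeffs[:-1]):
        a, b = (x + d) / np.sqrt(2), (x - d) / np.sqrt(2)
        x = np.empty(2 * x.size)
        x[0::2], x[1::2] = a, b
    return x

# Classical FDK cosine weights for one detector row: D / sqrt(D^2 + u^2),
# with D the source-to-detector distance (illustrative geometry).
D = 1000.0
u = np.linspace(-200, 200, 256)
w = D / np.sqrt(D**2 + u**2)

coeffs = haar_fwd(w)
flat = np.concatenate(coeffs)
keep = np.abs(flat) >= 1e-4 * np.abs(flat).max()   # drop tiny coefficients
sparsity = 1 - keep.mean()                         # fraction of parameters removed

masks = np.split(keep, np.cumsum([c.size for c in coeffs])[:-1])
rec = haar_inv([c * m for c, m in zip(coeffs, masks)])
max_err = np.abs(rec - w).max()                    # reconstruction error after pruning
```

Because the cosine weights vary smoothly across the detector, well over half of the Haar coefficients fall below threshold while the pruned reconstruction stays within a small fraction of a percent of the exact weights, which is the intuition behind the paper's 93.75% parameter reduction.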

A Skull-Adaptive Framework for AI-Based 3D Transcranial Focused Ultrasound Simulation

Vinkle Srivastav, Juliette Puel, Jonathan Vappou, Elijah Van Houten, Paolo Cabras, Nicolas Padoy

arXiv preprint · May 19, 2025
Transcranial focused ultrasound (tFUS) is an emerging modality for non-invasive brain stimulation and therapeutic intervention, offering millimeter-scale spatial precision and the ability to target deep brain structures. However, the heterogeneous and anisotropic nature of the human skull introduces significant distortions to the propagating ultrasound wavefront, which require time-consuming patient-specific planning and corrections using numerical solvers for accurate targeting. To enable data-driven approaches in this domain, we introduce TFUScapes, the first large-scale, high-resolution dataset of tFUS simulations through anatomically realistic human skulls derived from T1-weighted MRI images. We have developed a scalable simulation engine pipeline using the k-Wave pseudo-spectral solver, where each simulation returns a steady-state pressure field generated by a focused ultrasound transducer placed at realistic scalp locations. In addition to the dataset, we present DeepTFUS, a deep learning model that estimates normalized pressure fields directly from input 3D CT volumes and transducer position. The model extends a U-Net backbone with transducer-aware conditioning, incorporating Fourier-encoded position embeddings and MLP layers to create global transducer embeddings. These embeddings are fused with U-Net encoder features via feature-wise modulation, dynamic convolutions, and cross-attention mechanisms. The model is trained using a combination of spatially weighted and gradient-sensitive loss functions, enabling it to approximate high-fidelity wavefields. The TFUScapes dataset is publicly released to accelerate research at the intersection of computational acoustics, neurotechnology, and deep learning. The project page is available at https://github.com/CAMMA-public/TFUScapes.
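Fourier-encoded position embeddings map a low-dimensional transducer position to sin/cos features at multiple frequencies, letting an MLP resolve fine positional differences. A NeRF-style sketch; the paper's exact frequency schedule and coordinate normalisation are assumptions here:

```python
import numpy as np

def fourier_encode(pos, n_freqs=6):
    """Map a 3D position to sin/cos features at geometrically spaced
    frequencies (NeRF-style positional encoding)."""
    pos = np.asarray(pos, dtype=float)
    freqs = 2.0 ** np.arange(n_freqs) * np.pi      # pi, 2pi, 4pi, ...
    angles = pos[:, None] * freqs[None, :]         # shape (3, n_freqs)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=1).ravel()

# Hypothetical transducer position in normalised scalp coordinates:
emb = fourier_encode([0.1, -0.4, 0.7])             # 3 dims * 2 * 6 freqs = 36 features
```

The resulting 36-dimensional vector would then feed the MLP that produces the global transducer embedding fused into the U-Net encoder.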

New approaches to lesion assessment in multiple sclerosis.

Preziosa P, Filippi M, Rocca MA

PubMed · May 19, 2025
To summarize recent advancements in artificial intelligence-driven lesion segmentation and novel neuroimaging modalities that enhance the identification and characterization of multiple sclerosis (MS) lesions, emphasizing their implications for clinical use and research. Artificial intelligence, particularly deep learning approaches, is revolutionizing MS lesion assessment and segmentation, improving accuracy, reproducibility, and efficiency. Artificial intelligence-based tools now enable automated detection not only of T2-hyperintense white matter lesions, but also of specific lesion subtypes, including gadolinium-enhancing, central vein sign-positive, paramagnetic rim, cortical, and spinal cord lesions, which hold diagnostic and prognostic value. Novel neuroimaging techniques such as quantitative susceptibility mapping (QSM), χ-separation imaging, and soma and neurite density imaging (SANDI), together with PET, are providing deeper insights into lesion pathology, better disentangling their heterogeneity and clinical relevance. Artificial intelligence-powered lesion segmentation tools hold great potential for fast, accurate, and reproducible lesion assessment in the clinical scenario, thus improving MS diagnosis, monitoring, and treatment response assessment. Emerging neuroimaging modalities may help advance the understanding of MS pathophysiology, provide more specific markers of disease progression, and reveal novel potential therapeutic targets.
