Page 185 of 320 · 3198 results

Interactive prototype learning and self-learning for few-shot medical image segmentation.

Song Y, Xu C, Wang B, Du X, Chen J, Zhang Y, Li S

PubMed · Jun 18, 2025
Few-shot learning alleviates the heavy dependence of medical image segmentation on large-scale labeled data, but it still shows a marked performance gap on new tasks compared with traditional deep learning. Existing methods mainly learn class knowledge from a few known (support) samples and extend it to unknown (query) samples. However, large distribution differences between the support and query images cause serious deviations when class knowledge is transferred, which can be summarized as two segmentation challenges: intra-class inconsistency with inter-class similarity, and blurred, confused boundaries. In this paper, we propose a new interactive prototype learning and self-learning network to address these challenges. First, we propose a deep encoding-decoding module that learns high-level features of the support and query images to build peak prototypes carrying the richest semantic information, providing semantic guidance for segmentation. Then, we propose an interactive prototype learning module that improves intra-class feature consistency and reduces inter-class feature similarity by performing mean-prototype interaction on mid-level features and peak-prototype interaction on high-level features. Last, we propose a query-feature-guided self-learning module that separates foreground and background at the feature level and combines low-level feature maps to complement boundary information. Our model achieves competitive segmentation performance on benchmark datasets and shows substantial improvement in generalization ability.
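Prototype mechanisms like the one described here are commonly built on masked average pooling over support features followed by per-pixel cosine similarity on the query. A minimal numpy sketch (not the authors' implementation; the function names and toy feature map are invented for illustration):

```python
import numpy as np

def masked_average_prototype(features, mask):
    """Masked average pooling: mean of the feature vectors under the mask.

    features: (H, W, C) feature map; mask: (H, W) binary foreground mask.
    Returns a (C,) class prototype.
    """
    weights = mask[..., None]                      # (H, W, 1)
    return (features * weights).sum(axis=(0, 1)) / max(weights.sum(), 1e-8)

def cosine_similarity_map(features, prototype):
    """Per-pixel cosine similarity between query features and a prototype."""
    f = features / (np.linalg.norm(features, axis=-1, keepdims=True) + 1e-8)
    p = prototype / (np.linalg.norm(prototype) + 1e-8)
    return f @ p                                   # (H, W) similarity map

# Toy example: a 4x4 feature map with 2 channels
feats = np.zeros((4, 4, 2))
feats[:2, :, 0] = 1.0   # "foreground" pixels point along channel 0
feats[2:, :, 1] = 1.0   # "background" pixels point along channel 1
support_mask = np.zeros((4, 4))
support_mask[:2, :] = 1

proto = masked_average_prototype(feats, support_mask)
sim = cosine_similarity_map(feats, proto)
pred = (sim > 0.5).astype(int)   # threshold similarity into a segmentation
```

In this toy setup the prototype recovers the foreground direction exactly, so thresholded similarity reproduces the support mask.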

Image-based AI tools in peripheral nerves assessment: Current status and integration strategies - A narrative review.

Martín-Noguerol T, Díaz-Angulo C, Luna A, Segovia F, Gómez-Río M, Górriz JM

PubMed · Jun 18, 2025
Peripheral nerves (PNs) are traditionally evaluated with ultrasound (US) or MRI, allowing radiologists to identify them and classify them as normal or pathological based on imaging findings, symptoms, and electrophysiological tests. However, the anatomical complexity of PNs, coupled with their proximity to surrounding structures such as vessels and muscles, presents significant challenges. Advanced imaging techniques, including MR neurography and Diffusion-Weighted Imaging (DWI) neurography, have shown promise but are hindered by steep learning curves, operator dependency, and limited accessibility. Discrepancies between imaging findings and patient symptoms further complicate PN evaluation, particularly when imaging appears normal despite clinical indications of pathology. Additionally, demographic and clinical factors such as age, sex, comorbidities, and physical activity influence PN health but remain unquantifiable with current imaging methods. Artificial Intelligence (AI) solutions have emerged as a transformative tool in PN evaluation. AI-based algorithms offer the potential to move from qualitative to quantitative assessment, enabling precise segmentation, characterization, and threshold determination to distinguish healthy from pathological nerves. These advances could improve diagnostic accuracy and treatment monitoring. This review highlights the latest advances in AI applications for PN imaging, discussing their potential to overcome current limitations and strategies for integrating them into routine radiological practice.

Multimodal deep learning for predicting unsuccessful recanalization in refractory large vessel occlusion.

González JD, Canals P, Rodrigo-Gisbert M, Mayol J, García-Tornel A, Ribó M

PubMed · Jun 18, 2025
This study explores a multimodal deep learning approach that integrates pre-intervention neuroimaging and clinical data to predict endovascular therapy (EVT) outcomes in acute ischemic stroke patients. Consecutive stroke patients undergoing EVT were included, among them patients with suspected intracranial atherosclerosis-related large vessel occlusion (ICAD-LVO) and other refractory occlusions. A retrospective, single-center cohort of patients with anterior circulation LVO who underwent EVT between 2017 and 2023 was analyzed. Refractory LVO (rLVO), the outcome class, comprised patients who presented any of the following: final angiographic stenosis > 50%, unsuccessful recanalization (eTICI 0-2a), or need for rescue treatment (angioplasty ± stenting). Neuroimaging data included non-contrast CT and CTA volumes, automated vascular segmentation, and CT perfusion parameters. Clinical data included demographics, comorbidities, and stroke severity. Imaging features were encoded with convolutional neural networks and fused with the clinical data through a Dynamic Affine Feature-map Transform (DAFT) module. Data were split 80% for training (with four-fold cross-validation) and 20% for testing. Explainability methods were used to analyze the contribution of clinical variables and of regions of interest in the images. The final sample comprised 599 patients: 481 for training the model (77, 16.0% rLVO) and 118 for testing (16, 13.6% rLVO). The best model predicting rLVO from imaging alone achieved an AUC of 0.53 ± 0.02 and an F1 of 0.19 ± 0.05, while the proposed multimodal model achieved an AUC of 0.70 ± 0.02 and an F1 of 0.39 ± 0.02 in testing. Combining vascular segmentation, clinical variables, and imaging data improved prediction performance over single-source models. This approach offers an early alert to procedural complexity, potentially guiding more tailored, timely intervention strategies in the EVT workflow.
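DAFT-style fusion conditions image feature maps on tabular clinical data via a learned per-channel affine transform (in the spirit of FiLM). A minimal numpy sketch with random stand-in weights and invented variable names, not the study's trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

def daft_style_fusion(feature_maps, clinical, W_gamma, W_beta):
    """FiLM/DAFT-style fusion sketch: clinical variables predict a per-channel
    scale (gamma) and shift (beta) that modulate the image feature maps.

    feature_maps: (C, H, W) CNN activations; clinical: (D,) tabular vector.
    """
    gamma = 1.0 + clinical @ W_gamma      # (C,) per-channel scale
    beta = clinical @ W_beta              # (C,) per-channel shift
    return feature_maps * gamma[:, None, None] + beta[:, None, None]

C, H, W, D = 8, 4, 4, 3
feats = rng.standard_normal((C, H, W))
clin = np.array([0.7, 1.0, 0.2])              # hypothetical scaled clinical inputs
W_gamma = rng.standard_normal((D, C)) * 0.1   # stand-ins for learned weights
W_beta = rng.standard_normal((D, C)) * 0.1

fused = daft_style_fusion(feats, clin, W_gamma, W_beta)
```

With an all-zero clinical vector the transform reduces to the identity, which is why the scale is parameterized as 1 + offset.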

Deep learning model using CT images for longitudinal prediction of benign and malignant ground-glass nodules.

Yang X, Wang J, Wang P, Li Y, Wen Z, Shang J, Chen K, Tang C, Liang S, Meng W

PubMed · Jun 18, 2025
To develop and validate a CT image-based multiple time-series deep learning model for the longitudinal prediction of benign and malignant pulmonary ground-glass nodules (GGNs). A total of 486 GGNs from an equal number of patients were included in this research, which took place at two medical centers. Each nodule underwent surgical removal and was confirmed pathologically. The patients were randomly assigned to a training set, validation set, and test set, following a distribution ratio of 7:2:1. We established a transformer-based deep learning framework that leverages multi-temporal CT images for the longitudinal prediction of GGNs, focusing on distinguishing between benign and malignant types. Additionally, we utilized 13 different machine learning algorithms to formulate clinical models, delta-radiomics models, and combined models that merge deep learning with CT semantic features. The predictive capabilities of the models were assessed using the receiver operating characteristic (ROC) curve and the area under the curve (AUC). The multiple time-series deep learning model based on CT images surpassed both the clinical model and the delta-radiomics model, showcasing strong predictive capabilities for GGNs across the training, validation, and test sets, with AUCs of 0.911 (95% CI, 0.879-0.939), 0.809 (95% CI, 0.715-0.908), and 0.817 (95% CI, 0.680-0.937), respectively. Furthermore, the models that integrated deep learning with CT semantic features achieved the highest performance, resulting in AUCs of 0.960 (95% CI, 0.912-0.977), 0.878 (95% CI, 0.801-0.942), and 0.890 (95% CI, 0.790-0.968). The multiple time-series deep learning model utilizing CT images was effective in predicting benign and malignant GGNs.
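The AUC values reported throughout these abstracts have a simple probabilistic reading: the chance that a randomly chosen positive case scores higher than a randomly chosen negative one (the Mann-Whitney formulation, with ties counted as 0.5). A self-contained sketch on toy labels and scores:

```python
import numpy as np

def auc_mann_whitney(labels, scores):
    """AUC via the Mann-Whitney U statistic: P(score_pos > score_neg),
    counting ties as one half."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()   # positive outscores negative
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

y = np.array([0, 0, 1, 1, 1, 0])
p = np.array([0.1, 0.4, 0.35, 0.8, 0.9, 0.2])
auc = auc_mann_whitney(y, p)   # 8 of 9 positive/negative pairs are ordered correctly
```

This pairwise definition is equivalent to the area under the ROC curve obtained by sweeping the decision threshold.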

MDEANet: A multi-scale deep enhanced attention net for popliteal fossa segmentation in ultrasound images.

Chen F, Fang W, Wu Q, Zhou M, Guo W, Lin L, Chen Z, Zou Z

PubMed · Jun 18, 2025
Popliteal sciatic nerve block is a widely used technique for lower limb anesthesia. However, despite ultrasound guidance, the complex anatomical structures of the popliteal fossa can present challenges, potentially leading to complications. To accurately identify the bifurcation of the sciatic nerve for nerve blockade, we propose MDEANet, a deep learning-based segmentation network designed for the precise localization of nerves, muscles, and arteries in ultrasound images of the popliteal region. MDEANet incorporates Cascaded Multi-scale Atrous Convolutions (CMAC) to enhance multi-scale feature extraction, Enhanced Spatial Attention Mechanism (ESAM) to focus on key anatomical regions, and Cross-level Feature Fusion (CLFF) to improve contextual representation. This integration markedly improves segmentation of nerves, muscles, and arteries. Experimental results demonstrate that MDEANet achieves an average Intersection over Union (IoU) of 88.60% and a Dice coefficient of 93.95% across all target structures, outperforming state-of-the-art models by 1.68% in IoU and 1.66% in Dice coefficient. Specifically, for nerve segmentation, the Dice coefficient reaches 93.31%, underscoring the effectiveness of our approach. MDEANet has the potential to provide decision-support assistance for anesthesiologists, thereby enhancing the accuracy and efficiency of ultrasound-guided nerve blockade procedures.
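The IoU and Dice scores reported above are standard overlap metrics for binary segmentation masks, related by Dice = 2·IoU / (1 + IoU). A minimal sketch on a toy mask pair (illustrative only, not the paper's evaluation code):

```python
import numpy as np

def iou_dice(pred, target):
    """Intersection-over-Union and Dice coefficient for binary masks.
    Empty-vs-empty comparisons are scored as perfect agreement (1.0)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    total = pred.sum() + target.sum()
    iou = inter / union if union else 1.0
    dice = 2 * inter / total if total else 1.0
    return iou, dice

pred = np.array([[1, 1, 0, 0]])   # toy prediction
gt = np.array([[1, 0, 1, 0]])     # toy ground truth
iou, dice = iou_dice(pred, gt)    # intersection 1, union 3
```

Because Dice weights the intersection twice, it always reads at least as high as IoU on the same masks, which is worth remembering when comparing papers that report different metrics.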

Comparative analysis of transformer-based deep learning models for glioma and meningioma classification.

Nalentzi K, Gerogiannis K, Bougias H, Stogiannos N, Papavasileiou P

PubMed · Jun 18, 2025
This study compares the classification accuracy of novel transformer-based deep learning models (ViT and BEiT) on brain MRIs of gliomas and meningiomas through a feature-driven approach. Meta's Segment Anything Model was used for semi-automatic segmentation, yielding a fully neural network-based workflow for this classification task. ViT and BEiT models were fine-tuned on a publicly available brain MRI dataset. Glioma/meningioma cases (625/507) were used for training and 520 cases (260/260 gliomas/meningiomas) for testing. The deep radiomic features extracted from ViT and BEiT underwent normalization, dimensionality reduction based on the Pearson correlation coefficient (PCC), and feature selection using analysis of variance (ANOVA). A multi-layer perceptron (MLP) with one hidden layer of 100 units, rectified linear unit activation, and the Adam optimizer was utilized. Hyperparameter tuning was performed via 5-fold cross-validation. The ViT model achieved the highest AUC on the validation dataset using 7 features, yielding an AUC of 0.985 and accuracy of 0.952. On the independent testing dataset, the model exhibited an AUC of 0.962 and an accuracy of 0.904. The BEiT model yielded an AUC of 0.939 and an accuracy of 0.871 on the testing dataset. This study demonstrates the effectiveness of transformer-based models, especially ViT, for glioma and meningioma classification, achieving high AUC scores and accuracy. However, the study is limited by the use of a single dataset, which may affect generalizability. Future work should focus on expanding datasets and further optimizing models to improve performance and applicability across different institutions. This study introduces a feature-driven methodology for glioma and meningioma classification, showcasing advancements in the accuracy and model robustness of transformer-based models.
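The two-stage feature pipeline described here (Pearson-correlation pruning, then ANOVA-based selection) can be illustrated in a few lines. The data, thresholds, and helper names below are invented for demonstration and are not the study's settings:

```python
import numpy as np

def prune_correlated(X, threshold=0.95):
    """Drop each feature that is highly Pearson-correlated (|r| >= threshold)
    with an already-kept earlier feature; return kept column indices."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, k] < threshold for k in keep):
            keep.append(j)
    return keep

def anova_f(X, y):
    """One-way ANOVA F statistic per feature for a class label y."""
    groups = [X[y == c] for c in np.unique(y)]
    overall = X.mean(axis=0)
    between = sum(len(g) * (g.mean(axis=0) - overall) ** 2 for g in groups)
    within = sum(((g - g.mean(axis=0)) ** 2).sum(axis=0) for g in groups)
    df_between = len(groups) - 1
    df_within = len(X) - len(groups)
    return (between / df_between) / (within / df_within)

rng = np.random.default_rng(1)
y = np.repeat([0, 1], 50)                            # two balanced classes
X = rng.standard_normal((100, 4))
X[:, 1] = X[:, 0] + 1e-3 * rng.standard_normal(100)  # near-duplicate of feature 0
X[:, 2] += y * 2.0                                   # class-informative feature

kept = prune_correlated(X)        # the redundant copy (feature 1) is dropped
F = anova_f(X[:, kept], y)        # the informative feature gets the largest F
```

Ranking the surviving features by F and keeping the top few mirrors the reduction from thousands of deep radiomic features down to the handful the abstract reports.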

Imaging Epilepsy: Past, Passing, and to Come.

Theodore WH, Inati SK, Adler S, Pearl PL, McDonald CR

PubMed · Jun 18, 2025
New imaging techniques appearing over the last few decades have replaced procedures that were uncomfortable, of low specificity, and prone to adverse events. While computed tomography remains useful for imaging patients with seizures in acute settings, structural magnetic resonance imaging (MRI) has become the most important imaging modality for epilepsy evaluation, with adjunctive functional imaging also increasingly well established in presurgical evaluation, including positron emission tomography (PET), subtraction ictal-interictal single-photon emission computed tomography co-registered to MRI, and functional MRI for preoperative cognitive mapping. Neuroimaging in inherited metabolic epilepsies is integral to diagnosis, monitoring, and assessment of treatment response. Neurotransmitter-receptor PET and magnetic resonance spectroscopy can help delineate the pathophysiology of these disorders. Machine learning and artificial intelligence analyses based on large MRI datasets composed of healthy volunteers and people with epilepsy have been initiated to detect lesions that are not found visually, particularly focal cortical dysplasia. These methods, not yet approved for patient care, depend on careful clinical correlation and on training sets that fully sample broad populations.

Artificial intelligence-based diagnosis of hallux valgus interphalangeus using anteroposterior foot radiographs.

Kwolek K, Gądek A, Kwolek K, Lechowska-Liszka A, Malczak M, Liszka H

PubMed · Jun 18, 2025
A recently developed method enables automated measurement of the hallux valgus angle (HVA) and the first intermetatarsal angle (IMA) from weight-bearing foot radiographs. This approach employs bone segmentation to identify anatomical landmarks and provides standardized angle measurements based on established guidelines. While effective for HVA and IMA, preoperative radiograph analysis remains complex and requires additional measurements, such as the hallux interphalangeal angle (IPA), which has received limited research attention. The aim was to expand the previous method, which measured HVA and IMA, by incorporating automatic measurement of the IPA and evaluating its accuracy and clinical relevance. A preexisting database of manually labeled foot radiographs was used to train a U-Net neural network for segmenting bones and identifying the landmarks necessary for IPA measurement. Of the 265 radiographs in the dataset, 161 were selected for training and 20 for validation; the U-Net achieves a high mean Sørensen-Dice index (> 0.97). The remaining 84 radiographs were used to assess the reliability of automated IPA measurements against those taken manually by two orthopedic surgeons (O<sub>A</sub> and O<sub>B</sub>) using computer-based tools. Each measurement was repeated to assess intraobserver (O<sub>A1</sub> and O<sub>A2</sub>) and interobserver (O<sub>A2</sub> and O<sub>B</sub>) reliability. Agreement between automated and manual methods was evaluated using the Intraclass Correlation Coefficient (ICC), and Bland-Altman analysis identified systematic differences. Standard error of measurement (SEM) and Pearson correlation coefficients quantified precision and linearity, and measurement times were recorded to evaluate efficiency. The artificial intelligence (AI)-based system demonstrated excellent reliability, with ICC(3,1) values of 0.92 (AI <i>vs</i> O<sub>A2</sub>) and 0.88 (AI <i>vs</i> O<sub>B</sub>), both statistically significant (<i>P</i> < 0.001). For manual measurements, ICC values were 0.95 (O<sub>A2</sub> <i>vs</i> O<sub>A1</sub>) and 0.95 (O<sub>A2</sub> <i>vs</i> O<sub>B</sub>), supporting both intraobserver and interobserver reliability. Bland-Altman analysis revealed minimal biases of (1) 1.61° (AI <i>vs</i> O<sub>A2</sub>) and (2) 2.54° (AI <i>vs</i> O<sub>B</sub>), with clinically acceptable limits of agreement. The AI system also showed high precision, as evidenced by low SEM values: (1) 1.22° (O<sub>A2</sub> <i>vs</i> O<sub>B</sub>); (2) 1.77° (AI <i>vs</i> O<sub>A2</sub>); and (3) 2.09° (AI <i>vs</i> O<sub>B</sub>). Furthermore, Pearson correlation coefficients confirmed strong linear relationships between automated and manual measurements, with <i>r</i> = 0.85 (AI <i>vs</i> O<sub>A2</sub>) and <i>r</i> = 0.90 (AI <i>vs</i> O<sub>B</sub>). The AI method also significantly improved efficiency, completing all 84 measurements 8 times faster than the manual approach, reducing the time required from an average of 36 minutes to just 4.5 minutes. The proposed AI-assisted IPA measurement method shows strong clinical potential, corresponding closely with manual measurements. Integrating IPA with HVA and IMA assessments provides a comprehensive tool for automated forefoot deformity analysis, supporting hallux valgus severity classification and preoperative planning, while offering substantial time savings in high-volume clinical settings.
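The Bland-Altman statistics used in this abstract (mean bias with 95% limits of agreement) are straightforward to compute; the paired angle values below are hypothetical, not the study's data:

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman agreement statistics: mean bias of the paired differences
    and the 95% limits of agreement (bias ± 1.96 · SD of differences)."""
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)           # sample standard deviation
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired IPA measurements in degrees: AI system vs. one observer
ai = np.array([10.2, 12.5, 8.9, 15.1, 11.0])
obs = np.array([9.0, 11.1, 7.5, 13.2, 9.8])
bias, (lo, hi) = bland_altman(ai, obs)
```

A consistent positive bias like the 1.61° and 2.54° figures above indicates the automated method systematically reads slightly higher; whether that matters is a clinical judgment about the limits of agreement, not a statistical one.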

Development and interpretation of machine learning-based prognostic models for predicting high-risk prognostic pathological components in pulmonary nodules: integrating clinical features, serum tumor marker and imaging features.

Wang D, Qiu J, Li R, Tian H

PubMed · Jun 17, 2025
With improvements in imaging, the screening rate of pulmonary nodules (PNs) has further increased, but identifying their High-Risk Prognostic Pathological Components (HRPPC) remains a major challenge. In this study, we aimed to build a multi-parameter machine learning model to improve the discrimination accuracy for HRPPC. This study included 816 patients with pulmonary nodules ≤ 3 cm with clear pathology who underwent pulmonary resection. High-resolution chest CT images and clinicopathological characteristics were collected from patients. Lasso regression was utilized to identify key features, and a machine learning prediction model was constructed from the selected features. The recognition ability of the model was evaluated using receiver operating characteristic (ROC) curves and confusion matrices, its calibration using calibration curves, and its value for clinical application using decision curve analysis (DCA). SHAP values were used to interpret the predictive model. Of the 816 patients included, 112 (13.79%) had HRPPC of pulmonary nodules. By selecting key variables through Lasso recursive feature elimination, we finally identified 13 key relevant features. The XGB model performed best, with an area under the ROC curve (AUC) of 0.930 (95% CI: 0.906-0.954) in the training cohort and 0.835 (95% CI: 0.774-0.895) in the validation cohort, indicating excellent predictive performance. In addition, the calibration curves of the XGB model showed good calibration in both cohorts, and DCA demonstrated that the predictive model had a positive benefit in general clinical decision-making. The SHAP values identified the top three predictors of HRPPC in PNs as CT value, nodule long diameter, and ProGRP. Our prediction model for identifying HRPPC in PNs has excellent discrimination, calibration, and clinical utility. Thoracic surgeons could make relatively reliable predictions of HRPPC in PNs when invasive testing is not possible.
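Lasso-based feature selection, as used in this study, zeroes out uninformative coefficients through an L1 penalty. One generic solver is ISTA (proximal gradient descent with soft-thresholding); the sketch below uses synthetic data and is not the authors' pipeline:

```python
import numpy as np

def lasso_ista(X, y, lam=0.05, n_iter=500):
    """Lasso via ISTA: gradient step on the least-squares loss followed by
    soft-thresholding (the proximal operator of the L1 penalty). Coefficients
    of uninformative features are driven exactly to zero."""
    n, d = X.shape
    lr = n / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant of the gradient
    w = np.zeros(d)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n
        z = w - lr * grad
        w = np.sign(z) * np.maximum(np.abs(z) - lr * lam, 0.0)  # prox step
    return w

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 6))
true_w = np.array([2.0, 0.0, -1.5, 0.0, 0.0, 0.0])   # only 2 real signals
y = X @ true_w + 0.05 * rng.standard_normal(200)

w = lasso_ista(X, y)
selected = np.flatnonzero(np.abs(w) > 1e-3)   # surviving features
```

On this toy problem the L1 penalty recovers exactly the two informative columns, mirroring how the study pared its candidate variables down to 13 key features.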

DiffM<sup>4</sup>RI: A Latent Diffusion Model with Modality Inpainting for Synthesizing Missing Modalities in MRI Analysis.

Ye W, Guo Z, Ren Y, Tian Y, Shen Y, Chen Z, He J, Ke J, Shen Y

PubMed · Jun 17, 2025
Foundation Models (FMs) have shown great promise for multimodal medical image analysis such as Magnetic Resonance Imaging (MRI). However, certain MRI sequences may be unavailable due to various constraints, such as limited scanning time, patient discomfort, or scanner limitations. The absence of certain modalities can hinder the performance of FMs in clinical applications, making effective missing-modality imputation crucial for ensuring their applicability. Previous approaches, including generative adversarial networks (GANs), have been employed to synthesize missing modalities in either a one-to-one or many-to-one manner. However, these methods have limitations: they require training a new model for each missing scenario and are prone to mode collapse, generating limited diversity in the synthesized images. To address these challenges, we propose DiffM<sup>4</sup>RI, a diffusion model for many-to-many missing-modality imputation in MRI. DiffM<sup>4</sup>RI innovatively formulates missing-modality imputation as a modality-level inpainting task, enabling it to handle arbitrary missing-modality situations without training multiple networks. Experiments on the BraTS datasets demonstrate that DiffM<sup>4</sup>RI achieves an average SSIM improvement of 0.15 over MustGAN, 0.1 over SynDiff, and 0.02 over VQ-VAE-2. These results highlight the potential of DiffM<sup>4</sup>RI in enhancing the reliability of FMs in clinical applications. The code is available at https://github.com/27yw/DiffM4RI.
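Modality-level inpainting of this kind resembles RePaint-style diffusion inpainting, where at each reverse step the known content is re-imposed (noised to the current diffusion level) and only the masked, missing region is taken from the model. A toy numpy sketch of just that combine step, with a random placeholder standing in for the denoiser output (not the DiffM<sup>4</sup>RI code):

```python
import numpy as np

rng = np.random.default_rng(3)

def inpaint_combine(x_generated, x_known, mask, noise_level):
    """One inpainting combine step (sketch): keep the model output only where
    the modality is missing (mask == 1); elsewhere substitute the known image
    noised to the current diffusion level, so observed modalities keep
    anchoring the reverse process."""
    noised_known = x_known + noise_level * rng.standard_normal(x_known.shape)
    return mask * x_generated + (1 - mask) * noised_known

# Toy "volume" with two channels standing in for two MRI sequences;
# channel 1 is the missing modality to synthesize.
known = np.ones((2, 4, 4))                   # observed ground-truth image
mask = np.zeros((2, 4, 4))
mask[1] = 1.0                                # channel 1 = missing modality
model_out = rng.standard_normal((2, 4, 4))   # placeholder denoiser output

x = inpaint_combine(model_out, known, mask, noise_level=0.0)
```

In a real sampler this combine runs once per reverse diffusion step with a decreasing noise level; here noise_level=0 shows the limiting behavior at the final step, where the observed channel is restored exactly.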
