Page 90 of 2262251 results

RESIGN: Alzheimer's Disease Detection Using Hybrid Deep Learning based Res-Inception Seg Network.

Amsavalli K, Suba Raja SK, Sudha S

PubMed · Jun 18 2025
Alzheimer's disease (AD) is a leading cause of death, making early detection critical to improving survival rates. Conventional manual techniques struggle with early diagnosis due to the brain's complex structure, necessitating dependable deep learning (DL) methods. This research proposes RESIGN, a novel model combining Res-Inception and SegNet components (Res-InceptionSeg), for detecting AD from MRI images. The input MRI images were pre-processed with a Non-Local Means (NLM) filter to reduce noise artifacts. A ResNet-LSTM model was used for feature extraction, targeting White Matter (WM), Grey Matter (GM), and Cerebrospinal Fluid (CSF). The extracted features were concatenated and classified into Normal, MCI, and AD categories using an Inception V3-based classifier. Additionally, SegNet was employed to segment abnormal brain regions. The RESIGN model achieved an accuracy of 99.46%, specificity of 98.68%, precision of 95.63%, recall of 97.10%, and an F1 score of 95.42%. It outperformed ResNet, AlexNet, DenseNet, and LSTM by 7.87%, 5.65%, 3.92%, and 1.53%, respectively, and further improved accuracy by 25.69%, 5.29%, 2.03%, and 1.71% over ResNet18, CLSTM, VGG19, and CNN, respectively. The integration of spatial-temporal feature extraction, hybrid classification, and deep segmentation makes RESIGN highly reliable for detecting AD. Five-fold cross-validation confirmed its robustness, and its performance exceeded that of existing models on the ADNI dataset. Potential limitations include dataset bias and limited generalizability due to uniform imaging conditions. Through robust feature extraction and classification, the proposed RESIGN model demonstrates significant improvement in early AD detection, offering a reliable tool for clinical diagnosis.
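The first stage of the pipeline above is Non-Local Means denoising. As an illustration only (not the authors' implementation; on real MRI volumes a library routine such as scikit-image's `denoise_nl_means` would be used), here is a minimal pure-Python sketch of the NLM idea on a small 2-D grid:

```python
import math

def nlm_denoise(img, patch=1, search=2, h=0.5):
    """Toy Non-Local Means on a 2-D grid (list of lists).
    Each pixel is replaced by a weighted average of pixels in a search
    window whose surrounding patches look similar; weights decay
    exponentially with the mean squared patch difference."""
    H, W = len(img), len(img[0])

    def patch_at(y, x):
        # flatten the (2*patch+1)^2 neighbourhood, clamping at the borders
        return [img[min(max(y + dy, 0), H - 1)][min(max(x + dx, 0), W - 1)]
                for dy in range(-patch, patch + 1)
                for dx in range(-patch, patch + 1)]

    out = [[0.0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            p = patch_at(y, x)
            num = den = 0.0
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy = min(max(y + dy, 0), H - 1)
                    xx = min(max(x + dx, 0), W - 1)
                    q = patch_at(yy, xx)
                    d2 = sum((a - b) ** 2 for a, b in zip(p, q)) / len(p)
                    w = math.exp(-d2 / (h * h))  # similar patches get weight near 1
                    num += w * img[yy][xx]
                    den += w
            out[y][x] = num / den
    return out
```

A constant image passes through unchanged, while an isolated noisy pixel is pulled toward its self-similar surroundings.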

Cardiovascular risk in childhood and young adulthood is associated with the hemodynamic response function in midlife: The Bogalusa Heart Study.

Chuang KC, Naseri M, Ramakrishnapillai S, Madden K, Amant JS, McKlveen K, Gwizdala K, Dhullipudi R, Bazzano L, Carmichael O

PubMed · Jun 18 2025
In functional MRI, a hemodynamic response function (HRF) describes how neural events are translated into a blood oxygenation response detected through imaging. The HRF has the potential to quantify neurovascular mechanisms by which cardiovascular risks modify brain health, but relationships among HRF characteristics, brain health, and cardiovascular modifiers of brain health have not been well studied to date. One hundred thirty-seven middle-aged participants (mean age: 53.6±4.7; 62% female; 78% White American and 22% African American) in the exploratory analysis from the Bogalusa Heart Study completed clinical evaluations from childhood to midlife and an adaptive Stroop task during fMRI in midlife. The HRFs of each participant within seventeen brain regions of interest (ROIs) previously identified as activated by this task were calculated using a convolutional neural network approach. Faster and more efficient neurovascular functioning was characterized in terms of five HRF characteristics: faster time to peak (TTP), shorter full width at half maximum (FWHM), smaller peak magnitude (PM), smaller trough magnitude (TM), and smaller area under the HRF curve (AUHRF). Composite HRF summary characteristics over all ROIs were calculated for multivariable and simple linear regression analyses. In multivariable models, a faster and more efficient HRF characteristic was found in non-smokers compared to smokers (AUHRF, p = 0.029). Faster and more efficient HRF characteristics were associated with lower systolic and diastolic blood pressures (FWHM, TM, and AUHRF; p = 0.030, 0.042, and 0.032) and lower cerebral amyloid burden (FWHM, p = 0.027) in midlife, as well as a greater response rate on the Stroop task (FWHM, p = 0.042) in midlife.
In simple linear regression models, faster and more efficient HRF characteristics were found in women compared to men (TM, p = 0.019); in White American participants compared to African American participants (AUHRF, p = 0.044); and in non-smokers compared to smokers (TTP and AUHRF, p = 0.019 and 0.010). Faster and more efficient HRF characteristics were associated with lower systolic and diastolic blood pressures (FWHM and TM, p = 0.019 and 0.029), and lower BMI (FWHM and AUHRF, p = 0.025 and 0.017), in childhood and adolescence; and lower BMI (TTP, p = 0.049), cerebral amyloid burden (FWHM, p = 0.002), and white matter hyperintensity burden (FWHM, p = 0.046) in midlife; as well as greater accuracy on the Stroop task (AUHRF, p = 0.037) in midlife. In a diverse middle-aged community sample, HRF-based indicators of faster and more efficient neurovascular functioning were associated with better brain health and cognitive function, as well as better lifespan cardiovascular health.
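The five HRF summary characteristics named above (TTP, FWHM, PM, TM, AUHRF) can each be read off a sampled HRF curve. The following is a minimal sketch of those definitions, assuming a curve given as lists of time points and amplitudes; it is illustrative only, not the study's CNN-based HRF estimation:

```python
def hrf_characteristics(t, y):
    """Summarize a sampled HRF: time to peak (TTP), peak magnitude (PM),
    full width at half maximum (FWHM), trough magnitude (TM), and area
    under the curve (AUHRF, trapezoidal rule)."""
    pm = max(y)
    ttp = t[y.index(pm)]
    tm = abs(min(y))                      # magnitude of the post-stimulus trough
    half = pm / 2.0
    above = [ti for ti, yi in zip(t, y) if yi >= half]
    fwhm = above[-1] - above[0]           # crude: accuracy limited by grid spacing
    auhrf = sum((y[i] + y[i + 1]) / 2.0 * (t[i + 1] - t[i])
                for i in range(len(t) - 1))
    return {"TTP": ttp, "PM": pm, "FWHM": fwhm, "TM": tm, "AUHRF": auhrf}
```

On such summaries, "faster and more efficient" corresponds to smaller values of each characteristic.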

Hierarchical refinement with adaptive deformation cascaded for multi-scale medical image registration.

Hussain N, Yan Z, Cao W, Anwar M

PubMed · Jun 18 2025
Deformable image registration is a fundamental task in medical image analysis, crucial for enabling early detection and accurate disease diagnosis. Although transformer-based architectures have demonstrated strong potential through attention mechanisms, challenges remain in ineffective feature extraction and spatial alignment, particularly within hierarchical attention frameworks. To address these limitations, we propose a novel registration framework that integrates hierarchical feature encoding in the encoder and an adaptive cascaded refinement strategy in the decoder. The model employs hierarchical cross-attention between fixed and moving images at multiple scales, enabling more precise alignment and improved registration accuracy. The decoder incorporates an Adaptive Cascaded Module (ACM), which progressively refines the deformation field across multiple resolution levels. This approach captures both coarse global transformations and fine local variations, resulting in smooth and anatomically consistent alignment. Moreover, rather than relying solely on the final decoder output, our framework leverages intermediate representations at each stage of the network, enhancing the robustness and precision of the registration process. By integrating deformations across all scales, our method achieves superior accuracy and adaptability. Comprehensive experiments on two widely used 3D brain MRI datasets, OASIS and LPBA40, demonstrate that the proposed framework consistently outperforms existing state-of-the-art approaches in accuracy, robustness, and generalizability across multiple evaluation metrics.
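The cascaded coarse-to-fine refinement described above can be illustrated in one dimension: at each level the running deformation field is upsampled (with displacements rescaled to the finer grid) and that level's residual field is added. This is a hypothetical sketch of the composition idea, not the paper's ACM module:

```python
def upsample_field(field, factor):
    """Linearly upsample a 1-D displacement field and rescale displacements
    to the finer grid (1 coarse voxel = `factor` fine voxels)."""
    n = len(field)
    out = []
    for i in range((n - 1) * factor + 1):
        pos = i / factor
        lo = int(pos)
        hi = min(lo + 1, n - 1)
        frac = pos - lo
        out.append(((1 - frac) * field[lo] + frac * field[hi]) * factor)
    return out

def cascade(coarse, residuals):
    """Coarse-to-fine refinement: start from the coarsest field; at each
    level, upsample the running field and add that level's residual."""
    field = coarse
    for res in residuals:
        factor = (len(res) - 1) // (len(field) - 1)
        field = [u + r for u, r in zip(upsample_field(field, factor), res)]
    return field
```

The coarse field supplies the global transformation; each residual contributes progressively finer local corrections.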

Interactive prototype learning and self-learning for few-shot medical image segmentation.

Song Y, Xu C, Wang B, Du X, Chen J, Zhang Y, Li S

PubMed · Jun 18 2025
Few-shot learning alleviates the heavy dependence of medical image segmentation on large-scale labeled data, but it still shows a substantial performance gap on new tasks compared with traditional deep learning. Existing methods mainly learn the class knowledge of a few known (support) samples and extend it to unknown (query) samples. However, large distribution differences between the support and query images lead to serious deviations in the transfer of class knowledge, which manifest as two segmentation challenges: intra-class inconsistency with inter-class similarity, and blurred, confused boundaries. In this paper, we propose a new interactive prototype learning and self-learning network to address these challenges. First, we propose a deep encoding-decoding module that learns the high-level features of the support and query images to build peak prototypes carrying the greatest semantic information and provide semantic guidance for segmentation. Then, we propose an interactive prototype learning module that improves intra-class feature consistency and reduces inter-class feature similarity by conducting mid-level-feature-based mean prototype interaction and high-level-feature-based peak prototype interaction. Last, we propose a query-feature-guided self-learning module that separates foreground and background at the feature level and combines low-level feature maps to complement boundary information. Our model achieves competitive segmentation performance on benchmark datasets and shows substantial improvement in generalization ability.
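Prototype-based few-shot segmentation of the kind described above typically builds class prototypes by masked average pooling over support features and labels each query position by prototype similarity. A minimal sketch of that standard scheme (illustrative only; the paper's interaction and self-learning modules are not reproduced here):

```python
import math

def masked_avg_prototype(feats, mask):
    """Class prototype = mean of the feature vectors at foreground
    positions of the support mask (masked average pooling)."""
    fg = [f for f, m in zip(feats, mask) if m == 1]
    dim = len(fg[0])
    return [sum(f[d] for f in fg) / len(fg) for d in range(dim)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def segment(query_feats, proto_fg, proto_bg):
    """Label each query position by its nearer prototype (cosine similarity)."""
    return [1 if cosine(f, proto_fg) >= cosine(f, proto_bg) else 0
            for f in query_feats]
```

The background prototype is obtained the same way with the inverted support mask; support/query distribution shift is precisely what makes this simple transfer deviate, motivating the interaction modules above.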

Image-based AI tools in peripheral nerves assessment: Current status and integration strategies - A narrative review.

Martín-Noguerol T, Díaz-Angulo C, Luna A, Segovia F, Gómez-Río M, Górriz JM

PubMed · Jun 18 2025
Peripheral Nerves (PNs) are traditionally evaluated using US or MRI, allowing radiologists to identify and classify them as normal or pathological based on imaging findings, symptoms, and electrophysiological tests. However, the anatomical complexity of PNs, coupled with their proximity to surrounding structures like vessels and muscles, presents significant challenges. Advanced imaging techniques, including MR-neurography and Diffusion-Weighted Imaging (DWI) neurography, have shown promise but are hindered by steep learning curves, operator dependency, and limited accessibility. Discrepancies between imaging findings and patient symptoms further complicate the evaluation of PNs, particularly in cases where imaging appears normal despite clinical indications of pathology. Additionally, demographic and clinical factors such as age, sex, comorbidities, and physical activity influence PN health but remain unquantifiable with current imaging methods. Artificial Intelligence (AI) solutions have emerged as a transformative tool in PN evaluation. AI-based algorithms offer the potential to transition from qualitative to quantitative assessments, enabling precise segmentation, characterization, and threshold determination to distinguish healthy from pathological nerves. These advances could improve diagnostic accuracy and treatment monitoring. This review highlights the latest advances in AI applications for PN imaging, discussing their potential to overcome current limitations and opportunities for their integration into routine radiological practice.

Multimodal deep learning for predicting unsuccessful recanalization in refractory large vessel occlusion.

González JD, Canals P, Rodrigo-Gisbert M, Mayol J, García-Tornel A, Ribó M

PubMed · Jun 18 2025
This study explores a multimodal deep learning approach that integrates pre-intervention neuroimaging and clinical data to predict endovascular therapy (EVT) outcomes in acute ischemic stroke patients. To this end, consecutive stroke patients undergoing EVT were included, among them patients with suspected intracranial atherosclerosis-related large vessel occlusion (ICAD-LVO) and other refractory occlusions. A retrospective, single-center cohort of patients with anterior circulation LVO who underwent EVT between 2017 and 2023 was analyzed. The refractory LVO (rLVO) class comprised patients who presented any of the following: final angiographic stenosis > 50%, unsuccessful recanalization (eTICI 0-2a), or need for rescue treatment (angioplasty and/or stenting). Neuroimaging data included non-contrast CT and CTA volumes, automated vascular segmentation, and CT perfusion parameters. Clinical data included demographics, comorbidities, and stroke severity. Imaging features were encoded using convolutional neural networks and fused with clinical data using a DAFT module. Data were split 80% for training (with four-fold cross-validation) and 20% for testing. Explainability methods were used to analyze the contribution of clinical variables and regions of interest in the images. The final sample comprised 599 patients: 481 for training the model (77, 16.0% rLVO) and 118 for testing (16, 13.6% rLVO). The best model predicting rLVO from imaging alone achieved an AUC of 0.53 ± 0.02 and an F1 of 0.19 ± 0.05, while the proposed multimodal model achieved an AUC of 0.70 ± 0.02 and an F1 of 0.39 ± 0.02 in testing. Combining vascular segmentation, clinical variables, and imaging data improved prediction performance over single-source models. This approach offers an early alert to procedural complexity, potentially guiding more tailored, timely intervention strategies in the EVT workflow.
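DAFT-style fusion conditions image feature maps on tabular clinical data. Below is a heavily simplified, FiLM-like sketch of the idea, in which a linear map of the clinical vector produces per-channel scale and shift factors applied to the image features (toy weights for illustration; this is not the study's trained module):

```python
def affine_condition(img_feats, clin, W_scale, W_shift):
    """FiLM/DAFT-style fusion sketch: a linear map of the clinical vector
    yields one (scale, shift) pair per image-feature channel, which then
    modulates the image features channel-wise."""
    def linmap(W, x):
        # each row of W produces one output channel: dot(row, x)
        return [sum(w * xi for w, xi in zip(row, x)) for row in W]

    scale = linmap(W_scale, clin)
    shift = linmap(W_shift, clin)
    return [s * f + b for f, s, b in zip(img_feats, scale, shift)]
```

In the actual architecture the scale/shift generator is learned jointly with the CNN encoder, so clinical variables can amplify or suppress imaging channels per patient.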

Deep learning model using CT images for longitudinal prediction of benign and malignant ground-glass nodules.

Yang X, Wang J, Wang P, Li Y, Wen Z, Shang J, Chen K, Tang C, Liang S, Meng W

PubMed · Jun 18 2025
To develop and validate a CT image-based multiple time-series deep learning model for the longitudinal prediction of benign and malignant pulmonary ground-glass nodules (GGNs). A total of 486 GGNs from an equal number of patients were included in this research, which took place at two medical centers. Each nodule underwent surgical removal and was confirmed pathologically. The patients were randomly assigned to a training set, validation set, and test set, following a distribution ratio of 7:2:1. We established a transformer-based deep learning framework that leverages multi-temporal CT images for the longitudinal prediction of GGNs, focusing on distinguishing between benign and malignant types. Additionally, we utilized 13 different machine learning algorithms to formulate clinical models, delta-radiomics models, and combined models that merge deep learning with CT semantic features. The predictive capabilities of the models were assessed using the receiver operating characteristic (ROC) curve and the area under the curve (AUC). The multiple time-series deep learning model based on CT images surpassed both the clinical model and the delta-radiomics model, showcasing strong predictive capabilities for GGNs across the training, validation, and test sets, with AUCs of 0.911 (95% CI, 0.879-0.939), 0.809 (95% CI, 0.715-0.908), and 0.817 (95% CI, 0.680-0.937), respectively. Furthermore, the models that integrated deep learning with CT semantic features achieved the highest performance, with AUCs of 0.960 (95% CI, 0.912-0.977), 0.878 (95% CI, 0.801-0.942), and 0.890 (95% CI, 0.790-0.968). The multiple time-series deep learning model utilizing CT images was effective in predicting benign and malignant GGNs.
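The AUC values reported above follow the rank interpretation of the ROC curve: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one, with ties counting one half. A minimal sketch of that computation (in practice a library routine such as sklearn.metrics.roc_auc_score would be used):

```python
def roc_auc(labels, scores):
    """AUC via the rank formulation: fraction of (positive, negative)
    pairs in which the positive case receives the higher score."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.5 means the scores carry no ranking information; 1.0 means every malignant nodule outscores every benign one.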

MDEANet: A multi-scale deep enhanced attention net for popliteal fossa segmentation in ultrasound images.

Chen F, Fang W, Wu Q, Zhou M, Guo W, Lin L, Chen Z, Zou Z

PubMed · Jun 18 2025
Popliteal sciatic nerve block is a widely used technique for lower limb anesthesia. However, despite ultrasound guidance, the complex anatomical structures of the popliteal fossa can present challenges, potentially leading to complications. To accurately identify the bifurcation of the sciatic nerve for nerve blockade, we propose MDEANet, a deep learning-based segmentation network designed for the precise localization of nerves, muscles, and arteries in ultrasound images of the popliteal region. MDEANet incorporates Cascaded Multi-scale Atrous Convolutions (CMAC) to enhance multi-scale feature extraction, Enhanced Spatial Attention Mechanism (ESAM) to focus on key anatomical regions, and Cross-level Feature Fusion (CLFF) to improve contextual representation. This integration markedly improves segmentation of nerves, muscles, and arteries. Experimental results demonstrate that MDEANet achieves an average Intersection over Union (IoU) of 88.60% and a Dice coefficient of 93.95% across all target structures, outperforming state-of-the-art models by 1.68% in IoU and 1.66% in Dice coefficient. Specifically, for nerve segmentation, the Dice coefficient reaches 93.31%, underscoring the effectiveness of our approach. MDEANet has the potential to provide decision-support assistance for anesthesiologists, thereby enhancing the accuracy and efficiency of ultrasound-guided nerve blockade procedures.
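The IoU and Dice figures quoted above are the standard overlap metrics for binary segmentation masks, related by Dice = 2·IoU / (1 + IoU). A minimal sketch of both on flattened 0/1 masks:

```python
def iou_dice(pred, gt):
    """Overlap metrics for binary masks given as flattened lists of 0/1.
    Returns (Intersection over Union, Dice coefficient)."""
    inter = sum(p & g for p, g in zip(pred, gt))   # overlapping foreground
    ps, gs = sum(pred), sum(gt)                    # foreground sizes
    union = ps + gs - inter
    return inter / union, 2 * inter / (ps + gs)
```

Because Dice weights the intersection twice, it always reads at least as high as IoU on the same masks, which is why papers typically report both.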

Comparative analysis of transformer-based deep learning models for glioma and meningioma classification.

Nalentzi K, Gerogiannis K, Bougias H, Stogiannos N, Papavasileiou P

PubMed · Jun 18 2025
This study compares the classification accuracy of novel transformer-based deep learning models (ViT and BEiT) on brain MRIs of gliomas and meningiomas through a feature-driven approach. Meta's Segment Anything Model was used for semi-automatic segmentation, yielding a fully neural-network-based workflow for this classification task. The ViT and BEiT models were fine-tuned on a publicly available brain MRI dataset. Glioma/meningioma cases (625/507) were used for training and 520 cases (260/260, gliomas/meningiomas) for testing. The deep radiomic features extracted from ViT and BEiT underwent normalization, dimensionality reduction based on the Pearson correlation coefficient (PCC), and feature selection using analysis of variance (ANOVA). A multi-layer perceptron (MLP) with one hidden layer of 100 units, rectified linear unit activation, and the Adam optimizer was utilized. Hyperparameter tuning was performed via 5-fold cross-validation. The ViT model achieved the highest AUC on the validation dataset using 7 features, yielding an AUC of 0.985 and an accuracy of 0.952. On the independent testing dataset, the model exhibited an AUC of 0.962 and an accuracy of 0.904. The BEiT model yielded an AUC of 0.939 and an accuracy of 0.871 on the testing dataset. This study demonstrates the effectiveness of transformer-based models, especially ViT, for glioma and meningioma classification, achieving high AUC scores and accuracy. However, the study is limited by the use of a single dataset, which may affect generalizability. Future work should focus on expanding datasets and further optimizing models to improve performance and applicability across different institutions. This study introduces a feature-driven methodology for glioma and meningioma classification, showcasing advancements in the accuracy and robustness of transformer-based models.
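PCC-based dimensionality reduction of the kind mentioned above is commonly implemented as a greedy redundancy filter: a feature column is kept only if it is not too strongly correlated with any column already kept. A minimal sketch under that assumption (the paper's exact threshold and procedure are not specified here):

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def drop_redundant(features, threshold=0.95):
    """Greedy PCC filter over feature columns: keep column i only if its
    |r| with every already-kept column stays below the threshold.
    Returns the indices of the kept columns."""
    kept = []
    for i, col in enumerate(features):
        if all(abs(pearson(col, features[j])) < threshold for j in kept):
            kept.append(i)
    return kept
```

A subsequent ANOVA step would then rank the surviving columns by how well they separate the glioma and meningioma classes.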

Imaging Epilepsy: Past, Passing, and to Come.

Theodore WH, Inati SK, Adler S, Pearl PL, Mcdonald CR

PubMed · Jun 18 2025
New imaging techniques appearing over the last few decades have replaced procedures that were uncomfortable, of low specificity, and prone to adverse events. While computed tomography remains useful for imaging patients with seizures in acute settings, structural magnetic resonance imaging (MRI) has become the most important imaging modality for epilepsy evaluation, with adjunctive functional imaging also increasingly well established in presurgical evaluation, including positron emission tomography (PET), ictal-interictal subtraction single-photon emission computed tomography co-registered to MRI, and functional MRI for preoperative cognitive mapping. Neuroimaging in inherited metabolic epilepsies is integral to diagnosis, monitoring, and assessment of treatment response. Neurotransmitter receptor PET and magnetic resonance spectroscopy can help delineate the pathophysiology of these disorders. Machine learning and artificial intelligence analyses based on large MRI datasets composed of healthy volunteers and people with epilepsy have been initiated to detect lesions that are not found visually, particularly focal cortical dysplasia. These methods, not yet approved for patient care, depend on careful clinical correlation and training sets that fully sample broad populations.