
Modelling pathological spread through the structural connectome in the frontotemporal dementia clinical spectrum.

Agosta F, Basaia S, Spinelli EG, Facente F, Lumaca L, Ghirelli A, Canu E, Castelnovo V, Sibilla E, Tripodi C, Freri F, Cecchetti G, Magnani G, Caso F, Verde F, Ticozzi N, Silani V, Caroppo P, Prioni S, Villa C, Tremolizzo L, Appollonio I, Raj A, Filippi M

PubMed · Jun 3 2025
The ability to predict the spreading of pathology in patients with frontotemporal dementia (FTD) is crucial for early diagnosis and targeted interventions. In this study, we examined the relationship between network vulnerability and longitudinal progression of atrophy in FTD patients, using the network diffusion model (NDM) of the spread of pathology. Thirty behavioural variant FTD (bvFTD), 13 semantic variant primary progressive aphasia (svPPA), 14 non-fluent variant primary progressive aphasia (nfvPPA) and 12 semantic behavioural variant FTD (sbvFTD) patients underwent longitudinal T1-weighted MRI. Fifty young controls (20-31 years of age) underwent a multi-shell diffusion MRI scan. An NDM was developed to model progression of FTD pathology as a spreading process from a seed through the healthy structural connectome, using connectivity measures from fractional anisotropy and intracellular volume fraction in young controls. Four disease epicentres were initially identified from the peaks of atrophy of each FTD variant: left insula (bvFTD), left temporal pole (svPPA), right temporal pole (sbvFTD) and left supplementary motor area (nfvPPA). Pearson's correlations were calculated between NDM-predicted atrophy in young controls and the observed longitudinal atrophy in FTD patients over a follow-up period of 24 months. The NDM was then run for all 220 brain seeds to verify whether the four epicentres were among those that yielded the highest correlation. Using the NDM, predictive maps in young controls showed progression of pathology from the peaks of atrophy in svPPA, nfvPPA and sbvFTD over 24 months. svPPA exhibited early involvement of the left temporal and occipital lobes, progressing to extensive left hemisphere impairment. nfvPPA and sbvFTD spread in a similar manner bilaterally to frontal, sensorimotor and temporal regions, with sbvFTD additionally affecting the right hemisphere.
Moreover, the NDM-predicted atrophy of each region was positively correlated with the observed longitudinal atrophy, with a greater effect in svPPA and sbvFTD. In bvFTD, the model starting from the left insula (the peak of atrophy) demonstrated a highly left-lateralized pattern, with pathology spreading to frontal, sensorimotor, temporal and basal ganglia regions, with minimal extension to the contralateral hemisphere by 24 months. However, unlike the atrophy peaks observed in the other three phenotypes, the left insula did not show the strongest correlation between the estimated and observed atrophy. Instead, the bilateral superior frontal gyri emerged as the optimal seeds for modelling atrophy spread, showing the highest correlation ranking in both hemispheres. Overall, the NDM applied to the intracellular volume fraction connectome yielded higher correlations than the NDM applied to fractional anisotropy maps. The NDM implementation using the cross-sectional structural connectome is a valuable tool to predict patterns of atrophy and spreading of pathology in FTD clinical variants.
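The network diffusion model described above has a closed form: a pathology vector seeded at an epicentre evolves on the connectome's graph Laplacian as x(t) = exp(-βLt)·x0. A minimal numpy sketch (the connectivity weights and parameter values are illustrative, not the study's connectome):

```python
import numpy as np

def ndm_predict(adjacency, seed_idx, beta, t):
    """Network diffusion model: x(t) = exp(-beta * L * t) @ x0,
    with L the graph Laplacian of the structural connectome."""
    laplacian = np.diag(adjacency.sum(axis=1)) - adjacency
    # Matrix exponential via eigendecomposition (L is symmetric)
    vals, vecs = np.linalg.eigh(laplacian)
    x0 = np.zeros(adjacency.shape[0])
    x0[seed_idx] = 1.0
    return vecs @ (np.exp(-beta * t * vals) * (vecs.T @ x0))

# Toy 4-region "connectome" with symmetric connectivity weights
A = np.array([[0.0, 1.0, 0.5, 0.0],
              [1.0, 0.0, 0.0, 0.2],
              [0.5, 0.0, 0.0, 1.0],
              [0.0, 0.2, 1.0, 0.0]])
predicted = ndm_predict(A, seed_idx=0, beta=1.0, t=1.0)
```

The predicted regional values could then be correlated (e.g. with `np.corrcoef`) against observed longitudinal atrophy, and the seed swept over every region, analogous to the study's 220-seed search.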

Multi-modal brain MRI synthesis based on SwinUNETR

Haowen Pang, Weiyan Guo, Chuyang Ye

arXiv preprint · Jun 3 2025
Multi-modal brain magnetic resonance imaging (MRI) plays a crucial role in clinical diagnostics by providing complementary information across different imaging modalities. However, a common challenge in clinical practice is missing MRI modalities. In this paper, we apply SwinUNETR to the synthesis of missing modalities in brain MRI. SwinUNETR is a neural network architecture designed for medical image analysis, integrating the strengths of Swin Transformer and convolutional neural networks (CNNs). The Swin Transformer, a variant of the Vision Transformer (ViT), incorporates hierarchical feature extraction and window-based self-attention mechanisms, enabling it to capture both local and global contextual information effectively. By combining the Swin Transformer with CNNs, SwinUNETR merges global context awareness with detailed spatial resolution. This hybrid approach addresses the challenges posed by the varying modality characteristics and complex brain structures, facilitating the generation of accurate and realistic synthetic images. We evaluate the performance of SwinUNETR on brain MRI datasets and demonstrate its superior capability in generating clinically valuable images. Our results show significant improvements in image quality, anatomical consistency, and diagnostic value.
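The window-based self-attention that Swin-style encoders rely on restricts attention to local windows of tokens. A toy 1D numpy sketch of the idea (the window size and feature dimension are arbitrary; the real SwinUNETR uses shifted 3D windows, e.g. via MONAI's implementation):

```python
import numpy as np

def window_self_attention(x: np.ndarray, window: int = 4) -> np.ndarray:
    """Toy window-based self-attention over a 1D token sequence.
    Tokens attend only within non-overlapping windows, as in Swin."""
    n, d = x.shape
    out = np.empty_like(x)
    for start in range(0, n, window):
        w = x[start:start + window]                  # tokens in this window
        scores = w @ w.T / np.sqrt(d)                # scaled dot-product
        scores -= scores.max(axis=1, keepdims=True)  # numerical stability
        attn = np.exp(scores)
        attn /= attn.sum(axis=1, keepdims=True)      # row-wise softmax
        out[start:start + window] = attn @ w
    return out
```

Each output token is a convex combination of tokens from its own window only, which is what keeps the cost linear in sequence length rather than quadratic.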

Enhancing Lesion Detection in Inflammatory Myelopathies: A Deep Learning-Reconstructed Double Inversion Recovery MRI Approach.

Fang Q, Yang Q, Wang B, Wen B, Xu G, He J

PubMed · Jun 3 2025
The imaging of inflammatory myelopathies has advanced significantly over time, with MRI techniques playing a pivotal role in enhancing lesion detection. However, the impact of deep learning (DL)-based reconstruction on 3D double inversion recovery (DIR) imaging for inflammatory myelopathies remains unassessed. This study aimed to compare the acquisition time, image quality, diagnostic confidence, and lesion detection rates among sagittal T2WI, standard DIR, and DL-reconstructed DIR in patients with inflammatory myelopathies. In this observational study, patients diagnosed with inflammatory myelopathies were recruited between June 2023 and March 2024. Each patient underwent sagittal conventional TSE sequences and standard 3D DIR (T2WI and standard 3D DIR were used as references for comparison), followed by an undersampled accelerated double inversion recovery deep learning (DIR<sub>DL</sub>) examination. Three neuroradiologists evaluated the images using a 4-point Likert scale for overall image quality, perceived SNR, sharpness, artifacts, and diagnostic confidence. The acquisition times and lesion detection rates were also compared among the acquisition protocols. A total of 149 participants were evaluated (mean age, 40.6 [SD, 16.8] years; 71 women). The median acquisition time for DIR<sub>DL</sub> was significantly lower than for standard DIR (151 seconds [interquartile range, 148-155 seconds] versus 298 seconds [interquartile range, 288-301 seconds]; <i>P</i> < .001), a 49% time reduction. DIR<sub>DL</sub> images scored higher in overall quality, perceived SNR, and artifact noise reduction (all <i>P</i> < .001). There were no significant differences in sharpness (<i>P</i> = .07) or diagnostic confidence (<i>P</i> = .06) between the standard DIR and DIR<sub>DL</sub> protocols. Additionally, DIR<sub>DL</sub> detected 37% more lesions compared with T2WI (300 versus 219; <i>P</i> < .001).
DIR<sub>DL</sub> significantly reduces acquisition time and improves image quality compared with standard DIR, without compromising diagnostic confidence. Additionally, DIR<sub>DL</sub> enhances lesion detection in patients with inflammatory myelopathies, making it a valuable tool in clinical practice. These findings underscore the potential for incorporating DIR<sub>DL</sub> into future imaging guidelines.
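The headline percentages follow directly from the counts and times reported above; a quick sanity check:

```python
# Median acquisition times in seconds, as reported in the abstract
standard_dir_s, dir_dl_s = 298, 151
time_reduction = (standard_dir_s - dir_dl_s) / standard_dir_s   # ≈ 0.493

# Lesion counts: DIR-DL versus sagittal T2WI
t2wi_lesions, dir_dl_lesions = 219, 300
extra_lesion_rate = (dir_dl_lesions - t2wi_lesions) / t2wi_lesions  # ≈ 0.37
```

Both values round to the reported "49% time reduction" and "37% more lesions".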

Radiomics-Based Differentiation of Primary Central Nervous System Lymphoma and Solitary Brain Metastasis Using Contrast-Enhanced T1-Weighted Imaging: A Retrospective Machine Learning Study.

Xia X, Qiu J, Tan Q, Du W, Gou Q

PubMed · Jun 3 2025
To develop and evaluate radiomics-based models using contrast-enhanced T1-weighted imaging (CE-T1WI) for the non-invasive differentiation of primary central nervous system lymphoma (PCNSL) and solitary brain metastasis (SBM), aiming to improve diagnostic accuracy and support clinical decision-making. This retrospective study included a cohort of 324 patients pathologically diagnosed with PCNSL (n=115) or SBM (n=209) between January 2014 and December 2024. Tumor regions were manually segmented on CE-T1WI, and a comprehensive set of 1561 radiomic features was extracted. To identify the most important features, a two-step feature selection approach based on least absolute shrinkage and selection operator (LASSO) regression was used. Multiple machine learning classifiers were trained and validated to assess diagnostic performance. Model performance was evaluated using area under the curve (AUC), accuracy, sensitivity, and specificity. The effectiveness of the radiomics-based models was further assessed using decision curve analysis, which incorporated a risk threshold of 0.5 to balance both false positives and false negatives. Twenty-three features were identified through LASSO regression. All classifiers demonstrated robust performance in terms of AUC and accuracy, with 15 out of 20 classifiers achieving AUC values exceeding 0.9. In the 10-fold cross-validation, the artificial neural network (ANN) classifier achieved the highest AUC of 0.9305, followed by the support vector machine with polynomial kernels (SVMPOLY) classifier at 0.9226. Notably, the independent test revealed that the support vector machine with radial basis function (SVMRBF) classifier performed best, with an AUC of 0.9310 and the highest accuracy of 0.8780.
The selected models, namely SVMRBF, SVMPOLY, ensemble learning with LDA (ELDA), ANN, random forest (RF), and gradient boosting with random undersampling boosting (GBRUSB), all showed significant clinical utility, with their standardized net benefits (sNBs) surpassing 0.6. These results underline the potential of the radiomics-based models in reliably distinguishing PCNSL from SBM. The application of radiomics-driven models based on CE-T1WI has demonstrated encouraging potential for accurately distinguishing between PCNSL and SBM. The SVMRBF classifier showed the greatest diagnostic efficacy of all the classifiers tested, indicating its potential clinical utility in differential diagnosis.
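The two-step pipeline (LASSO feature selection followed by a kernel classifier) can be sketched with scikit-learn. This is a minimal illustration on synthetic data, assuming scikit-learn is available; the sample size, feature count, alpha, and labels are stand-ins, not the study's cohort or tuning:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, p = 200, 100                       # stand-ins for 324 patients / 1561 features
X = rng.normal(size=(n, p))
# Only features 0 and 1 carry signal in this synthetic cohort
y = (X[:, 0] - X[:, 1] + 0.5 * rng.normal(size=n) > 0).astype(int)

# Step 1: LASSO shrinks uninformative radiomic features to exactly zero
lasso = Lasso(alpha=0.05).fit(X, y)
selected = np.flatnonzero(lasso.coef_)

# Step 2: SVM with an RBF kernel on the surviving features (the SVMRBF model)
Xtr, Xte, ytr, yte = train_test_split(X[:, selected], y, random_state=0)
clf = SVC(kernel="rbf").fit(Xtr, ytr)
auc = roc_auc_score(yte, clf.decision_function(Xte))
```

The same scaffold extends to the other classifier families the study compares by swapping the step-2 estimator.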

High-Throughput Phenotyping of the Symptoms of Alzheimer Disease and Related Dementias Using Large Language Models: Cross-Sectional Study.

Cheng Y, Malekar M, He Y, Bommareddy A, Magdamo C, Singh A, Westover B, Mukerji SS, Dickson J, Das S

PubMed · Jun 3 2025
Alzheimer disease and related dementias (ADRD) are complex disorders with overlapping symptoms and pathologies. Comprehensive records of symptoms in electronic health records (EHRs) are critical for not only reaching an accurate diagnosis but also supporting ongoing research studies and clinical trials. However, these symptoms are frequently obscured within unstructured clinical notes in EHRs, making manual extraction both time-consuming and labor-intensive. We aimed to automate symptom extraction from the clinical notes of patients with ADRD using fine-tuned large language models (LLMs), compare its performance to regular expression-based symptom recognition, and validate the results using brain magnetic resonance imaging (MRI) data. We fine-tuned LLMs to extract ADRD symptoms across the following 7 domains: memory, executive function, motor, language, visuospatial, neuropsychiatric, and sleep. We assessed the algorithm's performance by calculating the area under the receiver operating characteristic curve (AUROC) for each domain. The extracted symptoms were then validated in two analyses: (1) predicting ADRD diagnosis using the counts of extracted symptoms and (2) examining the association between ADRD symptoms and MRI-derived brain volumes. Symptom extraction across the 7 domains achieved high accuracy with AUROCs ranging from 0.97 to 0.99. Using the counts of extracted symptoms to predict ADRD diagnosis yielded an AUROC of 0.83 (95% CI 0.77-0.89). Symptom associations with brain volumes revealed that a smaller hippocampal volume was linked to memory impairments (odds ratio 0.62, 95% CI 0.46-0.84; P=.006), and reduced pallidum size was associated with motor impairments (odds ratio 0.73, 95% CI 0.58-0.90; P=.04). These results highlight the accuracy and reliability of our high-throughput ADRD phenotyping algorithm. 
By enabling automated symptom extraction, our approach has the potential to assist with differential diagnosis, as well as facilitate clinical trials and research studies of dementia.
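A regular-expression baseline of the kind the LLM approach was compared against can be sketched as follows; the three domains shown and their patterns are hypothetical examples, not the study's lexicon:

```python
import re

# Illustrative patterns for a subset of the 7 symptom domains (hypothetical)
SYMPTOM_PATTERNS = {
    "memory": r"\b(memory loss|forgetful\w*|amnesia)\b",
    "motor": r"\b(tremor|gait (disturbance|instability)|bradykinesia)\b",
    "sleep": r"\b(insomnia|hypersomnia|sleep disturbance)\b",
}

def extract_symptoms(note: str) -> dict:
    """Flag which symptom domains a clinical note mentions."""
    note = note.lower()
    return {domain: bool(re.search(pattern, note))
            for domain, pattern in SYMPTOM_PATTERNS.items()}
```

Such pattern matching misses paraphrases and negations ("denies memory loss"), which is precisely the gap a fine-tuned LLM extractor is meant to close.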

MobileTurkerNeXt: investigating the detection of Bankart and SLAP lesions using magnetic resonance images.

Gurger M, Esmez O, Key S, Hafeez-Baig A, Dogan S, Tuncer T

PubMed · Jun 2 2025
The landscape of computer vision is predominantly shaped by two groundbreaking methodologies: transformers and convolutional neural networks (CNNs). In this study, we aim to introduce an innovative mobile CNN architecture designed for orthopedic imaging that efficiently identifies both Bankart and SLAP lesions. Our approach involved the collection of two distinct magnetic resonance (MR) image datasets, with the primary goal of automating the detection of Bankart and SLAP lesions. A novel mobile CNN, dubbed MobileTurkerNeXt, forms the cornerstone of this research. This newly developed model, comprising roughly 1 million trainable parameters, unfolds across four principal stages: the stem, main, downsampling, and output phases. The stem phase incorporates three convolutional layers to initiate feature extraction. In the main phase, we introduce an innovative block, drawing inspiration from ConvNeXt, EfficientNet, and ResNet architectures. The downsampling phase utilizes patchify average pooling and pixel-wise convolution to effectively reduce spatial dimensions, while the output phase is meticulously engineered to yield classification outcomes. Our experimentation with MobileTurkerNeXt spanned three comparative scenarios: Bankart versus normal, SLAP versus normal, and a tripartite comparison of Bankart, SLAP, and normal cases. The model demonstrated exemplary performance, achieving test classification accuracies exceeding 96% across these scenarios. The empirical results underscore MobileTurkerNeXt's superior classification performance in differentiating among Bankart, SLAP, and normal conditions in orthopedic imaging, highlighting the potential of our proposed mobile CNN to advance diagnostic capabilities and contribute significantly to the field of medical image analysis.
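The ~1 million parameter budget can be reasoned about with the standard convolution parameter count (k·k·c_in·c_out plus biases). The stem channel widths below are hypothetical, chosen only to illustrate the bookkeeping, not taken from the paper:

```python
def conv2d_params(k: int, c_in: int, c_out: int, bias: bool = True) -> int:
    """Trainable parameters in a single k x k 2D convolution layer."""
    return k * k * c_in * c_out + (c_out if bias else 0)

# Hypothetical stem: three 3x3 convolutions widening 3 -> 32 -> 64 -> 96 channels
stem_params = (conv2d_params(3, 3, 32)
               + conv2d_params(3, 32, 64)
               + conv2d_params(3, 64, 96))

# A pixel-wise (1x1) convolution, as used in the downsampling phase
pointwise_params = conv2d_params(1, 96, 192)
```

Pixel-wise convolutions are cheap relative to spatial ones, which is one reason mobile architectures lean on them to stay within a small parameter budget.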

Attention-enhanced residual U-Net: lymph node segmentation method with bimodal MRI images.

Qiu J, Chen C, Li M, Hong J, Dong B, Xu S, Lin Y

PubMed · Jun 2 2025
In medical images, lymph nodes (LNs) have fuzzy boundaries, diverse shapes and sizes, and structures similar to surrounding tissues. To automatically segment uterine LNs from sagittal magnetic resonance imaging (MRI) scans, we combined T2-weighted imaging (T2WI) and diffusion-weighted imaging (DWI) images and tested the final results in our proposed model. This study used a data set of 158 MRI images of patients with FIGO-staged, pathologically confirmed LNs. To improve the robustness of the model, data augmentation was applied to expand the data set. The training data were manually annotated by two experienced radiologists. The DWI and T2 images were fused and input into U-Net. The efficient channel attention (ECA) module was added to U-Net. A residual network was added to the encoding-decoding stage, named Efficient residual U-Net (ERU-Net), to obtain the final segmentation results and calculate the mean intersection-over-union (mIoU). The experimental results demonstrated that the ERU-Net network showed strong segmentation performance, which was significantly better than other segmentation networks. The mIoU reached 0.83, and the average pixel accuracy was 0.91. In addition, the precision was 0.90, and the corresponding recall was 0.91. In this study, ERU-Net successfully achieved the segmentation of LN in uterine MRI images. Compared with other segmentation networks, our network has the best segmentation effect on uterine LN. This provides a valuable reference for doctors to develop more effective and efficient treatment plans.
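The reported metrics (mIoU 0.83, average pixel accuracy 0.91) follow standard definitions, which for a binary LN mask can be computed as:

```python
import numpy as np

def binary_miou(pred: np.ndarray, target: np.ndarray) -> float:
    """Mean intersection-over-union across background (0) and LN (1)."""
    ious = []
    for cls in (0, 1):
        p, t = pred == cls, target == cls
        inter = np.logical_and(p, t).sum()
        union = np.logical_or(p, t).sum()
        ious.append(inter / union if union else 1.0)  # empty class counts as perfect
    return float(np.mean(ious))

def pixel_accuracy(pred: np.ndarray, target: np.ndarray) -> float:
    """Fraction of pixels whose predicted class matches the annotation."""
    return float((pred == target).mean())
```

Averaging IoU over both classes keeps the score honest when the LN occupies only a small fraction of the image, where raw pixel accuracy alone would look deceptively high.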

Current trends in glioma tumor segmentation: A survey of deep learning modules.

Shoushtari FK, Elahi R, Valizadeh G, Moodi F, Salari HM, Rad HS

PubMed · Jun 2 2025
Multiparametric Magnetic Resonance Imaging (mpMRI) is the gold standard for diagnosing brain tumors, especially gliomas, which are difficult to segment due to their heterogeneity and varied sub-regions. While manual segmentation is time-consuming and error-prone, Deep Learning (DL) automates the process with greater accuracy and speed. We conducted ablation studies on surveyed articles to evaluate the impact of "add-on" modules (addressing challenges like spatial information loss, class imbalance, and overfitting) on glioma segmentation performance. Advanced modules, such as atrous (dilated) convolutions, inception, attention, transformer, and hybrid modules, significantly enhance segmentation accuracy, efficiency, multiscale feature extraction, and boundary delineation, while lightweight modules reduce computational complexity. Experiments on the Brain Tumor Segmentation (BraTS) dataset (comprising low- and high-grade gliomas) confirm their robustness, with top-performing models achieving high Dice scores for tumor sub-regions. This survey underscores the need for optimal module selection and placement to balance speed, accuracy, and interpretability in glioma segmentation. Future work should focus on improving model interpretability, lowering computational costs, and boosting generalizability. Tools like NeuroQuant® and Raidionics demonstrate potential for clinical translation. Further refinement could enable regulatory approval, advancing precision in brain tumor diagnosis and treatment planning.
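As an illustration of one such add-on module, an atrous (dilated) convolution spaces its kernel taps apart to enlarge the receptive field without adding parameters. A minimal 1D numpy sketch (real segmentation networks apply the same idea in 2D/3D):

```python
import numpy as np

def dilated_conv1d(x: np.ndarray, kernel: np.ndarray, dilation: int = 2) -> np.ndarray:
    """'Atrous' convolution: taps spaced `dilation` apart widen the
    receptive field from k to (k-1)*dilation + 1 input samples."""
    k = len(kernel)
    span = (k - 1) * dilation + 1          # effective receptive field
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * x[i + j * dilation] for j in range(k))
    return out
```

With dilation 2, a 3-tap kernel covers 5 input samples, which is how stacked dilated layers capture multiscale context cheaply.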

Referenceless 4D Flow Cardiovascular Magnetic Resonance with deep learning.

Trenti C, Ylipää E, Ebbers T, Carlhäll CJ, Engvall J, Dyverfeldt P

PubMed · Jun 2 2025
Despite its potential to improve the assessment of cardiovascular diseases, 4D Flow CMR is hampered by long scan times. 4D Flow CMR is conventionally acquired with three motion encodings and one reference encoding, as the 3-dimensional velocity data are obtained by subtracting the phase of the reference from the phase of the motion encodings. In this study, we aim to use deep learning to predict the reference encoding from the three motion encodings for cardiovascular 4D Flow. A U-Net was trained with adversarial learning (U-Net<sub>ADV</sub>) and with a velocity frequency-weighted loss function (U-Net<sub>VEL</sub>) to predict the reference encoding from the three motion encodings obtained with a non-symmetric velocity-encoding scheme. Whole-heart 4D Flow datasets from 126 patients with different types of cardiomyopathies were retrospectively included. The models were trained on 113 patients with a 5-fold cross-validation, and tested on 13 patients. Flow volumes in the aorta and pulmonary artery, mean and maximum velocity, and total and maximum turbulent kinetic energy at peak systole in the cardiac chambers and main vessels were assessed. The 3-dimensional velocity data reconstructed with the reference encoding predicted by deep learning agreed well with the velocities obtained with the reference encoding acquired at the scanner for both models. U-Net<sub>ADV</sub> performed more consistently throughout the cardiac cycle and across the test subjects, while U-Net<sub>VEL</sub> performed better for systolic velocities. Overall, the largest error for flow volumes and maximum and mean velocities was -6.03% for maximum velocities in the right ventricle for U-Net<sub>ADV</sub>, and -6.92% for mean velocities in the right ventricle for U-Net<sub>VEL</sub>.
For total turbulent kinetic energy, the largest errors were in the left ventricle (-77.17%) for U-Net<sub>ADV</sub> and in the right ventricle (24.96%) for U-Net<sub>VEL</sub>, while the largest errors for maximum turbulent kinetic energy were in the pulmonary artery for both models (-15.5% for U-Net<sub>ADV</sub> and 15.38% for U-Net<sub>VEL</sub>). Deep learning-enabled referenceless 4D Flow CMR permits quantification of velocities and flow volumes comparable to conventional 4D Flow. Omitting the reference encoding reduces the amount of acquired data by 25%, thus allowing shorter scan times or improved resolution, which is valuable for utilization in the clinical routine.
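Velocity reconstruction from phase differences follows the standard phase-contrast relation, in which a phase difference of π maps to the velocity-encoding limit (VENC); the study's contribution is supplying the reference phase from a U-Net prediction rather than a fourth acquisition. A minimal sketch of the subtraction step:

```python
import numpy as np

def velocity_from_phases(phi_motion, phi_ref, venc):
    """Phase-contrast velocity: v = VENC * (phi_motion - phi_ref) / pi,
    with the phase difference wrapped into (-pi, pi]."""
    dphi = np.angle(np.exp(1j * (phi_motion - phi_ref)))  # wrap the difference
    return venc * dphi / np.pi
```

With the reference phase predicted instead of acquired, only three of the four encodings are measured, which is the 25% data reduction cited above.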
