
Exploring factors driving the evolution of chronic lesions in multiple sclerosis using machine learning.

Hu H, Ye L, Wu P, Shi Z, Chen G, Li Y

pubmed logopapers · Jun 17 2025
The study aimed to identify factors influencing the evolution of chronic lesions in multiple sclerosis (MS) using a machine learning approach. Longitudinal data were collected from individuals with relapsing-remitting multiple sclerosis (RRMS). The "iron rim" sign was identified using quantitative susceptibility mapping (QSM), and microstructural damage was quantified via T1/fluid-attenuated inversion recovery (FLAIR) ratios. Additional data included baseline lesion volume, cerebral T2-hyperintense lesion volume, iron rim lesion volume, the proportion of iron rim lesion volume, gender, age, disease duration (DD), disability and cognitive scores, use of disease-modifying therapy, and follow-up intervals. These features were integrated into machine learning models (logistic regression (LR), random forest (RF), and support vector machine (SVM)) to predict lesion volume change, and the most predictive model was selected for feature importance analysis. The study included 47 RRMS individuals (mean age, 30.6 ± 8.0 years [standard deviation]; 6 males) and 833 chronic lesions. The SVM model demonstrated the best predictive performance, with an AUC of 0.90 in the training set and 0.81 in the testing set. Feature importance analysis identified the top three features as the "iron rim" sign of lesions, DD, and the T1/FLAIR ratios of the lesions. This study developed a machine learning model to predict the volume outcome of chronic MS lesions; chronic inflammation around the lesion, DD, and microstructural damage emerged as key factors influencing volume change.
Question: The evolution of different chronic lesions in MS is variable, and the factors driving these outcomes remain to be investigated.
Findings: An SVM model integrating lesion characteristics, lesion burden, and clinical data was developed to predict chronic MS lesion volume changes.
Clinical relevance: Chronic inflammation surrounding lesions, DD, and microstructural damage are key factors influencing the evolution of chronic MS lesions.
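As a rough illustration of the workflow this abstract describes, the sketch below fits the three candidate classifiers on a synthetic stand-in for the lesion feature table, selects the best by test-set AUC, and ranks features by permutation importance. The feature names echo the abstract; the data, splits, and hyperparameters are placeholder assumptions, not the study's.

```python
# Minimal sketch: compare LR / RF / SVM on lesion features, then rank features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["iron_rim_sign", "t1_flair_ratio", "disease_duration",
                 "baseline_lesion_vol", "t2_lesion_vol", "age", "disability_score"]
X = rng.normal(size=(833, len(feature_names)))   # 833 chronic lesions (placeholder data)
y = rng.integers(0, 2, size=833)                 # 1 = lesion volume change (placeholder labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "LR":  make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "RF":  RandomForestClassifier(n_estimators=300, random_state=0),
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True)),
}
aucs = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    aucs[name] = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])

best = max(aucs, key=aucs.get)  # the study reports SVM as the most predictive model
imp = permutation_importance(models[best], X_te, y_te,
                             scoring="roc_auc", n_repeats=20, random_state=0)
for i in imp.importances_mean.argsort()[::-1]:
    print(f"{feature_names[i]}: {imp.importances_mean[i]:.3f}")
```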

Application of Convolutional Neural Network Denoising to Improve Cone Beam CT Myelographic Images.

Madhavan AA, Zhou Z, Thorne J, Kodet ML, Cutsforth-Gregory JK, Schievink WI, Mark IT, Schueler BA, Yu L

pubmed logopapers · Jun 17 2025
Cone beam CT is an imaging modality that provides high-resolution, cross-sectional imaging in the fluoroscopy suite. In neuroradiology, cone beam CT has been used for various applications, including temporal bone imaging and spinal and cerebral angiography. Furthermore, cone beam CT has been shown to improve imaging of spinal CSF leaks during myelography. One drawback of cone beam CT is that images have a relatively high noise level. In this technical report, we describe the first application of a high-resolution convolutional neural network to denoise cone beam CT myelographic images. We show examples of the resulting improvement in image quality for a variety of types of spinal CSF leaks. Further application of this technique is warranted to demonstrate its clinical utility and potential use for other cone beam CT applications.
ABBREVIATIONS: CBCT = cone beam CT; CB-CTM = cone beam CT myelography; CTA = CT angiography; CVF = CSF-venous fistula; DSM = digital subtraction myelography; EID = energy integrating detector; FBP = filtered back-projection; SNR = signal-to-noise ratio.
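For readers unfamiliar with CNN denoising, below is a minimal sketch of a residual ("noise-predicting") denoiser of the general kind applied in such work; the depth, width, and training details are illustrative assumptions, not the authors' network.

```python
# Minimal sketch of a residual CNN denoiser for a single-channel CT slice.
import torch
import torch.nn as nn

class DenoiseCNN(nn.Module):
    def __init__(self, channels: int = 64, depth: int = 8):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1),
                       nn.BatchNorm2d(channels), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Predict the noise and subtract it: residual learning is typically
        # easier than regressing the clean image directly.
        return x - self.body(x)

model = DenoiseCNN()
noisy_slice = torch.randn(1, 1, 256, 256)   # placeholder CBCT slice
denoised = model(noisy_slice)
print(denoised.shape)  # torch.Size([1, 1, 256, 256])
```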

Recognition and diagnosis of Alzheimer's Disease using T1-weighted magnetic resonance imaging via integrating CNN and Swin vision transformer.

Wang Y, Sheng H, Wang X

pubmed logopapers · Jun 17 2025
Alzheimer's disease (AD) is a debilitating neurological disorder that requires accurate diagnosis for the most effective therapy and care. This article presents a new vision transformer model created specifically to classify cases of Alzheimer's disease from magnetic resonance imaging data in the Alzheimer's Disease Neuroimaging Initiative dataset. Unlike models that rely solely on convolutional neural networks, the vision transformer can capture long-range relationships between distant pixels in an image. The proposed architecture achieved strong results: its accuracy reflects a capacity to detect and distinguish significant characteristics in MRI scans, enabling precise classification of Alzheimer's disease subtypes and stages. The model combines elements of convolutional neural networks and vision transformers to extract both local and global visual patterns, facilitating accurate categorization across Alzheimer's disease classifications. We use the term 'dementia in patients with Alzheimer's disease' specifically to describe individuals who have progressed to the dementia stage as a result of AD, distinguishing them from those in earlier stages of the disease. Precise categorization of Alzheimer's disease has significant therapeutic importance, as it enables timely identification, tailored treatment strategies, disease monitoring, and prognostic assessment. The reported high accuracy indicates that the proposed vision transformer model can assist healthcare providers and researchers in making well-informed and precise evaluations of individuals with Alzheimer's disease.
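The sketch below illustrates the hybrid idea in a minimal form: a convolutional stem extracts local features, whose spatial grid is flattened into tokens for a transformer encoder that models global context. A vanilla transformer encoder stands in for the paper's Swin blocks, and all sizes are illustrative assumptions.

```python
# Minimal sketch of a CNN + transformer hybrid classifier.
import torch
import torch.nn as nn

class HybridClassifier(nn.Module):
    def __init__(self, num_classes: int = 3, dim: int = 128):
        super().__init__()
        self.stem = nn.Sequential(              # local (convolutional) features
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, dim, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)  # global context
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.stem(x)                        # (B, dim, H/4, W/4)
        tokens = f.flatten(2).transpose(1, 2)   # (B, H*W/16, dim)
        tokens = self.encoder(tokens)
        return self.head(tokens.mean(dim=1))    # mean-pool tokens, then classify

logits = HybridClassifier()(torch.randn(2, 1, 64, 64))  # placeholder MRI slices
print(logits.shape)  # torch.Size([2, 3])
```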

Toward general text-guided multimodal brain MRI synthesis for diagnosis and medical image analysis.

Wang Y, Xiong H, Sun K, Bai S, Dai L, Ding Z, Liu J, Wang Q, Liu Q, Shen D

pubmed logopapers · Jun 17 2025
Multimodal brain magnetic resonance imaging (MRI) offers complementary insights into brain structure and function, thereby improving the diagnostic accuracy of neurological disorders and advancing brain-related research. However, the widespread applicability of MRI is substantially limited by restricted scanner accessibility and prolonged acquisition times. Here, we present TUMSyn, a text-guided universal MRI synthesis model capable of generating brain MRI specified by textual imaging metadata from routinely acquired scans. We ensure the reliability of TUMSyn by constructing a brain MRI database comprising 31,407 3D images across 7 MRI modalities from 13 worldwide centers and pre-training an MRI-specific text encoder to process text prompts effectively. Experiments on diverse datasets and physician assessments indicate that TUMSyn-generated images can be utilized along with acquired MRI scan(s) to facilitate large-scale MRI-based screening and diagnosis of multiple brain diseases, substantially reducing the time and cost of MRI in the healthcare system.
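The sketch below illustrates one common way text conditioning of this sort can be wired up: an embedding of the imaging-metadata prompt modulates a convolutional generator's feature maps via FiLM-style scale and shift. The components here are toy stand-ins, not TUMSyn's actual architecture.

```python
# Minimal sketch: a text embedding conditions image synthesis via FiLM.
import torch
import torch.nn as nn

class FiLMBlock(nn.Module):
    def __init__(self, channels: int, text_dim: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.to_scale_shift = nn.Linear(text_dim, 2 * channels)

    def forward(self, x, t):
        scale, shift = self.to_scale_shift(t).chunk(2, dim=-1)
        x = self.conv(x)
        # Per-channel modulation of feature maps by the text embedding.
        return torch.relu(x * (1 + scale[..., None, None]) + shift[..., None, None])

class TextGuidedSynth(nn.Module):
    def __init__(self, text_dim: int = 64, channels: int = 32):
        super().__init__()
        self.inp = nn.Conv2d(1, channels, 3, padding=1)   # acquired source modality
        self.blocks = nn.ModuleList([FiLMBlock(channels, text_dim) for _ in range(3)])
        self.out = nn.Conv2d(channels, 1, 3, padding=1)   # synthesized target modality

    def forward(self, src_img, text_emb):
        h = self.inp(src_img)
        for blk in self.blocks:
            h = blk(h, text_emb)
        return self.out(h)

src = torch.randn(1, 1, 128, 128)   # e.g., an acquired T1w slice (placeholder)
prompt = torch.randn(1, 64)         # stand-in embedding of textual imaging metadata
synth = TextGuidedSynth()(src, prompt)
print(synth.shape)  # torch.Size([1, 1, 128, 128])
```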

2nd trimester ultrasound (anomaly).

Carocha A, Vicente M, Bernardeco J, Rijo C, Cohen Á, Cruz J

pubmed logopapers · Jun 17 2025
The second-trimester ultrasound is a crucial tool in prenatal care, typically performed between 18 and 24 weeks of gestation to evaluate fetal anatomy and growth and to carry out mid-trimester screening. This article provides a comprehensive overview of best practices and guidelines for performing this examination, with a focus on detecting fetal anomalies. The ultrasound assesses key structures and evaluates fetal growth by measuring biometric parameters, which are essential for estimating fetal weight. The article also discusses the importance of placental evaluation, measurement of amniotic fluid levels, and assessment of preterm birth risk through cervical length measurement. Factors that can affect the accuracy of the scan, such as operator skill, equipment quality, and maternal conditions such as obesity, are discussed, as are the limitations of the procedure, including variability in detection rates. Despite these challenges, the second-trimester ultrasound remains a valuable screening and diagnostic tool, providing essential information for managing pregnancies, especially in high-risk cases. Future directions include improving imaging technology, integrating artificial intelligence for anomaly detection, and standardizing ultrasound protocols to enhance diagnostic accuracy and ensure consistent prenatal care.

Deep learning based colorectal cancer detection in medical images: A comprehensive analysis of datasets, methods, and future directions.

Gülmez B

pubmed logopapers · Jun 17 2025
This comprehensive review examines the current state and evolution of artificial intelligence applications in colorectal cancer detection through medical imaging from 2019 to 2025. The study presents a quantitative analysis of 110 high-quality publications and 9 publicly accessible medical image datasets used for training and validation. Various convolutional neural network architectures, including ResNet (40 implementations), VGG (18 implementations), and emerging transformer-based models (12 implementations), are systematically categorized and evaluated across classification, object detection, and segmentation tasks. The investigation covers hyperparameter optimization techniques used to enhance model performance, with particular focus on genetic algorithms and particle swarm optimization. The role of explainable AI methods in interpreting medical diagnoses is analyzed through visualization techniques such as Grad-CAM and SHAP. Technical limitations, including dataset scarcity, computational constraints, and standardization challenges, are identified through trend analysis. Research gaps in current methodologies are highlighted through comparative assessment of performance metrics across architectural implementations. Potential future research directions, including multimodal learning and federated learning, are proposed based on publication trend analysis. This review serves as a comprehensive reference for researchers in medical image analysis and clinical practitioners implementing AI-based colorectal cancer detection systems.
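Since the review highlights Grad-CAM as a core interpretability tool, here is a minimal sketch of that technique: gradients of the target class score with respect to the last convolutional feature map weight its channels into a coarse saliency heatmap. The untrained ResNet-18 below is a generic stand-in for the surveyed models.

```python
# Minimal Grad-CAM sketch using forward/backward hooks on a ResNet.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
feats, grads = {}, {}
layer = model.layer4  # last convolutional stage

layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

img = torch.randn(1, 3, 224, 224)          # placeholder image
score = model(img)[0].max()                # score of the top class
score.backward()

w = grads["a"].mean(dim=(2, 3), keepdim=True)           # per-channel weights
cam = F.relu((w * feats["a"]).sum(dim=1, keepdim=True)) # weighted channel sum
cam = F.interpolate(cam, size=img.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
print(cam.shape)  # torch.Size([1, 1, 224, 224]) saliency map in [0, 1]
```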

Next-generation machine learning model to measure the Norberg angle on canine hip radiographs increases accuracy and decreases time to completion.

Hansen GC, Yao Y, Fischetti AJ, Gonzalez A, Porter I, Todhunter RJ, Zhang Y

pubmed logopapers · Jun 16 2025
To apply machine learning (ML) to measure the Norberg angle (NA) on canine ventrodorsal hip-extended pelvic radiographs. In this observational study, an NA-AI model was trained on real and synthetic radiographs, with additional radiographs used for validation and testing. Each NA was predicted using a hybrid architecture derived from 2 ML vision models. The NAs were measured by 4 authors and the model, and all measurements were compared to each other. The time taken to correct the NAs predicted by the model was compared to that of unassisted human measurements. The NA-AI model was trained on 733 real and 1,474 synthetic radiographs; 105 real radiographs were used for validation and 128 for testing. The mean absolute error between human measurements ranged from 3° to 10° (SD, 3° to 10°), with an intraclass correlation between humans of 0.38 to 0.92. The mean absolute error between the NA-AI model's predictions and the human measurements was 5° to 6° (SD, 5°; intraclass correlation, 0.39 to 0.94). Bland-Altman plots showed good agreement between human and AI measurements when the NAs were greater than 80°. The time taken to check the accuracy of the NA measurements was reduced by 45% to 80% compared to unassisted measurements. The NA-AI model proved more accurate than the original model except in cases of severe hip dysplasia, and its assistance decreased the time needed to analyze radiographs. The assistance of the NA-AI model reduces the time taken for radiographic hip analysis in clinical applications; however, it is less reliable in cases involving severe osteoarthritic change, which require manual review.
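For context, the Norberg angle itself is a simple geometric quantity once the landmarks are known: at each femoral head center, the angle between the line joining the two head centers and the line to the ipsilateral cranial acetabular rim (roughly 105° or more is generally considered normal). A minimal sketch with hypothetical landmark coordinates:

```python
# Minimal sketch of Norberg angle computation from 2D landmarks.
import numpy as np

def norberg_angles(head_l, head_r, rim_l, rim_r):
    """Return (left, right) Norberg angles in degrees."""
    def angle(center, other_head, rim):
        v1 = np.asarray(other_head) - np.asarray(center)   # inter-head axis
        v2 = np.asarray(rim) - np.asarray(center)          # center-to-rim ray
        cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
    return angle(head_l, head_r, rim_l), angle(head_r, head_l, rim_r)

# Hypothetical pixel coordinates on a ventrodorsal hip-extended radiograph.
left, right = norberg_angles(head_l=(310, 400), head_r=(690, 400),
                             rim_l=(250, 260), rim_r=(755, 265))
print(f"Norberg angles: L={left:.1f} deg, R={right:.1f} deg")
```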

Integration of MRI radiomics and germline genetics to predict the IDH mutation status of gliomas.

Nakase T, Henderson GA, Barba T, Bareja R, Guerra G, Zhao Q, Francis SS, Gevaert O, Kachuri L

pubmed logopapers · Jun 16 2025
The molecular profiling of gliomas for isocitrate dehydrogenase (IDH) mutations currently relies on resected tumor samples, highlighting the need for non-invasive, preoperative biomarkers. We investigated the integration of glioma polygenic risk scores (PRS) and radiographic features for prediction of IDH mutation status. We used 256 radiomic features, a glioma PRS, and demographic information from 158 glioma cases in elastic net and neural network models. The integration of the glioma PRS with radiomics increased the area under the receiver operating characteristic curve (AUC) for distinguishing IDH-wildtype vs. IDH-mutant glioma from 0.83 to 0.88 (P_ΔAUC = 6.9 × 10⁻⁵) in the elastic net model and from 0.91 to 0.92 (P_ΔAUC = 0.32) in the neural network model. Incorporating age at diagnosis and sex further improved the classifiers (elastic net: AUC = 0.93; neural network: AUC = 0.93). Patients predicted to have IDH-mutant vs. IDH-wildtype tumors had significantly lower mortality risk (hazard ratio (HR) = 0.18, 95% CI: 0.08-0.40, P = 2.1 × 10⁻⁵), comparable to prognostic trajectories for biopsy-confirmed IDH status. Augmenting imaging-based classifiers with genetic risk profiles may help delineate molecular subtypes and improve the timely, non-invasive clinical assessment of glioma patients.
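A rough sketch of the integrative modeling step described above: radiomic features are concatenated with the polygenic risk score and demographics, and the AUCs of the nested models are compared. The data, split, and regularization settings below are placeholder assumptions, not the study's.

```python
# Minimal sketch: elastic net classifier on radiomics, then radiomics + PRS (+ age/sex).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 158
radiomics = rng.normal(size=(n, 256))            # 256 radiomic features (placeholder)
prs = rng.normal(size=(n, 1))                    # glioma polygenic risk score (placeholder)
age_sex = np.column_stack([rng.normal(55, 15, n), rng.integers(0, 2, n)])
y = rng.integers(0, 2, n)                        # 1 = IDH-mutant (placeholder labels)

def auc_for(X):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
    clf = make_pipeline(
        StandardScaler(),
        LogisticRegression(penalty="elasticnet", solver="saga",
                           l1_ratio=0.5, C=1.0, max_iter=5000),
    )
    clf.fit(X_tr, y_tr)
    return roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])

print("radiomics only:    ", auc_for(radiomics))
print("+ PRS:             ", auc_for(np.hstack([radiomics, prs])))
print("+ PRS + age + sex: ", auc_for(np.hstack([radiomics, prs, age_sex])))
```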

Whole-lesion-aware network based on freehand ultrasound video for breast cancer assessment: a prospective multicenter study.

Han J, Gao Y, Huo L, Wang D, Xie X, Zhang R, Xiao M, Zhang N, Lei M, Wu Q, Ma L, Sun C, Wang X, Liu L, Cheng S, Tang B, Wang L, Zhu Q, Wang Y

pubmed logopapers · Jun 16 2025
The clinical application of artificial intelligence (AI) models based on static breast ultrasound images has been hindered in real-world workflows by operator-dependent image acquisition and the incomplete view of breast lesions in static images. To better exploit the real-time advantages of ultrasound and facilitate clinical application, we proposed a whole-lesion-aware network based on freehand ultrasound video scanned in an arbitrary direction (WAUVE) for predicting an overall breast cancer risk score. WAUVE was developed using 2912 videos (2912 lesions) from 2771 patients retrospectively collected from May 2020 to August 2022 at two hospitals. We compared the diagnostic performance of WAUVE with static 2D-ResNet50 and dynamic TimeSformer models in the internal validation set. A dataset comprising 190 videos (190 lesions) from 175 patients, prospectively collected from December 2022 to April 2023 at two other hospitals, was then used as an independent external validation set, on which four experienced radiologists conducted a reader study. We compared the diagnostic performance of WAUVE with that of the four radiologists and evaluated the model's auxiliary value for them. WAUVE outperformed the 2D-ResNet50 model and performed similarly to the TimeSformer model. In the external validation set, WAUVE achieved an area under the receiver operating characteristic curve (AUC) of 0.8998 (95% CI = 0.8529-0.9439) and showed diagnostic performance comparable to that of the four experienced radiologists in terms of sensitivity (97.39% vs. 98.48%, p = 0.36), specificity (49.33% vs. 50.00%, p = 0.92), and accuracy (78.42% vs. 79.34%, p = 0.60). With the assistance of the WAUVE model, the average specificity of the four radiologists improved by 6.67%, and higher consistency was achieved (from 0.807 to 0.838). WAUVE, based on non-standardized ultrasound scanning, demonstrated excellent performance in breast cancer assessment, yielding outcomes similar to those of experienced radiologists and indicating promise for clinical application.
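The sketch below illustrates one way a whole-lesion-aware video model can be organized: a 2D CNN embeds each frame of the freehand sweep, and attention pooling weights the frames so that views covering the lesion dominate the final risk score. The backbone and sizes are illustrative stand-ins, not the WAUVE architecture.

```python
# Minimal sketch: per-frame CNN features + attention pooling over a sweep.
import torch
import torch.nn as nn

class SweepScorer(nn.Module):
    def __init__(self, dim: int = 64):
        super().__init__()
        self.frame_enc = nn.Sequential(          # per-frame 2D features
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, dim, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.attn = nn.Linear(dim, 1)            # frame-importance scores
        self.head = nn.Linear(dim, 1)            # overall cancer risk score

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        b, t = video.shape[:2]
        f = self.frame_enc(video.flatten(0, 1)).view(b, t, -1)  # (B, T, dim)
        w = torch.softmax(self.attn(f), dim=1)                  # (B, T, 1)
        lesion_repr = (w * f).sum(dim=1)                        # attention pooling
        return torch.sigmoid(self.head(lesion_repr)).squeeze(-1)

sweep = torch.randn(2, 30, 1, 128, 128)  # 2 clips x 30 frames (placeholder sweeps)
print(SweepScorer()(sweep))              # per-clip risk scores in (0, 1)
```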