
Allaw S, Khabaz K, Given TC, Montas D, Alcazar-Felix RJ, Srinath A, Kass-Hout T, Carroll TJ, Hurley MC, Polster SP

PubMed paper · Jun 3, 2025
Traditional guidance for intracranial aneurysm (IA) management is dichotomized by rupture status. Fundamental to the management of a ruptured aneurysm is the detection and treatment of subarachnoid hemorrhage (SAH), along with securing the aneurysm by the safest technique. Unruptured aneurysms, on the other hand, first require a careful assessment of their natural history versus treatment risk, including an imaging assessment of aneurysm size, location, and morphology, along with additional evidence-based risk factors such as smoking, hypertension, and family history. Unfortunately, a large proportion of ruptured aneurysms fall in the lower-risk size category (<7 mm), putting a premium on discovering a more refined noninvasive biomarker to detect and stratify aneurysm instability before rupture. In this review of aneurysm work-up, we cover the gamut of established imaging modalities (eg, CT, CTA, DSA, FLAIR, 3D TOF-MRA, contrast-enhanced MRA) as well as more novel MR techniques (MR vessel wall imaging, dynamic contrast-enhanced MRI, computational fluid dynamics). Additionally, we evaluate the current landscape of artificial intelligence software and its integration into diagnostic and risk-stratification pipelines for IAs. These advanced MR techniques, increasingly complemented by artificial intelligence models, offer a paradigm shift by evaluating factors beyond size and morphology, including vessel wall inflammation, permeability, and hemodynamics. We also provide our institution's scan parameters for many of these modalities as a reference. Ultimately, this review provides an organized, up-to-date summary of the array of available modalities/sequences for IA imaging to help build protocols focused on IA characterization.

Chang S, Benson JC, Lane JI, Bruesewitz MR, Swicklik JR, Thorne JE, Koons EK, Carlson ML, McCollough CH, Leng S

PubMed paper · Jun 3, 2025
Ultra-high-resolution (UHR) photon-counting-detector (PCD) CT improves image resolution but increases noise, necessitating smoother reconstruction kernels that reduce resolution below the 0.125-mm maximum spatial resolution. To address this issue, a denoising convolutional neural network (CNN) was developed to reduce noise in images reconstructed with the sharpest available reconstruction kernel while preserving resolution for enhanced temporal bone visualization. With institutional review board approval, the CNN was trained on 6 patient cases of clinical temporal bone imaging (1885 images) and tested on 20 independent cases using a dual-source PCD-CT (NAEOTOM Alpha). Images were reconstructed using quantum iterative reconstruction at strength 3 (QIR3) with both a clinical routine kernel (Hr84) and the sharpest available head kernel (Hr96). The CNN was applied to images reconstructed with the Hr96 kernel at QIR strength 1 (QIR1). For each case, three series of images (Hr84-QIR3, Hr96-QIR3, and Hr96-CNN) were randomized for review by 2 neuroradiologists, who assessed overall quality and delineated the modiolus, stapes footplate, and incudomallear joint. The CNN reduced noise by 80% compared with Hr96-QIR3 and by 50% relative to Hr84-QIR3, while maintaining high resolution. Compared with the conventional method at the same kernel (Hr96-QIR3), Hr96-CNN significantly decreased image noise (from 204.63 to 47.35 HU) and improved the structural similarity index (from 0.72 to 0.99). Hr96-CNN images ranked higher than Hr84-QIR3 and Hr96-QIR3 in overall quality (P < .001). Readers preferred Hr96-CNN for all 3 structures. The proposed CNN significantly reduced image noise in UHR PCD-CT, enabling the use of the sharpest kernel. This combination greatly enhanced diagnostic image quality and anatomic visualization.
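The abstract does not specify the network design; the sketch below is a minimal DnCNN-style residual denoiser in PyTorch, illustrating the general approach of learning the noise component from paired sharp-kernel reconstructions. The depth, width, loss choice, and stand-in tensors are all assumptions, not the authors' implementation.

```python
# A minimal sketch of a DnCNN-style residual denoising network, assuming
# the CNN operates on 2D CT slices; architecture and training details
# are not given in the abstract.
import torch
import torch.nn as nn

class DenoisingCNN(nn.Module):
    def __init__(self, channels: int = 1, depth: int = 8, width: int = 64):
        super().__init__()
        layers = [nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1),
                       nn.BatchNorm2d(width),
                       nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(width, channels, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        # Predict the noise component and subtract it (residual learning),
        # which helps preserve the sharp-kernel resolution of the input.
        return x - self.body(x)

# Training pairs: noisy sharp-kernel slices as input, lower-noise targets;
# L1 loss is a common choice for CT denoising (an assumption here).
model = DenoisingCNN()
noisy = torch.randn(4, 1, 256, 256)   # stand-in for Hr96-QIR1 patches
target = torch.randn(4, 1, 256, 256)  # stand-in for low-noise targets
loss = nn.functional.l1_loss(model(noisy), target)
loss.backward()
```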

Negin Baghbanzadeh, Sajad Ashkezari, Elham Dolatabadi, Arash Afkanpour

arXiv preprint · Jun 3, 2025
Compound figures, which are multi-panel composites containing diverse subfigures, are ubiquitous in biomedical literature, yet large-scale subfigure extraction remains largely unaddressed. Prior work on subfigure extraction has been limited in both dataset size and generalizability, leaving a critical open question: how does high-fidelity image-text alignment via large-scale subfigure extraction impact representation learning in vision-language models? We address this gap by introducing a scalable subfigure extraction pipeline based on transformer-based object detection, trained on a synthetic corpus of 500,000 compound figures, that achieves state-of-the-art performance on both ImageCLEF 2016 and synthetic benchmarks. Using this pipeline, we release OPEN-PMC-18M, a large-scale, high-quality biomedical vision-language dataset comprising 18 million clinically relevant subfigure-caption pairs spanning radiology, microscopy, and visible light photography. We train and evaluate vision-language models on our curated datasets and show improved performance across retrieval, zero-shot classification, and robustness benchmarks, outperforming existing baselines. We release our dataset, models, and code to support reproducible benchmarks and further study of biomedical vision-language modeling and representation learning.
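As an illustration of the extraction step, the sketch below crops detected panels from a compound figure given detector output. The box format, score threshold, and function name are hypothetical; the paper's actual pipeline and caption-pairing logic are not reproduced here.

```python
# A minimal sketch of the subfigure-cropping step, assuming the trained
# detector returns pixel-space bounding boxes with confidence scores.
from PIL import Image

def extract_subfigures(compound_path: str, boxes, scores, threshold: float = 0.7):
    """Crop detected panels from a compound figure.

    boxes  : iterable of (x0, y0, x1, y1) pixel coordinates
    scores : detector confidence per box
    """
    figure = Image.open(compound_path).convert("RGB")
    panels = []
    for (x0, y0, x1, y1), score in zip(boxes, scores):
        if score < threshold:
            continue  # discard low-confidence detections
        panels.append(figure.crop((int(x0), int(y0), int(x1), int(y1))))
    return panels
```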

Pierrick Coupé, Boris Mansencal, Floréal Morandat, Sergio Morell-Ortega, Nicolas Villain, Jose V. Manjón, Vincent Planche

arXiv preprint · Jun 3, 2025
INTRODUCTION: Quantification of amyloid plaques (A), neurofibrillary tangles (T2), and neurodegeneration (N) using PET and MRI is critical for Alzheimer's disease (AD) diagnosis and prognosis. Existing pipelines face limitations regarding processing time, variability in tracer types, and challenges in multimodal integration. METHODS: We developed petBrain, a novel end-to-end processing pipeline for amyloid-PET, tau-PET, and structural MRI. It leverages deep learning-based segmentation, standardized biomarker quantification (Centiloid, CenTauR, HAVAs), and simultaneous estimation of A, T2, and N biomarkers. The pipeline is implemented as a web-based platform, requiring no local computational infrastructure or specialized software knowledge. RESULTS: petBrain provides reliable and rapid biomarker quantification, with results comparable to existing pipelines for A and T2. It shows strong concordance with data processed in ADNI databases. The staging and quantification of A/T2/N by petBrain demonstrated good agreement with CSF/plasma biomarkers, clinical status, and cognitive performance. DISCUSSION: petBrain represents a powerful and openly accessible platform for standardized AD biomarker analysis, facilitating applications in clinical research.
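Centiloid scaling itself is a simple linear mapping anchored at 0 for young controls and 100 for a typical AD amyloid load (the standard definition from the Centiloid project); a minimal sketch follows, with hypothetical anchor SUVR values, since petBrain's calibration constants are not given in the abstract.

```python
# A sketch of the Centiloid scaling used for amyloid-PET harmonization:
# 0 is anchored to young controls and 100 to a typical AD amyloid load.
# The anchor SUVR values below are placeholders, not petBrain's values.
def centiloid(suvr: float, suvr_young_control: float, suvr_ad100: float) -> float:
    """Map a tracer SUVR onto the Centiloid scale."""
    return 100.0 * (suvr - suvr_young_control) / (suvr_ad100 - suvr_young_control)

# Example with hypothetical anchor points for a PiB-like tracer:
print(centiloid(suvr=1.60, suvr_young_control=1.05, suvr_ad100=2.05))  # 55.0
```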

Cerrone D, Riccobelli D, Gazzoni S, Vitullo P, Ballarin F, Falco J, Acerbi F, Manzoni A, Zunino P, Ciarletta P

PubMed paper · Jun 3, 2025
Glioblastoma (GBL) is among the most aggressive brain tumors in adults, characterized by patient-specific invasion patterns driven by the underlying brain microstructure. In this work, we present a proof-of-concept mathematical model of GBL growth, enabling real-time prediction and patient-specific parameter identification from longitudinal neuroimaging data. The framework exploits a diffuse-interface mathematical model to describe tumor evolution and a reduced-order modeling strategy, relying on proper orthogonal decomposition, trained on synthetic data derived from patient-specific brain anatomies reconstructed from magnetic resonance imaging and diffusion tensor imaging. A neural network surrogate learns the inverse mapping from tumor evolution to model parameters, achieving significant computational speed-up while preserving high accuracy. To ensure robustness and interpretability, we perform both global and local sensitivity analyses, identifying the key biophysical parameters governing tumor dynamics and assessing the stability of the inverse problem solution. These results establish a methodological foundation for the future clinical deployment of patient-specific digital twins in neuro-oncology.
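A reduced basis via proper orthogonal decomposition is typically computed from a truncated SVD of a snapshot matrix; the sketch below shows that step on a stand-in matrix, with an arbitrary energy cutoff. The snapshot generation and the neural-network inverse map are not reproduced here.

```python
# A minimal sketch of the proper orthogonal decomposition (POD) step:
# simulated tumor-field snapshots are stacked column-wise and a truncated
# SVD yields the reduced basis.
import numpy as np

rng = np.random.default_rng(0)
n_dof, n_snapshots = 5000, 200
S = rng.standard_normal((n_dof, n_snapshots))  # stand-in snapshot matrix

U, sigma, _ = np.linalg.svd(S, full_matrices=False)

# Keep enough modes to capture, e.g., 99% of the snapshot energy.
energy = np.cumsum(sigma**2) / np.sum(sigma**2)
r = int(np.searchsorted(energy, 0.99)) + 1
basis = U[:, :r]                      # reduced basis (n_dof x r)

# A full-order state is then approximated by its r POD coefficients:
coeffs = basis.T @ S[:, 0]            # project one snapshot
reconstruction = basis @ coeffs       # lift back to full order
```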

Alyanak B, Çakar İ, Dede BT, Yıldızgören MT, Bağcıer F

PubMed paper · Jun 3, 2025
This study aims to evaluate the reliability of plantar fascia thickness measurements performed by ChatGPT-4 using magnetic resonance imaging (MRI) compared to those obtained by an experienced clinician. In this retrospective, single-center study, foot MRI images from the hospital archive were analysed. Plantar fascia thickness was measured under both blinded and non-blinded conditions by an experienced clinician and ChatGPT-4 at two separate time points. Measurement reliability was assessed using the intraclass correlation coefficient (ICC), mean absolute error (MAE), and mean relative error (MRE). A total of 41 participants (32 females, 9 males) were included. The average plantar fascia thickness measured by the clinician was 4.20 ± 0.80 mm and 4.25 ± 0.92 mm under blinded and non-blinded conditions, respectively, while ChatGPT-4's measurements were 6.47 ± 1.30 mm and 6.46 ± 1.31 mm, respectively. Human evaluators demonstrated excellent agreement (ICC = 0.983-0.989), whereas ChatGPT-4 exhibited low reliability (ICC = 0.391-0.432). In thin plantar fascia cases, ChatGPT-4's error rate was higher, with MAE = 2.70 mm and MRE = 77.17% under blinded conditions, and MAE = 2.91 mm and MRE = 87.02% under non-blinded conditions. ChatGPT-4 demonstrated lower reliability in plantar fascia thickness measurements compared to an experienced clinician, with increased error rates in thin structures. These findings highlight the limitations of AI-based models in medical image analysis and emphasize the need for further refinement before clinical implementation.
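For reference, the agreement metrics reported here can be computed as below. The ICC form (two-way random effects, absolute agreement, single measurement, i.e., ICC(2,1)) is an assumption, since the abstract does not state which variant was used, and the example values are hypothetical.

```python
# A sketch of the agreement metrics: MAE, MRE, and ICC(2,1)
# (two-way random, single-rater, absolute agreement; Shrout & Fleiss).
import numpy as np

def mae_mre(reference: np.ndarray, measured: np.ndarray):
    err = np.abs(measured - reference)
    return err.mean(), 100.0 * (err / reference).mean()  # MAE (mm), MRE (%)

def icc_2_1(ratings: np.ndarray) -> float:
    """ratings: (n_subjects, k_raters) matrix of measurements."""
    n, k = ratings.shape
    grand = ratings.mean()
    ms_r = k * ((ratings.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # rows
    ms_c = n * ((ratings.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # raters
    sse = ((ratings - ratings.mean(axis=1, keepdims=True)
            - ratings.mean(axis=0, keepdims=True) + grand) ** 2).sum()
    ms_e = sse / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Hypothetical clinician vs. model thickness measurements in mm:
clinician = np.array([3.8, 4.1, 4.6, 5.0])
model = np.array([6.1, 6.4, 6.8, 7.2])
print(mae_mre(clinician, model))
print(icc_2_1(np.column_stack([clinician, model])))
```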

Xia X, Qiu J, Tan Q, Du W, Gou Q

PubMed paper · Jun 3, 2025
To develop and evaluate radiomics-based models using contrast-enhanced T1-weighted imaging (CE-T1WI) for the non-invasive differentiation of primary central nervous system lymphoma (PCNSL) and solitary brain metastasis (SBM), aiming to improve diagnostic accuracy and support clinical decision-making. This retrospective study included a cohort of 324 patients pathologically diagnosed with PCNSL (n=115) or SBM (n=209) between January 2014 and December 2024. Tumor regions were manually segmented on CE-T1WI, and a comprehensive set of 1561 radiomic features was extracted. The most important features were identified with a two-step feature-selection approach based on least absolute shrinkage and selection operator (LASSO) regression. Multiple machine learning classifiers were trained and validated to assess diagnostic performance. Model performance was evaluated using area under the curve (AUC), accuracy, sensitivity, and specificity. The effectiveness of the radiomics-based models was further assessed using decision curve analysis, which incorporated a risk threshold of 0.5 to balance false positives and false negatives. Twenty-three features were identified through LASSO regression. All classifiers demonstrated robust performance in terms of AUC and accuracy, with 15 of 20 classifiers achieving AUC values exceeding 0.9. In the 10-fold cross-validation, the artificial neural network (ANN) classifier achieved the highest AUC of 0.9305, followed by the support vector machine with polynomial kernel (SVMPOLY) classifier at 0.9226. Notably, the independent test revealed that the support vector machine with radial basis function (SVMRBF) classifier performed best, with an AUC of 0.9310 and the highest accuracy of 0.8780. The selected models (SVMRBF, SVMPOLY, ensemble learning with LDA [ELDA], ANN, random forest [RF], and gradient boosting with random undersampling boosting [GBRUSB]) all showed significant clinical utility, with standardized net benefits (sNBs) surpassing 0.6. These results underline the potential of radiomics-based models to reliably distinguish PCNSL from SBM. Radiomics-driven models based on CE-T1WI have demonstrated encouraging potential for accurately distinguishing between PCNSL and SBM, with the SVMRBF classifier showing the greatest diagnostic efficacy of all the classifiers tested, indicating its potential clinical utility in differential diagnosis.
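Below is a minimal scikit-learn sketch of the two-step workflow: LASSO-based feature selection followed by an RBF-kernel SVM evaluated with 10-fold cross-validated AUC. The synthetic data stands in for the 1561 CE-T1WI radiomic features, and the hyperparameters are illustrative rather than the authors' settings.

```python
# A sketch of LASSO feature selection feeding an SVMRBF classifier,
# evaluated with 10-fold cross-validated AUC on synthetic stand-in data.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=324, n_features=1561,
                           n_informative=23, random_state=0)

pipeline = make_pipeline(
    StandardScaler(),
    SelectFromModel(LassoCV(cv=5, random_state=0)),  # sparse feature selection
    SVC(kernel="rbf", probability=True),             # SVMRBF classifier
)

auc = cross_val_score(pipeline, X, y, cv=10, scoring="roc_auc")
print(f"10-fold AUC: {auc.mean():.3f} +/- {auc.std():.3f}")
```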

Yahaya BS, Osman ND, Karim NKA, Appalanaido GK, Isa IS

PubMed paper · Jun 3, 2025
Computed tomography (CT) has been widely used as an effective tool for liver imaging due to its high spatial resolution and ability to differentiate tissue densities, contributing to comprehensive image analysis. Recent advancements in artificial intelligence (AI) have promoted the role of machine learning (ML) in managing liver cancers by predicting or classifying tumours using mathematical algorithms. Deep learning (DL), a subset of ML, has expanded these capabilities through convolutional neural networks (CNNs) that analyse large volumes of data automatically. This review examines the methods, achievements, limitations, and performance outcomes of ML-based radiomics and DL models for liver malignancies on CT imaging. A systematic search for full-text articles in English on CT radiomics and DL in liver cancer analysis was conducted in the PubMed, Scopus, Science Citation Index, and Cochrane Library databases between 2020 and 2024 using the keywords machine learning, radiomics, deep learning, computed tomography, and liver cancer, along with associated MeSH terms. PRISMA guidelines were used to identify and screen studies for inclusion. A total of 49 studies were included: 17 radiomics, 24 DL, and 8 combined DL/radiomics studies. Radiomics has been used predominantly for predictive analysis, while DL has been applied extensively to automatic liver and tumour segmentation, with a recent surge in studies integrating both techniques. Despite the growing popularity of DL methods, classical radiomics models remain relevant and are often preferred when performance is similar, owing to lower computational and data requirements. Model performance keeps improving, but challenges such as data scarcity and a lack of standardised protocols persist.

Afnouch M, Bougourzi F, Gaddour O, Dornaika F, Ahmed AT

PubMed paper · Jun 3, 2025
Artificial intelligence is transforming medical imaging, particularly the analysis of bone metastases (BM), a serious complication of advanced cancers. Machine learning and deep learning techniques offer new opportunities to improve the detection, recognition, and segmentation of bone metastases. Yet challenges such as limited data, interpretability, and clinical validation remain. Following PRISMA guidelines, we reviewed artificial intelligence methods and applications for bone metastasis analysis across major imaging modalities, including CT, MRI, PET, SPECT, and bone scintigraphy. The survey covers traditional machine learning models and modern deep learning architectures such as convolutional neural networks (CNNs) and transformers. We also examined available datasets and their role in the development of artificial intelligence in this field. Artificial intelligence models have achieved strong performance across tasks and modalities, with CNN and transformer architectures performing particularly well. However, limitations persist, including data imbalance, overfitting risks, and the need for greater transparency. Clinical translation is also challenged by regulatory and validation hurdles. Artificial intelligence holds strong potential to improve BM diagnosis and streamline radiology workflows. To reach clinical maturity, future work must address data diversity, model explainability, and large-scale validation, which are critical steps toward trusted integration into routine oncology care.

Westerhoff, M., Gyftopoulos, S., Dane, B., Vega, E., Murdock, D., Lindow, N., Herter, F., Bousabarah, K., Recht, M. P., Bredella, M. A.

medRxiv preprint · Jun 3, 2025
Background: Osteoporosis is underdiagnosed and undertreated, prompting the exploration of opportunistic screening using CT and artificial intelligence (AI). Purpose: To develop a reproducible deep learning-based convolutional neural network that automatically places a 3D region of interest (ROI) in trabecular bone, to develop a correction method that normalizes attenuation across different CT protocols and scanner models, and to establish thresholds for osteoporosis in a large, diverse population. Methods: A deep learning-based method was developed to automatically quantify trabecular attenuation using a 3D ROI of the thoracic and lumbar spine on chest, abdomen, or spine CTs, adjusted for different tube voltages and scanner models. Normative values and thresholds for osteoporosis of spinal trabecular attenuation were established across a diverse population, stratified by age, sex, race, and ethnicity, using the prevalence of osteoporosis reported by the WHO. Results: 538,946 CT examinations from 283,499 patients (mean age, 65 ± 15 years; 51.2% women and 55.5% White), performed on 50 scanner models using six different tube voltages, were analyzed. Hounsfield units at 80 kVp versus 120 kVp differed by 23%, and different scanner models resulted in value differences of <10%. Automated ROI placement in 1496 vertebrae was validated by manual radiologist review, demonstrating >99% agreement. Mean trabecular attenuation was higher in young women (<50 years) than young men (p<.001) and decreased with age, with a steeper decline in postmenopausal women. In patients older than 50 years, trabecular attenuation was higher in males than females (p<.001). Trabecular attenuation was highest in Blacks, followed by Asians, and lowest in Whites (p<.001). The threshold for L1 in diagnosing osteoporosis was 80 HU. Conclusion: Deep learning-based automated opportunistic osteoporosis screening can identify patients with low bone mineral density who undergo CT scans for clinical purposes on different scanners and protocols.
Key Results:
- In a study of 538,946 CT examinations performed in 283,499 patients using different scanner models and imaging protocols, an automated deep learning-based convolutional neural network was able to accurately place three-dimensional regions of interest within thoracic and lumbar vertebrae to measure trabecular attenuation.
- Tube voltage had a larger influence on attenuation values (23%) than scanner model (<10%).
- A threshold of 80 HU was identified for L1 to diagnose osteoporosis using an automated three-dimensional region of interest.
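The decision step of such a screening pipeline reduces to correcting the measured attenuation to a reference protocol and applying the reported 80-HU L1 threshold. The sketch below illustrates this with hypothetical tube-voltage correction factors; the study derived its corrections empirically across voltages and scanner models.

```python
# A sketch of the opportunistic-screening decision step: normalize the
# measured trabecular attenuation to a 120-kVp reference and compare it
# against the reported L1 threshold of 80 HU. The correction factors are
# hypothetical placeholders, not the study's empirical values.
KVP_CORRECTION = {80: 1.0 / 1.23, 100: 1.0 / 1.10, 120: 1.0, 140: 1.05}
L1_OSTEOPOROSIS_THRESHOLD_HU = 80.0  # threshold reported in the study

def flag_osteoporosis(mean_hu_l1: float, tube_voltage_kvp: int) -> bool:
    """Return True if corrected L1 trabecular attenuation falls below threshold."""
    corrected = mean_hu_l1 * KVP_CORRECTION[tube_voltage_kvp]
    return corrected < L1_OSTEOPOROSIS_THRESHOLD_HU

print(flag_osteoporosis(mean_hu_l1=95.0, tube_voltage_kvp=80))  # ~77 HU -> True
```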