Brain Fractal Dimension and Machine Learning can predict first-episode psychosis and risk for transition to psychosis.

Hu Y, Frisman M, Andreou C, Avram M, Riecher-Rössler A, Borgwardt S, Barth E, Korda A

PubMed · May 26, 2025
Although there are notable structural brain abnormalities associated with psychotic disorders, it remains unclear how these abnormalities relate to clinical presentation. The fractal dimension (FD), which quantifies the complexity and irregularity of brain microstructure, may be a promising feature, as demonstrated in neuropsychiatric disorders such as Parkinson's and Alzheimer's disease, and may offer a biomarker for the detection and prognosis of psychosis when paired with machine learning. The purpose of this study is to investigate FD as a structural magnetic resonance imaging (sMRI) feature in individuals at clinical high risk who did not transition to psychosis (CHR_NT), individuals at clinical high risk who transitioned to psychosis (CHR_T), patients with first-episode psychosis (FEP), and healthy controls (HC). Using a machine learning approach that classifies sMRI images, the goals are (a) to evaluate FD as a potential biomarker and (b) to investigate its ability to predict a subsequent transition to psychosis from the clinical high-risk state. We obtained sMRI images from 194 subjects, including 44 HCs, 77 FEPs, 16 CHR_Ts, and 57 CHR_NTs. We extracted FD features and analyzed them using machine learning under six classification schemas: (a) FEP vs. HC, (b) FEP vs. CHR_NT, (c) FEP vs. CHR_T, (d) CHR_NT vs. CHR_T, (e) CHR_NT vs. HC, and (f) CHR_T vs. HC. In addition, the CHR_T group was used as an external validation set in comparisons (a), (b), and (e) to examine whether the progression of the disorder followed the FEP or CHR_NT pattern. The proposed algorithm achieved a balanced accuracy greater than 0.77. This study shows that FD can function as a predictive neuroimaging marker, providing new insight into the microstructural alterations that arise over the course of psychosis. The effectiveness of FD in detecting psychosis and transition to psychosis should be confirmed by further research using larger datasets.
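
As a rough illustration of the FD feature underlying this work, the sketch below estimates a box-counting fractal dimension from a 2D binary brain mask in Python; the input array, scale range, and the box-counting variant itself are illustrative assumptions, not the study's actual FD pipeline.

```python
# Illustrative sketch only: a box-counting estimate of fractal dimension (FD)
# on a 2D binary mask. The mask and scale range are assumptions for demonstration.
import numpy as np

def box_counting_fd(mask: np.ndarray, scales=(2, 4, 8, 16, 32)) -> float:
    """Estimate FD as the slope of log(box count) vs. log(1/box size)."""
    counts = []
    for s in scales:
        # Trim so the image divides evenly into s x s boxes
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        trimmed = mask[:h, :w]
        # Count boxes containing at least one foreground voxel
        boxes = trimmed.reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(scales)), np.log(counts), 1)
    return slope

# Toy example: a filled disc has FD close to 2, a thin outline closer to 1
yy, xx = np.mgrid[:256, :256]
disc = ((xx - 128) ** 2 + (yy - 128) ** 2) < 80 ** 2
print(round(box_counting_fd(disc), 2))
```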

Beyond Accuracy: Evaluating certainty of AI models for brain tumour detection.

Nisa ZU, Bhatti SM, Jaffar A, Mazhar T, Shahzad T, Ghadi YY, Almogren A, Hamam H

PubMed · May 26, 2025
Brain tumors pose a severe health risk, often leading to fatal outcomes if not detected early. While most studies focus on improving classification accuracy, this research emphasizes prediction certainty, quantified through loss values. Traditional metrics such as accuracy and precision do not capture confidence in predictions, which is critical for medical applications. This study establishes a correlation between lower loss values and higher prediction certainty, supporting more reliable tumor classification. We evaluate a CNN, ResNet50, XceptionNet, and a proposed model (VGG19 with customized classification layers) using accuracy, precision, recall, and loss. Results show that while accuracy remains comparable across models, the proposed model achieves the best performance (96.95% accuracy, 0.087 loss), outperforming the others in both precision and recall. These findings demonstrate that certainty-aware AI models are essential for reliable clinical decision-making. This study highlights the potential of AI to help bridge the shortage of medical professionals by integrating reliable diagnostic tools into healthcare. AI-powered systems can enhance early detection and improve patient outcomes, reinforcing the need for certainty-driven AI adoption in medical imaging.
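
The toy example below illustrates the core argument that loss captures certainty where accuracy cannot: two hypothetical classifiers agree on every label, yet their cross-entropy losses differ. The probability vectors are invented for demonstration and are not outputs of the evaluated models.

```python
# Illustrative sketch only: why loss reflects prediction certainty while accuracy does not.
import numpy as np

def cross_entropy(probs: np.ndarray, labels: np.ndarray) -> float:
    """Mean categorical cross-entropy over a batch."""
    return float(-np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12)))

labels = np.array([0, 1])                              # true classes for two samples
confident = np.array([[0.95, 0.05], [0.10, 0.90]])     # correct and sure
hesitant = np.array([[0.55, 0.45], [0.45, 0.55]])      # correct but barely

# Both hypothetical models reach 100% accuracy on these samples...
accuracy = lambda p: float(np.mean(p.argmax(axis=1) == labels))
print(accuracy(confident), accuracy(hesitant))         # 1.0 1.0
# ...but the loss separates the certain model from the uncertain one.
print(cross_entropy(confident, labels), cross_entropy(hesitant, labels))
```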

Machine-learning model based on computed tomography body composition analysis for the estimation of resting energy expenditure: A pilot study.

Palmas F, Ciudin A, Melian J, Guerra R, Zabalegui A, Cárdenas G, Mucarzel F, Rodriguez A, Roson N, Burgos R, Hernández C, Simó R

PubMed · May 26, 2025
The assessment of resting energy expenditure (REE) is challenging with currently available methods. The reference method, indirect calorimetry (IC), is not widely available, and surrogates such as predictive equations and bioimpedance analysis (BIA) show poor agreement with IC. Body composition (BC), in particular muscle mass, plays an important role in REE. In recent years, computed tomography (CT) has emerged as a reliable tool for BC assessment, but its usefulness for REE estimation has not been examined. In the present study we explored the usefulness of CT-scan imaging for estimating REE using machine-learning models. Single-centre observational cross-sectional pilot study from January to June 2022, including 90 fasting, clinically stable adults (≥18 years) with no contraindications for IC, BIA, or abdominal CT-scan. REE was assessed using classical predictive equations, IC, BIA, and skeletal CT-scan. The proposed model was based on a second-order linear regression with different input parameters, and its output is the estimated REE. The model was trained and tested using a one-vs-all cross-validation strategy including subjects with different characteristics. Data from 90 subjects were included in the final analysis. Bland-Altman plots showed that the CT-based estimation model had a mean bias of 0 kcal/day (limits of agreement, LoA: -508.4 to 508.4) compared with IC, indicating better agreement than most predictive equations and similar agreement to BIA (bias 53.4 kcal/day, LoA: -475.7 to 582.4). Surprisingly, gender and BMI, two of the main variables included in BIA algorithms and predictive equations, were not relevant variables for REE calculated by machine learning coupled to the skeletal CT scan. These findings were consistent with other performance metrics, including mean absolute error (MAE), root mean square error (RMSE), and Lin's concordance correlation coefficient (CCC), which also favored the CT-based method over conventional equations. Our results suggest that analysis of a CT-scan image with a machine-learning model is a reliable tool for REE estimation. These findings have the potential to significantly change the paradigm and guidelines for nutritional assessment.
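
A minimal sketch of the modelling strategy described here (a second-order regression evaluated with a one-vs-all, i.e. leave-one-subject-out, cross-validation loop) is shown below; the predictor names and simulated data are assumptions, since the study's CT-derived inputs are not reproduced.

```python
# Illustrative sketch only: quadratic regression with leave-one-out cross-validation,
# followed by Bland-Altman-style bias and limits of agreement.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
n = 90                                              # pilot-study sample size
# Hypothetical predictors, e.g. CT muscle area, visceral fat area, age
X = rng.normal(size=(n, 3))
ree = 1500 + 120 * X[:, 0] - 40 * X[:, 1] + rng.normal(scale=80, size=n)

model = make_pipeline(StandardScaler(),
                      PolynomialFeatures(degree=2, include_bias=False),
                      LinearRegression())

# One-vs-all: each subject is predicted by a model trained on all the others
pred = cross_val_predict(model, X, ree, cv=LeaveOneOut())
bias = np.mean(pred - ree)
loa = 1.96 * np.std(pred - ree)                     # Bland-Altman limits of agreement
print(f"bias {bias:.1f} kcal/day, LoA ±{loa:.1f}")
```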

Diffusion based multi-domain neuroimaging harmonization method with preservation of anatomical details.

Lan H, Varghese BA, Sheikh-Bahaei N, Sepehrband F, Toga AW, Choupan J

PubMed · May 26, 2025
In multi-center neuroimaging studies, technical variability caused by batch differences can hinder the ability to aggregate data across sites and negatively impact the reliability of study-level results. Recent efforts in neuroimaging harmonization have aimed to minimize these technical gaps and reduce technical variability across batches. While Generative Adversarial Networks (GANs) have been a prominent method for harmonization tasks, GAN-harmonized images can suffer from artifacts or anatomical distortions. Given the advances in denoising diffusion probabilistic models, which produce high-fidelity images, we assessed the efficacy of the diffusion model for neuroimaging harmonization. Whereas GAN-based methods intrinsically transform imaging styles between two domains per model, we demonstrate the diffusion model's superior capability to harmonize images across multiple domains with a single model. Our experiments highlight that the learned domain-invariant anatomical condition drives the model to accurately preserve anatomical details while disentangling batch differences at each diffusion step. The proposed method was tested on T1-weighted MRI images from two public neuroimaging datasets, ADNI1 and ABIDE II, yielding harmonization results with consistent anatomy preservation and a superior Fréchet Inception Distance (FID) score compared with GAN-based methods. We conducted multiple analyses, including extensive quantitative and qualitative evaluations against baseline models, an ablation study showcasing the benefits of the learned domain-invariant conditions, and demonstrations of improved consistency in perivascular-space segmentation and volumetric analyses after harmonization.
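
For reference, the FID score used above to benchmark harmonization quality can be computed from feature embeddings as sketched below; the random arrays stand in for backbone features of harmonized and reference images, and this is not the paper's evaluation code.

```python
# Illustrative sketch only: Fréchet Inception Distance from pre-extracted features.
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_a: np.ndarray, feats_b: np.ndarray) -> float:
    """FID = ||mu_a - mu_b||^2 + Tr(C_a + C_b - 2 * (C_a C_b)^(1/2))."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):            # numerical noise can add tiny imaginary parts
        covmean = covmean.real
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))

rng = np.random.default_rng(0)
harmonized = rng.normal(size=(200, 64))     # stand-in feature embeddings
reference = rng.normal(loc=0.1, size=(200, 64))
print(round(fid(harmonized, reference), 3)) # lower is better
```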

Deep learning model for malignancy prediction of TI-RADS 4 thyroid nodules with high-risk characteristics using multimodal ultrasound: A multicentre study.

Chu X, Wang T, Chen M, Li J, Wang L, Wang C, Wang H, Wong ST, Chen Y, Li H

PubMed · May 26, 2025
The automatic screening of thyroid nodules using computer-aided diagnosis holds great promise for reducing missed and misdiagnosed cases in clinical practice. However, most current research focuses on single-modality images and does not fully leverage the comprehensive information available in multimodal medical images, limiting model performance. To enhance screening accuracy, this study uses a deep learning framework that integrates high-dimensional convolutional features from B-mode ultrasound (BMUS) and strain elastography (SE) images to predict the malignancy of TI-RADS 4 thyroid nodules with high-risk features. First, nodule regions are extracted from the images and the boundary areas are expanded. Then, adaptive particle swarm optimization (APSO) and contrast limited adaptive histogram equalization (CLAHE) algorithms are applied to enhance ultrasound image contrast. Finally, deep learning techniques extract and fuse high-dimensional features from both ultrasound modalities to classify benign and malignant thyroid nodules. The proposed model achieved an AUC of 0.937 (95% CI 0.917-0.949) in the test set and 0.927 (95% CI 0.907-0.948) in the external validation set, demonstrating strong generalization ability. The model also significantly outperformed three groups of radiologists, and with the model's assistance all three radiologist groups improved their diagnostic performance. Furthermore, heatmaps generated by the model align closely with radiologists' expertise, further supporting its credibility. The results indicate that the model can assist clinical thyroid nodule diagnosis, reducing the risk of missed diagnoses and misdiagnoses, particularly for high-risk populations, and holds significant clinical value.
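
A minimal sketch of the CLAHE contrast-enhancement step is shown below, using OpenCV on a grayscale ultrasound crop; the file path, clip limit, and tile size are assumptions, and the study's APSO-tuned parameters are not reproduced.

```python
# Illustrative sketch only: CLAHE contrast enhancement of an ultrasound crop with OpenCV.
import cv2
import numpy as np

def enhance_nodule_crop(image_path: str, clip_limit: float = 2.0,
                        tile_grid: tuple = (8, 8)) -> np.ndarray:
    """Read an ultrasound crop and apply CLAHE to boost local contrast."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    return clahe.apply(gray)

# Example usage with a hypothetical BMUS crop
# enhanced = enhance_nodule_crop("nodule_bmus_crop.png")
# cv2.imwrite("nodule_bmus_crop_clahe.png", enhanced)
```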

ScanAhead: Simplifying standard plane acquisition of fetal head ultrasound.

Men Q, Zhao H, Drukker L, Papageorghiou AT, Noble JA

PubMed · May 26, 2025
The fetal standard plane acquisition task aims to detect an ultrasound (US) image characterized by specified anatomical landmarks and appearance for assessing fetal growth. In practice, however, variability in operator skill and fetal motion can make it challenging to acquire a satisfactory standard plane. To support operators with this task, this paper first describes an approach to automatically predict the fetal head standard plane from a video segment approaching the standard plane. A transformer-based image predictor is proposed to produce a high-quality standard plane by capturing head anatomy at diverse scales within the US video frames. Because of the visual gap between video frames and the standard plane image, the predictor is equipped with an offset adaptor that performs domain adaptation to translate off-plane structures into the anatomy that would usually appear in a standard-plane view. To enhance the anatomical detail of the predicted US image, the approach is extended with a second modality, US probe movement, which provides 3D location information. Quantitative and qualitative studies on two different head biometry planes demonstrate that the proposed US image predictor produces clinically plausible standard planes and outperforms comparative published methods. The dual-modality solution yields improved visualization with enhanced anatomical detail in the predicted US image. Clinical evaluations also demonstrate consistency between the predicted echo textures and the echo patterns expected in a typical real standard plane, indicating the method's clinical feasibility for improving the standard plane acquisition process.
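
As a loose illustration of predicting an upcoming standard plane from preceding frames, the sketch below runs a generic transformer encoder over a sequence of frame embeddings and regresses a target embedding; the layer sizes, sequence length, and the choice to work in embedding space rather than pixel space are all assumptions, not the paper's architecture.

```python
# Illustrative sketch only: a generic transformer over video-frame embeddings that
# predicts an embedding for the upcoming standard plane.
import torch
import torch.nn as nn

class PlanePredictor(nn.Module):
    def __init__(self, dim: int = 256, num_layers: int = 4, num_heads: int = 8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(dim, dim)     # regresses the standard-plane embedding

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, seq_len, dim) embeddings of frames approaching the plane
        encoded = self.encoder(frames)
        return self.head(encoded[:, -1])    # use the context at the last frame

video = torch.randn(2, 16, 256)             # 2 clips of 16 frame embeddings each
print(PlanePredictor()(video).shape)        # torch.Size([2, 256])
```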

Pulse Pressure, White Matter Hyperintensities, and Cognition: Mediating Effects Across the Adult Lifespan.

Hannan J, Newman-Norlund S, Busby N, Wilson SC, Newman-Norlund R, Rorden C, Fridriksson J, Bonilha L, Riccardi N

PubMed · May 25, 2025
To investigate whether pulse pressure or mean arterial pressure (MAP) mediates the relationship between age and white matter hyperintensity (WMH) load, and to examine the mediating effect of WMHs on cognition. Demographic information, blood pressure, current medication lists, and Montreal Cognitive Assessment (MoCA) scores for 231 stroke- and dementia-free adults were retrospectively obtained from the Aging Brain Cohort study. Total WMH load was determined from T2-FLAIR magnetic resonance scans using the TrUE-Net deep learning tool for white matter segmentation. In separate models, we used mediation analysis to assess whether pulse pressure or MAP mediates the relationship between age and total WMH load, controlling for cardiovascular confounds. We also assessed whether WMH load mediated the relationship between age and cognitive scores. Pulse pressure, but not MAP, significantly mediated the relationship between age and WMH load. WMH load partially mediated the relationship between age and MoCA score. Our results indicate that pulse pressure, but not MAP, is mechanistically associated with age-related accumulation of WMHs, independent of other cardiovascular risk factors. WMH load was a mediator of cognitive scores across the adult lifespan. Effective management of pulse pressure may be especially important for maintaining brain health and cognition.
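
A simplified version of the mediation logic (age, via pulse pressure, to WMH load), estimated with two OLS regressions and a bootstrap of the indirect effect, is sketched below; the simulated data and the absence of covariate adjustment are assumptions relative to the study's models.

```python
# Illustrative sketch only: product-of-coefficients mediation with a bootstrap CI.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 231
age = rng.uniform(20, 80, n)
pulse_pressure = 20 + 0.4 * age + rng.normal(scale=8, size=n)              # mediator
wmh_load = 0.5 + 0.03 * pulse_pressure + 0.005 * age + rng.normal(scale=0.4, size=n)

def indirect_effect(x, m, y):
    """Estimate (x -> m) * (m -> y, adjusting for x)."""
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]
    b = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit().params[2]
    return a * b

point = indirect_effect(age, pulse_pressure, wmh_load)
boot = []
for _ in range(1000):                       # bootstrap resampling of subjects
    i = rng.integers(0, n, n)
    boot.append(indirect_effect(age[i], pulse_pressure[i], wmh_load[i]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect {point:.4f}, 95% bootstrap CI [{lo:.4f}, {hi:.4f}]")
```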

Distinct brain age gradients across the adult lifespan reflect diverse neurobiological hierarchies.

Riccardi N, Teghipco A, Newman-Norlund S, Newman-Norlund R, Rangus I, Rorden C, Fridriksson J, Bonilha L

PubMed · May 25, 2025
'Brain age' is a biological clock typically used to summarize brain health with a single number, but its relationship with established gradients of cortical organization remains unclear. We address this gap by leveraging a data-driven, region-specific brain age approach in 335 neurologically intact adults, using a convolutional neural network (volBrain) to estimate regional brain ages directly from structural MRI without a predefined set of morphometric properties. Six distinct gradients of brain aging are replicated in two independent cohorts. Spatial patterns of accelerated brain aging in older adults quantitatively align with the archetypal sensorimotor-to-association axis of cortical organization. Other brain aging gradients reflect neurobiological hierarchies such as gene expression and externopyramidization. Participant-level correspondences to the brain age gradients are associated with cognitive and sensorimotor performance and explain behavioral variance more effectively than global brain age. These results suggest that regional brain age patterns reflect fundamental principles of cortical organization and behavior.
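
One simple way to obtain spatial gradients from regional brain-age estimates is a principal component analysis of a participants-by-regions matrix of brain-age gaps, sketched below; the simulated matrix and the use of plain PCA are assumptions, as the paper's data-driven decomposition may differ.

```python
# Illustrative sketch only: PCA-derived "gradients" of regional brain aging.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_participants, n_regions = 335, 120
# Hypothetical brain-age gap (predicted minus chronological age) per region
gap = rng.normal(scale=3.0, size=(n_participants, n_regions))

pca = PCA(n_components=6)                   # six gradients, as in the abstract
scores = pca.fit_transform(gap)             # participant-level gradient scores
gradients = pca.components_                 # region weights defining each gradient

print("explained variance:", np.round(pca.explained_variance_ratio_, 3))
print(scores.shape, gradients.shape)        # (335, 6) (6, 120)
```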

MobNas ensembled model for breast cancer prediction.

Shahzad T, Saqib SM, Mazhar T, Iqbal M, Almogren A, Ghadi YY, Saeed MM, Hamam H

PubMed · May 25, 2025
Breast cancer poses an immense threat to human health, creating a need for ways to diagnose the disease early, accurately, and simply. While substantial progress has been made in developing machine learning, deep learning, and transfer learning models, issues with diagnostic accuracy and minimizing diagnostic errors persist. This paper introduces MobNAS, a model that uses MobileNetV2 and NASNetLarge to classify breast ultrasound images as benign, malignant, or normal. The study employs a multi-class classification design and uses a publicly available dataset comprising 1,578 ultrasound images, including 891 benign, 421 malignant, and 266 normal cases. The MobileNetV2 branch allows the model to run on devices with less computational capability than NASNetLarge alone would require, which broadens its applicability. The proposed MobNAS model was tested on the breast cancer image dataset and achieved 97% accuracy, a Mean Absolute Error (MAE) of 0.05, and a Matthews Correlation Coefficient (MCC) of 95%. These findings show that MobNAS can enhance diagnostic accuracy and reduce existing shortcomings in breast cancer detection.
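
A hedged sketch of a MobNAS-style ensemble is shown below, concatenating pooled features from MobileNetV2 and NASNetLarge for three-class classification in Keras; the input sizes, head layers, and randomly initialised weights are assumptions, not the paper's exact architecture or training setup.

```python
# Illustrative sketch only: a dual-backbone ensemble fusing MobileNetV2 and NASNetLarge features.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import MobileNetV2, NASNetLarge

def build_mobnas(num_classes: int = 3) -> Model:
    # Each backbone receives the image resized to its own expected resolution
    in_mob = layers.Input(shape=(224, 224, 3), name="mobilenet_input")
    in_nas = layers.Input(shape=(331, 331, 3), name="nasnet_input")

    mob = MobileNetV2(include_top=False, weights=None, pooling="avg")(in_mob)
    nas = NASNetLarge(include_top=False, weights=None, pooling="avg")(in_nas)

    fused = layers.Concatenate()([mob, nas])            # fuse the two feature vectors
    fused = layers.Dense(256, activation="relu")(fused)
    fused = layers.Dropout(0.3)(fused)
    out = layers.Dense(num_classes, activation="softmax")(fused)
    return Model(inputs=[in_mob, in_nas], outputs=out)

model = build_mobnas()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary(line_length=100)
```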

MedITok: A Unified Tokenizer for Medical Image Synthesis and Interpretation

Chenglong Ma, Yuanfeng Ji, Jin Ye, Zilong Li, Chenhui Wang, Junzhi Ning, Wei Li, Lihao Liu, Qiushan Guo, Tianbin Li, Junjun He, Hongming Shan

arXiv preprint · May 25, 2025
Advanced autoregressive models have reshaped multimodal AI. However, their transformative potential in medical imaging remains largely untapped due to the absence of a unified visual tokenizer -- one capable of capturing fine-grained visual structures for faithful image reconstruction and realistic image synthesis, as well as rich semantics for accurate diagnosis and image interpretation. To this end, we present MedITok, the first unified tokenizer tailored for medical images, encoding both low-level structural details and high-level clinical semantics within a unified latent space. To balance these competing objectives, we introduce a novel two-stage training framework: a visual representation alignment stage that cold-starts the tokenizer's reconstruction learning with a visual semantic constraint, followed by a textual semantic representation alignment stage that infuses detailed clinical semantics into the latent space. Trained on a meticulously collected large-scale dataset of over 30 million medical images and 2 million image-caption pairs, MedITok achieves state-of-the-art performance on more than 30 datasets across 9 imaging modalities and 4 different tasks. By providing a unified token space for autoregressive modeling, MedITok supports a wide range of tasks in clinical diagnostics and generative healthcare applications. Model and code will be made publicly available at: https://github.com/Masaaki-75/meditok.