
Miniere HJM, Hormuth DA, Lima EABF, Farhat M, Panthi B, Langshaw H, Shanker MD, Talpur W, Thrower S, Goldman J, Ty S, Chung C, Yankeelov TE

PubMed | Jul 29, 2025
High-grade gliomas are highly invasive and respond variably to chemoradiation. Accurate, patient-specific predictions of tumor response could enhance treatment planning. We present a novel computational platform that assimilates MRI data to continually predict spatiotemporal tumor changes during chemoradiotherapy. Tumor growth and response to chemoradiation were described using a two-species reaction-diffusion model of the enhancing and non-enhancing regions of the tumor. Two evaluation scenarios were used to test the predictive accuracy of this model. In scenario 1, the model was calibrated on a patient-specific basis (n = 21) to weekly MRI data acquired during chemoradiotherapy. A data assimilation framework updated model parameters with each new imaging visit, and the updated parameters were then used to refine model predictions. In scenario 2, we evaluated the predictive accuracy of the model when fewer data points were available by calibrating the same model using only the first two imaging visits and then predicting tumor response over the remaining five weeks of treatment. We investigated three approaches to assigning model parameters in scenario 2: (1) predictions using only parameters estimated by fitting the data from an individual patient's first two imaging visits, (2) predictions made by averaging the patient-specific parameters with the cohort-derived parameters, and (3) predictions using only cohort-derived parameters. Scenario 1 achieved a median [range] concordance correlation coefficient (CCC) between the predicted and measured total tumor cell counts of 0.91 [0.84, 0.95] and a median [range] percent error in tumor volume of -2.6% [-19.7%, 8.0%], demonstrating strong agreement throughout the course of treatment. For scenario 2, the three approaches yielded CCCs of (1) 0.65 [0.51, 0.88], (2) 0.74 [0.70, 0.91], and (3) 0.76 [0.73, 0.92], with significant differences between approach (1), which does not use the cohort parameters, and approaches (2) and (3), which do. The proposed data assimilation framework enhances the accuracy of tumor growth forecasts by integrating patient-specific and cohort-based data. These findings demonstrate a practical method for identifying more personalized treatment strategies in high-grade glioma patients.
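
For readers who want a concrete picture of this model class, below is a minimal sketch of one explicit time step of a two-species reaction-diffusion system and of the CCC metric used for evaluation. The diffusion and proliferation parameters, the enhancing-to-non-enhancing conversion term, and all names are illustrative assumptions, not the authors' calibrated model.

```python
import numpy as np
from scipy.ndimage import laplace

def rd_step(E, N, dt=0.1, D_e=0.05, D_n=0.02, k_e=0.10, k_n=0.05, gamma=0.01):
    """One explicit Euler step; E/N are 2D maps of enhancing/non-enhancing cell fraction."""
    room = 1.0 - (E + N)                                  # shared carrying capacity of 1
    dE = D_e * laplace(E) + k_e * E * room - gamma * E    # gamma: assumed E -> N conversion
    dN = D_n * laplace(N) + k_n * N * room + gamma * E
    return E + dt * dE, N + dt * dN

def ccc(x, y):
    """Concordance correlation coefficient between predicted and measured values."""
    mx, my, vx, vy = x.mean(), y.mean(), x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)
```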

Murugesan GK, McCrumb D, Soni R, Kumar J, Nuernberg L, Pei L, Wagner U, Granger S, Fedorov AY, Moore S, Van Oss J

PubMed | Jul 29, 2025
The Artificial Intelligence in Medical Imaging (AIMI) initiative aims to enhance the National Cancer Institute's (NCI) Image Data Commons (IDC) by releasing fully reproducible nnU-Net models, along with AI-assisted segmentation for cancer radiology images. In this extension of our earlier work, we created high-quality, AI-annotated imaging datasets for 11 IDC collections, spanning computed tomography (CT) and magnetic resonance imaging (MRI) of the lungs, breast, brain, kidneys, prostate, and liver. Each nnU-Net model was trained on open-source datasets, and a portion of the AI-generated annotations was reviewed and corrected by board-certified radiologists. Both the AI and radiologist annotations were encoded in compliance with the Digital Imaging and Communications in Medicine (DICOM) standard, ensuring seamless integration into the IDC collections. By making these models, images, and annotations publicly accessible, we aim to facilitate further research and development in cancer imaging.
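
As a hedged illustration of the DICOM-based pipeline such annotations plug into, the sketch below loads a CT series with pydicom and converts it to Hounsfield units, a common prelude to nnU-Net-style inference. The directory layout, file extension, and sorting key are assumptions, not the initiative's actual code.

```python
import glob
import numpy as np
import pydicom

def load_ct_series(series_dir):
    """Read one CT series into a z-ordered 3D volume in Hounsfield units."""
    slices = [pydicom.dcmread(f) for f in glob.glob(f"{series_dir}/*.dcm")]
    # Sort by z-position so slice order matches patient geometry
    slices.sort(key=lambda ds: float(ds.ImagePositionPatient[2]))
    volume = np.stack([ds.pixel_array.astype(np.float32) for ds in slices])
    # Convert stored pixel values to Hounsfield units
    slope = float(slices[0].RescaleSlope)
    intercept = float(slices[0].RescaleIntercept)
    return volume * slope + intercept
```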

Meng Q, Ren P, Guo L, Gao P, Liu T, Chen W, Liu W, Peng H, Fang M, Meng S, Ge H, Li M, Chen X

PubMed | Jul 29, 2025
Deep learning (DL) demonstrates high sensitivity but low specificity in lung cancer (LC) detection during CT screening. The seven tumor-associated antigen autoantibodies (7-TAAbs) panel, known for its high specificity in LC, was therefore employed to improve the specificity of DL and the efficiency of LC screening in China. The aim was to develop and evaluate a risk model combining the 7-TAAbs test and DL scores for diagnosing LC in pulmonary lesions < 70 mm. Four hundred and six patients with 406 lesions were enrolled and randomly assigned to a training set (n = 313) and a test set (n = 93). Malignant lesions were defined as those rated high-risk by DL or those with positive expression of the 7-TAAbs panel. Model performance was assessed using the area under the receiver operating characteristic curve (AUC). In the training set, the AUCs for DL, 7-TAAbs, the combined model (DL and 7-TAAbs), and the combined model (DL or 7-TAAbs) were 0.771, 0.638, 0.606, and 0.809, respectively. In the test set, the combined model (DL or 7-TAAbs) achieved the highest sensitivity (82.6%), NPV (81.8%), and accuracy (79.6%) among the four models, and the AUCs of the DL model, 7-TAAbs model, combined model (DL and 7-TAAbs), and combined model (DL or 7-TAAbs) were 0.731, 0.679, 0.574, and 0.794, respectively. The 7-TAAbs test significantly enhances DL performance in predicting LC in pulmonary lesions < 70 mm in China.
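
A toy sketch of the "DL or 7-TAAbs" combination rule and its AUC is shown below; the decision threshold, the synthetic data, and all variable names are assumptions made purely for illustration, not the study's code or data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def combined_or_score(dl_score, taab_positive, dl_threshold=0.5):
    """Call a lesion malignant if either the DL model or the 7-TAAbs panel is positive."""
    dl_positive = dl_score >= dl_threshold
    return np.logical_or(dl_positive, taab_positive).astype(float)

# Toy example: a sensitive-but-unspecific DL score plus a specific binary test
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)                               # ground-truth malignancy
dl = np.clip(y * 0.3 + rng.normal(0.4, 0.2, 200), 0, 1)   # continuous DL score
taab = (rng.random(200) < 0.2 * y + 0.05).astype(int)     # specific, less sensitive
print("AUC (OR rule):", roc_auc_score(y, combined_or_score(dl, taab)))
```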

Sun B, Liu C, Wang Q, Bi K, Zhang W

PubMed | Jul 29, 2025
The advancement of deep learning has driven extensive research validating the effectiveness of U-Net-style symmetric encoder-decoder architectures based on Transformers for medical image segmentation. However, the inherent design requiring attention mechanisms to compute token affinities across all spatial locations leads to prohibitive computational complexity and substantial memory demands. Recent efforts have attempted to address these limitations through sparse attention mechanisms, yet existing approaches employing artificial, content-agnostic sparse attention patterns demonstrate limited capability in modeling long-range dependencies effectively. We propose MFFBi-Unet, a novel architecture incorporating dynamic sparse attention through bi-level routing, enabling context-aware computation allocation with enhanced adaptability. The encoder-decoder module integrates BiFormer to optimize semantic feature extraction and facilitate high-fidelity feature map reconstruction. A novel Multi-scale Feature Fusion (MFF) module in the skip connections synergistically combines multi-level contextual information with processed multi-scale features. Extensive evaluations on multiple public medical benchmarks show that our method consistently outperforms competing approaches. Notably, it achieves statistically significant improvements over state-of-the-art methods such as MISSFormer, by 2.02% and 1.28% in Dice score on the respective benchmarks.
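
A minimal PyTorch sketch of a multi-scale fusion block in the spirit of the MFF module is given below; the dilated-convolution branches, channel counts, and residual fusion-by-concatenation design are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MultiScaleFusion(nn.Module):
    """Fuse parallel branches with different receptive fields via a 1x1 conv."""
    def __init__(self, channels):
        super().__init__()
        # Same-resolution branches; dilation varies the receptive field
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in (1, 2, 4)
        ])
        self.fuse = nn.Conv2d(3 * channels, channels, 1)  # 1x1 fusion conv

    def forward(self, x):
        feats = [torch.relu(b(x)) for b in self.branches]
        return self.fuse(torch.cat(feats, dim=1)) + x     # residual connection

skip = torch.randn(2, 64, 56, 56)          # a skip-connection feature map
print(MultiScaleFusion(64)(skip).shape)    # torch.Size([2, 64, 56, 56])
```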

Kim H, Park S, Seo SW, Na DL, Jang H, Kim JP, Kim HJ, Kang SH, Kwak K

PubMed | Jul 29, 2025
Physiological brain aging is associated with cognitive impairment and neuroanatomical changes. Brain age prediction from routine clinical 2D brain MRI scans has been understudied and often unsuccessful. We developed a novel brain age prediction framework for clinical 2D T1-weighted MRI scans using a deep learning-based model trained on research-grade 3D MRI scans, mostly from publicly available datasets (N = 8681; age = 51.76 ± 21.74). Our model showed accurate and fast brain age prediction on clinical 2D MRI scans from cognitively unimpaired (CU) subjects (N = 175), with a mean absolute error (MAE) of 2.73 years after age-bias correction (Pearson's r = 0.918). The brain age gap of Alzheimer's disease (AD) subjects was significantly greater than that of CU subjects (p < 0.001), and an increased brain age gap was associated with disease progression in both AD (p < 0.05) and Parkinson's disease (p < 0.01). Our framework can be extended to other MRI modalities and potentially applied to routine clinical examinations, enabling early detection of structural anomalies and improving patient outcomes.
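
The age-bias correction referenced above is commonly implemented as a linear regression of predicted age on chronological age in a held-out CU set, with the fitted bias then removed; the sketch below shows that standard form, though the paper's exact procedure may differ.

```python
import numpy as np

def fit_bias(chron_age, pred_age):
    """Fit the linear bias of predicted age vs. chronological age (CU set)."""
    slope, intercept = np.polyfit(chron_age, pred_age, 1)
    return slope, intercept

def corrected_brain_age_gap(chron_age, pred_age, slope, intercept):
    """Brain age gap after removing the fitted bias; positive = older-appearing brain."""
    expected = slope * chron_age + intercept
    return pred_age - expected
```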

Ahamed MKU, Hossen R, Paul BK, Hasan M, Al-Arashi WH, Kazi M, Talukder MA

PubMed | Jul 29, 2025
Alzheimer's disease is a progressive neurological disorder that profoundly affects cognitive functions and daily activities. Rapid and precise identification is essential for effective intervention and improved patient outcomes. This research introduces an innovative hybrid filtering approach with a deep transfer learning model for detecting Alzheimer's disease from brain imaging data. The hybrid filtering method integrates an Adaptive Non-Local Means filter with a sharpening filter for image preprocessing. The deep learning model is built on the EfficientNetV2B3 architecture, augmented with additional layers and fine-tuning to ensure effective classification among four categories: mild, moderate, very mild, and non-demented. The work employs Grad-CAM++ to enhance interpretability by localizing disease-relevant characteristics in brain images. The experimental assessment, performed on a publicly accessible dataset, shows that the model achieves an accuracy of 99.45%. These findings underscore the capability of sophisticated deep learning methodologies to aid clinicians in accurately identifying Alzheimer's disease.
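
An illustrative version of the hybrid preprocessing step is sketched below, approximating the adaptive non-local means filter with OpenCV's standard denoiser followed by a sharpening kernel; the filter strength and kernel are assumptions, not the paper's settings.

```python
import cv2
import numpy as np

def hybrid_filter(gray_img, h=10):
    """Denoise then sharpen a uint8 grayscale brain image."""
    denoised = cv2.fastNlMeansDenoising(gray_img, None, h, 7, 21)
    sharpen_kernel = np.array([[0, -1,  0],
                               [-1, 5, -1],
                               [0, -1,  0]], dtype=np.float32)
    return cv2.filter2D(denoised, -1, sharpen_kernel)
```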

Chen WS, Fu FX, Cai QL, Wang F, Wang XH, Hong L, Su L

PubMed | Jul 29, 2025
Assessing MGMT promoter methylation is crucial for determining appropriate glioblastoma therapy. Previous studies have focused on intratumoral regions, overlooking the peritumoral area. This study aimed to develop a radiomic model using MRI-derived features from both regions. We included 96 glioblastoma patients randomly allocated to training and testing sets. Radiomic features were extracted from intratumoral and peritumoral regions. We constructed and compared radiomic models based on intratumoral, peritumoral, and combined features. Model performance was evaluated using the area under the receiver-operating characteristic curve (AUC). The combined radiomic model achieved an AUC of 0.814 (95% CI: 0.767-0.862) in the training set and 0.808 (95% CI: 0.736-0.859) in the testing set, outperforming models based on intratumoral or peritumoral features alone. Calibration and decision curve analyses demonstrated excellent model fit and clinical utility. The radiomic model incorporating both intratumoral and peritumoral features shows promise in differentiating MGMT methylation status, potentially informing clinical treatment strategies for glioblastoma.
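
One common way to derive a peritumoral region from a tumor mask, sketched below, is a ring dilation; the ring width used here is an assumption, not the paper's setting, and radiomic features would then be extracted from each region separately.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def peritumoral_mask(tumor_mask, ring_width=5):
    """Return the shell of voxels within ring_width of the tumor but outside it."""
    tumor_mask = tumor_mask.astype(bool)
    dilated = binary_dilation(tumor_mask, iterations=ring_width)
    return dilated & ~tumor_mask
```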

A. Piffer, J. A. Buchner, A. G. Gennari, P. Grehten, S. Sirin, E. Ross, I. Ezhov, M. Rosier, J. C. Peeken, M. Piraud, B. Menze, A. Guerreiro Stücklin, A. Jakab, F. Kofler

arXiv preprint | Jul 29, 2025
Background Brain tumours are the most common solid malignancies in children, encompassing diverse histological and molecular subtypes, imaging features, and outcomes. Paediatric brain tumours (PBTs), including high- and low-grade gliomas (HGG, LGG), medulloblastomas (MB), ependymomas, and rarer forms, pose diagnostic and therapeutic challenges. Deep learning (DL)-based segmentation offers promising tools for tumour delineation, yet its performance across heterogeneous PBT subtypes and MRI protocols remains uncertain. Methods A retrospective single-centre cohort of 174 paediatric patients with HGG, LGG, MB, ependymomas, and other rarer subtypes was used. MRI sequences included T1, T1 post-contrast (T1-C), T2, and FLAIR. Manual annotations were provided for four tumour subregions: whole tumour (WT), T2-hyperintensity (T2H), enhancing tumour (ET), and cystic component (CC). A 3D nnU-Net model was trained and tested (121/53 split), with segmentation performance assessed using the Dice similarity coefficient (DSC) and compared against intra- and inter-rater variability. Results The model achieved robust performance for WT and T2H (mean DSC: 0.85), comparable to human annotator variability (mean DSC: 0.86). ET segmentation was moderately accurate (mean DSC: 0.75), while CC performance was poor. Segmentation accuracy varied by tumour type, MRI sequence combination, and location. Notably, T1, T1-C, and T2 alone produced results nearly equivalent to the full protocol. Conclusions DL is feasible for PBTs, particularly for T2H and WT. Challenges remain for ET and CC segmentation, highlighting the need for further refinement. These findings support the potential for protocol simplification and automation to enhance volumetric assessment and streamline paediatric neuro-oncology workflows.
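
For reference, the Dice similarity coefficient used to score the four subregions reduces to a few lines; masks here are boolean arrays of equal shape.

```python
import numpy as np

def dice(pred, truth, eps=1e-8):
    """Dice similarity coefficient between a predicted and a reference mask."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)
```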

Zhao Y, Xiao L, Liu H, Li Y, Ning C, Liu M

PubMed | Jul 29, 2025
The rising global prevalence of gout necessitates advancements in diagnostic methodologies. Ultrasonographic imaging of the foot has become an important diagnostic modality for gout because of its non-invasiveness, cost-effectiveness, and real-time imaging capabilities. This study aims to develop and validate a deep learning-based artificial intelligence (AI) model for automated gout diagnosis using ultrasound images. Ultrasound images were primarily acquired at the first metatarsophalangeal joint (MTP1) from 598 cases at two institutions: 520 from Institution 1 and 78 from Institution 2. From Institution 1's dataset, 66% of cases were randomly allocated for model training, while the remaining 34% constituted the internal test set. The dataset from Institution 2 served as an independent external validation cohort. A novel deep learning model integrating a patch-wise attention mechanism and multi-scale feature extraction was developed to enhance the detection of subtle sonographic features and optimize diagnostic performance. The proposed model demonstrated robust diagnostic efficacy, achieving an accuracy of 87.88%, a sensitivity of 87.85%, a specificity of 87.93%, and an area under the curve (AUC) of 93.43%. Additionally, the model generates interpretable visual heatmaps that localize gout-related pathological features, facilitating interpretation for clinical decision-making. In summary, the proposed model achieved better performance than competing models, and the features it highlights align closely with expert assessments, demonstrating its potential to assist in the ultrasound-based diagnosis of gout.
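
A hedged sketch of what patch-wise attention pooling can look like is given below; the proposed model's actual mechanism and dimensions are not specified in the abstract, so everything here, including the use of attention weights as a patch-level heatmap, is an illustrative assumption.

```python
import torch
import torch.nn as nn

class PatchAttentionPool(nn.Module):
    """Weight each image patch by a learned score, then pool by weighted sum."""
    def __init__(self, dim=256):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, 1))

    def forward(self, patch_feats):                # (batch, n_patches, dim)
        weights = torch.softmax(self.score(patch_feats), dim=1)
        pooled = (weights * patch_feats).sum(dim=1)
        return pooled, weights.squeeze(-1)         # weights act as a patch heatmap

feats = torch.randn(4, 49, 256)                    # e.g., a 7x7 grid of patch features
pooled, heat = PatchAttentionPool()(feats)
print(pooled.shape, heat.shape)                    # (4, 256) (4, 49)
```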

Soormally C, Beitone C, Troccaz J, Voros S

PubMed | Jul 29, 2025
Diagnosis of prostate cancer requires histopathology of tissue samples. Following an MRI to identify suspicious areas, a biopsy is performed under ultrasound (US) guidance. In existing assistance systems, 3D US information is generally available (acquired before the biopsy session and/or between samplings). However, without registration between 2D images and 3D volumes, the urologist must rely on cognitive navigation. This work introduces a deep learning model to track the orientation of real-time US slices relative to a reference 3D US volume using only image and volume data. The dataset comprises 515 3D US volumes collected from 51 patients during routine transperineal biopsy. To generate 2D image streams, volumes were resampled to simulate three-degree-of-freedom rotational movements around the rectal entrance. The proposed model comprises two ResNet-based sub-modules that address the symmetry ambiguity arising from complex out-of-plane movement of the probe. The first sub-module predicts the unsigned relative orientation between consecutive slices, while the second leverages a custom similarity model and a spatial context volume to determine the sign of this relative orientation. From the sub-modules' predictions, slice orientations along the navigated trajectory can then be derived in real time. Results demonstrate that the registration error remains below 2.5 mm in 92% of cases over a 5-second trajectory, and in 80% of cases over a 25-second trajectory. These findings show that accurate, sensorless 2D/3D US registration given a spatial context is achievable with limited drift over extended navigation. This highlights the potential of AI-driven biopsy assistance to increase the accuracy of freehand biopsy.
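
The final integration step, deriving absolute slice orientation from the two sub-modules' per-step outputs, can be sketched as follows; the angle convention and names are assumptions. It also makes clear why drift accumulates over longer trajectories: each step integrates a relative estimate.

```python
import numpy as np

def integrate_orientation(unsigned_deltas, signs, theta0=0.0):
    """Accumulate signed relative angles into absolute slice orientations.

    unsigned_deltas: per-step |Δθ| predictions from the first sub-module
    signs: per-step ±1 predictions from the second sub-module
    """
    thetas = [theta0]
    for d, s in zip(unsigned_deltas, signs):
        thetas.append(thetas[-1] + s * d)
    return np.array(thetas)
```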