Page 49 of 3053046 results

Use of imaging biomarkers and ambulatory functional endpoints in Duchenne muscular dystrophy clinical trials: Systematic review and machine learning-driven trend analysis.

Todd M, Kang S, Wu S, Adhin D, Yoon DY, Willcocks R, Kim S

PubMed · Jul 29 2025
Duchenne muscular dystrophy (DMD) is a rare X-linked genetic muscle disorder affecting primarily pediatric males and leading to limited life expectancy. This systematic review of 85 DMD trials and non-interventional studies (2010-2022) evaluated how magnetic resonance imaging biomarkers-particularly fat fraction and T2 relaxation time-are currently being used to quantitatively track disease progression and how their use compares to traditional mobility-based functional endpoints. Imaging biomarker studies lasted on average 4.50 years, approximately 11 months longer than those using only ambulatory functional endpoints. While 93% of biologic intervention trials (n = 28) included ambulatory functional endpoints, only 13.3% (n = 4) incorporated imaging biomarkers. Small molecule trials and natural history studies were the predominant contributors to imaging biomarker use, each comprising 30.4% of such studies. Small molecule trials used imaging biomarkers more frequently than biologic trials, likely because biologics often target dystrophin, an established surrogate biomarker, while small molecules lack regulatory-approved biomarkers. Notably, following the 2018 FDA guidance finalization, we observed a significant decrease in new trials using imaging biomarkers despite earlier regulatory encouragement. This analysis demonstrates that while imaging biomarkers are increasingly used in natural history studies, their integration into interventional trials remains limited. From XGBoost machine learning analysis, trial duration and start year were the strongest predictors of biomarker usage, with a decline observed following the 2018 FDA guidance. Despite their potential to objectively track disease progression, imaging biomarkers have not yet been widely adopted as primary endpoints in therapeutic trials, likely due to regulatory and logistical challenges. 
Future research should examine whether standardizing imaging protocols or integrating hybrid endpoint models could bridge the regulatory gap currently limiting biomarker adoption in therapeutic trials.
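The trend analysis above hinges on ranking predictors (trial start year, duration) of imaging-biomarker use. As a rough illustration of how such a ranking works, the numpy sketch below scores each feature with a depth-1 decision stump on synthetic trial data; the variables, effect sizes, and labels are all invented for illustration, and the study itself used XGBoost on real trial records rather than this simplified stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trial-level data: start year, duration (years), sponsor type code.
n = 400
start_year = rng.uniform(2010, 2022, n)
duration = rng.uniform(1, 8, n)
sponsor = rng.integers(0, 2, n).astype(float)  # pure noise feature
X = np.column_stack([start_year, duration, sponsor])
names = ["start_year", "duration", "sponsor"]

# Toy label: imaging-biomarker use declines after 2018 and rises with duration.
logit = -1.2 * (start_year > 2018) + 0.5 * (duration - 4.5)
y = (logit + rng.normal(0, 0.5, n)) > 0

def stump_accuracy(x, y):
    """Best single-threshold split accuracy for one feature (a depth-1 tree)."""
    best = 0.0
    for t in np.quantile(x, np.linspace(0.05, 0.95, 19)):
        pred = x > t
        best = max(best, (pred == y).mean(), (~pred == y).mean())
    return best

scores = {name: stump_accuracy(X[:, j], y) for j, name in enumerate(names)}
ranked = sorted(scores, key=scores.get, reverse=True)
```

A gradient-boosted model aggregates many such stumps; the single-stump score here only conveys why informative predictors separate from noise features.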

Diagnosis of Major Depressive Disorder Based on Multi-Granularity Brain Networks Fusion.

Zhou M, Mi R, Zhao A, Wen X, Niu Y, Wu X, Dong Y, Xu Y, Li Y, Xiang J

PubMed · Jul 29 2025
Major Depressive Disorder (MDD) is a common mental disorder, and an early, accurate diagnosis is crucial for effective treatment. Functional Connectivity Networks (FCNs) constructed from functional Magnetic Resonance Imaging (fMRI) have demonstrated the potential to reveal the mechanisms underlying brain abnormalities. Deep learning has been widely employed to extract features from FCNs, but existing methods typically operate directly on the network, failing to fully exploit its deeper information. Although graph coarsening techniques offer certain advantages in capturing the brain's complex structure, they may also result in the loss of critical information. To address this issue, we propose the Multi-Granularity Brain Networks Fusion (MGBNF) framework. MGBNF models brain networks through multi-granularity analysis and constructs combinatorial modules to enhance feature extraction. Finally, a Constrained Attention Pooling (CAP) mechanism achieves effective integration of multi-channel features. In the feature extraction stage, a parameter-sharing mechanism is introduced and applied across multiple channels to capture similar connectivity patterns between channels while reducing the number of parameters. We validate the effectiveness of the MGBNF model on multiple classification tasks and various brain atlases. The results demonstrate that MGBNF outperforms baseline models in classification performance, and ablation experiments further validate its effectiveness. In addition, we conducted a thorough analysis of the variability of different MDD subtypes through multiple classification tasks, and the results support further clinical applications.
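The abstract does not spell out the constraints in Constrained Attention Pooling, so the sketch below shows only the generic softmax attention-pooling idea it builds on: score each node of a brain network with a shared vector, then take the attention-weighted sum. Shapes and data are invented; the shared scoring vector loosely mirrors the paper's parameter-sharing scheme across channels.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention_pool(H, w):
    """Pool node features H (n_nodes, d) into one graph-level vector.

    Each node is scored with a shared vector w (the same w can be reused
    across channels, in the spirit of parameter sharing), then the output
    is the attention-weighted sum of node features.
    """
    scores = H @ w            # (n_nodes,)
    alpha = softmax(scores)   # attention weights, nonnegative, sum to 1
    return alpha @ H, alpha   # pooled vector (d,), weights (n_nodes,)

rng = np.random.default_rng(1)
H = rng.normal(size=(90, 16))   # e.g. 90 brain regions, 16-dim features
w = rng.normal(size=16)
g, alpha = attention_pool(H, w)
```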

Diabetes and longitudinal changes in deep learning-derived measures of vertebral bone mineral density using conventional CT: the Multi-Ethnic Study of Atherosclerosis.

Ghotbi E, Hadidchi R, Hathaway QA, Bancks MP, Bluemke DA, Barr RG, Smith BM, Post WS, Budoff M, Lima JAC, Demehri S

PubMed · Jul 29 2025
To investigate the longitudinal association between diabetes and changes in vertebral bone mineral density (BMD) derived from conventional chest CT, and to evaluate whether kidney function (estimated glomerular filtration rate (eGFR)) modifies this relationship. This longitudinal study included 1046 participants from the Multi-Ethnic Study of Atherosclerosis Lung Study with vertebral BMD measurements from chest CTs at Exam 5 (2010-2012) and Exam 6 (2016-2018). Diabetes was classified based on the American Diabetes Association criteria, and those with impaired fasting glucose (i.e., prediabetes) were excluded. Volumetric BMD was derived using a validated deep learning model to segment trabecular bone of the thoracic vertebrae. Linear mixed-effects models estimated the association between diabetes and BMD changes over time. Following a significant interaction between diabetes status and eGFR, additional stratified analyses examined the impact of kidney function (i.e., diabetic nephropathy), categorized by eGFR (≥ 60 vs. < 60 mL/min/1.73 m² body surface area). Participants with diabetes had a higher baseline vertebral BMD than those without (202 vs. 190 mg/cm<sup>3</sup>) and experienced a significant increase over a median follow-up of 6.2 years (β = 0.62 mg/cm<sup>3</sup>/year; 95% CI 0.26, 0.98). This increase was more pronounced among individuals with diabetes and reduced kidney function (β = 1.52 mg/cm<sup>3</sup>/year; 95% CI 0.66, 2.39) compared with diabetic individuals with preserved kidney function (β = 0.48 mg/cm<sup>3</sup>/year; 95% CI 0.10, 0.85). Individuals with diabetes exhibited an increase in vertebral BMD over time compared with the non-diabetic group, a difference that was more pronounced in those with diabetic nephropathy. These findings suggest that conventional BMD measurements may not fully capture the well-known fracture risk in diabetes.
Further studies incorporating bone microarchitecture using advanced imaging and fracture outcomes are needed to refine skeletal health assessments in the diabetic population.
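The linear mixed-effects model above estimates a common BMD slope over time while allowing each participant their own baseline level. A minimal numpy illustration of that idea uses the within-subject (fixed-effects) estimator: demean BMD and time per subject, then regress; the data below are synthetic and invented for illustration, not MESA data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic longitudinal data: 100 subjects, 2 visits each (baseline, ~6.2 y).
n_subj, true_slope = 100, 0.62            # mg/cm^3 per year, illustrative
subj = np.repeat(np.arange(n_subj), 2)
time = np.tile([0.0, 6.2], n_subj)
intercepts = rng.normal(200, 30, n_subj)  # subject-specific baseline BMD
bmd = intercepts[subj] + true_slope * time + rng.normal(0, 2, 2 * n_subj)

def demean_by(v, groups, k):
    """Subtract each group's mean from its observations (within-transform)."""
    means = np.bincount(groups, weights=v, minlength=k) / np.bincount(groups, minlength=k)
    return v - means[groups]

# Demeaning removes the subject-specific intercepts; the remaining
# variation identifies the common time slope.
y_w = demean_by(bmd, subj, n_subj)
t_w = demean_by(time, subj, n_subj)
slope_hat = (t_w @ y_w) / (t_w @ t_w)
```

A full mixed model (e.g. statsmodels MixedLM) would additionally model the intercepts as random draws and handle covariates, but the slope logic is the same.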

Development and validation of a cranial ultrasound imaging-based deep learning model for periventricular-intraventricular haemorrhage detection and grading: a two-centre study.

Peng Y, Hu Z, Wen M, Deng Y, Zhao D, Yu Y, Liang W, Dai X, Wang Y

PubMed · Jul 29 2025
Periventricular-intraventricular haemorrhage (IVH) is the most prevalent type of neonatal intracranial haemorrhage. It is especially threatening to preterm infants, in whom it is associated with significant morbidity and mortality. Cranial ultrasound has become an important means of screening periventricular IVH in infants. The integration of artificial intelligence with neonatal ultrasound is promising for enhancing diagnostic accuracy, reducing physician workload, and consequently improving periventricular IVH outcomes. The study investigated whether deep learning-based analysis of the cranial ultrasound images of infants could detect and grade periventricular IVH. This multicentre observational study included 1,060 cases and healthy controls from two hospitals. The retrospective modelling dataset encompassed 773 participants from January 2020 to July 2023, while the prospective two-centre validation dataset included 287 participants from August 2023 to January 2024. The periventricular IVH net model, a deep learning model incorporating the convolutional block attention module mechanism, was developed. The model's effectiveness was assessed by randomly dividing the retrospective data into training and validation sets, followed by independent validation with the prospective two-centre data. To evaluate the model, we measured its recall, precision, accuracy, F1-score, and area under the curve (AUC). The regions of interest (ROI) that influenced the detection by the deep learning model were visualised in significance maps, and the t-distributed stochastic neighbour embedding (t-SNE) algorithm was used to visualise the clustering of model detection parameters. The final retrospective dataset included 773 participants (mean (standard deviation (SD)) gestational age, 32.7 (4.69) weeks; mean (SD) weight, 1,862.60 (855.49) g). 
For the retrospective data, the model's AUC was 0.99 (95% confidence interval (CI), 0.98-0.99), precision was 0.92 (0.89-0.95), recall was 0.93 (0.89-0.95), and F1-score was 0.93 (0.90-0.95). For the prospective two-centre validation data, the model's AUC was 0.961 (95% CI, 0.94-0.98) and accuracy was 0.89 (95% CI, 0.86-0.92). The two-centre prospective validation results of the periventricular IVH net model demonstrated its tremendous potential for paediatric clinical applications. Combining artificial intelligence with paediatric ultrasound can enhance the accuracy and efficiency of periventricular IVH diagnosis, especially in primary hospitals or community hospitals.
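For readers who want the exact definitions behind the recall, precision, F1-score, and AUC figures reported above, a minimal numpy implementation follows; the AUC uses the rank (Mann-Whitney U) formulation, and ties in scores are ignored in this sketch. The tiny example arrays are invented.

```python
import numpy as np

def precision_recall_f1(y_true, y_pred):
    """Precision, recall, and F1 for binary labels in {0, 1}."""
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

def auc(y_true, scores):
    """AUC via ranks: P(score of a random positive > a random negative)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = y_true == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

y = np.array([1, 1, 1, 0, 0, 0])
s = np.array([0.9, 0.8, 0.4, 0.6, 0.2, 0.1])
p, r, f1 = precision_recall_f1(y, (s >= 0.5).astype(int))
a = auc(y, s)
```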

A hybrid M-DbneAlexnet for brain tumour detection using MRI images.

Kotti J, Chalasani V, Rajan C

PubMed · Jul 29 2025
Brain Tumour (BT) is characterised by the uncontrolled proliferation of cells within the brain, which can result in cancer. Detecting BT at an early stage significantly increases the patient's survival chances. Existing BT detection methods often struggle with high computational complexity, limited feature discrimination, and poor generalisation. To mitigate these issues, an effective brain tumour detection and segmentation method based on a hybrid network named MobileNet-Deep Batch-Normalized eLU AlexNet (M-DbneAlexnet) is developed for Magnetic Resonance Imaging (MRI). Image enhancement is performed using a Piecewise Linear Transformation (PLT) function. The BT region is segmented using Transformer Brain Tumour Segmentation (TransBTSV2), after which features are extracted. Finally, BT is detected using the M-DbneAlexnet model, devised by combining MobileNet and Deep Batch-Normalized eLU AlexNet (DbneAlexnet). Results: The proposed model achieved an accuracy of 92.68%, sensitivity of 93.02%, and specificity of 92.85%, demonstrating its effectiveness in accurately detecting brain tumours from MRI images. The model enhances training speed and performs well on limited datasets, making it effective for distinguishing between tumour and healthy tissue. Its practical utility lies in enabling early detection and diagnosis of brain tumours, which can significantly reduce mortality rates.

Radiomics meets transformers: A novel approach to tumor segmentation and classification in mammography for breast cancer.

Saadh MJ, Hussain QM, Albadr RJ, Doshi H, Rekha MM, Kundlas M, Pal A, Rizaev J, Taher WM, Alwan M, Jawad MJ, Al-Nuaimi AMA, Farhood B

PubMed · Jul 29 2025
Objective: This study aimed to develop a robust framework for breast cancer diagnosis by integrating advanced segmentation and classification approaches. Transformer-based and U-Net segmentation models were combined with radiomic feature extraction and machine learning classifiers to improve segmentation precision and classification accuracy in mammographic images. Materials and Methods: A multi-center dataset of 8000 mammograms (4200 normal, 3800 abnormal) was used. Segmentation was performed using Transformer-based and U-Net models, evaluated through Dice Coefficient (DSC), Intersection over Union (IoU), Hausdorff Distance (HD95), and Pixel-Wise Accuracy. Radiomic features were extracted from segmented masks, with Recursive Feature Elimination (RFE) and Analysis of Variance (ANOVA) employed to select significant features. Classifiers including Logistic Regression, XGBoost, CatBoost, and a Stacking Ensemble model were applied to classify tumors into benign or malignant. Classification performance was assessed using accuracy, sensitivity, F1 score, and AUC-ROC. SHAP analysis validated feature importance, and Q-value heatmaps evaluated statistical significance. Results: The Transformer-based model achieved superior segmentation results with DSC (0.94 ± 0.01 training, 0.92 ± 0.02 test), IoU (0.91 ± 0.01 training, 0.89 ± 0.02 test), HD95 (3.0 ± 0.3 mm training, 3.3 ± 0.4 mm test), and Pixel-Wise Accuracy (0.96 ± 0.01 training, 0.94 ± 0.02 test), consistently outperforming U-Net across all metrics. For classification, Transformer-segmented features with the Stacking Ensemble achieved the highest test results: 93% accuracy, 92% sensitivity, 93% F1 score, and 95% AUC. U-Net-segmented features achieved lower metrics, with the best test accuracy at 84%. SHAP analysis confirmed the importance of features like Gray-Level Non-Uniformity and Zone Entropy. Conclusion: This study demonstrates the superiority of Transformer-based segmentation integrated with radiomic feature selection and robust classification models. The framework provides a precise and interpretable solution for breast cancer diagnosis, with potential for scalability to 3D imaging and multimodal datasets.
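Of the two feature-selection steps named above (RFE and ANOVA), the ANOVA F-score is the simpler to sketch: for each feature, compare between-class to within-class variance. Below is a minimal numpy version on synthetic stand-in "radiomic" features; the informative feature index and effect size are invented.

```python
import numpy as np

def anova_f_scores(X, y):
    """One-way ANOVA F statistic per feature for a class label
    (the criterion behind sklearn-style f_classif selection)."""
    groups = [X[y == c] for c in np.unique(y)]
    n, k = len(X), len(groups)
    grand = X.mean(axis=0)
    ss_between = sum(len(g) * (g.mean(axis=0) - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean(axis=0)) ** 2).sum(axis=0) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

rng = np.random.default_rng(3)
y = rng.integers(0, 2, 300)        # benign (0) vs malignant (1), synthetic
X = rng.normal(size=(300, 5))      # five stand-in radiomic features
X[:, 2] += 1.5 * y                 # make feature 2 informative
F = anova_f_scores(X, y)
top = int(np.argmax(F))
```

Selection then keeps the top-k features by F (or by p-value); RFE instead drops features iteratively based on a fitted model's weights.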

BioAug-Net: a bioimage sensor-driven attention-augmented segmentation framework with physiological coupling for early prostate cancer detection in T2-weighted MRI.

Arshad M, Wang C, Us Sima MW, Ali Shaikh J, Karamti H, Alharthi R, Selecky J

PubMed · Jul 29 2025
Accurate segmentation of the prostate peripheral zone (PZ) in T2-weighted MRI is critical for the early detection of prostate cancer. Existing segmentation methods are hindered by significant inter-observer variability (37.4 ± 5.6%), poor boundary localization, and the presence of motion artifacts, along with challenges in clinical integration. In this study, we propose BioAug-Net, a novel framework that integrates real-time physiological signal feedback with MRI data, leveraging transformer-based attention mechanisms and a probabilistic clinical decision support system (PCDSS). BioAug-Net features a dual-branch asymmetric attention mechanism: one branch processes spatial MRI features, while the other incorporates temporal sensor signals through a BiGRU-driven adaptive masking module. Additionally, a Markov Decision Process-based PCDSS maps segmentation outputs to clinical PI-RADS scores, with uncertainty quantification. We validated BioAug-Net on a multi-institutional dataset (n=1,542) and demonstrated state-of-the-art performance, achieving a Dice Similarity Coefficient of 89.7% (p < 0.001), sensitivity of 91.2% (p < 0.001), specificity of 88.4% (p < 0.001), and HD95 of 2.14 mm (p < 0.001), outperforming U-Net, Attention U-Net, and TransUNet. Sensor integration improved segmentation accuracy by 12.6% (p < 0.001) and reduced inter-observer variation by 48.3% (p < 0.001). Radiologist evaluations (n=3) confirmed a 15.0% reduction in diagnosis time (p = 0.003) and an increase in inter-reader agreement from K = 0.68 to K = 0.82 (p = 0.001). Our results show that BioAug-Net offers a clinically viable solution for early prostate cancer detection through enhanced physiological coupling and explainable AI diagnostics.
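The Dice Similarity Coefficient and HD95 figures reported above have compact definitions, sketched below in numpy on toy binary masks. The HD95 here is a brute-force version that assumes unit pixel spacing and small masks; production implementations use distance transforms and physical voxel spacing.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hd95(a, b):
    """95th-percentile symmetric Hausdorff distance (brute force)."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    d = np.sqrt(((pa[:, None, :] - pb[None, :, :]) ** 2).sum(-1))
    d_ab = d.min(axis=1)   # each point of a to its nearest point of b
    d_ba = d.min(axis=0)
    return np.percentile(np.concatenate([d_ab, d_ba]), 95)

# Toy masks: a 12x12 square and the same square shifted 2 pixels down.
a = np.zeros((32, 32), bool); a[8:20, 8:20] = True
b = np.zeros((32, 32), bool); b[10:22, 8:20] = True
```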

Evaluation and analysis of risk factors for fractured vertebral recompression post-percutaneous kyphoplasty: a retrospective cohort study based on logistic regression analysis.

Zhao Y, Li B, Qian L, Chen X, Wang Y, Cui L, Xin Y, Liu L

PubMed · Jul 29 2025
Vertebral recompression after percutaneous kyphoplasty (PKP) for osteoporotic vertebral compression fractures (OVCFs) may lead to recurrent pain, deformity, and neurological impairment, compromising prognosis and quality of life. To identify independent risk factors for postoperative recompression and develop predictive models for risk assessment. We retrospectively analyzed 284 OVCF patients treated with PKP, grouped by recompression status. Predictors were screened using univariate and correlation analyses. Multicollinearity was assessed using variance inflation factor (VIF). A multivariable logistic regression model was constructed and validated via 10-fold cross-validation and temporal validation. Five independent predictors were identified: incomplete anterior cortex (odds ratio [OR] = 9.38), high paravertebral muscle fat infiltration (OR = 218.68), low vertebral CT value (OR = 0.87), large Cobb change (OR = 1.45), and high vertebral height recovery rate (OR = 22.64). The logistic regression model achieved strong performance: accuracy 97.67%, precision 97.06%, recall 97.06%, F1 score 97.06%, specificity 98.08%, area under the receiver operating characteristic curve (AUC) 0.998. Machine learning models (e.g., random forest) were also evaluated but did not outperform logistic regression in accuracy or interpretability. Five imaging-based predictors of vertebral recompression were identified. The logistic regression model showed excellent predictive accuracy and generalizability, supporting its clinical utility for early risk stratification and personalized decision-making in OVCF patients undergoing PKP.
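The multicollinearity screen above uses the variance inflation factor, which can be computed directly from least-squares fits: regress each predictor on the others and take VIF_j = 1 / (1 - R²_j). A minimal numpy sketch on synthetic data (the predictors are invented, not the study's clinical variables):

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of X."""
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        yj = X[:, j]
        # Regress column j on an intercept plus all other columns.
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, yj, rcond=None)
        resid = yj - others @ beta
        r2 = 1 - resid.var() / yj.var()
        out[j] = 1.0 / (1.0 - r2)
    return out

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 3))                 # independent predictors
X_collinear = np.column_stack([X, X[:, 0] + 0.05 * rng.normal(size=500)])
v_indep = vif(X)
v_coll = vif(X_collinear)
```

Independent predictors give VIFs near 1; the near-duplicate column inflates the VIFs of both copies, which is the signal used to drop or combine predictors before fitting the logistic model.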

A data assimilation framework for predicting the spatiotemporal response of high-grade gliomas to chemoradiation.

Miniere HJM, Hormuth DA, Lima EABF, Farhat M, Panthi B, Langshaw H, Shanker MD, Talpur W, Thrower S, Goldman J, Ty S, Chung C, Yankeelov TE

PubMed · Jul 29 2025
High-grade gliomas are highly invasive and respond variably to chemoradiation. Accurate, patient-specific predictions of tumor response could enhance treatment planning. We present a novel computational platform that assimilates MRI data to continually predict spatiotemporal tumor changes during chemoradiotherapy. Tumor growth and response to chemoradiation were described using a two-species reaction-diffusion model of enhancing and non-enhancing regions of the tumor. Two evaluation scenarios were used to test the predictive accuracy of this model. In scenario 1, the model was calibrated on a patient-specific basis (n = 21) to weekly MRI data during the course of chemoradiotherapy. A data assimilation framework was used to update model parameters with each new imaging visit which were then used to update model predictions. In scenario 2, we evaluated the predictive accuracy of the model when fewer data points are available by calibrating the same model using only the first two imaging visits and then predicted tumor response at the remaining five weeks of treatment. We investigated three approaches to assign model parameters for scenario 2: (1) predictions using only parameters estimated by fitting the data obtained from an individual patient's first two imaging visits, (2) predictions made by averaging the patient-specific parameters with the cohort-derived parameters, and (3) predictions using only cohort-derived parameters. Scenario 1 achieved a median [range] concordance correlation coefficient (CCC) between the predicted and measured total tumor cell counts of 0.91 [0.84, 0.95], and a median [range] percent error in tumor volume of -2.6% [-19.7, 8.0%], demonstrating strong agreement throughout the course of treatment.
For scenario 2, the three approaches yielded CCCs of: (1) 0.65 [0.51, 0.88], (2) 0.74 [0.70, 0.91], (3) 0.76 [0.73, 0.92] with significant differences between the approach (1) that does not use the cohort parameters and the two approaches (2 and 3) that do. The proposed data assimilation framework enhances the accuracy of tumor growth forecasts by integrating patient-specific and cohort-based data. These findings show a practical method for identifying more personalized treatment strategies in high-grade glioma patients.
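The abstract names a two-species reaction-diffusion model but does not give its equations, so the sketch below uses the classic single-species Fisher-KPP equation (du/dt = D ∇²u + k u (1 - u)) as a stand-in to show the forward-simulation step that data assimilation repeatedly corrects; parameters and grid are invented, not fit to patient data.

```python
import numpy as np

def fisher_kpp_step(u, D, k, dx, dt):
    """One explicit finite-difference step of du/dt = D*lap(u) + k*u*(1 - u)
    on a 1-D grid with no-flux boundaries (u = normalized cell density)."""
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    lap[0] = (u[1] - u[0]) / dx**2       # reflecting left boundary
    lap[-1] = (u[-2] - u[-1]) / dx**2    # reflecting right boundary
    return u + dt * (D * lap + k * u * (1 - u))

# Illustrative parameters; explicit stability needs dt <= dx^2 / (2*D).
D, k, dx, dt = 0.05, 0.8, 1.0, 0.1
u = np.zeros(100)
u[45:55] = 0.5                           # seed tumor in the domain center
mass0 = u.sum()
for _ in range(500):
    u = fisher_kpp_step(u, D, k, dx, dt)
```

In the assimilation loop, each new imaging visit would re-estimate (D, k) before continuing the forward simulation, which is what keeps the forecast anchored to the individual patient.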

AI generated annotations for Breast, Brain, Liver, Lungs, and Prostate cancer collections in the National Cancer Institute Imaging Data Commons.

Murugesan GK, McCrumb D, Soni R, Kumar J, Nuernberg L, Pei L, Wagner U, Granger S, Fedorov AY, Moore S, Van Oss J

PubMed · Jul 29 2025
The Artificial Intelligence in Medical Imaging (AIMI) initiative aims to enhance the National Cancer Institute's (NCI) Image Data Commons (IDC) by releasing fully reproducible nnU-Net models, along with AI-assisted segmentation for cancer radiology images. In this extension of our earlier work, we created high-quality, AI-annotated imaging datasets for 11 IDC collections, spanning computed tomography (CT) and magnetic resonance imaging (MRI) of the lungs, breast, brain, kidneys, prostate, and liver. Each nnU-Net model was trained on open-source datasets, and a portion of the AI-generated annotations was reviewed and corrected by board-certified radiologists. Both the AI and radiologist annotations were encoded in compliance with the Digital Imaging and Communications in Medicine (DICOM) standard, ensuring seamless integration into the IDC collections. By making these models, images, and annotations publicly accessible, we aim to facilitate further research and development in cancer imaging.