
AI-driven genetic algorithm-optimized lung segmentation for precision in early lung cancer diagnosis.

Said Y, Ayachi R, Afif M, Saidani T, Alanezi ST, Saidani O, Algarni AD

PubMed · Jul 2, 2025
Lung cancer remains the leading cause of cancer-related mortality worldwide, necessitating accurate and efficient diagnostic tools to improve patient outcomes. Lung segmentation plays a pivotal role in the diagnostic pipeline, directly impacting the accuracy of disease detection and treatment planning. This study presents an advanced AI-driven framework, optimized through genetic algorithms, for precise lung segmentation in early cancer diagnosis. The proposed model builds upon the UNet3+ architecture and integrates multi-scale feature extraction with enhanced optimization strategies to improve segmentation accuracy while significantly reducing computational complexity. By leveraging genetic algorithms, the framework identifies optimal neural network configurations within a defined search space, ensuring high segmentation performance with minimal parameters. Extensive experiments conducted on publicly available lung segmentation datasets demonstrated superior results, achieving a Dice similarity coefficient of 99.17% with only 26% of the parameters required by the baseline UNet3+ model. This substantial reduction in model size and computational cost makes the system highly suitable for resource-constrained environments, including point-of-care diagnostic devices. The proposed approach exemplifies the transformative potential of AI in medical imaging, enabling earlier and more precise lung cancer diagnosis while reducing healthcare disparities in resource-limited settings.
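
A minimal sketch of the genetic-algorithm idea described above, assuming a hypothetical search space over encoder-decoder hyperparameters and a toy `train_and_eval` stub; the operators and the fitness weighting that trades Dice against parameter count are illustrative, not the paper's implementation.

```python
import random

# Hypothetical configuration space; the paper's actual UNet3+ search space
# is not specified in the abstract.
SEARCH_SPACE = {
    "base_filters": [16, 24, 32, 48, 64],
    "depth": [3, 4, 5],
    "kernel_size": [3, 5],
}

def train_and_eval(genome):
    """Stub: in practice, build/train the candidate segmentation network and
    return (validation_dice, parameter_count). A toy proxy stands in here."""
    n_params = genome["base_filters"] * genome["depth"] * 1e5
    dice = 0.90 + 0.01 * genome["depth"] - 1e-9 * n_params
    return dice, n_params

def fitness(genome):
    dice, n_params = train_and_eval(genome)
    return dice - 1e-8 * n_params  # reward accuracy, penalize model size

def random_genome():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def crossover(a, b):
    return {k: random.choice([a[k], b[k]]) for k in SEARCH_SPACE}

def mutate(g, rate=0.2):
    return {k: random.choice(SEARCH_SPACE[k]) if random.random() < rate else v
            for k, v in g.items()}

def evolve(pop_size=20, generations=10, elite=4):
    pop = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=fitness, reverse=True)[:elite]
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(pop_size - elite)]
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())
```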

Urethra contours on MRI: multidisciplinary consensus educational atlas and reference standard for artificial intelligence benchmarking

Song, Y., Nguyen, L., Dornisch, A., Baxter, M. T., Barrett, T., Dale, A., Dess, R. T., Harisinghani, M., Kamran, S. C., Liss, M. A., Margolis, D. J., Weinberg, E. P., Woolen, S. A., Seibert, T. M.

medRxiv preprint · Jul 2, 2025
Introduction: The urethra is a recommended avoidance structure for prostate cancer treatment. However, even subspecialist physicians often struggle to accurately identify the urethra on available imaging. Automated segmentation tools show promise, but a lack of reliable ground truth or appropriate evaluation standards has hindered validation and clinical adoption. This study aims to establish a reference-standard dataset with expert consensus contours, define clinically meaningful evaluation metrics, and assess the performance and generalizability of a deep-learning-based segmentation model.
Materials and Methods: A multidisciplinary panel of four experienced subspecialists in prostate MRI generated consensus contours of the male urethra for 71 patients across six imaging centers. Four of those cases were previously used in an international study (PURE-MRI), wherein 62 physicians attempted to contour the prostate and urethra on the patient images. Separately, we developed a deep-learning AI model for urethra segmentation using another 151 cases from one center and evaluated it against the consensus reference standard, compared to human performance using Dice score, percent urethra coverage, and maximum 2D (axial, in-plane) Hausdorff distance (HD) from the reference standard.
Results: In the PURE-MRI dataset, the AI model outperformed most physicians, achieving a median Dice of 0.41 (vs. 0.33 for physicians), coverage of 81% (vs. 36%), and max 2D HD of 1.8 mm (vs. 1.6 mm). In the larger dataset, performance remained consistent, with a Dice of 0.40, coverage of 89%, and max 2D HD of 2.0 mm, indicating strong generalizability across a broader patient population and more varied imaging conditions.
Conclusion: We established a multidisciplinary consensus benchmark for segmentation of the urethra. The deep-learning model performs comparably to specialist physicians and demonstrates consistent results across multiple institutions. It shows promise as a clinical decision-support tool for accurate and reliable urethra segmentation in prostate cancer radiotherapy planning and studies of dose-toxicity associations.
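
For readers reimplementing the evaluation, the three reported metrics reduce to straightforward operations on binary masks. The sketch below assumes NumPy/SciPy, masks indexed as (slice, row, column), and a known in-plane pixel spacing; it is an illustrative reconstruction, not the study's released code.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(pred, ref):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(pred, ref).sum()
    return 2.0 * inter / (pred.sum() + ref.sum())

def coverage(pred, ref):
    """Percent of the reference urethra covered by the prediction."""
    return 100.0 * np.logical_and(pred, ref).sum() / ref.sum()

def max_2d_hausdorff(pred, ref, spacing=(1.0, 1.0)):
    """Maximum over axial slices of the symmetric in-plane Hausdorff
    distance, in mm given the in-plane pixel spacing."""
    worst = 0.0
    for z in range(pred.shape[0]):
        p = np.argwhere(pred[z]) * spacing
        r = np.argwhere(ref[z]) * spacing
        if len(p) == 0 or len(r) == 0:
            continue  # slice empty in one mask; skipping is one convention
        hd = max(directed_hausdorff(p, r)[0], directed_hausdorff(r, p)[0])
        worst = max(worst, hd)
    return worst
```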

Multimodal nomogram integrating deep learning radiomics and hemodynamic parameters for early prediction of post-craniotomy intracranial hypertension.

Fu Z, Wang J, Shen W, Wu Y, Zhang J, Liu Y, Wang C, Shen Y, Zhu Y, Zhang W, Lv C, Peng L

PubMed · Jul 2, 2025
To evaluate the effectiveness of a deep learning radiomics nomogram in distinguishing early intracranial hypertension (IH) following primary decompressive craniectomy (DC) in patients with severe traumatic brain injury (TBI), and to demonstrate its potential clinical value as a noninvasive tool for guiding timely intervention and improving patient outcomes. This study included 238 patients with severe TBI (training cohort: n = 166; testing cohort: n = 72). Postoperative ultrasound images of the optic nerve sheath (ONS) and spectral Doppler imaging of the middle cerebral artery (MCASDI) were obtained at 6 and 18 h after DC. Patients were grouped according to threshold values of 15 mmHg and 20 mmHg based on invasive intracranial pressure (ICPi) measurements. Clinical-semantic features were collected, radiomics features were extracted from ONS images, and deep transfer learning (DTL) features were generated using ResNet101. Predictive models were developed using the Light Gradient Boosting Machine (LightGBM) algorithm. Clinical-ultrasound variables were incorporated into the model through univariate and multivariate logistic regression. A combined nomogram was developed by integrating deep learning radiomics (DLR) features with clinical-ultrasound variables, and its diagnostic performance at the two thresholds was evaluated using receiver operating characteristic (ROC) curve analysis and decision curve analysis (DCA). The nomogram model demonstrated superior performance over the clinical model at both the 15 mmHg and 20 mmHg thresholds. For 15 mmHg, the AUC was 0.974 (95% confidence interval [CI]: 0.953-0.995) in the training cohort and 0.919 (95% CI: 0.845-0.993) in the testing cohort. For 20 mmHg, the AUC was 0.968 (95% CI: 0.944-0.993) in the training cohort and 0.889 (95% CI: 0.806-0.972) in the testing cohort. DCA curves showed net clinical benefit across all models. Among the DLR models based on ONS, MCASDI, or their pre-fusion, the ONS-based model performed best in the testing cohorts. The nomogram model, incorporating clinical-semantic features, radiomics, and DTL features, exhibited promising performance in predicting early IH in post-DC patients. It shows promise for enhancing non-invasive ICP monitoring and supporting individualized therapeutic strategies.
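
The two-stage design reads as: a LightGBM classifier distills the high-dimensional radiomics and DTL features into a single DLR score, and a logistic regression then combines that score with clinical-ultrasound variables (the model a nomogram visualizes). A hedged sketch with synthetic stand-in data, since the study's features are not public:

```python
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Synthetic stand-ins: 166 training patients, 30 radiomics/DTL features,
# 2 clinical-ultrasound variables (dimensions are assumptions).
X_dlr = rng.normal(size=(166, 30))
X_clin = rng.normal(size=(166, 2))
y = rng.integers(0, 2, size=166)  # IH above/below the ICPi threshold

# Stage 1: LightGBM compresses radiomics/DTL features into a DLR score.
dlr_model = LGBMClassifier(n_estimators=200).fit(X_dlr, y)
dlr_score = dlr_model.predict_proba(X_dlr)[:, 1]

# Stage 2: logistic regression fuses the DLR score with clinical variables;
# its coefficients are what the published nomogram renders graphically.
combined = np.column_stack([dlr_score, X_clin])
nomogram = LogisticRegression().fit(combined, y)
print("apparent AUC:", roc_auc_score(y, nomogram.predict_proba(combined)[:, 1]))
```

The printed value is an in-sample (apparent) AUC; the study's figures come from held-out testing cohorts.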

Multimodal AI to forecast arrhythmic death in hypertrophic cardiomyopathy.

Lai C, Yin M, Kholmovski EG, Popescu DM, Lu DY, Scherer E, Binka E, Zimmerman SL, Chrispin J, Hays AG, Phelan DM, Abraham MR, Trayanova NA

PubMed · Jul 2, 2025
Sudden cardiac death from ventricular arrhythmias is a leading cause of mortality worldwide. Arrhythmic death prognostication is challenging in patients with hypertrophic cardiomyopathy (HCM), a setting where current clinical guidelines show low performance and inconsistent accuracy. Here, we present a deep learning approach, MAARS (Multimodal Artificial intelligence for ventricular Arrhythmia Risk Stratification), to forecast lethal arrhythmia events in patients with HCM by analyzing multimodal medical data. MAARS' transformer-based neural networks learn from electronic health records, echocardiogram and radiology reports, and contrast-enhanced cardiac magnetic resonance images, the latter being a unique feature of this model. MAARS achieves an area under the curve of 0.89 (95% confidence interval (CI) 0.79-0.94) and 0.81 (95% CI 0.69-0.93) in internal and external cohorts and outperforms current clinical guidelines by 0.27-0.35 (internal) and 0.22-0.30 (external). In contrast to clinical guidelines, it demonstrates fairness across demographic subgroups. We interpret MAARS' predictions on multiple levels to promote artificial intelligence transparency and derive risk factors warranting further investigation.
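
One plausible shape for this kind of multimodal fusion, sketched in PyTorch: each modality (EHR vector, report-text embedding, CMR image features) gets its own projection, a shared transformer encoder attends across the modality tokens, and a linear head emits an arrhythmic-risk logit. The class name, dimensions, and pooling are assumptions for illustration, not the published MAARS architecture.

```python
import torch
import torch.nn as nn

class MultimodalRiskModel(nn.Module):
    """Illustrative fusion transformer (not the released MAARS code)."""
    def __init__(self, dims, d_model=256, n_heads=4, n_layers=2):
        super().__init__()
        # One projection per modality: EHR, report text, CMR image features.
        self.proj = nn.ModuleList([nn.Linear(d, d_model) for d in dims])
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 1)  # arrhythmic-event risk logit

    def forward(self, features):  # list of (batch, dim) modality vectors
        tokens = torch.stack([p(f) for p, f in zip(self.proj, features)], dim=1)
        fused = self.encoder(tokens).mean(dim=1)  # pool the modality tokens
        return self.head(fused).squeeze(-1)

model = MultimodalRiskModel(dims=[64, 768, 512])  # assumed feature sizes
logit = model([torch.randn(8, 64), torch.randn(8, 768), torch.randn(8, 512)])
```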

Large language model trained on clinical oncology data predicts cancer progression.

Zhu M, Lin H, Jiang J, Jinia AJ, Jee J, Pichotta K, Waters M, Rose D, Schultz N, Chalise S, Valleru L, Morin O, Moran J, Deasy JO, Pilai S, Nichols C, Riely G, Braunstein LZ, Li A

PubMed · Jul 2, 2025
Subspecialty knowledge barriers have limited the adoption of large language models (LLMs) in oncology. We introduce Woollie, an open-source, oncology-specific LLM trained on real-world data from Memorial Sloan Kettering Cancer Center (MSK) across lung, breast, prostate, pancreatic, and colorectal cancers, with external validation using University of California, San Francisco (UCSF) data. Woollie surpasses ChatGPT in medical benchmarks and excels in eight non-medical benchmarks. Analyzing 39,319 radiology impression notes from 4002 patients, it achieved an overall area under the receiver operating characteristic curve (AUROC) of 0.97 for cancer progression prediction on MSK data, including a notable 0.98 AUROC for pancreatic cancer. On UCSF data, it achieved an overall AUROC of 0.88, excelling in lung cancer detection with an AUROC of 0.95. As the first oncology-specific LLM validated across institutions, Woollie demonstrates high accuracy and consistency across cancer types, underscoring its potential to enhance cancer progression analysis.
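
The cross-institution evaluation reported here amounts to grouping note-level progression predictions by cancer type and computing an AUROC per group; a hedged sketch (the `records` schema is hypothetical):

```python
from sklearn.metrics import roc_auc_score

# Hypothetical harness: each radiology impression note carries a binary
# progression label and the model's predicted probability.
records = [
    {"cancer": "lung", "label": 1, "prob": 0.84},
    {"cancer": "lung", "label": 0, "prob": 0.12},
    {"cancer": "pancreatic", "label": 1, "prob": 0.91},
    {"cancer": "pancreatic", "label": 0, "prob": 0.30},
]

def auroc_by_cancer(records):
    groups = {}
    for r in records:
        y, p = groups.setdefault(r["cancer"], ([], []))
        y.append(r["label"])
        p.append(r["prob"])
    return {c: roc_auc_score(y, p) for c, (y, p) in groups.items()
            if len(set(y)) > 1}  # AUROC needs both classes present

print(auroc_by_cancer(records))
```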

Individualized structural network deviations predict surgical outcome in mesial temporal lobe epilepsy: a multicentre validation study.

Feng L, Han H, Mo J, Huang Y, Huang K, Zhou C, Wang X, Zhang J, Yang Z, Liu D, Zhang K, Chen H, Liu Q, Li R

PubMed · Jul 2, 2025
Surgical resection is an effective treatment for medically refractory mesial temporal lobe epilepsy (mTLE); however, more than one-third of patients fail to achieve seizure freedom after surgery. This study aimed to evaluate preoperative individual morphometric network characteristics and develop a machine learning model to predict surgical outcome in mTLE. This multicentre, retrospective study included 189 mTLE patients who underwent unilateral temporal lobectomy and 78 normal controls between February 2018 and June 2023. Postoperative seizure outcomes were categorized as seizure-free (SF, n = 125) or non-seizure-free (NSF, n = 64) at a minimum of one year of follow-up. The preoperative individualized structural covariance network (iSCN), derived from T1-weighted MRI, was constructed for each patient by calculating deviations from the control-based reference distribution, and was further divided into the surgery network and the surgically spared network using a standard resection mask obtained by merging each patient's individual lacuna. Regional features were selected separately from bilateral, ipsilateral and contralateral iSCN abnormalities to train support vector machine models, validated in two independent external datasets. NSF patients showed greater iSCN deviations from the normative distribution in the surgically spared network compared to SF patients (P = 0.02). These deviations were widely distributed in the contralateral functional modules (P < 0.05, false discovery rate corrected). Seizure outcome was optimally predicted by the contralateral iSCN features, with an accuracy of 82% (P < 0.05, permutation test) and an area under the receiver operating characteristic curve (AUC) of 0.81, with the default mode and fronto-parietal areas contributing most. External validation in two independent cohorts showed accuracies of 80% and 88%, with AUCs of 0.80 and 0.82, respectively, emphasizing the generalizability of the model. This study provides reliable personalized structural biomarkers for predicting surgical outcome in mTLE and has the potential to assist tailored surgical treatment strategies.
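
The core pipeline (z-scoring each patient's regional network measures against the control-based normative reference, then classifying outcome from the contralateral deviations with an SVM) might look like the following sketch on synthetic data; the parcellation size and which regions count as contralateral are assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_regions = 90  # hypothetical atlas size

# Normative reference: per-region mean/SD estimated from the 78 controls.
controls = rng.normal(size=(78, n_regions))
mu, sd = controls.mean(axis=0), controls.std(axis=0)

# Patient iSCN deviations as z-scores from the normative distribution.
patients = rng.normal(size=(189, n_regions))
z = (patients - mu) / sd

# Classify seizure-free vs non-seizure-free from (assumed) contralateral
# regions only, mirroring the best-performing feature set in the study.
contra_idx = np.arange(n_regions // 2)
outcome = rng.integers(0, 2, size=189)  # stand-in SF/NSF labels
svm = SVC(kernel="linear")
print(cross_val_score(svm, z[:, contra_idx], outcome, cv=5).mean())
```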

SPACE: Subregion Perfusion Analysis for Comprehensive Evaluation of Breast Tumor Using Contrast-Enhanced Ultrasound-A Retrospective and Prospective Multicenter Cohort Study.

Fu Y, Chen J, Chen Y, Lin Z, Ye L, Ye D, Gao F, Zhang C, Huang P

PubMed · Jul 2, 2025
To develop a dynamic contrast-enhanced ultrasound (CEUS)-based method for segmenting tumor perfusion subregions, quantifying tumor heterogeneity, and constructing models for distinguishing benign from malignant breast tumors. This retrospective-prospective cohort study analyzed CEUS videos of patients with breast tumors from four academic medical centers between September 2015 and October 2024. Pixel-based time-intensity curve (TIC) perfusion variables were extracted, followed by the generation of perfusion heterogeneity maps through cluster analysis. A combined diagnostic model incorporating clinical variables, subregion percentages, and radiomics scores was developed, and a nomogram based on this model was constructed for clinical application. A total of 339 participants were included in this bidirectional study. The retrospective data included 233 tumors divided into training and test sets; the prospective data comprised 106 tumors as an independent test set. Subregion analysis revealed that Subregion 2 dominated benign tumors, while Subregion 3 was prevalent in malignant tumors. Among 59 machine-learning models, Elastic Net (ENET, α = 0.7) performed best. Age and subregion radiomics scores were independent risk factors. The combined model achieved area under the curve (AUC) values of 0.93, 0.82, and 0.90 in the training, retrospective test, and prospective test sets, respectively. The proposed CEUS-based method enhances visualization and quantification of tumor perfusion dynamics, significantly improving diagnostic accuracy for breast tumors.
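
The subregion step can be sketched compactly: fit pixel-wise TIC descriptors, then cluster intratumoral pixels into perfusion subregions. The three descriptors and the cluster count below are common choices assumed for illustration; the abstract does not specify SPACE's exact variables.

```python
import numpy as np
from sklearn.cluster import KMeans

def tic_features(video):
    """Per-pixel time-intensity-curve descriptors from a CEUS clip of
    shape (T, H, W): peak enhancement, time to peak, and curve area."""
    peak = video.max(axis=0)
    ttp = video.argmax(axis=0).astype(float)  # frames; convert via frame rate
    auc = video.sum(axis=0)
    return np.stack([peak, ttp, auc], axis=-1)

def perfusion_subregions(video, tumor_mask, k=3):
    """Cluster intratumoral pixels into k perfusion subregions."""
    feats = tic_features(video)[tumor_mask]              # (n_pixels, 3)
    feats = (feats - feats.mean(0)) / (feats.std(0) + 1e-8)
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(feats)
    out = np.full(tumor_mask.shape, -1)
    out[tumor_mask] = labels
    return out  # subregion index map; -1 marks pixels outside the tumor
```

The subregion percentages used by the combined model then follow directly, e.g. via np.bincount over the intratumoral labels.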

Robust brain age estimation from structural MRI with contrastive learning

Carlo Alberto Barbano, Benoit Dufumier, Edouard Duchesnay, Marco Grangetto, Pietro Gori

arXiv preprint · Jul 2, 2025
Estimating brain age from structural MRI has emerged as a powerful tool for characterizing normative and pathological aging. In this work, we explore contrastive learning as a scalable and robust alternative to supervised approaches for brain age estimation. We introduce a novel contrastive loss function, $\mathcal{L}^{exp}$, and evaluate it across multiple public neuroimaging datasets comprising over 20,000 scans. Our experiments reveal four key findings. First, scaling pre-training on diverse, multi-site data consistently improves generalization performance, cutting external mean absolute error (MAE) nearly in half. Second, $\mathcal{L}^{exp}$ is robust to site-related confounds, maintaining low scanner-predictability as training size increases. Third, contrastive models reliably capture accelerated aging in patients with cognitive impairment and Alzheimer's disease, as shown through brain age gap analysis, ROC curves, and longitudinal trends. Lastly, unlike supervised baselines, $\mathcal{L}^{exp}$ maintains a strong correlation between brain age accuracy and downstream diagnostic performance, supporting its potential as a foundation model for neuroimaging. These results position contrastive learning as a promising direction for building generalizable and clinically meaningful brain representations.
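
The abstract does not reproduce the definition of $\mathcal{L}^{exp}$. Below is one plausible form of an age-aware contrastive loss, in which pairs are weighted by an exponential kernel over their age difference so that embeddings of similarly aged scans are pulled together; this is an illustrative stand-in, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def exp_contrastive_loss(z, age, tau=0.1, sigma=2.0):
    """Age-weighted InfoNCE-style loss (illustrative stand-in for L^exp)."""
    z = F.normalize(z, dim=1)                    # unit-norm embeddings
    sim = z @ z.T / tau                          # scaled cosine similarities
    n = z.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=z.device)
    # Exponential kernel: weight each pair by age proximity, zero self-pairs.
    w = torch.exp(-torch.abs(age[:, None] - age[None, :]) / sigma)
    w = w.masked_fill(eye, 0.0)
    # Log-probability of each pair under a softmax excluding self-similarity.
    log_p = sim - torch.logsumexp(sim.masked_fill(eye, float("-inf")),
                                  dim=1, keepdim=True)
    return -(w * log_p).sum(1).div(w.sum(1).clamp_min(1e-8)).mean()

loss = exp_contrastive_loss(torch.randn(16, 128), 20 + 60 * torch.rand(16))
```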

A computationally frugal open-source foundation model for thoracic disease detection in lung cancer screening programs

Niccolò McConnell, Pardeep Vasudev, Daisuke Yamada, Daryl Cheng, Mehran Azimbagirad, John McCabe, Shahab Aslani, Ahmed H. Shahin, Yukun Zhou, The SUMMIT Consortium, Andre Altmann, Yipeng Hu, Paul Taylor, Sam M. Janes, Daniel C. Alexander, Joseph Jacob

arXiv preprint · Jul 2, 2025
Low-dose computed tomography (LDCT) imaging employed in lung cancer screening (LCS) programs is increasing in uptake worldwide. LCS programs herald a generational opportunity to simultaneously detect cancer and non-cancer-related early-stage lung disease. Yet these efforts are hampered by a shortage of radiologists to interpret scans at scale. Here, we present TANGERINE, a computationally frugal, open-source vision foundation model for volumetric LDCT analysis. Designed for broad accessibility and rapid adaptation, TANGERINE can be fine-tuned off the shelf for a wide range of disease-specific tasks with limited computational resources and training data. Relative to models trained from scratch, TANGERINE demonstrates fast convergence during fine-tuning, thereby requiring significantly fewer GPU hours, and displays strong label efficiency, achieving comparable or superior performance with a fraction of fine-tuning data. Pretrained using self-supervised learning on over 98,000 thoracic LDCTs, including the UK's largest LCS initiative to date and 27 public datasets, TANGERINE achieves state-of-the-art performance across 14 disease classification tasks, including lung cancer and multiple respiratory diseases, while generalising robustly across diverse clinical centres. By extending a masked autoencoder framework to 3D imaging, TANGERINE offers a scalable solution for LDCT analysis, departing from recent closed, resource-intensive models by combining architectural simplicity, public availability, and modest computational requirements. Its accessible, open-source lightweight design lays the foundation for rapid integration into next-generation medical imaging tools that could transform LCS initiatives, allowing them to pivot from a singular focus on lung cancer detection to comprehensive respiratory disease management in high-risk populations.
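
The fine-tuning recipe implied here (take the pretrained encoder, attach a small task head, and update few parameters) might look like the PyTorch sketch below. The stand-in encoder, feature dimension, and freezing policy are hypothetical, not released TANGERINE artifacts.

```python
import torch
import torch.nn as nn

# Stand-in for the pretrained 3D masked-autoencoder encoder; in practice
# its published weights would be loaded instead.
encoder = nn.Sequential(nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(1, 768))

class DiseaseClassifier(nn.Module):
    def __init__(self, encoder, feat_dim=768, n_classes=14):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Linear(feat_dim, n_classes)  # task-specific head

    def forward(self, ldct_volume):                 # (batch, 1, D, H, W)
        return self.head(self.encoder(ldct_volume))

model = DiseaseClassifier(encoder)
for p in model.encoder.parameters():                # freeze the backbone for
    p.requires_grad = False                         # label-efficient adaptation
optimizer = torch.optim.AdamW(model.head.parameters(), lr=1e-3)
logits = model(torch.randn(2, 1, 32, 64, 64))       # toy LDCT batch
```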

Hybrid deep learning architecture for scalable and high-quality image compression.

Al-Khafaji M, Ramaha NTA

PubMed · Jul 2, 2025
The rapid growth of medical imaging data presents challenges for efficient storage and transmission, particularly in clinical and telemedicine applications where image fidelity is crucial. This study proposes a hybrid deep learning-based image compression framework that integrates Stationary Wavelet Transform (SWT), Stacked Denoising Autoencoder (SDAE), Gray-Level Co-occurrence Matrix (GLCM), and K-means clustering. The framework enables multiresolution decomposition, texture-aware feature extraction, and adaptive region-based compression. A custom loss function that combines Mean Squared Error (MSE) and Structural Similarity Index (SSIM) ensures high perceptual quality and compression efficiency. The proposed model was evaluated across multiple benchmark medical imaging datasets and achieved a Peak Signal-to-Noise Ratio (PSNR) of up to 50.36 dB, MS-SSIM of 0.9999, and an encoding-decoding time of 0.065 s. These results demonstrate the model's capability to outperform existing approaches while maintaining diagnostic integrity, scalability, and speed, making it suitable for real-time and resource-constrained clinical environments.
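
The custom objective is described as combining MSE with SSIM; one straightforward weighted form, using scikit-image for the SSIM term, is sketched below (the weight alpha is an assumption, not a value from the paper). Note this evaluates the objective; training the autoencoder would require a differentiable SSIM implementation.

```python
import numpy as np
from skimage.metrics import structural_similarity

def combined_loss(original, reconstructed, alpha=0.5):
    """Weighted MSE + (1 - SSIM): low values mean faithful, perceptually
    similar reconstructions."""
    mse = np.mean((original - reconstructed) ** 2)
    ssim = structural_similarity(
        original, reconstructed,
        data_range=original.max() - original.min())
    return alpha * mse + (1 - alpha) * (1 - ssim)

img = np.random.rand(64, 64)
rec = img + 0.01 * np.random.randn(64, 64)
print(combined_loss(img, rec))
```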