
Wang M, Mo S, Li G, Zheng J, Wu H, Tian H, Chen J, Tang S, Chen Z, Xu J, Huang Z, Dong F

PubMed · Sep 24, 2025
This study aimed to develop a Deep Learning Radiomics integrated model (DLRN), which combines photoacoustic/ultrasound (PA/US) imaging with clinical and radiomics features to distinguish between luminal and non-luminal breast cancer (BC) in a preoperative setting. A total of 388 BC patients were included, with 271 in the training group and 117 in the testing group. Radiomics and deep learning features were extracted from PA/US images using Pyradiomics and ResNet50, respectively. Feature selection was performed using independent-samples t-tests, Pearson correlation analysis, and LASSO regression to build a Deep Learning Radiomics (DLR) model. Based on the results of univariate and multivariate logistic regression analyses, the DLR model was combined with valuable clinical features to construct the DLRN model. Model efficacy was assessed using AUC, accuracy, sensitivity, specificity, and NPV. The DLR model comprised 3 radiomic features and 6 deep learning features, which, when combined with significant clinical predictors, formed the DLRN model. In the testing set, the AUC of the DLRN model (0.924 [0.877-0.972]) was significantly higher than that of the DLR (AUC 0.847 [0.758-0.936], p = 0.026), DL (AUC 0.822 [0.725-0.919], p = 0.06), Rad (AUC 0.717 [0.597-0.838], p < 0.001), and clinical (AUC 0.820 [0.745-0.895], p = 0.002) models. These findings indicate that the DLRN model (integrated model) exhibited the most favorable predictive performance among all models evaluated. The DLRN model effectively integrates PA/US imaging with clinical data, showing potential for preoperative molecular subtype prediction and guiding personalized treatment strategies for BC patients.
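
As a rough illustration of the feature-selection pipeline this abstract describes, the sketch below chains a t-test filter, a Pearson-correlation redundancy filter, and LASSO. The feature matrix is a simulated stand-in for the Pyradiomics and ResNet50 outputs, and the thresholds are assumptions, not the authors' settings.

```python
# Minimal sketch of the DLR feature-selection pipeline (simulated data).
import numpy as np
from scipy.stats import ttest_ind, pearsonr
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
X = rng.normal(size=(271, 120))          # 271 training patients, candidate features
y = rng.integers(0, 2, size=271)         # 1 = luminal, 0 = non-luminal (simulated)

# Step 1: keep features that differ between classes (independent-samples t-test).
keep = [j for j in range(X.shape[1])
        if ttest_ind(X[y == 1, j], X[y == 0, j]).pvalue < 0.05]

# Step 2: drop one of any pair of highly correlated features (|r| > 0.9 assumed).
selected = []
for j in keep:
    if all(abs(pearsonr(X[:, j], X[:, k])[0]) <= 0.9 for k in selected):
        selected.append(j)

# Step 3: LASSO shrinks the remainder; nonzero coefficients form the signature.
lasso = LassoCV(cv=5).fit(X[:, selected], y)
signature = [selected[i] for i, c in enumerate(lasso.coef_) if c != 0]
print(f"{len(signature)} features retained for the DLR model")
```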

Lu X, Zhou Q, Xiao Z, Guo Y, Peng Q, Zhao S, Liu S, Huang J, Yang C, Yuan Y

PubMed · Sep 24, 2025
Accurate prostate segmentation from transrectal ultrasound (TRUS) images is key to the computer-aided diagnosis of prostate cancer. However, this task faces serious challenges, including various interferences, variable prostate shapes, and insufficient datasets. To address these challenges, a region-adaptive transformer convolution fusion net (TCF-Net) for accurate and robust segmentation of TRUS images is proposed. As a high-performance segmentation network, TCF-Net contains a hierarchical encoder-decoder structure with two main modules: (1) a region-adaptive transformer-based encoder to identify and localize prostate regions, which learns the relationship between objects and pixels and helps the model overcome various interferences and prostate shape variations; and (2) a convolution-based decoder to improve applicability to small datasets. In addition, a patch-based fusion module is proposed to introduce an inductive bias for fine prostate segmentation. TCF-Net is trained and evaluated on a challenging clinical TRUS image dataset collected from the First Affiliated Hospital of Jinan University in China. The dataset contains 1000 TRUS images from 135 patients. Experimental results show that the mIoU of TCF-Net is 94.4%, exceeding other state-of-the-art (SOTA) models by more than 1%.
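
The abstract pairs a transformer encoder with a convolutional decoder; the toy PyTorch sketch below shows one way such a hybrid could be wired up. The region-adaptive and patch-based fusion modules are not reproduced, and every architectural detail here (patch size, depth, widths) is an assumption.

```python
# Toy hybrid transformer-encoder / convolutional-decoder segmentation net,
# in the spirit of TCF-Net; details are assumptions, not the paper's design.
import torch
import torch.nn as nn

class HybridSegNet(nn.Module):
    def __init__(self, img_size=256, patch=16, dim=256):
        super().__init__()
        self.patch_embed = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True),
            num_layers=4)
        self.grid = img_size // patch
        self.decoder = nn.Sequential(          # convolutional upsampling path
            nn.ConvTranspose2d(dim, 64, 4, stride=4), nn.ReLU(),
            nn.ConvTranspose2d(64, 16, 4, stride=4), nn.ReLU(),
            nn.Conv2d(16, 1, 1))               # 1-channel prostate mask logits

    def forward(self, x):                      # x: (B, 1, 256, 256)
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)
        tokens = self.encoder(tokens)          # global context over patch tokens
        feat = tokens.transpose(1, 2).reshape(
            -1, tokens.size(-1), self.grid, self.grid)
        return self.decoder(feat)              # (B, 1, 256, 256)

mask_logits = HybridSegNet()(torch.randn(2, 1, 256, 256))
print(mask_logits.shape)                       # torch.Size([2, 1, 256, 256])
```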

Chuang TY, Lian PH, Kuo YC, Chang GH

PubMed · Sep 24, 2025
Osteoarthritis (OA) pain often does not correlate with magnetic resonance imaging (MRI)-detected structural abnormalities, limiting the clinical utility of traditional volume-based lesion assessments. To address this mismatch, we present a novel explainable artificial intelligence (XAI) framework that localizes pain-driving abnormalities in knee MR images via counterfactual image synthesis and Shapley-based feature attribution. Our method combines a Bayesian generative network-which is trained to synthesize asymptomatic versions of symptomatic knees-with a black-box pain classifier to generate counterfactual MRI scans. These counterfactuals, which are constrained by multimodal segmentation and uncertainty-aware inference, isolate lesion regions that are likely responsible for symptoms. Applying Shapley additive explanations (SHAP) to the output of the classifier enables the contribution of each lesion to pain to be precisely quantified. We trained and validated this framework on 2148 knee pairs obtained from a multicenter study of the Osteoarthritis Initiative (OAI), achieving high anatomical specificity in terms of identifying pain-relevant features such as patellar effusions and bone marrow lesions. An odds ratio (OR) analysis revealed that SHAP-derived lesion scores were significantly more strongly associated with pain than raw lesion volumes were (OR 6.75 vs. 3.73 in patellar regions), supporting the interpretability and clinical relevance of the model. Compared with conventional saliency methods and volumetric measures, our approach demonstrates superior lesion-level resolution and highlights the spatial heterogeneity of OA pain mechanisms. These results establish a new direction for conducting interpretable, lesion-specific MRI analyses that could guide personalized treatment strategies for musculoskeletal disorders.
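
The Bayesian counterfactual synthesis is beyond a short sketch, but the SHAP attribution stage applied to the pain classifier can be illustrated in isolation. Below, a stand-in classifier is explained over hypothetical per-region lesion scores; the region names, model, and data are all placeholders.

```python
# Illustrative SHAP attribution over per-region lesion features (hypothetical).
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
regions = ["patellar_effusion", "bml_femur", "bml_tibia", "meniscal_tear"]
X = rng.normal(size=(500, len(regions)))       # per-knee lesion scores (simulated)
y = ((X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500)) > 0).astype(int)

clf = GradientBoostingClassifier().fit(X, y)   # stand-in pain classifier
explainer = shap.Explainer(lambda x: clf.predict_proba(x)[:, 1], X)
sv = explainer(X[:100])

# Mean |SHAP| per region = how strongly that lesion drives pain predictions.
for name, val in zip(regions, np.abs(sv.values).mean(axis=0)):
    print(f"{name}: mean |SHAP| = {val:.3f}")
```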

Zhang J, Liu H, Wu Y, Zhu J, Wang Y, Zhou Y, Wang M, Sun Q, Che F, Li B

PubMed · Sep 24, 2025
This study aimed to identify key image parameters from traditional and advanced MR sequences within the peritumoral edema of glioblastoma that could predict sub-volumes at high risk of tumor recurrence. The retrospective cohort involved 32 cases with recurrent glioblastoma, while the retrospective validation cohort consisted of 5 cases. The volumes of interest (VOIs) comprising tumor and edema were manually contoured on each MR sequence. Rigid registration was performed between sequences before and after tumor recurrence. The edema before tumor recurrence was divided into subedema-rec and subedema-no-rec depending on whether tumor recurrence occurred in that region after registration. The histogram parameters of the VOI on each sequence were collected and statistically analyzed. In addition to Spearman's rank correlation analysis, Wilcoxon's paired test, and least absolute shrinkage and selection operator (LASSO) analysis, a forward stepwise logistic regression model (FSLRM) was developed and compared with two machine learning models to distinguish subedema-rec from subedema-no-rec. The efficiency and applicability of the model were evaluated using receiver operating characteristic (ROC) curve analysis, image-based prediction, and pathological detection. Differences in ADC map characteristics between subedema-rec and subedema-no-rec were identified, including the standard deviation of the mean ADC value (stdmeanADC), the maximum ADC value (maxiADC), the minimum ADC value (miniADC), the Ratio-maxiADC/meanADC (maxiADC divided by the meanADC), and the kurtosis coefficient of the ADC value (all P < 0.05). FSLRM showed that the area under the ROC curve (AUC) of a single-parameter model based on Ratio-maxiADC/meanADC (0.823) was higher than that of the support vector machine (0.813) and random forest (0.592) models; in the retrospective validation cohort, the AUC was 0.776. Image-based location prediction revealed that tumors recurred mostly in areas with Ratio-maxiADC/meanADC less than 2.408. Pathological detection in 10 patients confirmed that tumor cells were scattered within the subedema-rec but not in the subedema-no-rec. The Ratio-maxiADC/meanADC is useful for predicting the location of the subedema-rec.
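
For concreteness, the histogram parameters the study extracts from the ADC map (standard deviation, maximum, minimum, Ratio-maxiADC/meanADC, kurtosis) could be computed per VOI roughly as follows; the ADC map and mask below are simulated, and the naming follows the abstract loosely.

```python
# Sketch: ADC histogram parameters over a (hypothetical) edema VOI.
import numpy as np
from scipy.stats import kurtosis

def adc_histogram_features(adc_map, voi_mask):
    vals = adc_map[voi_mask > 0]               # ADC values inside the VOI
    mean_adc = vals.mean()
    return {
        "stdADC": vals.std(),                  # spread of ADC values
        "maxiADC": vals.max(),
        "miniADC": vals.min(),
        "Ratio-maxiADC/meanADC": vals.max() / mean_adc,
        "kurtosis": kurtosis(vals),            # peakedness of the histogram
    }

rng = np.random.default_rng(2)
adc = rng.normal(1200, 200, size=(64, 64, 24))  # simulated ADC map (x10^-6 mm^2/s)
mask = np.zeros_like(adc)
mask[20:40, 20:40, 8:16] = 1                    # simulated edema VOI
print(adc_histogram_features(adc, mask))
```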

Mehrtabar S, Marey A, Desai A, Saad AM, Desai V, Goñi J, Pal B, Umair M

PubMed · Sep 24, 2025
The integration of artificial intelligence (AI) into cardiovascular imaging and radiology offers the potential to enhance diagnostic accuracy, streamline workflows, and personalize patient care. However, the rapid adoption of AI has introduced complex ethical challenges, particularly concerning patient privacy, data handling, informed consent, and data ownership. This narrative review explores these issues by synthesizing literature from clinical, technical, and regulatory perspectives. We examine the tensions between data utility and data protection, the evolving role of transparency and explainable AI, and the disparities in ethical and legal frameworks across jurisdictions such as the European Union, the USA, and emerging players like China. We also highlight the vulnerabilities introduced by cloud computing, adversarial attacks, and the use of commercial datasets. Ethical frameworks and regulatory guidelines are compared, and proposed mitigation strategies such as federated learning, blockchain, and differential privacy are discussed. To ensure ethical implementation, we emphasize the need for shared accountability among clinicians, developers, healthcare institutions, and policymakers. Ultimately, the responsible development of AI in medical imaging must prioritize patient trust, fairness, and equity, underpinned by robust governance and transparent data stewardship.

Santos AN, Venkatesh V, Chidambaram S, Piedade Santos G, Dawoud B, Rauschenbach L, Choucha A, Bingöl S, Wipplinger T, Wipplinger C, Siegel AM, Dammann P, Abou-Hamden A

PubMed · Sep 24, 2025
Artificial Intelligence (AI) and Machine Learning (ML) are increasingly being applied in medical research, including studies on cerebral cavernous malformations (CCM). This scoping review aims to analyze the scope and impact of AI in CCM, focusing on diagnostic tools, risk assessment, biomarker identification, outcome prediction, and treatment planning. We conducted a comprehensive literature search across different databases, reviewing articles that explore AI applications in CCM. Articles were selected based on predefined eligibility criteria and categorized according to their primary focus: drug discovery, diagnostic imaging, genetic analysis, biomarker identification, outcome prediction, and treatment planning. Sixteen studies met the inclusion criteria, showcasing diverse AI applications in CCM. Nearly half (47%) were cohort or prospective studies, primarily focused on biomarker discovery and risk prediction. Technical notes and diagnostic studies accounted for 27%, concentrating on computer-aided diagnosis (CAD) systems and drug screening. Other studies included a conceptual review on AI for surgical planning and a systematic review confirming ML's superiority in predicting clinical outcomes within neurosurgery. AI applications in CCM show significant promise, particularly in enhancing diagnostic accuracy, risk assessment, and surgical planning. These advancements suggest that AI could transform CCM management, offering pathways to improved patient outcomes and personalized care strategies.

Farbod Bigdeli, Mohsen Mohammadagha, Ali Bigdeli

arXiv preprint · Sep 24, 2025
Breast cancer screening with mammography remains central to early detection and mortality reduction. Deep learning has shown strong potential for automating mammogram interpretation, yet limited-resolution datasets and small sample sizes continue to restrict performance. We revisit the Mini-DDSM dataset (9,684 images; 2,414 patients) and introduce a lightweight region-of-interest (ROI) augmentation strategy. During training, full images are probabilistically replaced with random ROI crops sampled from a precomputed, label-free bounding-box bank, with optional jitter to increase variability. We evaluate under strict patient-level cross-validation and report ROC-AUC, PR-AUC, and training-time efficiency metrics (throughput and GPU memory). Because ROI augmentation is training-only, inference-time cost remains unchanged. On Mini-DDSM, ROI augmentation (best: p_roi = 0.10, alpha = 0.10) yields modest average ROC-AUC gains, with performance varying across folds; PR-AUC is flat to slightly lower. These results demonstrate that simple, data-centric ROI strategies can enhance mammography classification in constrained settings without requiring additional labels or architectural modifications.
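
One plausible reading of the training-only ROI augmentation is sketched below: with probability p_roi, a full image is swapped for a jittered crop drawn from the precomputed bounding-box bank. The function signature and jitter semantics are assumptions, not the authors' code.

```python
# Sketch of training-only ROI augmentation with probabilistic crop replacement.
import random

def roi_augment(image, boxes, p_roi=0.10, alpha=0.10):
    """image: H x W array; boxes: list of (x, y, w, h) crops for this image,
    assumed to lie within the image bounds."""
    if not boxes or random.random() >= p_roi:
        return image                              # keep the full image
    x, y, w, h = random.choice(boxes)             # label-free box bank
    jx = int(alpha * w * random.uniform(-1, 1))   # horizontal jitter
    jy = int(alpha * h * random.uniform(-1, 1))   # vertical jitter
    H, W = image.shape[:2]
    x0, y0 = max(0, x + jx), max(0, y + jy)
    return image[y0:min(H, y0 + h), x0:min(W, x0 + w)]
```

Because the swap happens only inside the training loop, the inference path and its cost are untouched, matching the abstract's claim.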

Samia Saeed, Khuram Naveed

arXiv preprint · Sep 24, 2025
Breast cancer, the second leading cause of cancer-related deaths globally, accounts for a quarter of all cancer cases [1]. To lower this death rate, it is crucial to detect tumors early, as early-stage detection significantly improves treatment outcomes. Advances in non-invasive imaging techniques have made early detection possible through computer-aided detection (CAD) systems, which rely on traditional image analysis to identify malignancies. However, there is a growing shift towards deep learning methods due to their superior effectiveness. Despite their potential, deep learning methods often struggle with accuracy due to the limited availability of large labeled datasets for training. To address this issue, our study introduces a Contrastive Learning (CL) framework, which excels with smaller labeled datasets. We train ResNet-50 in a semi-supervised CL approach using a similarity index on a large amount of unlabeled mammogram data, applying various augmentations and transformations that improve the performance of our approach. Finally, we fine-tune our model on a small set of labeled data, outperforming the existing state of the art: we observed 96.7% accuracy in detecting breast cancer on the benchmark datasets INbreast and MIAS.
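
The abstract does not spell out the similarity index, so the sketch below uses the standard SimCLR-style NT-Xent loss as one plausible instantiation of contrastive pretraining of ResNet-50 on unlabeled mammograms; the single-layer projection head is also a simplification.

```python
# Minimal SimCLR-style contrastive pretraining sketch (assumed loss/head).
import torch
import torch.nn.functional as F
from torchvision import models

encoder = models.resnet50(weights=None)
encoder.fc = torch.nn.Linear(encoder.fc.in_features, 128)  # projection head

def nt_xent(z1, z2, tau=0.5):
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2]), dim=1)
    sim = z @ z.t() / tau                        # pairwise cosine similarities
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float("-inf"))
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    return F.cross_entropy(sim, targets)         # positive = the paired view

# Each batch: two augmented views of the same unlabeled images (random here).
v1, v2 = torch.randn(8, 3, 224, 224), torch.randn(8, 3, 224, 224)
loss = nt_xent(encoder(v1), encoder(v2))
loss.backward()
```

After pretraining, the encoder would be fine-tuned on the small labeled set, as the abstract describes.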

Babacan Ö, Karkaş AY, Durak G, Uysal E, Durak Ü, Shrestha R, Bingöl Z, Okumuş G, Medetalibeyoğlu A, Ertürk ŞM

PubMed · Sep 24, 2025
To assess the diagnostic accuracy and clinical applicability of the artificial intelligence (AI) program "Canon Automation Platform" for the automated detection and localization of pulmonary embolisms (PEs) in chest computed tomography pulmonary angiograms (CTPAs). A total of 1474 CTPAs with suspected PE were retrospectively evaluated by 2 senior radiology residents with 5 years of experience. The final diagnosis was verified through radiology reports by 2 thoracic radiologists with 20 and 25 years of experience, along with the patients' clinical records and histories. The images were transferred to the Canon Automation Platform, which integrates with the picture archiving and communication system (PACS), and the diagnostic success of the platform was evaluated. This study examined all anatomic levels of the pulmonary arteries, including the left pulmonary artery, right pulmonary artery, and interlobar, segmental, and subsegmental branches. The confusion matrix data obtained at all anatomic levels considered in our study were as follows: AUC-ROC score of 0.945 to 0.996, accuracy of 95.4% to 99.7%, sensitivity of 81.4% to 99.1%, specificity of 98.7% to 100%, PPV of 89.1% to 100%, NPV of 95.6% to 99.9%, F1 score of 0.868 to 0.987, and Cohen's kappa of 0.842 to 0.986. Notably, sensitivity in the subsegmental branches was lower (81.4% to 84.7%) compared with more central locations, whereas specificity remained consistent (98.7% to 98.9%). The results showed that the chest pain package of the Canon Automation Platform accurately provides rapid automatic PE detection in chest CTPAs by leveraging deep learning algorithms to facilitate the clinical workflow. This study demonstrates that AI can provide physicians with robust diagnostic support for acute PE, particularly in hospitals without 24/7 access to radiology specialists.
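
All of the reported metrics follow from a per-level 2x2 confusion matrix; the snippet below shows the arithmetic with made-up counts (the study's raw counts are not given in the abstract).

```python
# Confusion-matrix metrics as reported per anatomic level (illustrative counts).
def metrics(tp, fp, fn, tn):
    n = tp + fp + fn + tn
    acc = (tp + tn) / n
    sens = tp / (tp + fn)                 # sensitivity (recall)
    spec = tn / (tn + fp)                 # specificity
    ppv = tp / (tp + fp)                  # positive predictive value
    npv = tn / (tn + fn)                  # negative predictive value
    f1 = 2 * ppv * sens / (ppv + sens)
    # Cohen's kappa: observed agreement corrected for chance agreement.
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / (n * n)
    kappa = (acc - pe) / (1 - pe)
    return dict(accuracy=acc, sensitivity=sens, specificity=spec,
                PPV=ppv, NPV=npv, F1=f1, kappa=kappa)

print(metrics(tp=96, fp=4, fn=18, tn=1356))   # hypothetical subsegmental counts
```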

Young A, Paloka R, Islam A, Prasanna P, Hill V, Payne D

PubMed · Sep 24, 2025
This study represents a continuation of prior work by Payne et al. evaluating large language model (LLM) performance on radiology board-style assessments, specifically the ACR diagnostic radiology in-training examination (DXIT). Building upon earlier findings with GPT-4, we assess the performance of newer, cutting-edge models such as GPT-4o, GPT-o1, GPT-o3, Claude, Gemini, and Grok on standardized DXIT questions. In addition to overall performance, we compare model accuracy on text-based versus image-based questions to assess multi-modal reasoning capabilities. As a secondary aim, we investigate the potential impact of data contamination by comparing model performance on original versus revised image-based questions. Seven LLMs (GPT-4, GPT-4o, GPT-o1, GPT-o3, Claude 3.5 Sonnet, Gemini 1.5 Pro, and Grok 2.0) were evaluated using 106 publicly available DXIT questions. Each model was prompted using a standardized instruction set to simulate a radiology resident answering board-style questions. For each question, the model's selected answer, rationale, and confidence score were recorded. Unadjusted accuracy (based on correct answer selection) and logic-adjusted accuracy (based on clinical reasoning pathways) were calculated. Subgroup analysis compared model performance on text-based versus image-based questions. Additionally, 63 image-based questions were revised to test novel reasoning while preserving the original diagnostic image, to assess the impact of potential training data contamination. Across 106 DXIT questions, GPT-o1 demonstrated the highest unadjusted accuracy (71.7%), followed closely by GPT-4o (69.8%) and GPT-o3 (68.9%). GPT-4 and Grok 2.0 exhibited lower scores (59.4% and 52.8%, respectively), and Claude 3.5 Sonnet had the lowest unadjusted accuracy (34.9%). Similar trends were observed for logic-adjusted accuracy, with GPT-o1 (60.4%), GPT-4o (59.4%), and GPT-o3 (59.4%) again outperforming other models, while Grok 2.0 and Claude 3.5 Sonnet lagged behind (34.0% and 30.2%, respectively). GPT-4o's performance was significantly higher on text-based questions than on image-based ones. Unadjusted accuracy on the revised DXIT questions was 49.2%, compared with 56.1% on matched original DXIT questions; logic-adjusted accuracy on the revised questions was 40.0% versus 44.4% on the originals. No significant difference in performance was observed between original and revised questions. Modern LLMs, especially those from OpenAI, demonstrate strong and improved performance on board-style radiology assessments. Comparable performance on revised prompts suggests that data contamination may have played a limited role. As LLMs improve, they hold strong potential to support radiology resident learning through personalized feedback and board-style question review.
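
A minimal text-only harness in the spirit of this evaluation might look like the following, using the OpenAI API as one example backend; the instruction wording, model list, and sample question are placeholders, and image-based questions would additionally require image inputs.

```python
# Hypothetical evaluation harness: same instruction block for every model,
# recording the answer, rationale, and self-reported confidence.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
SYSTEM = ("You are a radiology resident answering a board-style question. "
          "Reply with your answer letter, a brief rationale, and a "
          "confidence score from 0 to 100.")

def ask(model: str, question: str) -> str:
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "system", "content": SYSTEM},
                  {"role": "user", "content": question}],
        temperature=0)                     # deterministic output for grading
    return resp.choices[0].message.content

question = ("Which MRI sequence is most sensitive for acute infarct? "
            "A) T1  B) DWI  C) GRE  D) STIR")
for model in ["gpt-4", "gpt-4o"]:          # two of the models compared above
    print(model, "->", ask(model, question))
```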