Development and validation of an interpretable machine learning model for diagnosing pathologic complete response in breast cancer.

Zhou Q, Peng F, Pang Z, He R, Zhang H, Jiang X, Song J, Li J

PubMed · Jul 1, 2025
Pathologic complete response (pCR) following neoadjuvant chemotherapy (NACT) is a critical prognostic marker for patients with breast cancer, potentially allowing surgery omission. However, noninvasive and accurate pCR diagnosis remains a significant challenge due to the limitations of current imaging techniques, particularly in cases where tumors completely disappear post-NACT. We developed a novel framework incorporating Dimensional Accumulation for Layered Images (DALI) and an Attention-Box annotation tool to address the unique challenge of analyzing imaging data where target lesions are absent. These methods transform three-dimensional magnetic resonance imaging into two-dimensional representations and ensure consistent target tracking across time points. Preprocessing techniques, including tissue-region normalization and subtraction imaging, were used to enhance model performance. Imaging features were extracted using radiomics and pretrained deep-learning models, and machine-learning algorithms were integrated into a stacked ensemble model. The approach was developed using the I-SPY 2 dataset and validated with an independent Tangshan People's Hospital cohort. The stacked ensemble model achieved superior diagnostic performance, with an area under the receiver operating characteristic curve of 0.831 (95% confidence interval, 0.769-0.887) on the test set, outperforming individual models. Tissue-region normalization and subtraction imaging significantly enhanced diagnostic accuracy. SHAP analysis identified variables that contributed to the model predictions, ensuring model interpretability. This innovative framework addresses the challenges of noninvasive pCR diagnosis. Integrating advanced preprocessing techniques improves feature quality and model performance, supporting clinicians in identifying patients who can safely omit surgery. This innovation reduces unnecessary treatments and improves quality of life for patients with breast cancer.
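As a hedged illustration of the kind of stacked ensemble plus SHAP workflow described above (not the authors' code), the sketch below stacks two base classifiers under a logistic-regression meta-learner and explains the fitted ensemble with a model-agnostic SHAP explainer; the synthetic features, labels, and model choices are all assumptions.

```python
# Minimal sketch (not the authors' code): stacking classifiers over stand-in
# radiomics / deep features with a logistic-regression meta-learner, then SHAP
# for interpretability. Feature values and labels are synthetic assumptions.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))                                   # stand-in radiomics + deep features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=300) > 0).astype(int)  # stand-in pCR labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000),
    stack_method="predict_proba",
)
stack.fit(X_tr, y_tr)
print("test AUC:", round(roc_auc_score(y_te, stack.predict_proba(X_te)[:, 1]), 3))

# Model-agnostic SHAP values on a small sample of the fitted ensemble
explainer = shap.KernelExplainer(lambda a: stack.predict_proba(a)[:, 1], X_tr[:50])
shap_values = explainer.shap_values(X_te[:20], nsamples=100)
```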

Automated Acetabular Defect Reconstruction and Analysis for Revision Total Hip Arthroplasty: A Computational Modeling Study.

Hopkins D, Callary SA, Solomon LB, Lee PVS, Ackland DC

PubMed · Jul 1, 2025
Revision total hip arthroplasty (rTHA) involving large acetabular defects is associated with high early failure rates, primarily due to cup loosening. Most acetabular defect classification systems used in surgical planning are based on planar radiographs and do not encapsulate the three-dimensional geometry and morphology of the acetabular defect. This study aimed to develop an automated computational modeling pipeline for rapid generation of three-dimensional acetabular bone defect geometry. The framework employed artificial neural network segmentation of preoperative pelvic computed tomography (CT) images and statistical shape model generation for defect reconstruction in 60 rTHA patients. Regional acetabular absolute defect volumes (ADV), relative defect volumes (RDV), and defect depths (DD) were calculated and stratified within Paprosky classifications. Defect geometries from the automated modeling pipeline were validated against manually reconstructed models and were found to have a mean Dice coefficient of 0.827 and a mean relative volume error of 16.4%. The mean ADV, RDV, and DD of classification groups generally increased with defect severity. Except for superior RDV and ADV between 3A and 2A defects, and anterior RDV and DD between 3B and 3A defects, statistically significant differences in ADV, RDV, or DD were only found between 3B and 2B-2C defects (p < 0.05). Poor correlations observed between ADV, RDV, and DD within Paprosky classifications suggest that these quantitative measures are not unique to each Paprosky grade. The automated modeling tools developed may be useful in surgical planning and computational modeling of rTHA.
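The two validation metrics reported above can be computed directly from binary masks; the sketch below is a minimal illustration with synthetic volumes, not the study's pipeline, and the voxel volume is an assumed parameter.

```python
# Minimal sketch of the validation metrics above (Dice coefficient and
# relative volume error) for binary defect masks. Masks are synthetic.
import numpy as np

def dice_coefficient(auto_mask: np.ndarray, manual_mask: np.ndarray) -> float:
    """2*|A intersect B| / (|A| + |B|) for boolean volumes."""
    intersection = np.logical_and(auto_mask, manual_mask).sum()
    return 2.0 * intersection / (auto_mask.sum() + manual_mask.sum())

def relative_volume_error(auto_mask, manual_mask, voxel_volume_mm3=1.0):
    """|V_auto - V_manual| / V_manual, as a percentage."""
    v_auto = auto_mask.sum() * voxel_volume_mm3
    v_manual = manual_mask.sum() * voxel_volume_mm3
    return 100.0 * abs(v_auto - v_manual) / v_manual

rng = np.random.default_rng(0)
manual = rng.random((64, 64, 64)) > 0.7        # stand-in manual reconstruction
auto = manual.copy()
auto[::17] = rng.random((4, 64, 64)) > 0.7     # perturbed automated reconstruction
print(dice_coefficient(auto, manual), relative_volume_error(auto, manual))
```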

Prediction of early recurrence in primary central nervous system lymphoma based on multimodal MRI-based radiomics: A preliminary study.

Wang X, Wang S, Zhao X, Chen L, Yuan M, Yan Y, Sun X, Liu Y, Sun S

PubMed · Jul 1, 2025
To evaluate the role of multimodal magnetic resonance imaging radiomics features in predicting early recurrence of primary central nervous system lymphoma (PCNSL) and to investigate their correlation with patient prognosis. A retrospective analysis was conducted on 145 patients with PCNSL who were treated with high-dose methotrexate-based chemotherapy. Clinical data and MRI images were collected, with tumor regions segmented using ITK-SNAP software. Radiomics features were extracted via Pyradiomics, and predictive models were developed using various machine learning algorithms. The predictive performance of these models was assessed using receiver operating characteristic (ROC) curves. Additionally, Cox regression analysis was employed to identify risk factors associated with progression-free survival (PFS). In the cohort of 145 PCNSL patients (72 recurrence, 73 non-recurrence), clinical characteristics were comparable between groups except for the frequency of multiple lesions (61.1% vs. 39.7%, p < 0.05) and of not receiving consolidation therapy (44.4% vs. 13.7%, p < 0.05). A total of 2392 radiomics features were extracted from the CET1 and T2WI MRI sequences. After combining the radiomics features with clinical variables, 10 features were retained following feature selection. The logistic regression (LR) model exhibited superior predictive performance for early PCNSL relapse in the test set, with an area under the curve (AUC) of 0.887 (95% confidence interval: 0.785-0.988). Multivariate Cox regression identified the Cli-Rad score as an independent prognostic factor for PFS. A significant difference in PFS was observed between high- and low-risk groups defined by the Cli-Rad score (8.24 months vs. 24.17 months, p < 0.001). The LR model based on multimodal MRI radiomics and clinical features can effectively predict early recurrence of PCNSL, while the Cli-Rad score can independently forecast PFS among PCNSL patients.
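A minimal sketch of the assumed analysis pattern — a logistic-regression classifier on the retained features for early relapse plus a Cox proportional-hazards model for PFS — is given below; the synthetic data, feature names, and the "cli_rad_score" column are illustrative assumptions, not the study's variables.

```python
# Hedged sketch (assumed workflow, not the authors' code): logistic regression
# on selected features for early relapse, and a Cox model for PFS using a
# hypothetical risk-score column. All data below are synthetic.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 145
X = pd.DataFrame(rng.normal(size=(n, 10)),
                 columns=[f"feat_{i}" for i in range(10)])          # 10 retained features
y = (X["feat_0"] + rng.normal(size=n) > 0).astype(int)              # early relapse label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)
lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", round(roc_auc_score(y_te, lr.predict_proba(X_te)[:, 1]), 3))

# Cox proportional-hazards model for progression-free survival
surv = pd.DataFrame({
    "cli_rad_score": rng.normal(size=n),                            # hypothetical Cli-Rad score
    "pfs_months": rng.exponential(scale=18.0, size=n),
    "progressed": rng.integers(0, 2, size=n),
})
cph = CoxPHFitter().fit(surv, duration_col="pfs_months", event_col="progressed")
cph.print_summary()
```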

CAD-Unet: A capsule network-enhanced Unet architecture for accurate segmentation of COVID-19 lung infections from CT images.

Dang Y, Ma W, Luo X, Wang H

PubMed · Jul 1, 2025
Since the outbreak of the COVID-19 pandemic in 2019, medical imaging has emerged as a primary modality for diagnosing COVID-19 pneumonia. In clinical settings, the segmentation of lung infections from computed tomography images enables rapid and accurate quantification and diagnosis of COVID-19. Segmentation of COVID-19 infections in the lungs poses a formidable challenge, primarily due to the indistinct boundaries and limited contrast presented by ground-glass opacity manifestations. Moreover, the confounding similarity among infiltrates, lung tissues, and lung walls further complicates this segmentation task. To address these challenges, this paper introduces a novel deep network architecture, called CAD-Unet, for segmenting COVID-19 lung infections. In this architecture, capsule networks are incorporated into the existing Unet framework. Capsule networks represent a novel type of network architecture that differs from traditional convolutional neural networks. They utilize vectors for information transfer among capsules, facilitating the extraction of intricate lesion spatial information. Additionally, we design a capsule encoder path and establish a coupling path between the Unet encoder and the capsule encoder. This design maximizes the complementary advantages of both network structures while achieving efficient information fusion. Finally, extensive experiments are conducted on four publicly available datasets, encompassing binary segmentation tasks and multi-class segmentation tasks. The experimental results demonstrate the superior segmentation performance of the proposed model. The code has been released at: https://github.com/AmanoTooko-jie/CAD-Unet.
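For readers unfamiliar with the coupling idea, the sketch below shows, in schematic form only, how a second encoder path can be fused with a Unet-style encoder at matching scales; it omits capsule routing entirely, uses a plain convolutional branch as a stand-in for the capsule encoder, and is not the released CAD-Unet code.

```python
# Schematic sketch only (not the released CAD-Unet code): two encoder paths
# whose feature maps are fused at matching scales via a coupling path.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1),
                         nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))

class DualEncoderUnet(nn.Module):
    def __init__(self, in_ch=1, base=16, n_classes=2):
        super().__init__()
        self.pool = nn.MaxPool2d(2)
        self.u1, self.u2 = conv_block(in_ch, base), conv_block(base, base * 2)      # main encoder
        self.c1, self.c2 = conv_block(in_ch, base), conv_block(base, base * 2)      # stand-in second encoder
        self.fuse1, self.fuse2 = conv_block(base * 2, base), conv_block(base * 4, base * 2)  # coupling path
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, n_classes, 1)

    def forward(self, x):
        u1, c1 = self.u1(x), self.c1(x)
        f1 = self.fuse1(torch.cat([u1, c1], dim=1))                  # fuse the two paths at scale 1
        u2, c2 = self.u2(self.pool(u1)), self.c2(self.pool(c1))
        f2 = self.fuse2(torch.cat([u2, c2], dim=1))                  # fuse at scale 2
        d = self.dec(torch.cat([self.up(f2), f1], dim=1))            # decoder with skip connection
        return self.head(d)

print(DualEncoderUnet()(torch.randn(1, 1, 64, 64)).shape)            # -> (1, 2, 64, 64)
```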

The Evolution of Radiology Image Annotation in the Era of Large Language Models.

Flanders AE, Wang X, Wu CC, Kitamura FC, Shih G, Mongan J, Peng Y

PubMed · Jul 1, 2025
Although there are relatively few diverse, high-quality medical imaging datasets on which to train computer vision artificial intelligence models, even fewer datasets contain expertly classified observations that can be repurposed to train or test such models. The traditional annotation process is laborious and time-consuming. Repurposing annotations and consolidating similar types of annotations from disparate sources has never been practical. Until recently, the use of natural language processing to convert a clinical radiology report into labels required custom training of a language model for each use case. Newer technologies such as large language models have made it possible to generate accurate and normalized labels at scale, using only clinical reports and specific prompt engineering. The combination of automatically generated labels extracted and normalized from reports in conjunction with foundational image models provides a means to create labels for model training. This article provides a short history and review of the annotation and labeling process of medical images, from the traditional manual methods to the newest semiautomated methods that provide a more scalable solution for creating useful models more efficiently. Keywords: Feature Detection, Diagnosis, Semi-supervised Learning © RSNA, 2025.
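A hedged sketch of prompt-based label extraction is shown below: it builds a prompt asking for a normalized JSON label set and parses the response. The label schema, prompt wording, and the stubbed call_llm function are assumptions; a real deployment would call an actual large language model API in place of the stub.

```python
# Illustrative sketch of prompt-based label extraction from a radiology report.
# The schema, prompt, and call_llm() stub are assumptions, not a specific API.
import json

LABEL_SCHEMA = ["pneumothorax", "pleural_effusion", "consolidation", "cardiomegaly"]

PROMPT_TEMPLATE = """You are labeling a chest radiograph report.
For each finding in {labels}, answer "present", "absent", or "uncertain".
Return ONLY a JSON object mapping each finding to its answer.

Report:
\"\"\"{report}\"\"\"
"""

def call_llm(prompt: str) -> str:
    # Placeholder for a real LLM API call; returns a canned response so the
    # sketch runs end to end.
    return json.dumps({"pneumothorax": "absent", "pleural_effusion": "present",
                       "consolidation": "uncertain", "cardiomegaly": "absent"})

def extract_labels(report: str) -> dict:
    prompt = PROMPT_TEMPLATE.format(labels=", ".join(LABEL_SCHEMA), report=report)
    return json.loads(call_llm(prompt))

print(extract_labels("Small right pleural effusion. No pneumothorax. Heart size normal."))
```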

Artificial Intelligence in Prostate Cancer Diagnosis on Magnetic Resonance Imaging: Time for a New PARADIGM.

Ng AB, Giganti F, Kasivisvanathan V

PubMed · Jul 1, 2025
Artificial intelligence (AI) may provide a solution for improving access to expert, timely, and accurate magnetic resonance imaging (MRI) interpretation. The PARADIGM trial will provide level 1 evidence on the role of AI in the diagnosis of prostate cancer on MRI.

Structural uncertainty estimation for medical image segmentation.

Yang B, Zhang X, Zhang H, Li S, Higashita R, Liu J

PubMed · Jul 1, 2025
Precise segmentation and uncertainty estimation are crucial for error identification and correction in medical diagnostic assistance. Existing methods mainly rely on pixel-wise uncertainty estimation. They (1) neglect the global context, leading to erroneous uncertainty indications, and (2) introduce attention interference, wasting extensive detail and causing potential confusion in interpretation. In this paper, we propose a novel structural uncertainty estimation method based on Convolutional Neural Networks (CNN) and Active Shape Models (ASM), named SU-ASM, which incorporates global shape information to provide precise segmentation and uncertainty estimation. SU-ASM consists of three components. Firstly, multi-task generation provides multiple outcomes to assist ASM initialization and shape optimization via a multi-task learning module. Secondly, information fusion creates a Combined Boundary Probability (CBP) and, together with a rapid shape initialization algorithm, Key Landmark Template Matching (KLTM), enhances boundary reliability and selects appropriate shape templates. Finally, shape model fitting matches multiple shape templates to the CBP while maintaining their intrinsic shape characteristics. The fitted shapes generate segmentation results and structural uncertainty estimations. SU-ASM has been validated on a cardiac ultrasound dataset, a ciliary muscle dataset of the anterior eye segment, and a chest X-ray dataset. It outperforms state-of-the-art methods in terms of segmentation and uncertainty estimation.
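The abstract does not define how the Combined Boundary Probability is constructed, so the sketch below is only a heavily hedged illustration of fusing boundary evidence from two multi-task outputs with an assumed weighted average; the gradient-based boundary term and the weight are assumptions.

```python
# Heavily hedged sketch: fuses a boundary map derived from a segmentation
# probability map with a directly predicted boundary map. The fusion rule
# and weight are assumptions, not the paper's CBP definition.
import numpy as np
from scipy.ndimage import sobel

def combined_boundary_probability(seg_prob, boundary_prob, w=0.5):
    # boundary evidence from the segmentation map: gradient magnitude, rescaled to [0, 1]
    grad = np.hypot(sobel(seg_prob, axis=0), sobel(seg_prob, axis=1))
    grad = grad / (grad.max() + 1e-8)
    return w * grad + (1.0 - w) * boundary_prob      # assumed weighted fusion

rng = np.random.default_rng(0)
seg = rng.random((128, 128))         # stand-in multi-task segmentation output
boundary = rng.random((128, 128))    # stand-in predicted boundary map
cbp = combined_boundary_probability(seg, boundary)
print(cbp.shape, float(cbp.min()), float(cbp.max()))
```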

Machine learning approaches for fine-grained symptom estimation in schizophrenia: A comprehensive review.

Foteinopoulou NM, Patras I

PubMed · Jul 1, 2025
Schizophrenia is a severe yet treatable mental disorder, and it is diagnosed using a multitude of primary and secondary symptoms. Diagnosis and treatment for each individual depend on the severity of the symptoms. Therefore, there is a need for accurate, personalised assessments. However, the process can be both time-consuming and subjective; hence, there is a motivation to explore automated methods that can offer consistent diagnosis and precise symptom assessments, thereby complementing the work of healthcare practitioners. Machine Learning has demonstrated impressive capabilities across numerous domains, including medicine; the use of Machine Learning in patient assessment holds great promise for healthcare professionals and patients alike, as it can lead to more consistent and accurate symptom estimation. This survey reviews methodologies utilising Machine Learning for diagnosing and assessing schizophrenia. In contrast to previous reviews that primarily focused on binary classification, this work recognises the complexity of the condition and instead offers an overview of Machine Learning methods designed for fine-grained symptom estimation. We cover multiple modalities, namely Medical Imaging, Electroencephalograms and Audio-Visual, as the illness symptoms can manifest in a patient's pathology and behaviour. Finally, we analyse the datasets and methodologies used in the studies and identify trends and gaps, as well as opportunities for future research.

Reconstruction-based approach for chest X-ray image segmentation and enhanced multi-label chest disease classification.

Hage Chehade A, Abdallah N, Marion JM, Hatt M, Oueidat M, Chauvet P

PubMed · Jul 1, 2025
U-Net is a commonly used model for medical image segmentation. However, when applied to chest X-ray images that show pathologies, it often fails to include these critical pathological areas in the generated masks. To address this limitation, in our study we tackled the challenge of precise segmentation and mask generation by developing a novel approach, using CycleGAN, that encompasses the areas affected by pathologies within the region of interest, allowing the extraction of relevant radiomic features linked to pathologies. Furthermore, we adopted a feature selection approach to focus the analysis on the most significant features. The results of our proposed pipeline are promising, with an average accuracy of 92.05% and an average AUC of 89.48% for the multi-label classification of effusion and infiltration acquired from the ChestX-ray14 dataset, using the XGBoost model. Furthermore, applying our methodology to the classification of the 14 diseases in the ChestX-ray14 dataset resulted in an average AUC of 83.12%, outperforming previous studies. This research highlights the importance of effective pathological mask generation and feature selection for accurate classification of chest diseases. The promising results of our approach underscore its potential for broader applications in the classification of chest diseases.
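A minimal sketch of the assumed classification stage — per-label XGBoost classifiers trained on radiomics features with per-label AUC — is shown below; the synthetic features, the two-label setup, and all hyperparameters are illustrative assumptions, not the authors' pipeline.

```python
# Hedged sketch (assumed setup): one XGBoost classifier per disease label on
# stand-in radiomics features, scored with per-label AUC. Data are synthetic.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputClassifier
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))                       # stand-in radiomics features
Y = np.column_stack([                                # two labels: effusion, infiltration
    (X[:, 0] + rng.normal(size=500) > 0).astype(int),
    (X[:, 1] + rng.normal(size=500) > 0).astype(int),
])

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)
clf = MultiOutputClassifier(XGBClassifier(n_estimators=200, eval_metric="logloss"))
clf.fit(X_tr, Y_tr)

probs = np.column_stack([est.predict_proba(X_te)[:, 1] for est in clf.estimators_])
for i, name in enumerate(["effusion", "infiltration"]):
    print(name, "AUC:", round(roc_auc_score(Y_te[:, i], probs[:, i]), 3))
```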

Challenges, optimization strategies, and future horizons of advanced deep learning approaches for brain lesion segmentation.

Zaman A, Yassin MM, Mehmud I, Cao A, Lu J, Hassan H, Kang Y

PubMed · Jul 1, 2025
Brain lesion segmentation is challenging in medical image analysis, aiming to delineate lesion regions precisely. Deep learning (DL) techniques have recently demonstrated promising results across various computer vision tasks, including semantic segmentation, object detection, and image classification. This paper offers an overview of recent DL algorithms for brain tumor and stroke segmentation, drawing on literature from 2021 to 2024. It highlights the strengths, limitations, current research challenges, and unexplored areas in imaging-based brain lesion classification, based on insights from over 250 recent review papers. Techniques addressing difficulties such as class imbalance and multi-modality data are presented. Optimization methods for improving performance with respect to computational and structural complexity and processing speed are discussed. These include lightweight neural networks, multilayer architectures, and computationally efficient, highly accurate network designs. The paper also reviews generic and recent frameworks of different brain lesion detection techniques and highlights publicly available benchmark datasets and their issues. Furthermore, open research areas, application prospects, and future directions for DL-based brain lesion classification are discussed. Future directions include integrating neural architecture search methods with domain knowledge, predicting patient survival levels, and learning to separate brain lesions using patient statistics. To ensure patient privacy, future research is anticipated to explore privacy-preserving learning frameworks. Overall, the presented suggestions serve as a guideline for researchers and system designers involved in brain lesion detection and stroke segmentation tasks.