
Khosravi P, Fuchs TJ, Ho DJ

pubmed logopapers · Jul 2 2025
The integration of artificial intelligence (AI) in cancer research has significantly advanced radiology, pathology, and multimodal approaches, offering unprecedented capabilities in image analysis, diagnosis, and treatment planning. AI techniques provide standardized assistance for diagnostic and predictive tasks that are otherwise conducted manually with low reproducibility. These AI methods can additionally provide explainability that helps clinicians make the best decisions for patient care. This review explores state-of-the-art AI methods, focusing on their application in image classification, image segmentation, multiple instance learning, generative models, and self-supervised learning. In radiology, AI enhances tumor detection, diagnosis, and treatment planning through advanced imaging modalities and real-time applications. In pathology, AI-driven image analysis improves cancer detection, biomarker discovery, and diagnostic consistency. Multimodal AI approaches can integrate data from radiology, pathology, and genomics to provide comprehensive diagnostic insights. Emerging trends, challenges, and future directions in AI-driven cancer research are discussed, emphasizing the transformative potential of these technologies in improving patient outcomes and advancing cancer care. This article is part of a special series: Driving Cancer Discoveries with Computational Research, Data Science, and Machine Learning/AI.
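Of the techniques the review surveys, multiple instance learning is the one whose mechanics are least obvious from prose alone: a single slide-level label supervises a whole bag of patch embeddings. A minimal NumPy sketch of attention-based MIL pooling; the shapes, parameter names, and random data here are illustrative, not from the review:

```python
import numpy as np

def attention_mil_pool(instances, w, v):
    """Attention-based MIL pooling: score each instance (e.g. a tissue
    patch embedding), softmax the scores, and aggregate into one
    bag-level representation. instances: (n, d); w: (d, h); v: (h,)."""
    scores = np.tanh(instances @ w) @ v          # (n,) raw attention scores
    a = np.exp(scores - scores.max())
    a = a / a.sum()                              # softmax over instances
    return a @ instances, a                      # (d,) bag embedding, weights

rng = np.random.default_rng(0)
patches = rng.normal(size=(5, 8))                # 5 patch embeddings, dim 8
w, v = rng.normal(size=(8, 4)), rng.normal(size=4)
bag, weights = attention_mil_pool(patches, w, v)
```

The bag embedding feeds a slide-level classifier, and the attention weights double as a crude explainability signal over patches.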

Song Q, He X, Wang Y, Gao H, Tan L, Ma J, Kang L, Han P, Luo Y, Wang K

pubmed logopapers · Jul 2 2025
The study aimed to develop an AI-assisted ultrasound model for early liver trauma identification, using data from Bama miniature pigs and patients in Beijing, China. A deep learning model was created and fine-tuned with animal and clinical data, achieving high accuracy metrics. In internal tests, the model outperformed both junior and senior sonographers. External tests showed the model's effectiveness, with a Dice similarity coefficient of 0.74, a true positive rate of 0.80, a positive predictive value of 0.74, and a 95% Hausdorff distance of 14.84. The model's performance was comparable to that of junior sonographers and slightly lower than that of senior sonographers. This AI model shows promise for liver injury detection, offering a valuable tool with diagnostic capabilities similar to those of less experienced human operators.
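Three of the four metrics reported here (Dice, true positive rate, positive predictive value) can all be computed from one pair of binary masks. A minimal NumPy sketch; the toy masks are illustrative:

```python
import numpy as np

def overlap_metrics(pred, gt):
    """Dice, true positive rate, and positive predictive value
    for binary segmentation masks of any shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()          # correctly predicted pixels
    dice = 2 * tp / (pred.sum() + gt.sum())
    tpr = tp / gt.sum()                          # sensitivity / recall
    ppv = tp / pred.sum()                        # precision
    return dice, tpr, ppv

gt   = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0]])
pred = np.array([[1, 1, 1, 0],
                 [1, 0, 0, 0]])
dice, tpr, ppv = overlap_metrics(pred, gt)       # 0.75, 0.75, 0.75
```

The 95% Hausdorff distance additionally needs the boundary point sets of both masks, so it is usually taken from a library rather than hand-rolled.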

Hu Z, Zhang X, Yang J, Zhang B, Chen H, Shen W, Li H, Zhou Y, Zhang J, Qiu K, Xie Z, Xu G, Tan J, Pang C

pubmed logopapers · Jul 2 2025
To propose a deep learning model and explore its performance in the auxiliary diagnosis of lung cancer associated with cystic airspaces (LCCA) in computed tomography (CT) images. This retrospective analysis incorporated a total of 342 CT series, comprising 272 series from patients diagnosed with LCCA and 70 series from patients with pulmonary bulla. A deep learning model named LungSSFNet, developed on the basis of nnUnet, was used for image recognition and segmentation, with reference annotations provided by experienced thoracic surgeons. The dataset was divided into a training set (245 series), a validation set (62 series), and a test set (35 series). The performance of LungSSFNet was compared with that of other models such as UNet, M2Snet, TANet, MADGNet, and nnUnet to evaluate its effectiveness in recognizing and segmenting LCCA and pulmonary bulla. LungSSFNet achieved an intersection over union of 81.05% and a Dice similarity coefficient of 75.15% for LCCA, and 93.03% and 92.04%, respectively, for pulmonary bulla. These outcomes demonstrate that LungSSFNet outperformed many existing models in segmentation tasks. Additionally, it attained an accuracy of 96.77%, a precision of 100%, and a sensitivity of 96.15%. LungSSFNet, a new deep learning model, substantially improved the diagnosis of early-stage LCCA and is potentially valuable for auxiliary clinical decision-making. Our LungSSFNet code is available at https://github.com/zx0412/LungSSFNet.
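Intersection over union and the Dice coefficient, both reported above, are deterministically linked for any pair of masks (Dice = 2·IoU / (1 + IoU)), which is a handy sanity check when reading segmentation tables. A small NumPy sketch with toy masks:

```python
import numpy as np

def iou_and_dice(pred, gt):
    """IoU and Dice for binary masks; for any pair of masks the two
    satisfy dice == 2 * iou / (1 + iou)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    iou = inter / union
    dice = 2 * inter / (pred.sum() + gt.sum())
    return iou, dice

pred = np.array([1, 1, 1, 0, 0])
gt   = np.array([1, 1, 0, 0, 1])
iou, dice = iou_and_dice(pred, gt)   # 0.5, 0.666...
```

Applied to the paper's LCCA numbers: an IoU of 81.05% would imply a Dice of about 89.5% on identical masks, so the reported 75.15% Dice is presumably averaged differently (e.g. per-case vs pooled), a distinction worth checking in any segmentation comparison.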

Zhang C, Wang Z, Shang P, Zhou Y, Zhu J, Xu L, Chen Z, Yu M, Zang Y

pubmed logopapers · Jul 2 2025
This study aims to investigate the diagnostic value of integrating multi-parametric magnetic resonance imaging (mpMRI) radiomic features with tumor abnormal protein (TAP) and clinical characteristics for diagnosing prostate cancer. A cohort of 109 patients who underwent both mpMRI and TAP assessments prior to prostate biopsy was enrolled. Radiomic features were extracted from T2-weighted imaging (T2WI) and apparent diffusion coefficient (ADC) maps. Feature selection was performed using t-tests and Least Absolute Shrinkage and Selection Operator (LASSO) regression, followed by model construction using the random forest algorithm. To further enhance the model's accuracy and predictive performance, the study incorporated clinical factors including age, serum prostate-specific antigen (PSA) levels, and prostate volume. By integrating these clinical indicators with radiomic features, a more comprehensive and precise predictive model was developed. Finally, the model's performance was quantified by calculating accuracy, sensitivity, specificity, precision, recall, F1 score, and the area under the curve (AUC). From the mpMRI sequences of T2WI, dADC (b = 100/1000 s/mm<sup>2</sup>), and dADC (b = 100/2000 s/mm<sup>2</sup>), 8, 10, and 13 radiomic features, respectively, were identified as significantly correlated with prostate cancer. Random forest models constructed from these three sets of radiomic features achieved AUCs of 0.83, 0.86, and 0.87, respectively. When all three sets were integrated into a single random forest model, an AUC of 0.84 was obtained. Additionally, a random forest model constructed on TAP and clinical characteristics achieved an AUC of 0.85. Notably, combining mpMRI radiomic features with TAP and clinical characteristics, or integrating the dADC (b = 100/2000 s/mm<sup>2</sup>) sequence with TAP and clinical characteristics, improved the AUCs to 0.91 and 0.92, respectively.
The proposed model, which integrates radiomic features, TAP, and clinical characteristics using machine learning, demonstrated high predictive efficiency in diagnosing prostate cancer.
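The t-test stage of the feature selection pipeline is easy to sketch. The NumPy fragment below ranks features by absolute Welch t-statistic between the two diagnostic groups and keeps the top k; it is a stand-in only, since the paper additionally applies LASSO, and the data here are synthetic:

```python
import numpy as np

def t_select(features, labels, k):
    """Keep the k features with the largest absolute Welch t-statistic
    between the positive and negative groups (univariate screening)."""
    pos, neg = features[labels == 1], features[labels == 0]
    num = pos.mean(0) - neg.mean(0)
    den = np.sqrt(pos.var(0, ddof=1) / len(pos) + neg.var(0, ddof=1) / len(neg))
    t = np.abs(num / den)
    return np.argsort(t)[::-1][:k]               # indices of top-k features

rng = np.random.default_rng(1)
X = rng.normal(size=(109, 20))                   # 109 patients, 20 toy features
y = rng.integers(0, 2, size=109)
X[y == 1, 3] += 2.0                              # make feature 3 informative
keep = t_select(X, y, k=5)
```

Univariate screening like this is cheap but ignores feature correlations, which is exactly the gap the subsequent LASSO step is meant to close.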

Cariola A, Sibilano E, Guerriero A, Bevilacqua V, Brunetti A

pubmed logopapers · Jul 2 2025
Automated segmentation of pediatric brain tumors (PBTs) can support precise diagnosis and treatment monitoring, but it is still poorly investigated in the literature. This study proposes two different Deep Learning approaches for semantic segmentation of tumor regions in PBTs from MRI scans. Two pipelines were developed for segmenting enhanced tumor (ET), tumor core (TC), and whole tumor (WT) in pediatric gliomas from the BraTS-PEDs 2024 dataset. First, a pre-trained SegResNet model was retrained with a transfer learning approach and tested on the pediatric cohort. Then, two novel multi-encoder architectures leveraging the attention mechanism were designed and trained from scratch. To enhance the performance on ET regions, an ensemble paradigm and post-processing techniques were implemented. Overall, the 3-encoder model achieved the best performance in terms of Dice Score on TC and WT when trained with Dice Loss and on ET when trained with Generalized Dice Focal Loss. SegResNet showed higher recall on TC and WT, and higher precision on ET. After post-processing, we reached Dice Scores of 0.843, 0.869, 0.757 with the pre-trained model and 0.852, 0.876, 0.764 with the ensemble model for TC, WT and ET, respectively. Both strategies yielded state-of-the-art performance, although the ensemble demonstrated significantly superior results. Segmentation of the ET region was improved after post-processing, which increased test metrics while maintaining the integrity of the data.
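A common post-processing step for noisy enhanced-tumor predictions is removing connected components below a size threshold. The sketch below does this for a 2D binary mask with a plain BFS; the paper's actual post-processing and threshold are not specified, so treat this as illustrative:

```python
import numpy as np
from collections import deque

def remove_small_components(mask, min_size):
    """Drop 4-connected components smaller than min_size pixels from a
    binary 2D mask (in 3D the same idea applies with 6-connectivity)."""
    mask = mask.astype(bool)
    out = np.zeros_like(mask)
    seen = np.zeros_like(mask)
    h, w = mask.shape
    for sy in range(h):
        for sx in range(w):
            if not mask[sy, sx] or seen[sy, sx]:
                continue
            comp, q = [], deque([(sy, sx)])      # BFS over one component
            seen[sy, sx] = True
            while q:
                y, x = q.popleft()
                comp.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                        seen[ny, nx] = True
                        q.append((ny, nx))
            if len(comp) >= min_size:            # keep only large components
                for y, x in comp:
                    out[y, x] = True
    return out

pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 1],
                 [0, 0, 0, 0]])
clean = remove_small_components(pred, min_size=2)   # lone pixel removed
```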

Alasiry A, Shinan K, Alsadhan AA, Alhazmi HE, Alanazi F, Ashraf MU, Muhammad T

pubmed logopapers · Jul 2 2025
Alzheimer's disease (AD) is a progressive neurodegenerative disorder that significantly impacts cognitive function, posing a major global health challenge. Despite its rising prevalence, particularly in low- and middle-income countries, early diagnosis remains inadequate; more than 55 million individuals were estimated to be affected as of 2022, a figure projected to nearly triple by 2050. Accurate early detection is critical for effective intervention. This study presents Neuroimaging-based Early Detection of Alzheimer's Disease using Deep Learning (NEDA-DL), a novel computer-aided diagnostic (CAD) framework leveraging a hybrid ResNet-50 and AlexNet architecture optimized with CUDA-based parallel processing. The proposed deep learning model processes MRI and PET neuroimaging data, utilizing depthwise separable convolutions to enhance computational efficiency. Performance evaluation using key metrics including accuracy, sensitivity, specificity, and F1-score demonstrates state-of-the-art classification performance, with the Softmax classifier achieving 99.87% accuracy. Comparative analyses further validate the superiority of NEDA-DL over existing methods. By integrating structural and functional neuroimaging insights, this approach enhances diagnostic precision and supports clinical decision-making in Alzheimer's disease detection.
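The parameter saving from depthwise separable convolutions, which NEDA-DL uses for efficiency, is easy to quantify: a standard k×k convolution costs k²·C_in·C_out weights, while the separable version costs k²·C_in + C_in·C_out. A short sketch (the channel counts are arbitrary examples, not from the paper):

```python
def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution (bias omitted)."""
    return k * k * c_in * c_out

def dw_separable_params(k, c_in, c_out):
    """Depthwise separable conv = per-channel k x k depthwise filter
    followed by a 1x1 pointwise convolution mixing channels."""
    return k * k * c_in + c_in * c_out

# e.g. a 3x3 layer mapping 128 -> 256 channels
std = conv_params(3, 128, 256)            # 294912
sep = dw_separable_params(3, 128, 256)    # 33920
ratio = std / sep                         # ~8.7x fewer parameters
```

The ratio approaches k² for wide layers, which is why the trick matters most in the deeper, high-channel stages of a network.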

Zuo C, Xue J, Yuan C

pubmed logopapers · Jul 2 2025
The early diagnosis of brain tumors is crucial for patient prognosis, and medical imaging techniques such as MRI and CT scans are essential tools for diagnosing brain tumors. However, high-quality medical image data for brain tumors is often scarce and difficult to obtain, which hinders the development and application of medical image analysis models. With the advancement of artificial intelligence, particularly deep learning technologies in the field of medical imaging, new concepts and tools have been introduced for the early diagnosis, treatment planning, and prognosis evaluation of brain tumors. To address the challenge of imbalanced brain tumor datasets, we propose a novel data augmentation technique based on a diffusion model, referred to as the Multi-Channel Fusion Diffusion Model (MCFDiffusion). This method tackles the issue of data imbalance by converting healthy brain MRI images into images containing tumors, thereby enabling deep learning models to achieve better performance and assisting physicians in making more accurate diagnoses and treatment plans. In our experiments, we used a publicly available brain tumor dataset and compared the performance of image classification and segmentation tasks between the original data and the data enhanced by our method. The results show that the enhanced data improved the classification accuracy by approximately 3% and the Dice coefficient for segmentation tasks by 1.5%-2.5%. Our research builds upon previous work involving Denoising Diffusion Implicit Models (DDIMs) for image generation and further enhances the applicability of this model in medical imaging by introducing a multi-channel approach and fusing defective areas with healthy images. Future work will explore the application of this model to various types of medical images and further optimize the model to improve its generalization capabilities. We release our code at https://github.com/feiyueaaa/MCFDiffusion.
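At its simplest, fusing a defective (tumor) region into a healthy image is mask-weighted blending. The sketch below shows only that bookkeeping and is a naive stand-in for the diffusion-guided fusion MCFDiffusion actually performs:

```python
import numpy as np

def fuse_lesion(healthy, tumor, mask, alpha=1.0):
    """Paste the masked tumor region into a healthy scan:
    out = healthy * (1 - alpha*mask) + tumor * alpha*mask.
    alpha < 1 gives a softer blend at the lesion boundary."""
    m = alpha * mask.astype(float)
    return healthy * (1.0 - m) + tumor * m

healthy = np.full((4, 4), 0.2)            # toy "healthy" intensities
tumor   = np.full((4, 4), 0.9)            # toy "lesion" intensities
mask    = np.zeros((4, 4))
mask[1:3, 1:3] = 1                        # lesion footprint
aug = fuse_lesion(healthy, tumor, mask)
```

A diffusion model improves on this by synthesizing lesion texture conditioned on the surrounding anatomy instead of copying pixels, but the mask arithmetic is the same.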

Jerković I, Bašić Ž, Kružić I

pubmed logopapers · Jul 2 2025
This study investigates a deep learning approach for sex estimation using 3D hyoid bone models derived from computed tomography (CT) scans of a Croatian population. We analyzed 202 hyoid samples (101 male, 101 female), converting CT-derived meshes into 2048-point clouds for processing with an adapted PointNet++ network. The model, optimized for small datasets with 1D convolutional layers and global size features, was first applied in an unsupervised framework. Unsupervised clustering achieved 87.10% accuracy, identifying natural sex-based morphological patterns. Subsequently, supervised classification with a support vector machine yielded an accuracy of 88.71% (Matthews Correlation Coefficient, MCC = 0.7746) on a test set (n = 62). Interpretability analysis highlighted key regions influencing classification, with males exhibiting larger, U-shaped hyoids and females showing smaller, more open structures. Despite the modest sample size, the method effectively captured sex differences, providing a data-efficient and interpretable tool. This flexible approach, combining computational efficiency with practical insights, demonstrates potential for aiding sex estimation in cases with limited skeletal remains and may support broader applications in forensic anthropology.
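Converting a CT-derived mesh into a fixed-size point cloud is commonly done with farthest point sampling, the downsampler PointNet++ pipelines typically rely on (the abstract does not name its sampler, so this is an assumption). A NumPy sketch:

```python
import numpy as np

def farthest_point_sample(points, n):
    """Greedy farthest-point sampling: repeatedly pick the point furthest
    from the already-chosen set, preserving surface coverage while
    downsampling (the paper uses 2048 points per hyoid model)."""
    pts = np.asarray(points, dtype=float)
    chosen = [0]                                  # seed with the first point
    d = np.linalg.norm(pts - pts[0], axis=1)      # distance to chosen set
    for _ in range(n - 1):
        nxt = int(np.argmax(d))                   # furthest remaining point
        chosen.append(nxt)
        d = np.minimum(d, np.linalg.norm(pts - pts[nxt], axis=1))
    return pts[chosen]

rng = np.random.default_rng(2)
cloud = rng.normal(size=(500, 3))                 # toy stand-in for mesh vertices
sub = farthest_point_sample(cloud, 64)            # fixed-size summary
```

Unlike uniform random sampling, FPS guarantees no large region of the bone surface is left unrepresented, which matters for thin structures like the hyoid.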

Zhang Z, Xu C, Li Z, Chen Y, Nie C

pubmed logopapers · Jul 2 2025
The application of sophisticated computer vision techniques for medical image segmentation (MIS) plays a vital role in clinical diagnosis and treatment. Although Transformer-based models are effective at capturing global context, they often struggle with local feature dependencies. To address this problem, we design a Multi-scale Fusion and Semantic Enhancement Network (MFSE-Net) for endoscopic image segmentation, which aims to capture global information and enhance detailed information. MFSE-Net uses a dual encoder architecture, with PVTv2 as the primary encoder to capture global features and CNNs as the secondary encoder to capture local details. The main encoder includes the LGDA (Large-kernel Grouped Deformable Attention) module for filtering noise and enhancing the semantic extraction of the four hierarchical features. The auxiliary encoder leverages the MLCF (Multi-Layered Cross-attention Fusion) module to integrate high-level semantic data from the deep CNN with fine spatial details from the shallow layers, enhancing the precision of boundaries and positioning. On the decoder side, we introduce the PSE (Parallel Semantic Enhancement) module, which embeds the boundary and position information of the secondary encoder into the output features of the backbone network. In the multi-scale decoding process, we also add a SAM (Scale Aware Module) to recover global semantic information and compensate for the loss of boundary details. Extensive experiments show that MFSE-Net substantially outperforms state-of-the-art methods on renal tumor and polyp datasets.

Said Y, Ayachi R, Afif M, Saidani T, Alanezi ST, Saidani O, Algarni AD

pubmed logopapers · Jul 2 2025
Lung cancer remains the leading cause of cancer-related mortality worldwide, necessitating accurate and efficient diagnostic tools to improve patient outcomes. Lung segmentation plays a pivotal role in the diagnostic pipeline, directly impacting the accuracy of disease detection and treatment planning. This study presents an advanced AI-driven framework, optimized through genetic algorithms, for precise lung segmentation in early cancer diagnosis. The proposed model builds upon the UNet3+ architecture and integrates multi-scale feature extraction with enhanced optimization strategies to improve segmentation accuracy while significantly reducing computational complexity. By leveraging genetic algorithms, the framework identifies optimal neural network configurations within a defined search space, ensuring high segmentation performance with minimal parameters. Extensive experiments conducted on publicly available lung segmentation datasets demonstrated superior results, achieving a Dice similarity coefficient of 99.17% with only 26% of the parameters required by the baseline UNet3+ model. This substantial reduction in model size and computational cost makes the system highly suitable for resource-constrained environments, including point-of-care diagnostic devices. The proposed approach exemplifies the transformative potential of AI in medical imaging, enabling earlier and more precise lung cancer diagnosis while reducing healthcare disparities in resource-limited settings.
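A genetic search over discrete architecture choices (depth, width, kernel size) can be sketched in a few lines. The search space and fitness function below are toy stand-ins, not the paper's actual configuration space or objective:

```python
import random

def evolve(fitness, space, pop_size=12, generations=15, seed=0):
    """Minimal genetic algorithm over discrete hyperparameter choices:
    truncation selection, uniform crossover, single-gene mutation.
    `space` maps each gene name to its list of allowed values."""
    rng = random.Random(seed)
    keys = list(space)
    pop = [{k: rng.choice(space[k]) for k in keys} for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)       # rank by fitness
        parents = pop[: pop_size // 2]            # keep the top half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = {k: rng.choice((a[k], b[k])) for k in keys}  # crossover
            m = rng.choice(keys)                                 # mutation
            child[m] = rng.choice(space[m])
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Toy search space standing in for the paper's network configurations;
# in practice fitness would be validation Dice minus a parameter penalty.
space = {"depth": [2, 3, 4, 5], "width": [16, 32, 64], "kernel": [3, 5, 7]}
best = evolve(lambda c: c["depth"] * c["width"] - c["kernel"], space)
```

Because each fitness evaluation here would really mean training a candidate network, practical neural-architecture GAs keep populations small and reuse weights or proxy metrics to stay tractable.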
