
Abrar M, Salam A, Ullah F, Ullah F, Al Ghamdi AS

PubMed · Oct 21 2025
Precise segmentation of brain tumors is essential for efficient diagnosis and therapy planning. Current automated methods frequently fail to capture complicated tumor shapes, while traditional manual delineation is laborious, subjective, and poorly reproducible. The proposed Attention-based Convolutional U-Net (ACU-Net) addresses these issues by incorporating attention mechanisms into the U-Net architecture, with the objective of improving the precision and reliability of tumor boundary delineation on MRI data. The research framework consists of data acquisition from the BraTS 2018 MRI dataset, followed by intensity normalization, spatial resolution standardization, and data augmentation. ACU-Net employs attention gates and was trained with a combined Dice and cross-entropy loss. Precision, recall, Dice similarity coefficient (DSC), and intersection over union (IoU) were used to compare the proposed ACU-Net against benchmark models, including plain U-Nets and convolutional neural networks (CNNs). ACU-Net proved most effective for brain tumor segmentation, with Dice scores of 94.04% for Whole Tumor (WT), 98.63% for Tumor Core (TC), and 98.77% for Enhancing Tumor (ET). The proposed ACU-Net outperformed the baseline models, demonstrating its capacity to segment different tumor classes. ACU-Net thus enhances brain tumor segmentation and can serve as a reliable tool for clinical applications. These findings confirm that attention mechanisms improve the accuracy and robustness of medical image segmentation.
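The attention gates named in the abstract are the core architectural addition. Below is a minimal PyTorch sketch of an additive attention gate of the kind typically grafted onto U-Net skip connections; the 3D setting matches BraTS volumes, but the channel sizes and the assumption that gating and skip features share spatial dimensions are illustrative, not details taken from the paper.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate: a decoder gating signal g re-weights encoder skip features x."""
    def __init__(self, g_ch: int, x_ch: int, inter_ch: int):
        super().__init__()
        self.w_g = nn.Conv3d(g_ch, inter_ch, kernel_size=1)   # project gating signal
        self.w_x = nn.Conv3d(x_ch, inter_ch, kernel_size=1)   # project skip features
        self.psi = nn.Conv3d(inter_ch, 1, kernel_size=1)      # collapse to one attention map

    def forward(self, g: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # g and x are assumed pre-resampled to the same spatial shape.
        a = torch.relu(self.w_g(g) + self.w_x(x))     # additive attention
        alpha = torch.sigmoid(self.psi(a))            # per-voxel coefficients in [0, 1]
        return x * alpha                              # suppress irrelevant skip activations
```

The gated skip features then replace the plain skip connection, so the decoder concatenates only the regions the gate deems relevant.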

Gundogdu A, Wetherilt CS, Alpar A, Abdullah S, Yilmaz OC, Celik L

PubMed · Oct 21 2025
Ductal carcinoma in situ (DCIS) is a heterogeneous precursor lesion with variable invasive potential. Current predictive parameters for invasion risk offer limited utility for personalized assessment. This study aims to evaluate artificial intelligence (AI)-assisted mammography analysis as a tool for predicting invasion risk in DCIS patients. In this retrospective cohort study, 74 patients with pathologically proven DCIS by preoperative biopsy were analyzed using a deep learning-based AI system (Transpara version 1.7.4). The AI system classified patients into low-risk and high-risk groups, which were validated against postoperative histopathological findings. Statistical analysis included sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy calculations. Invasion was detected in 19 (25.7%) patients, with 18 (94.7%) classified as high-risk by the AI system. The model demonstrated 94.7% sensitivity, 45.5% specificity, 37.5% PPV, and 96.2% NPV. In patients aged ≥ 50 years and those with lesions ≥ 3 cm, the NPV reached 100%. A significant relationship was found between necrosis and invasion (p = 0.004). The high NPV suggests AI-assisted mammography analysis could serve as an effective rule-out tool for invasion in DCIS patients, potentially identifying candidates for less aggressive surgical treatment. Further validation in larger, multi-center studies is necessary to confirm these findings.
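As a consistency check, the reported metrics can be reproduced from the abstract's counts. A short sketch follows; note that the false-positive count is back-derived from the stated PPV rather than reported directly, so it is an inference, not a figure from the paper.

```python
# 74 patients, 19 invasive; 18 of the invasive cases were flagged high-risk.
tp, fn = 18, 1
fp = 30                  # inferred: PPV = 18 / (18 + fp) = 0.375  ->  fp = 30
tn = 74 - tp - fn - fp   # = 25

sensitivity = tp / (tp + fn)   # 18/19 = 94.7%
specificity = tn / (tn + fp)   # 25/55 = 45.5%
ppv = tp / (tp + fp)           # 18/48 = 37.5%
npv = tn / (tn + fn)           # 25/26 = 96.2%
print(f"Sens {sensitivity:.1%}, Spec {specificity:.1%}, PPV {ppv:.1%}, NPV {npv:.1%}")
```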

Li R, Guo H, Wu Q, Han J, Kang S

PubMed · Oct 21 2025
The World Health Organization predicts that by 2030, chronic obstructive pulmonary disease (COPD) will be the third leading cause of death and the seventh leading cause of morbidity worldwide. Pulmonary function tests (PFT) are the gold standard for COPD diagnosis. Since COPD is an incurable disease whose diagnosis takes considerable time even for an experienced specialist, a simple means of analyzing abnormalities is important. Although many deep learning (DL) methods based on computed tomography (CT) have been developed to identify COPD, the pathological changes of COPD on CT are multi-dimensional and highly spatially heterogeneous, and predictive performance still needs improvement. The purpose of this study was to develop a DL-based multimodal feature fusion model to accurately estimate PFT parameters from chest CT images and to verify its performance. In this retrospective study, participants underwent chest CT examination and PFT at the Fourth Clinical Medical College of Xinjiang Medical University between January 2018 and July 2024. Five PFT parameters served as prediction targets: forced expiratory volume in 1 s (FEV1), forced vital capacity (FVC), the FEV1/FVC ratio, FEV1 as a percentage of the predicted value (FEV1%), and FVC as a percentage of the predicted value (FVC%), paired with the corresponding chest CT scans of 3108 participants. The data were randomly split into training and validation groups at a ratio of 9:1, and the model was assessed with 10-fold cross-validation. Each parameter was trained and evaluated separately on the DL network. The mean absolute error (MAE), mean squared error (MSE), and Pearson correlation coefficient (r) served as evaluation indices, and agreement between predicted and measured values was analyzed with Bland-Altman plots. The interpretability of the model's predictions was examined with Grad-CAM visualization. A total of 2408 subjects were included (mean age 66 ± 12 years; 1479 men). Of these, 822 cases were used to train the encoder that extracts image features, and 1586 cases were used to develop and validate a multimodal feature fusion model based on a multilayer perceptron (MLP). For FEV1, the MAE, MSE, and r between measured and model-estimated values were 0.34, 0.20, and 0.84, respectively; for FVC, 0.42, 0.31, and 0.81; for FEV1/FVC, 6.64, 0.73, and 0.77; for FEV1%, 13.42, 3.01, and 0.73; and for FVC%, 13.33, 2.97, and 0.61. A strong correlation was observed between measured and predicted values for FEV1, FVC, FEV1/FVC, and FEV1%. Bland-Altman analysis showed good agreement between estimated and measured values for all PFT parameters. These preliminary results indicate that the MLP-based multimodal feature fusion model has the potential to predict PFT parameters in COPD patients in real time. However, the study used measurements taken before bronchodilator administration, which may affect interpretation of the results; future studies should use post-bronchodilator measurements to better align with clinical standards.
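For readers wanting to replicate the evaluation, the metrics and the agreement analysis reduce to a few lines. The sketch below uses hypothetical FEV1 values in litres; only the metric definitions, not any study data, are taken from the abstract.

```python
import numpy as np

def regression_report(y_true, y_pred):
    """MAE, MSE, Pearson r, and Bland-Altman bias with 95% limits of agreement."""
    err = y_pred - y_true
    mae = np.mean(np.abs(err))
    mse = np.mean(err ** 2)
    r = np.corrcoef(y_true, y_pred)[0, 1]
    bias = err.mean()
    half_width = 1.96 * err.std(ddof=1)   # limits of agreement: bias +/- half_width
    return mae, mse, r, (bias - half_width, bias + half_width)

# Hypothetical measured vs. predicted FEV1 (litres):
y_true = np.array([2.1, 1.4, 3.0, 2.6, 1.9])
y_pred = np.array([2.3, 1.2, 2.8, 2.7, 2.0])
print(regression_report(y_true, y_pred))
```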

Yin S, Ming J, Chen H, Sun Y, Jiang C

PubMed · Oct 21 2025
Accurate preoperative glioma grading remains a critical challenge in neuro-oncology. This study presents a novel integrated approach combining deep learning architectures with radiomics features derived from multi-parametric MRI to improve preoperative glioma grading accuracy. In this retrospective multi-center study, we analyzed 847 patients with histopathologically confirmed gliomas from five tertiary neurosurgical centers. Multi-parametric MRI sequences (T1, T1-contrast, T2, FLAIR) were processed using a dual-stream framework in which (1) a 3D convolutional neural network extracted deep imaging features and (2) 1,423 quantitative radiomic features were extracted and selected using a recursive feature elimination algorithm. We developed an ensemble model that integrates both feature streams with clinical variables. Model performance was evaluated through 5-fold cross-validation and external validation on an independent cohort (n = 213). The integrated model achieved superior performance (AUC = 0.946, 95% CI: 0.927-0.965) compared to radiomics-only (AUC = 0.891) or deep learning-only (AUC = 0.903) approaches for distinguishing high-grade (WHO grades III-IV) from low-grade (WHO grades I-II) gliomas. Notably, the model demonstrated robust performance across different MRI acquisition parameters (AUC = 0.921 on external validation). Subgroup analysis revealed particular efficacy in identifying isocitrate dehydrogenase (IDH) wild-type gliomas (sensitivity 0.954, specificity 0.912). The model accurately identified 89.2% of gliomas with molecular features associated with aggressive behavior but ambiguous conventional imaging characteristics. This integrated radiomics-deep learning approach significantly improves preoperative glioma grading accuracy across diverse patient populations and imaging protocols. The proposed framework offers a non-invasive tool for preoperative risk stratification, potentially informing surgical planning and treatment strategies. The model's interpretability provides insights into imaging biomarkers associated with glioma aggressiveness.
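The radiomics stream is the most readily reproduced part of the pipeline. The sketch below shows recursive feature elimination followed by late fusion with deep and clinical features, using scikit-learn on placeholder data; the abstract does not specify the ensemble classifier or the post-selection feature count, so those choices here are assumptions.

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_rad = rng.random((100, 200))        # placeholder radiomic matrix (study: 847 x 1,423)
y = rng.integers(0, 2, 100)           # 1 = high-grade (WHO III-IV)

# Recursive feature elimination, as named in the abstract.
selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=32, step=20)
X_sel = selector.fit_transform(X_rad, y)

# Late fusion: concatenate selected radiomics with deep and clinical features.
X_deep = rng.random((100, 64))        # placeholder 3D-CNN embedding
X_clin = rng.random((100, 4))         # placeholder clinical variables
X_fused = np.hstack([X_sel, X_deep, X_clin])
clf = LogisticRegression(max_iter=1000).fit(X_fused, y)
```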

Namburi P, Pallarès-López R, Folgado D, Magana-Salgado U, Rosendorf J, Ryu E, Kappacher A, Gamboa H, Anthony BW, Daniel L

PubMed · Oct 21 2025
Insights into the general nature of motor skill could fundamentally change how we develop movement abilities, with implications for musculoskeletal well-being and injury. Here, we sought to identify indicators of general motor skill: those shared by experts across disciplines (e.g., squash, ballet, volleyball) during non-specialized movements (e.g., reaching for water). Identifying such general indicators of motor skill has remained elusive. Using ultrasound imaging with deep learning and optical flow analysis, we tracked elastic tissues (muscles and associated connective tissues) during a simple reaching task performed similarly by world-class athletes, regional-level athletes drawn from diverse disciplines, and untrained non-experts. We analyzed two types of inefficient tissue motion that do not contribute to the net work done by the muscles to actuate joints: transverse muscle movements orthogonal to the muscle fiber direction, and physiological tremors. We found that world-class experts minimize both of these inefficient motions compared to regional-level athletes and non-experts. While regional-level athletes surprisingly showed inefficiencies similar to those of non-experts, they used elastic tissues more effectively, achieving equivalent arm movements with smaller actuation-related tissue motions. We establish elastic tissue motion as a key indicator of general motor skill, expanding our understanding of elastic mechanisms and their role in general aspects of motor skill.
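The flow-decomposition idea (separating tissue motion along the fiber axis from the transverse component the authors treat as inefficient) can be illustrated with dense optical flow. The sketch below uses OpenCV's Farneback flow on synthetic frames; the fiber angle and the frames are stand-ins, and the study's deep-learning tracker is not reproduced here.

```python
import cv2
import numpy as np

rng = np.random.default_rng(0)
prev = (rng.random((128, 128)) * 255).astype(np.uint8)  # stand-in ultrasound frame
curr = np.roll(prev, shift=2, axis=1)                   # synthetic rightward motion

flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

theta = np.deg2rad(12.0)                          # assumed fiber (pennation) angle
fiber = np.array([np.cos(theta), np.sin(theta)])
normal = np.array([-np.sin(theta), np.cos(theta)])

along = flow @ fiber        # displacement component along the fiber direction
transverse = flow @ normal  # transverse component ("inefficient" per the study)
print("mean |transverse| displacement (px):", np.abs(transverse).mean())
```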

Chatterjee P, Chakrabarti A, Das Sharma K

PubMed · Oct 21 2025
Brain MRI segmentation plays a crucial role in medical imaging, aiding the identification and monitoring of brain diseases. This research presents a novel deep learning-based framework designed to achieve high segmentation accuracy while maintaining a lightweight architecture suitable for real-world deployment. The proposed method uses EfficientNet B0 as an encoder, which provides rich multi-scale feature extraction with significantly reduced model complexity. To enhance global context modeling without increasing the computational burden, the framework incorporates Visual State-Space blocks. These blocks leverage patch merging and state-space modeling to capture long-range spatial dependencies efficiently. Additionally, a multi-scale attention mechanism inspired by the Mamba architecture refines feature representations across scales, improving the network's ability to segment complex anatomical structures and lesions. The decoder follows a U-Net-inspired design, integrating skip connections to preserve spatial details and enable high-resolution segmentation map reconstruction. Training is optimized with a hybrid loss function that combines Active Contour Loss, for precise boundary delineation, with Focal Loss, to mitigate class imbalance, ensuring robust segmentation performance. By effectively balancing segmentation accuracy with a lightweight design, the proposed approach produces visually superior segmentation results compared to other state-of-the-art methods.
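The hybrid objective pairs a boundary term with an imbalance-aware term. A minimal PyTorch sketch of one common formulation is given below; the exact active-contour variant and loss weights used in the paper are not stated, so the choices here (total-variation length term, standard focal loss) are assumptions.

```python
import torch
import torch.nn.functional as F

def hybrid_loss(logits, target, lam=1.0, gamma=2.0, alpha=0.25):
    """Simplified active-contour term (length + region) plus focal loss.
    logits: (N, 1, H, W) raw predictions; target: float binary masks, same shape."""
    p = torch.sigmoid(logits)

    # Length term: total variation of the soft mask, penalizing ragged boundaries.
    length = (torch.abs(p[:, :, 1:, :] - p[:, :, :-1, :]).mean()
              + torch.abs(p[:, :, :, 1:] - p[:, :, :, :-1]).mean())

    # Region term: push p toward 1 inside the target and 0 outside.
    region = (p * (1 - target)).mean() + ((1 - p) * target).mean()

    # Focal loss: down-weight easy pixels to counter class imbalance.
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    pt = torch.exp(-bce)
    focal = (alpha * (1 - pt) ** gamma * bce).mean()

    return focal + lam * (length + region)
```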

Li R, Su X, Li Z, Wang N, Sun H, Ouyang A

PubMed · Oct 21 2025
This retrospective study aimed to assess the potential of radiomic features extracted from dual-energy computed tomography (DECT) images, combined with machine learning algorithms, for the noninvasive prediction of microvessel density (MVD) in clear cell renal cell carcinoma (ccRCC). We manually segmented regions of interest (ROIs) on corticomedullary phase (CMP) images to extract radiomic features. Tumor microvessel parameters were determined by immunohistochemical staining. Prediction models for MVD were developed using both multi-energy and monoenergetic-sequence DECT images. Subsequently, a combined model was constructed from the best-performing radiomics score and statistically significant clinical features, and visualized as a nomogram. An external validation cohort was recruited from Center II to evaluate the nomogram's performance. The support vector machine (SVM) classifier achieved the best performance for the multi-energy-sequence MVD prediction model, with an AUC of 0.914 in the validation set. The MVD prediction model based on iodine-based material decomposition images (IMDI), also built with the SVM classifier, achieved an AUC of 0.889 in the validation set. The nomogram showed good calibration and achieved an AUC of 0.757 in the external validation cohort. DECT-based radiomic features thus show potential for the noninvasive prediction of microangiogenesis in patients with ccRCC.
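The classification stage is a standard radiomics workflow. The sketch below shows an SVM pipeline with feature standardization and cross-validated AUC on placeholder data; the cohort size, feature count, and binarization of MVD are illustrative assumptions, and the DECT feature extraction itself is omitted.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((120, 40))          # placeholder: radiomic features per tumor ROI
y = rng.integers(0, 2, 120)        # placeholder: high vs. low microvessel density

model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print("cross-validated AUC:", auc.mean())
```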

He D, Li S, Jiang B, Yan H

PubMed · Oct 21 2025
High-resolution functional magnetic resonance imaging (fMRI) is essential for mapping human brain activity; however, it remains costly and logistically challenging. If comparable volumes could be generated directly from widely available scalp electroencephalography (EEG), advanced neuroimaging would become significantly more accessible. Existing EEG-to-fMRI generators rely either on plain Convolutional Neural Networks (CNNs) that fail to capture cross-channel time-frequency cues or on heavy transformer/Generative Adversarial Network (GAN) decoders that strain memory and stability. To address these limitations, we propose Spec2VolCAMU-Net, a lightweight architecture featuring a Multi-directional Time-Frequency Convolutional Attention Encoder for rich feature extraction and a Vision-Mamba U-Net decoder that uses linear-time state-space blocks for efficient long-range spatial modelling. We frame the goal of this work as establishing a new state of the art in the spatial fidelity of single-volume reconstruction, a foundational prerequisite for the ultimate aim of generating temporally coherent fMRI time series. Trained end-to-end with a hybrid SSI-MSE loss, Spec2VolCAMU-Net achieves state-of-the-art fidelity on three public benchmarks, recording Structural Similarity Index (SSIM) scores of 0.693 on NODDI, 0.725 on Oddball, and 0.788 on CN-EPFL, improvements of 14.5%, 14.9%, and 16.9%, respectively, over the previous best SSIM scores. It also achieves competitive Peak Signal-to-Noise Ratio (PSNR) scores, excelling on the CN-EPFL dataset with a 4.6% improvement over the previous best PSNR and thus striking a better balance in reconstruction quality. The proposed model is lightweight and efficient, making it suitable for real-time applications in clinical and research settings. The code is available at https://github.com/hdy6438/Spec2VolCAMU-Net.
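The hybrid SSI-MSE objective can be sketched compactly. The version below assumes torchmetrics for the SSIM computation and an equal weighting between the two terms; the paper's actual weighting is not given in the abstract.

```python
import torch
import torch.nn.functional as F
from torchmetrics.functional import structural_similarity_index_measure as ssim

def ssi_mse_loss(pred: torch.Tensor, target: torch.Tensor, w: float = 0.5) -> torch.Tensor:
    """pred/target: reconstructed vs. reference fMRI slices, (N, C, H, W), scaled to [0, 1]."""
    mse = F.mse_loss(pred, target)
    ssim_term = 1.0 - ssim(pred, target, data_range=1.0)  # 1 - SSIM, so lower is better
    return w * mse + (1.0 - w) * ssim_term
```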

Eyad Gad, Seif Soliman, M. Saeed Darweesh

arXiv preprint · Oct 21 2025
In the realm of medical diagnostics, rapid advances in Artificial Intelligence (AI) have yielded remarkable improvements in brain tumor segmentation. Encoder-decoder architectures such as U-Net have played a transformative role by effectively extracting meaningful representations for 3D brain tumor segmentation from magnetic resonance imaging (MRI) scans. However, standard U-Net models struggle to accurately delineate tumor regions, especially those with irregular shapes and ambiguous boundaries. Additionally, training robust segmentation models on high-resolution MRI data such as the BraTS datasets demands substantial computational resources and often faces challenges associated with class imbalance. This study proposes integrating an attention mechanism into the 3D U-Net model, enabling the model to capture intricate details and prioritize informative regions during segmentation. Additionally, a tumor detection algorithm based on digital image processing techniques is used to address imbalanced training data and mitigate bias. This study aims to enhance the performance of brain tumor segmentation and ultimately improve the reliability of diagnosis. The proposed model is thoroughly evaluated on the BraTS 2020 dataset using various performance metrics. The results indicate that the model outperformed related studies, achieving a Dice score of 0.975, specificity of 0.988, and sensitivity of 0.995, demonstrating its efficacy in improving brain tumor segmentation and offering valuable insights for reliable diagnosis in clinical settings.
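Since the evaluation hinges on Dice, specificity, and sensitivity, a small reference implementation of those metrics may be useful; the function below is a generic sketch over binary masks, not code from the study.

```python
import numpy as np

def dice_sens_spec(pred: np.ndarray, truth: np.ndarray):
    """Dice, sensitivity, and specificity for binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    tn = np.sum(~pred & ~truth)
    dice = 2 * tp / (2 * tp + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return dice, sensitivity, specificity
```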

Eyad Gad, Mustafa Abou Khatwa, Mustafa A. Elattar, Sahar Selim

arXiv preprint · Oct 21 2025
Breast cancer (BC) is a leading cause of death among women worldwide, emphasizing the need for early detection and accurate diagnosis. Ultrasound imaging is a reliable and cost-effective tool for this purpose; however, the sensitive nature of medical data makes it challenging to develop accurate and private artificial intelligence models. Federated Learning is a promising technique for distributed machine learning on sensitive medical data while preserving patient privacy. However, training on non-Independent and non-Identically Distributed (non-IID) local datasets can impact the accuracy and generalization of the trained model, which is crucial for accurate tumour boundary delineation in BC segmentation. This study tackles this challenge by applying the Federated Proximal (FedProx) method to non-IID ultrasonic BC imaging datasets. Moreover, we enhance tumour segmentation accuracy by incorporating a modified U-Net model with attention mechanisms. Our approach resulted in a global model with 96% accuracy, demonstrating the effectiveness of our method in enhancing tumour segmentation accuracy while preserving patient privacy. Our findings suggest that FedProx is a promising approach for training precise machine learning models on non-IID local medical datasets.
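FedProx differs from plain federated averaging only in the client objective, which adds a proximal term keeping local updates near the current global model. A minimal PyTorch sketch follows; the coefficient mu is a tunable hyperparameter, and the value shown is illustrative.

```python
import torch

def fedprox_loss(task_loss, model, global_params, mu=0.01):
    """Client-side FedProx objective: task loss + (mu / 2) * ||w - w_global||^2."""
    prox = 0.0
    for p, g in zip(model.parameters(), global_params):
        prox = prox + torch.sum((p - g.detach()) ** 2)
    return task_loss + 0.5 * mu * prox

# Inside a client's training step, with global_params a snapshot of the server
# model's parameters at the start of the round:
#   loss = fedprox_loss(criterion(model(x), y), model, global_params)
```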
