Page 493 of 7527514 results

Md. Sabbir Hossen, Eshat Ahmed Shuvo, Shibbir Ahmed Arif, Pabon Shaha, Md. Saiduzzaman, Mostofa Kamal Nasir

arXiv preprint · Jul 4 2025
Brain stroke is one of the leading causes of mortality and long-term disability worldwide, highlighting the need for precise and fast prediction techniques. Computed Tomography (CT) scan is considered one of the most effective methods for diagnosing brain strokes. The majority of stroke classification techniques rely on a single slice-level prediction mechanism, allowing the radiologist to manually choose the most critical CT slice from the original CT volume. Although clinical evaluations are often used in traditional diagnostic procedures, machine learning (ML) has opened up new avenues for improving stroke diagnosis. To supplement traditional diagnostic techniques, this study investigates the use of machine learning models for early-stage brain stroke prediction from CT scan images. We propose a novel approach to brain stroke detection that leverages machine learning techniques, focusing on optimizing classification performance with pre-trained deep learning models and advanced optimization strategies. Pre-trained models, including DenseNet201, InceptionV3, MobileNetV2, ResNet50, and Xception, are utilized for feature extraction. Additionally, we employ feature engineering techniques, including BFO, PCA, and LDA, to further enhance model performance. These features are subsequently classified using machine learning algorithms such as SVC, RF, XGB, DT, LR, KNN, and GNB. Our experiments demonstrate that the combination of MobileNetV2, LDA, and SVC achieves the highest classification accuracy of 97.93%, significantly outperforming other model-optimizer-classifier combinations. The results underline the effectiveness of integrating lightweight pre-trained models with robust optimization and classification techniques for brain stroke diagnosis.
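The pipeline described above (pre-trained CNN features, LDA reduction, SVC classification) can be sketched with scikit-learn. This is a hypothetical illustration only: the "deep" features are simulated with class-separated Gaussians rather than extracted from MobileNetV2, and all sizes are made up.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, d = 200, 128                      # hypothetical: 200 scans, 128-dim features
y = rng.integers(0, 2, n)            # stroke / no-stroke labels
X = rng.normal(size=(n, d)) + y[:, None] * 0.8   # simulated class-separated features

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)

# LDA projects binary-class features to 1 dimension; SVC then classifies.
clf = make_pipeline(LinearDiscriminantAnalysis(n_components=1), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"test accuracy: {acc:.2f}")
```

In the paper, the input to the LDA step would be features pooled from a pre-trained backbone instead of the synthetic matrix used here.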

Anna Breger, Janek Gröhl, Clemens Karner, Thomas R Else, Ian Selby, Jonathan Weir-McCall, Carola-Bibiane Schönlieb

arXiv preprint · Jul 4 2025
Image quality assessment (IQA) is crucial in the evaluation stage of novel algorithms operating on images, including both traditional and machine learning based methods. Due to the lack of available quality-rated medical images, the most commonly used IQA methods employing reference images (i.e. full-reference IQA) have been developed and tested on natural images. The application inconsistencies reported when such measures are applied to medical images are not surprising, as medical images rely on different properties than natural images. In photoacoustic imaging (PAI) especially, standard benchmarking approaches for assessing the quality of image reconstructions are lacking. PAI is a multi-physics imaging modality in which two inverse problems have to be solved, which makes the application of IQA measures uniquely challenging due to both acoustic and optical artifacts. To support the development and testing of full- and no-reference IQA measures, we assembled PhotIQA, a data set consisting of 1134 reconstructed photoacoustic (PA) images rated by 2 experts across five quality properties (overall quality, edge visibility, homogeneity, inclusion and background intensity), where the detailed rating enables usage beyond PAI. To allow full-reference assessment, highly characterised imaging test objects were used, providing a ground truth. Our baseline experiments show that HaarPSI$_{med}$ significantly outperforms SSIM in correlating with the quality ratings (SRCC: 0.83 vs. 0.62). The dataset is publicly available at https://doi.org/10.5281/zenodo.13325196.
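The SRCC comparison reported above (HaarPSI_med vs. SSIM) boils down to ranking agreement between an IQA measure's scores and the expert ratings. A minimal sketch with synthetic ratings, assuming only SciPy; the two "measures" here are made-up stand-ins, not the actual metrics:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
expert = rng.uniform(1, 5, 100)                 # simulated expert quality ratings
measure_a = expert + rng.normal(0, 0.3, 100)    # a measure that tracks ratings closely
measure_b = expert + rng.normal(0, 2.0, 100)    # a much noisier measure

# Spearman rank correlation (SRCC) against the expert ratings.
srcc_a, _ = spearmanr(expert, measure_a)
srcc_b, _ = spearmanr(expert, measure_b)
print(f"SRCC A={srcc_a:.2f}, B={srcc_b:.2f}")
```

On real data, `measure_a`/`measure_b` would be per-image scores from HaarPSI_med and SSIM computed against the characterised test-object ground truths.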

Zetian Feng, Juan Fu, Xuebin Zou, Hongsheng Ye, Hong Wu, Jianhua Zhou, Yi Wang

arXiv preprint · Jul 4 2025
Prostate cancer (PCa) is a leading cause of cancer-related mortality in men, and accurate identification of clinically significant PCa (csPCa) is critical for timely intervention. Transrectal ultrasound (TRUS) is widely used for prostate biopsy; however, its low contrast and anisotropic spatial resolution pose diagnostic challenges. To address these limitations, we propose a novel hybrid-view attention (HVA) network for csPCa classification in 3D TRUS that leverages complementary information from transverse and sagittal views. Our approach integrates a CNN-transformer hybrid architecture, where convolutional layers extract fine-grained local features and transformer-based HVA models global dependencies. Specifically, the HVA comprises intra-view attention to refine features within a single view and cross-view attention to incorporate complementary information across views. Furthermore, a hybrid-view adaptive fusion module dynamically aggregates features along both channel and spatial dimensions, enhancing the overall representation. Experiments are conducted on an in-house dataset containing 590 subjects who underwent prostate biopsy. Comparative and ablation results prove the efficacy of our method. The code is available at https://github.com/mock1ngbrd/HVAN.
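The intra-view and cross-view attention described above are both instances of scaled dot-product attention, differing only in where the keys and values come from. A minimal NumPy sketch with made-up token counts and dimensions (not the authors' architecture):

```python
import numpy as np

def attention(q, k, v):
    """Scaled dot-product attention (single head, no projections)."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))  # stable softmax
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

rng = np.random.default_rng(2)
transverse = rng.normal(size=(16, 32))  # hypothetical: 16 tokens from the transverse view
sagittal = rng.normal(size=(16, 32))    # 16 tokens from the sagittal view

intra = attention(transverse, transverse, transverse)  # refine within one view
cross = attention(transverse, sagittal, sagittal)      # borrow complementary info
fused = intra + cross                                  # crude stand-in for adaptive fusion
print(fused.shape)
```

The paper's adaptive fusion module would weight the two streams along channel and spatial dimensions rather than summing them as done here.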

Tianyi Ding, Hongli Chen, Yang Gao, Zhuang Xiong, Feng Liu, Martijn A. Cloos, Hongfu Sun

arXiv preprint · Jul 4 2025
Magnetic Resonance Fingerprinting (MRF) enables fast quantitative imaging by matching signal evolutions to a predefined dictionary. However, conventional dictionary matching (DM) suffers from exponential growth in computational cost and memory usage as the number of parameters increases, limiting its scalability to multi-parametric mapping. To address this, recent work has explored deep learning-based approaches as alternatives to DM. We propose GAST-Mamba, an end-to-end framework that combines a dual Mamba-based encoder with a Gate-Aware Spatial-Temporal (GAST) processor. Built on structured state-space models, our architecture efficiently captures long-range spatial dependencies with linear complexity. On 5 times accelerated simulated MRF data (200 frames), GAST-Mamba achieved a T1 PSNR of 33.12~dB, outperforming SCQ (31.69~dB). For T2 mapping, it reached a PSNR of 30.62~dB and SSIM of 0.9124. In vivo experiments further demonstrated improved anatomical detail and reduced artifacts. Ablation studies confirmed that each component contributes to performance, with the GAST module being particularly important under strong undersampling. These results demonstrate the effectiveness of GAST-Mamba for accurate and robust reconstruction from highly undersampled MRF acquisitions, offering a scalable alternative to traditional DM-based methods.
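The conventional dictionary matching that GAST-Mamba replaces selects the dictionary entry whose simulated signal evolution best correlates with the measured one; cost scales with dictionary size, which is what explodes as parameters are added. A toy NumPy sketch with random unit vectors standing in for simulated (T1, T2) signal evolutions:

```python
import numpy as np

rng = np.random.default_rng(3)
n_entries, n_frames = 1000, 200
# Each row simulates the signal evolution of one hypothetical (T1, T2) pair.
dictionary = rng.normal(size=(n_entries, n_frames))
dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)

true_idx = 42
measured = dictionary[true_idx] + rng.normal(0, 0.01, n_frames)  # noisy voxel signal

# Match by maximum inner product (correlation) over all entries.
match = int(np.argmax(dictionary @ measured))
print(match)  # recovers entry 42
```

With several parameters the dictionary becomes a Cartesian product of their grids, so `n_entries` grows exponentially, which is the scalability problem the learned approach avoids.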

Fang X, Weng T, Zhang Z, Gong W, Zhang Y, Wang M, Wang J, Ding Z, Lai C

PubMed paper · Jul 4 2025
To develop an interpretable machine learning model, explained using SHAP, based on imaging features of adolescent idiopathic scoliosis extracted by convolutional neural networks (CNNs), in order to predict the risk of curve progression and identify the most accurate predictive model. This study included 233 patients with adolescent idiopathic scoliosis from three medical centers. CNNs were used to extract features from full-spine coronal X-ray images taken at three follow-up points for each patient. Imaging and clinical features from center 1 were analyzed using the Boruta algorithm to identify independent predictors. Data from center 1 were divided into training (80%) and testing (20%) sets, while data from centers 2 and 3 were used as external validation sets. Six machine learning models were constructed. Receiver operating characteristic (ROC) curves were plotted, and model performance was assessed by calculating the area under the curve (AUC), accuracy, sensitivity, and specificity in the training, testing, and external validation sets. The SHAP interpreter was used to analyze the most effective model. The six models yielded AUCs ranging from 0.565 to 0.989, accuracies from 0.600 to 0.968, sensitivities from 0.625 to 1.0, and specificities from 0.571 to 0.974. The XGBoost model achieved the best performance, with an AUC of 0.896 in the external validation set. SHAP analysis identified the change in the main Cobb angle between the second and first follow-ups [Cobb1(2−1)] as the most important predictor, followed by the main Cobb angle at the second follow-up (Cobb1-2) and the change in the secondary Cobb angle [Cobb2(2−1)]. The XGBoost model demonstrated the best predictive performance in the external validation cohort, confirming its preliminary stability and generalizability. SHAP analysis indicated that Cobb1(2−1) was the most important feature for predicting scoliosis progression. 
This model offers a valuable tool for clinical decision-making by enabling early identification of high-risk patients and supporting early intervention strategies through automated feature extraction and interpretable analysis. The online version contains supplementary material available at 10.1186/s12891-025-08841-3.
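The study's interpretability step applies SHAP to an XGBoost model. As a library-light stand-in, the sketch below uses scikit-learn's gradient boosting with permutation importance to recover a planted informative feature; the data and the "Cobb-angle change" reading of feature 0 are entirely hypothetical:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 300
# Hypothetical predictors: column 0 mimics an informative Cobb-angle change,
# the remaining columns are pure noise.
X = rng.normal(size=(n, 5))
y = (X[:, 0] + 0.1 * rng.normal(size=n) > 0).astype(int)  # progression label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Permutation importance: how much does shuffling each column hurt the score?
imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
top = int(np.argmax(imp.importances_mean))
print(f"most important feature: {top}")
```

SHAP attributes predictions per patient rather than per dataset, but both approaches answer the same question the abstract poses: which feature drives the model.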

Khaliq A, Ahmad F, Rehman HU, Alanazi SA, Haleem H, Junaid K, Andrikopoulou E

PubMed paper · Jul 4 2025
The integration of artificial intelligence in medical image classification has significantly advanced disease detection. However, traditional deep learning models face persistent challenges, including poor generalizability, high false-positive rates, and difficulties in distinguishing overlapping anatomical features, limiting their clinical utility. To address these limitations, this study proposes a hybrid framework combining Vision Transformers (ViT) and Perceiver IO, designed to enhance multi-disease classification accuracy. Vision Transformers leverage self-attention mechanisms to capture global dependencies in medical images, while Perceiver IO optimizes feature extraction for computational efficiency and precision. The framework is evaluated across three critical clinical domains: neurological disorders, including Stroke (tested on the Brain Stroke Prediction CT Scan Image Dataset) and Alzheimer's (analyzed via the Best Alzheimer MRI Dataset); skin diseases, covering Tinea (trained on the Skin Diseases Dataset) and Melanoma (augmented with dermoscopic images from the HAM10000/HAM10k dataset); and lung diseases, focusing on Lung Cancer (using the Lung Cancer Image Dataset) and Pneumonia (evaluated with the Pneumonia Dataset containing bacterial, viral, and normal X-ray cases). For neurological disorders, the model achieved 0.99 accuracy, 0.99 precision, 1.00 recall, and a 0.99 F1-score, demonstrating robust detection of structural brain abnormalities. In skin disease classification, it attained 0.95 accuracy, 0.93 precision, 0.97 recall, and a 0.95 F1-score, highlighting its ability to differentiate fine-grained textural patterns in lesions. For lung diseases, the framework achieved 0.98 accuracy, 0.97 precision, 1.00 recall, and a 0.98 F1-score, confirming its efficacy in identifying respiratory conditions.
To bridge research and clinical practice, an AI-powered chatbot was developed for real-time analysis, enabling users to upload MRI, X-ray, or skin images for automated diagnosis with confidence scores and interpretable insights. This work represents the first application of ViT and Perceiver IO for these disease categories, outperforming conventional architectures in accuracy, computational efficiency, and clinical interpretability. The framework holds significant potential for early disease detection in healthcare settings, reducing diagnostic errors, and improving treatment outcomes for clinicians, radiologists, and patients. By addressing critical limitations of traditional models, such as overlapping feature confusion and false positives, this research advances the deployment of reliable AI tools in neurology, dermatology, and pulmonology.
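The accuracy, precision, recall, and F1 figures quoted above all follow directly from a confusion matrix; a minimal scikit-learn sketch with made-up binary predictions shows how they relate:

```python
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

# Hypothetical disease (1) vs. normal (0) labels and model predictions.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]
y_pred = [1, 1, 1, 1, 0, 0, 0, 1, 1, 0]  # one false positive, no false negatives

acc = accuracy_score(y_true, y_pred)     # (TP + TN) / total = 9/10
prec = precision_score(y_true, y_pred)   # TP / (TP + FP) = 5/6
rec = recall_score(y_true, y_pred)       # TP / (TP + FN) = 5/5
f1 = f1_score(y_true, y_pred)            # harmonic mean of precision and recall
print(acc, prec, rec, f1)
```

Note how a perfect 1.00 recall (as reported for the neurological and lung tasks) can coexist with precision below 1.00 whenever false positives remain.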

Ozdemir G, Tulu CN, Isik O, Olmez T, Sozutek A, Seker A

PubMed paper · Jul 4 2025
Neoadjuvant chemoradiotherapy (nCRT) followed by total mesorectal excision is the standard treatment for locally advanced rectal cancer. However, the response to nCRT varies significantly among patients, making it crucial to identify those unlikely to benefit to avoid unnecessary toxicities. Radiomics, a technique for extracting quantitative features from medical images like computed tomography (CT), offers a promising noninvasive approach to analyze disease characteristics and potentially improve treatment decision-making. This retrospective cohort study aimed to compare the performance of various machine learning models in predicting the response to nCRT in rectal cancer based on medical data, including radiomic features extracted from CT, and to investigate the contribution of radiomics to these models. Participants who had completed a long course of nCRT before undergoing surgery were retrospectively enrolled. The patients were categorized into 2 groups: nonresponders and responders based on pathological assessment using the Ryan tumor regression grade. Pretreatment contrast-enhanced CT scans were used to extract 101 radiomic features using the PyRadiomics library. Clinical data, including age, gender, tumor grade, presence of colostomy, carcinoembryonic antigen level, constipation status, albumin, and hemoglobin levels, were also collected. Fifteen machine learning models were trained and evaluated using 10-fold cross-validation on a training set (n = 112 patients). The performance of the trained models was then assessed on an internal test set (n = 35 patients) and an external test set (n = 40 patients) using accuracy, area under the ROC curve (AUC), recall, precision, and F1-score. Among the models, the gradient boosting classifier showed the best training performance (accuracy: 0.92, AUC: 0.95, recall: 0.96, precision: 0.93, F1-score: 0.94). 
On the internal test set, the extra trees classifier (ETC) achieved an accuracy of 0.84, AUC of 0.90, recall of 0.92, precision of 0.87, and F1-score of 0.90. In the external validation, the ETC model yielded an accuracy of 0.75, AUC of 0.79, recall of 0.91, precision of 0.76, and F1-score of 0.83. Patient-specific biomarkers were more influential than radiomic features in the ETC model. The ETC consistently showed strong performance in predicting nCRT response. Clinical biomarkers, particularly tumor grade, were more influential than radiomic features. The model's external validation performance suggests potential for generalization.
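The model-comparison protocol above (many classifiers, 10-fold cross-validation, AUC) can be sketched with scikit-learn on synthetic data. Only three of the fifteen model families are shown, and the feature table is a stand-in for the clinical-plus-radiomic features:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the 112-patient training table of clinical + radiomic features.
X, y = make_classification(n_samples=112, n_features=20, random_state=0)

models = {
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
    "extra_trees": ExtraTreesClassifier(random_state=0),
    "logistic_regression": LogisticRegression(max_iter=1000),
}
# 10-fold cross-validated AUC for each candidate model.
scores = {name: cross_val_score(m, X, y, cv=10, scoring="roc_auc").mean()
          for name, m in models.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```

As in the study, the winner on cross-validation would then be re-evaluated on held-out internal and external test sets before any claim of generalization.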

Zhong J, Huang T, Jiang R, Zhou Q, Wu G, Zeng Y

PubMed paper · Jul 3 2025
This study aimed to analyze preoperative multimodal magnetic resonance images of patients with rectal cancer using habitat-based, intratumoral, peritumoral, and combined radiomics models for non-invasive prediction of perineural invasion (PNI) status. Data were collected from 385 pathologically confirmed rectal cancer cases across two centers. Patients from Center 1 were randomly assigned to training and internal validation groups at an 8:2 ratio; the external validation group comprised patients from Center 2. Tumors were divided into three subregions via K-means clustering. Radiomics features were isolated from intratumoral and peritumoral (3 mm beyond the tumor) regions, as well as subregions, to form a combined dataset based on T2-weighted imaging and diffusion-weighted imaging. The support vector machine algorithm was used to construct seven predictive models. Intratumoral, peritumoral, and subregion features were integrated to generate an additional model, referred to as the Total model. For each radiomics feature, its contribution to prediction outcomes was quantified using Shapley values, providing interpretable evidence to support clinical decision-making. The Total combined model outperformed other predictive models in the training, internal validation, and external validation sets (area under the curve values: 0.912, 0.882, and 0.880, respectively). The integration of intratumoral, peritumoral, and subregion features represents an effective approach for predicting PNI in rectal cancer, providing valuable guidance for rectal cancer treatment, along with enhanced clinical decision-making precision and reliability.
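The habitat step above partitions tumour voxels into subregions by K-means over multimodal intensities. A toy sketch, assuming three well-separated synthetic "habitats" in a hypothetical (T2WI, DWI) intensity space:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
# Hypothetical per-voxel (T2WI, DWI) intensity pairs inside a tumour mask,
# drawn from three distinct habitats.
voxels = np.vstack([
    rng.normal([1.0, 1.0], 0.1, size=(100, 2)),
    rng.normal([2.0, 1.5], 0.1, size=(100, 2)),
    rng.normal([3.0, 0.5], 0.1, size=(100, 2)),
])

# Three habitats, matching the paper's subregion count.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(voxels)
sizes = np.bincount(labels)
print(sizes)
```

In the actual pipeline, radiomics features would then be extracted per subregion (e.g. via PyRadiomics) rather than from the raw cluster labels shown here.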

Liu E, Lin A, Kakodkar P, Zhao Y, Wang B, Ling C, Zhang Q

pubmed logopapersJul 3 2025
Accurately and efficiently identifying mitotic figures (MFs) is crucial for diagnosing and grading various cancers, including glioblastoma (GBM), a highly aggressive brain tumour requiring precise and timely intervention. Traditional manual counting of MFs in whole slide images (WSIs) is labour-intensive and prone to interobserver variability. Our study introduces a deep active learning framework that addresses these challenges with minimal human intervention. We utilized a dataset of GBM WSIs from The Cancer Genome Atlas (TCGA). Our framework integrates convolutional neural networks (CNNs) with an active learning strategy. Initially, a CNN is trained on a small, annotated dataset. The framework then identifies uncertain samples from the unlabelled data pool, which are subsequently reviewed by experts. These ambiguous cases are verified and used for model retraining. This iterative process continues until the model achieves satisfactory performance. Our approach achieved 81.75% precision and 82.48% recall for MF detection. For MF subclass classification, it attained an accuracy of 84.1%. Furthermore, this approach significantly reduced annotation time (approximately 900 minutes across 66 WSIs), cutting the effort nearly in half compared to traditional methods. Our deep active learning framework demonstrates a substantial improvement in both efficiency and accuracy for MF detection and classification in GBM WSIs. By reducing reliance on large annotated datasets, it minimizes manual effort while maintaining high performance. This methodology can be generalized to other medical imaging tasks, supporting broader applications in the healthcare domain.
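The iterative loop described above (train, find uncertain samples, have experts label them, retrain) is classic uncertainty-based active learning. A minimal scikit-learn sketch in which a known label vector stands in for the expert oracle; the classifier, data, and batch sizes are all hypothetical:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
labeled = list(range(20))            # small initial annotated set
unlabeled = list(range(20, 500))     # pool awaiting annotation

model = LogisticRegression(max_iter=1000)
for _ in range(5):                   # five active-learning rounds
    model.fit(X[labeled], y[labeled])
    probs = model.predict_proba(X[unlabeled])
    # Least-confidence sampling: query the 10 most uncertain pool examples.
    uncertainty = 1 - probs.max(axis=1)
    query = np.argsort(uncertainty)[-10:]
    # In practice an expert labels these; here the known y plays the oracle.
    for i in sorted(query, reverse=True):
        labeled.append(unlabeled.pop(i))

acc = model.score(X, y)
print(f"{len(labeled)} labels used, accuracy {acc:.2f}")
```

The annotation saving reported in the abstract comes from exactly this mechanism: experts only ever review the ambiguous cases the model queries, not the whole pool.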