
Nayak GS, Mallick PK, Sahu DP, Kathi A, Reddy R, Viyyapu J, Pabbisetti N, Udayana SP, Sanapathi H

PubMed · Jun 14 2025
Brain stroke is a leading cause of disability and mortality worldwide, necessitating the development of accurate and efficient diagnostic models. In this study, we explore the integration of Genetic Algorithm (GA)-based feature selection with three state-of-the-art deep learning architectures (InceptionV3, VGG19, and MobileNetV2) to enhance stroke detection from neuroimaging data. The GA is employed to optimize feature selection, reducing redundancy and improving model performance. The selected features are subsequently fed into the respective deep learning models for classification. The dataset used in this study comprises neuroimages categorized into "Normal" and "Stroke" classes. Experimental results demonstrate that incorporating the GA improves classification accuracy while reducing computational complexity. A comparative analysis of the three architectures confirms their effectiveness in stroke detection, with MobileNetV2 achieving the highest accuracy of 97.21%. Notably, the integration of Genetic Algorithms with MobileNetV2 for feature selection represents a novel contribution, setting this study apart from prior approaches that rely solely on traditional CNN pipelines. Owing to its lightweight design and low computational demands, MobileNetV2 also offers significant advantages for real-time clinical deployment, making it well suited to emergency care settings where rapid diagnosis is critical. Additionally, performance metrics such as precision, recall, F1-score, and Receiver Operating Characteristic (ROC) curves are evaluated to provide comprehensive insight into model efficacy. This research underscores the potential of genetic-algorithm-driven optimization in enhancing deep learning-based medical image classification, paving the way for more efficient and reliable stroke diagnosis.
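
The abstract leaves the GA's internals unspecified, so the following is only a minimal sketch of the GA feature-selection loop it describes, applied to pre-extracted deep features; the placeholder feature matrix, population size, mutation rate, and logistic-regression fitness classifier are all assumptions for illustration:

```python
# Minimal sketch of GA-based feature selection over pre-extracted deep
# features. X and y are placeholders standing in for MobileNetV2 embeddings
# and Normal/Stroke labels; population size and mutation rate are arbitrary.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 128))      # placeholder deep features
y = rng.integers(0, 2, size=200)     # placeholder binary labels

def fitness(mask):
    # Fitness = cross-validated accuracy of a simple classifier
    # restricted to the selected feature subset.
    if mask.sum() == 0:
        return 0.0
    clf = LogisticRegression(max_iter=500)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.integers(0, 2, size=(20, X.shape[1]))   # random bit-mask population
for gen in range(10):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]       # truncation selection
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, X.shape[1])         # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(X.shape[1]) < 0.01      # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, np.array(children)])

best = pop[int(np.argmax([fitness(ind) for ind in pop]))]
print(f"selected {best.sum()} of {X.shape[1]} features")
```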

Lo Gullo R, van Veldhuizen V, Roa T, Kapetas P, Teuwen J, Pinker K

PubMed · Jun 14 2025
The demand for breast imaging services continues to grow, driven by expanding indications in breast cancer diagnosis and treatment. This increasing demand underscores the potential role of artificial intelligence (AI) in enhancing workflow efficiency and in further unlocking the abundant imaging data to achieve improvements along the breast cancer pathway. Although AI has made significant advances in mammography and digital breast tomosynthesis, with commercially available computer-aided detection (CAD) systems widely used for breast cancer screening and detection, its adoption in breast MRI has been slower. This lag is primarily attributed to the inherent complexity of breast MRI examinations and, consequently, to the more limited availability of large, well-annotated, publicly available breast MRI datasets. Despite these challenges, interest in AI implementation in breast MRI remains strong, fueled by the expanding use of and indications for breast MRI. This article explores the implementation of AI in breast MRI across the breast cancer care pathway, highlighting its potential to revolutionize the way we detect and manage breast cancer. By addressing current challenges and examining emerging AI applications, we aim to provide a comprehensive overview of how AI is reshaping breast MRI and improving outcomes for patients.

Maharani DA, Utaminingrum F, Husnina DNN, Sukmaningrum B, Rahmania FN, Handani F, Chasanah HN, Arrahman A, Febrianto F

PubMed · Jun 14 2025
As lung disease is one of the leading causes of death worldwide, early detection is a crucial step toward improving treatment effectiveness. Lung disease can be classified from medical image data such as X-ray or CT scans. Deep learning methods have been widely used to recognize complex patterns in medical images, but this approach is constrained by the need for large, varied datasets and high computing resources. To overcome these constraints, lightweight deep learning architectures offer a more efficient solution in terms of parameter count and computing time, and can run on devices with low-specification processors, such as mobile phones. This article presents a comprehensive review of 23 research studies published between 2020 and 2025, focusing on various lightweight architectures and optimization techniques aimed at improving the accuracy of lung disease detection. The results show that these models significantly reduce parameter counts, yielding faster computation times while maintaining accuracy competitive with traditional deep learning architectures. Across the reviewed studies, SqueezeNet applied to public COVID-19 datasets is the best base architecture, achieving high accuracy with only 570 thousand parameters. By contrast, UNet requires 31.07 million parameters and SegNet 29.45 million, trained on CT scan images from the Italian Society of Medical and Interventional Radiology and Radiopaedia, making them far less efficient. Among combined methods, EfficientNetV2 with an Extreme Learning Machine (ELM) achieves the highest accuracy of 98.20% while significantly reducing parameters. The worst performance is shown by VGG and UNet, whose accuracy decreases from 91.05% to 87% while the number of parameters increases. It can be concluded that lightweight architectures can support fast, efficient medical image classification for lung disease diagnosis on devices with limited specifications.
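
For readers who want to reproduce parameter-count comparisons like the ones above, a quick sketch using stock torchvision model definitions follows; note these are the unmodified reference implementations, so the counts will differ from pruned or re-headed variants such as the 570-thousand-parameter SqueezeNet cited in the review:

```python
# Quick sketch: parameter counts of stock torchvision backbones. Published
# figures (e.g., the 570K SqueezeNet above) come from modified variants, so
# expect the reference implementations below to be larger.
import torchvision.models as models

def n_params(model):
    return sum(p.numel() for p in model.parameters())

for name, ctor in [("squeezenet1_1", models.squeezenet1_1),
                   ("mobilenet_v2", models.mobilenet_v2),
                   ("efficientnet_v2_s", models.efficientnet_v2_s)]:
    print(f"{name}: {n_params(ctor(weights=None)) / 1e6:.2f}M parameters")
```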

Kundu S, Dutta S, Mukhopadhyay J, Chakravorty N

PubMed · Jun 14 2025
Brain tumors, particularly glioblastoma multiforme, are among the most threatening tumors in neuro-oncology. Brain tumor segmentation is a crucial part of medical imaging, playing a key role in diagnosis, treatment planning, and monitoring patient progress. This paper presents a novel lightweight deep convolutional neural network (CNN) model specifically designed for accurate and efficient brain tumor segmentation from magnetic resonance imaging (MRI) scans. Our model leverages a streamlined architecture that reduces computational complexity while maintaining high segmentation accuracy. We introduce several novel components, including optimized convolutional layers that capture both local and global features with minimal parameters. A layerwise adaptive-weighting feature fusion technique enhances comprehensive feature representation. By incorporating shifted windowing, the model achieves better generalization across data variations. Dynamic weighting is introduced in the skip connections, allowing backpropagation to determine the ideal balance between semantic and positional features. To evaluate our approach, we conducted experiments on publicly available MRI datasets and compared our model against state-of-the-art segmentation methods. Our lightweight model has an efficient architecture with 1.45 million parameters: 95% fewer than nnUNet (30.78M), 91% fewer than standard UNet (16.21M), and 85% fewer than a lightweight hybrid CNN-transformer network (Liu et al., 2024; 9.9M). Coupled with a 4.9× faster GPU inference time (0.904 ± 0.002 s vs. nnUNet's 4.416 ± 0.004 s), the design enables real-time deployment on resource-constrained devices while maintaining competitive segmentation accuracy. Code is available at: FFLUNet.
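
The paper's code is not reproduced here, but a minimal sketch of the dynamically weighted skip-connection idea, with an illustrative module name and a single learnable fusion weight, could look like this:

```python
# Sketch of a dynamically weighted skip connection: a learnable scalar lets
# backpropagation balance encoder (positional) vs. decoder (semantic)
# features. Module and variable names are illustrative, not the authors' code.
import torch
import torch.nn as nn

class GatedSkip(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(0.5))   # learned fusion weight
        self.fuse = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, encoder_feat, decoder_feat):
        w = torch.sigmoid(self.alpha)                  # keep weight in (0, 1)
        return self.fuse(w * encoder_feat + (1 - w) * decoder_feat)

skip = GatedSkip(64)
enc = torch.randn(1, 64, 32, 32)
dec = torch.randn(1, 64, 32, 32)
print(skip(enc, dec).shape)   # torch.Size([1, 64, 32, 32])
```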

Ozaki K, Hasegawa H, Kwon J, Katsumata Y, Yoneyama M, Ishida S, Iyoda T, Sakamoto M, Aramaki S, Tanahashi Y, Goshima S

PubMed · Jun 14 2025
To assess the effects of an industry-developed deep learning reconstruction with super resolution (DLR-SR) on 2-mm-thick single-shot turbo spin-echo (SshTSE) images (SshTSE2mm) relative to 5-mm-thick images (SshTSE5mm) in patients with pancreatic cystic lesions. Thirty consecutive patients who underwent abdominal MRI for pancreatic cystic lesions under observation between June 2024 and July 2024 were enrolled. We qualitatively and quantitatively evaluated the image quality of SshTSE2mm and SshTSE5mm with and without DLR-SR. The SNRs of the pancreas, spleen, paraspinal muscle, peripancreatic fat, and pancreatic cystic lesions on SshTSE2mm with and without DLR-SR did not decrease compared with those on SshTSE5mm with and without DLR-SR. There were no significant differences in the contrast-to-noise ratios (CNRs) of pancreas-to-cystic lesion and pancreas-to-fat among the four image types. SshTSE2mm with DLR-SR had the highest image quality for pancreas edge sharpness, perceived coarseness, pancreatic duct clarity, noise, artifacts, overall image quality, and diagnostic confidence for cystic lesions, followed by SshTSE2mm without DLR-SR and SshTSE5mm with and without DLR-SR (P < 0.0001). SshTSE2mm images with DLR-SR had better quality than the other images, without decreased SNRs or CNRs. Thin-slice SshTSE with DLR-SR may be feasible and clinically useful for the evaluation of patients with pancreatic cystic lesions.
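
The abstract does not spell out its measurement formulas; assuming the common ROI-based definitions SNR = mean(signal ROI) / SD(noise ROI) and CNR = |mean(ROI A) - mean(ROI B)| / SD(noise ROI), a small sketch with placeholder intensities is:

```python
# Sketch of ROI-based SNR/CNR metrics. Definitions vary across MRI studies;
# these assume a background-noise ROI and placeholder intensity samples.
import numpy as np

def snr(signal_roi, noise_roi):
    return signal_roi.mean() / noise_roi.std()

def cnr(roi_a, roi_b, noise_roi):
    return abs(roi_a.mean() - roi_b.mean()) / noise_roi.std()

rng = np.random.default_rng(1)
pancreas = rng.normal(220, 12, size=400)    # placeholder ROI intensities
cyst = rng.normal(340, 15, size=400)
background = rng.normal(0, 8, size=400)
print(f"SNR  {snr(pancreas, background):.1f}")
print(f"CNR  {cnr(pancreas, cyst, background):.1f}")
```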

Bian X, Liu J, Xu S, Liu W, Mei L, Xiao C, Yang F

PubMed · Jun 14 2025
Convolutional Neural Networks (CNNs) have achieved remarkable success in breast ultrasound image segmentation, but they still face several challenges when dealing with breast lesions. Because CNNs are limited in modeling long-range dependencies, they often handle similar intensity distributions, irregular lesion shapes, and blurry boundaries poorly, leading to low segmentation accuracy. To address these issues, we propose ThreeF-Net, a fine-grained feature fusion network. The network combines the advantages of CNNs and Transformers to simultaneously capture local features and model long-range dependencies, thereby improving the accuracy and stability of segmentation. Specifically, we design a Transformer-assisted Dual Encoder architecture (TDE), which integrates convolutional modules and self-attention modules to achieve collaborative learning of local and global features. We also design a Global Group Feature Extraction (GGFE) module, which effectively fuses the features learned by CNNs and Transformers, enhancing feature representation. To further improve performance, we introduce a Dynamic Fine-grained Convolution (DFC) module, which significantly improves lesion boundary segmentation accuracy by dynamically adjusting convolution kernels and capturing multi-scale features. Comparative experiments with state-of-the-art segmentation methods on three public breast ultrasound datasets demonstrate that ThreeF-Net outperforms existing methods across multiple key evaluation metrics.
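
As a schematic reading of the dual-encoder idea (not the authors' implementation), the following sketch fuses a convolutional branch with a self-attention branch via a 1x1 convolution; the block name and layer choices are illustrative:

```python
# Illustrative dual-encoder block: a CNN branch for local features and a
# self-attention branch for long-range context, fused by a 1x1 conv. A
# schematic reading of TDE/GGFE, not the authors' code.
import torch
import torch.nn as nn

class DualEncoderBlock(nn.Module):
    def __init__(self, channels, heads=4):
        super().__init__()
        self.local = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU())
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        conv_out = self.local(x)
        tokens = x.flatten(2).transpose(1, 2)          # (B, HW, C)
        attn_out, _ = self.attn(tokens, tokens, tokens)
        attn_out = attn_out.transpose(1, 2).reshape(b, c, h, w)
        return self.fuse(torch.cat([conv_out, attn_out], dim=1))

block = DualEncoderBlock(32)
print(block(torch.randn(2, 32, 24, 24)).shape)   # torch.Size([2, 32, 24, 24])
```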

Zahid Ullah, Jihie Kim

arXiv preprint · Jun 14 2025
Accurate brain tumor classification is crucial in medical imaging to ensure reliable diagnosis and effective treatment planning. This study introduces a novel double-ensembling framework that synergistically combines pre-trained deep learning (DL) models for feature extraction with optimized machine learning (ML) classifiers for robust classification. The framework incorporates comprehensive preprocessing and data augmentation of brain magnetic resonance imaging (MRI) scans, followed by deep feature extraction using transfer learning with pre-trained Vision Transformer (ViT) networks. The novelty lies in the dual-level ensembling strategy: feature-level ensembling, which integrates deep features from the top-performing ViT models, and classifier-level ensembling, which aggregates predictions from hyperparameter-optimized ML classifiers. Experiments on two public Kaggle MRI brain tumor datasets demonstrate that this approach significantly surpasses state-of-the-art methods, underscoring the importance of feature and classifier fusion. The proposed methodology also highlights the critical roles of hyperparameter optimization (HPO) and advanced preprocessing techniques in improving diagnostic accuracy and reliability, advancing the integration of DL and ML for clinically relevant medical image analysis.
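
A minimal sketch of the dual-level ensembling strategy follows, with random placeholder arrays standing in for ViT embeddings: features from two extractors are concatenated (feature-level), then two classifiers, standing in for the paper's hyperparameter-optimized ones, are soft-voted (classifier-level). The specific classifiers are assumptions:

```python
# Sketch of dual-level ensembling: feature-level concatenation of two
# embedding sets, then classifier-level soft voting. Arrays are placeholders
# for ViT features; the classifier choices are illustrative.
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
feats_vit_a = rng.normal(size=(300, 64))   # embeddings from ViT model A
feats_vit_b = rng.normal(size=(300, 64))   # embeddings from ViT model B
y = rng.integers(0, 4, size=300)           # four tumor classes (placeholder)

X = np.hstack([feats_vit_a, feats_vit_b])  # feature-level ensembling

ensemble = VotingClassifier(               # classifier-level ensembling
    estimators=[("lr", LogisticRegression(max_iter=500)),
                ("svm", SVC(probability=True))],
    voting="soft")
print(cross_val_score(ensemble, X, y, cv=3).mean())
```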

Imaizumi K, Usui S, Nagata T, Hayakawa H, Shiotani S

PubMed · Jun 14 2025
Sex estimation is an indispensable test for identifying skeletal remains in forensic anthropology. We developed a novel machine learning method for estimating sex from skulls and from several parts of the skull. A total of 240 skull shapes were obtained from postmortem computed tomography scans. The shapes of the whole skull, cranium, and mandible were simplified by wrapping them with a virtual elastic film and then transformed into homologous shape models. The homologous models of the cranium and mandible were segmented into six regions containing well-known sexually dimorphic areas. The dimensionality of the shape data was reduced by principal component analysis (PCA) or partial least squares regression (PLS), and the resulting components were fed to a support vector machine (SVM) to assess sex-estimation accuracy. High accuracy was observed for the SVM after dimensionality reduction with PLS: rates exceeded 90% in two of the nine regions examined, whereas the SVM with PCA components did not reach 90% in any region. Virtual shapes created from very large and very small scores on the first PLS component closely resembled the masculine and feminine models created by emphasizing the shape difference between the averaged male and female skulls. Such similarities were observed in all skull regions examined, particularly in sexually dimorphic areas. The estimation models also achieved high accuracy on newly prepared skull shapes, suggesting that the method developed here may be sufficiently applicable to actual casework.
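
A compact sketch of the PLS-then-SVM pipeline described above, using scikit-learn and placeholder shape vectors, might look like this; note that, unlike PCA, the PLS components are fit against the sex label:

```python
# Sketch of the PLS -> SVM route: PLS scores are supervised by the sex label,
# then classified with a linear SVM. X is a placeholder for the flattened
# homologous-model coordinates. (For a fair estimate, PLS should be refit
# inside each training fold; fitting once on all data is only for brevity.)
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(240, 300))    # flattened homologous shape coordinates
y = rng.integers(0, 2, size=240)   # 0 = female, 1 = male (placeholder)

pls = PLSRegression(n_components=10).fit(X, y)
X_pls = pls.transform(X)           # low-dimensional supervised scores
print(cross_val_score(SVC(kernel="linear"), X_pls, y, cv=5).mean())
```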

Ma Y, Li M, Wu H

PubMed · Jun 13 2025
Coronary computed tomography angiography (CCTA) has emerged as the first-line noninvasive imaging test for patients at high risk of coronary artery disease (CAD). Combined with machine learning (ML), it provides stronger evidence for predicting major adverse cardiovascular events (MACEs). Radiomics provides informative multidimensional features that can help identify high-risk populations and improve the diagnostic performance of CCTA; however, its role in predicting MACEs remains highly debated. We evaluated the diagnostic value of ML models constructed using radiomic features extracted from CCTA in predicting MACEs, and compared the performance of different learning algorithms and models, thereby providing clinical recommendations for the diagnosis, treatment, and prognosis of MACEs. We comprehensively searched 5 online databases (Cochrane Library, Web of Science, Elsevier, CNKI, and PubMed) up to September 10, 2024, for original studies that used ML models in patients undergoing CCTA to predict MACEs and that reported related clinical outcomes and endpoints. Risk of bias in the ML models was assessed with the Prediction Model Risk of Bias Assessment Tool, and the radiomics quality score (RQS) was used to evaluate the methodological quality of radiomics prediction model development and validation. We also followed the TRIPOD (Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis) guidelines to ensure the transparency of the included ML models. Meta-analysis was performed using Meta-DiSc software (version 1.4), including the I² statistic and the Cochran Q test, together with StataMP 17 (StataCorp), to assess heterogeneity and publication bias. Because of the high heterogeneity observed, subgroup analysis was conducted across model groups. Ten studies were included in the analysis, 5 (50%) of which distinguished between training and testing groups; the training sets comprised 17 models and the testing sets 26 models. The pooled area under the receiver operating characteristic curve (AUROC) for ML models predicting MACEs was 0.7879 in the training set and 0.7981 in the testing set. Logistic regression (LR), the most commonly used algorithm, achieved an AUROC of 0.8229 in the testing group and 0.7983 in the training group. Non-LR models yielded AUROCs of 0.7390 in the testing set and 0.7648 in the training set, while random forest (RF) models reached an AUROC of 0.8444 in the training group. Study limitations included the small number of studies, high heterogeneity, and the types of included studies. The performance of ML models for predicting MACEs was found to be superior to that of general models based on basic feature extraction and integration from CCTA. In particular, LR-based ML diagnostic models demonstrated significant clinical potential, especially when combined with clinical features, and merit further validation in clinical trials. PROSPERO CRD42024596364; https://www.crd.york.ac.uk/PROSPERO/view/CRD42024596364.
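
The review pooled results with Meta-DiSc and Stata; as an illustration of what such pooling involves, the sketch below applies the standard DerSimonian-Laird random-effects estimator to made-up per-study AUROCs and standard errors (not the review's data):

```python
# Sketch of inverse-variance pooling with the DerSimonian-Laird
# random-effects estimator. All numbers below are illustrative.
import numpy as np

auc = np.array([0.82, 0.74, 0.79, 0.84, 0.76])   # per-study AUROC (placeholder)
se = np.array([0.03, 0.05, 0.04, 0.03, 0.06])    # per-study standard errors

w = 1.0 / se**2                                   # fixed-effect weights
theta_fe = np.sum(w * auc) / np.sum(w)
Q = np.sum(w * (auc - theta_fe) ** 2)             # Cochran's Q
k = len(auc)
tau2 = max(0.0, (Q - (k - 1)) / (w.sum() - (w**2).sum() / w.sum()))
w_re = 1.0 / (se**2 + tau2)                       # random-effects weights
theta_re = np.sum(w_re * auc) / np.sum(w_re)
i2 = max(0.0, (Q - (k - 1)) / Q) * 100            # I² heterogeneity (%)
print(f"pooled AUROC (RE) = {theta_re:.3f}, I² = {i2:.0f}%")
```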

Lee S, Kim S, Seo M, Park S, Imrus S, Ashok K, Lee D, Park C, Lee S, Kim J, Yoo JH, Kim M

PubMed · Jun 13 2025
This study introduces a motion-based learning network with a global-local self-attention module (MoGLo-Net) to enhance 3D reconstruction in handheld photoacoustic and ultrasound (PAUS) imaging. Standard PAUS imaging is often limited by a narrow field of view (FoV) and the inability to effectively visualize complex 3D structures. The 3D freehand technique, which aligns sequential 2D images for 3D reconstruction, faces significant challenges in estimating motion accurately without external positional sensors. MoGLo-Net addresses these limitations through an innovative adaptation of the self-attention mechanism, which exploits critical regions, such as fully developed speckle areas or highly echogenic tissue regions, within successive ultrasound images to accurately estimate the motion parameters. This facilitates the extraction of intricate features from individual frames. Additionally, we employ a patch-wise correlation operation to generate a correlation volume that is highly correlated with the scanning motion. A custom loss function was also developed to ensure robust learning with minimal bias, leveraging the characteristics of the motion parameters. Experimental evaluations demonstrate that MoGLo-Net surpasses current state-of-the-art methods in both quantitative and qualitative performance metrics. Furthermore, we expand the application of 3D reconstruction beyond simple B-mode ultrasound volumes to incorporate Doppler ultrasound and photoacoustic imaging, enabling 3D visualization of vasculature. The source code for this study is publicly available at: https://github.com/pnu-amilab/US3D.
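
As a schematic reading of the patch-wise correlation operation (the released code at the link above is authoritative), a correlation volume between consecutive feature maps can be computed as:

```python
# Sketch of a patch-wise correlation volume between consecutive ultrasound
# frames: each feature-map location of frame t is correlated with a local
# neighborhood in frame t+1. Names and shapes are illustrative.
import torch
import torch.nn.functional as F

def correlation_volume(f1, f2, radius=3):
    # f1, f2: (B, C, H, W) feature maps from consecutive frames.
    b, c, h, w = f1.shape
    f2_pad = F.pad(f2, [radius] * 4)
    vols = []
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            shifted = f2_pad[:, :, dy:dy + h, dx:dx + w]
            vols.append((f1 * shifted).mean(dim=1, keepdim=True))  # dot product
    return torch.cat(vols, dim=1)   # (B, (2r+1)^2, H, W)

vol = correlation_volume(torch.randn(1, 32, 20, 20), torch.randn(1, 32, 20, 20))
print(vol.shape)   # torch.Size([1, 49, 20, 20])
```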