Nguyen QH, Hoang DA, Pham HV

pubmed · Jun 20, 2025
The COVID-19 pandemic has had a significant impact on global health, highlighting the need for effective management of post-recovery symptoms. In this context, ground-glass opacity (GGO) in lung computed tomography (CT) scans has emerged as a critical indicator for early intervention. Recently, researchers have organized challenges to refine techniques for GGO segmentation, aiming to scrutinize and compare cutting-edge methods for analyzing lung CT images of patients recovering from COVID-19. While many methods in these challenges use the nnU-Net architecture, its general-purpose design does not fully address the characteristics of GGO regions, such as infected-area delineation, irregular shapes, and fuzzy boundaries. This research develops a specialized machine learning algorithm, extending the nnU-Net framework to accurately segment GGO in lung CT scans of post-COVID-19 patients. We propose a novel two-stage segmentation approach based on nnU-Net 2D and 3D models, comprising lung segmentation followed by lesion (shadow) segmentation, and incorporating an attention mechanism. Combining the models improves automatic segmentation accuracy when different loss functions are used during training. Experimental results show that the proposed model's DSC score ranks fifth among the compared methods, and its sensitivity is the second highest, indicating a higher true segmentation rate than most competing methods. The proposed method achieved a Hausdorff95 of 54.566, surface Dice of 0.7193, sensitivity of 0.7528, and specificity of 0.7749. Compared with state-of-the-art methods, it shows marked improvement in segmenting infected areas. The model, combining 2D and 3D networks, has been deployed in a real-world case study and demonstrated the capacity to comprehensively and correctly detect lung lesions. Additionally, the boundary loss function helped achieve more precise segmentation for low-resolution images, and segmenting the lung area first reduced the volume of images requiring processing and shortened the training process.
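
To make the two-stage idea concrete, here is a minimal sketch (not the authors' code): stage one segments the lung, stage two predicts GGO only inside the lung mask, and training can add the boundary loss of Kervadec et al. for fuzzy lesion edges. `lung_net` and `ggo_net` are hypothetical stand-ins for the paper's attention-augmented nnU-Net 2D/3D models.

```python
import torch
import numpy as np
from scipy.ndimage import distance_transform_edt

def boundary_loss(probs: torch.Tensor, gt: np.ndarray) -> torch.Tensor:
    """Boundary loss (Kervadec et al.): mean of foreground probabilities
    weighted by a signed distance map of the ground-truth boundary."""
    pos = distance_transform_edt(gt == 0)   # distance outside the lesion
    neg = distance_transform_edt(gt == 1)   # distance inside the lesion
    sdm = torch.from_numpy(pos - neg).float()
    return (probs * sdm).mean()

def two_stage_segment(ct_volume, lung_net, ggo_net, threshold=0.5):
    """Stage 1: coarse lung mask. Stage 2: GGO prediction restricted to lung."""
    lung_mask = lung_net(ct_volume) > threshold   # (D, H, W) bool
    ggo_probs = ggo_net(ct_volume)                # (D, H, W) in [0, 1]
    return ggo_probs * lung_mask                  # suppress non-lung voxels
```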

Josué Martínez-Martínez, Olivia Brown, Mostafa Karami, Sheida Nabavi

arxiv preprint · Jun 20, 2025
Deep neural networks are increasingly being used to detect and diagnose medical conditions from medical imaging. Despite their utility, these models are highly vulnerable to adversarial attacks and distribution shifts, which can affect diagnostic reliability and undermine trust among healthcare professionals. In this study, we propose a robust training algorithm with data augmentation (RTDA) to mitigate these vulnerabilities in medical image classification. We benchmark the robustness of RTDA and six competing baseline techniques, including adversarial training and data augmentation approaches in isolation and in combination, against adversarial perturbations and natural variations, using experimental datasets spanning three imaging technologies (mammography, X-ray, and ultrasound). We demonstrate that RTDA achieves superior robustness against adversarial attacks and improved generalization under distribution shift in each image classification task, while maintaining high clean accuracy.
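
The abstract does not spell out the exact RTDA recipe, so the following is a hedged sketch of one plausible training step that pairs standard augmentation with a single-step (FGSM) adversarial perturbation; `model` and `augment` are assumed placeholders, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def rtda_step(model, x, y, optimizer, augment, eps=8 / 255):
    # Natural-variation robustness: train on augmented images.
    x_aug = augment(x).detach().requires_grad_(True)
    loss_clean = F.cross_entropy(model(x_aug), y)
    grad = torch.autograd.grad(loss_clean, x_aug)[0]
    # Adversarial robustness: single-step (FGSM) perturbation of the batch.
    x_adv = (x_aug + eps * grad.sign()).clamp(0, 1).detach()

    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_aug.detach()), y) \
         + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```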

Mahin Montasir Afif, Abdullah Al Noman, K. M. Tahsin Kabir, Md. Mortuza Ahmmed, Md. Mostafizur Rahman, Mufti Mahmud, Md. Ashraful Babu

arxiv preprint · Jun 20, 2025
Generative adversarial networks (GANs) have shown potential for expanding limited medical imaging datasets. This study explores how different ratios of GAN-generated and real brain tumor MRI images affect the performance of a CNN in classifying healthy vs. tumorous scans. A DCGAN was used to create synthetic images, which were mixed with real ones at various ratios to train a custom CNN; the CNN was then evaluated on a separate real-world test set. Our results indicate that the model maintains high sensitivity and precision in tumor classification even when trained predominantly on synthetic data. When only a small portion of GAN data was added, such as 900 real images and 100 GAN images, the model achieved excellent performance, with test accuracy reaching 95.2% and precision, recall, and F1-score all exceeding 95%. However, as the proportion of GAN images increased further, performance gradually declined. This study suggests that while GANs are useful for augmenting limited datasets, especially when real data is scarce, too much synthetic data can introduce artifacts that affect the model's ability to generalize to real-world cases.
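
The ratio experiment is easy to reproduce in outline; below is an illustrative sketch (names assumed) that samples a training set at a given real/synthetic split, e.g. 900 real and 100 GAN images for the best-performing mix reported.

```python
import random

def mix_datasets(real_images, gan_images, n_total=1000, gan_fraction=0.1, seed=0):
    """Sample a training set with the requested real/synthetic ratio."""
    rng = random.Random(seed)
    n_gan = int(n_total * gan_fraction)          # e.g. 100 synthetic images
    n_real = n_total - n_gan                     # e.g. 900 real images
    mixed = rng.sample(real_images, n_real) + rng.sample(gan_images, n_gan)
    rng.shuffle(mixed)
    return mixed  # feed to the CNN training loop
```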

Xiaoyu Shi, Rahul Kumar Jain, Yinhao Li, Ruibo Hou, Jingliang Cheng, Jie Bai, Guohua Zhao, Lanfen Lin, Rui Xu, Yen-wei Chen

arxiv preprint · Jun 20, 2025
Deep learning has demonstrated remarkable success in medical image segmentation and computer-aided diagnosis. In particular, numerous advanced methods have achieved state-of-the-art performance in brain tumor segmentation from MRI scans. While recent studies in other medical imaging domains have revealed that integrating textual reports with visual data can enhance segmentation accuracy, the field of brain tumor analysis lacks a comprehensive dataset that combines radiological images with corresponding textual annotations. This limitation has hindered the exploration of multimodal approaches that leverage both imaging and textual data. To bridge this critical gap, we introduce the TextBraTS dataset, the first publicly available volume-level multimodal dataset that contains paired MRI volumes and rich textual annotations, derived from the widely adopted BraTS2020 benchmark. Building upon this dataset, we propose a novel baseline framework and sequential cross-attention method for text-guided volumetric medical image segmentation. Through extensive experiments with various text-image fusion strategies and templated text formulations, our approach demonstrates significant improvements in brain tumor segmentation accuracy, offering valuable insights into effective multimodal integration techniques. Our dataset, implementation code, and pre-trained models are publicly available at https://github.com/Jupitern52/TextBraTS.
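
As a rough illustration of text-guided fusion, the block below shows image tokens attending to text-report embeddings via cross-attention in PyTorch. It sketches the general idea only; the authors' actual implementation is in the linked repository.

```python
import torch
import torch.nn as nn

class TextImageCrossAttention(nn.Module):
    """Image tokens query text tokens; output is text-conditioned features."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, img_tokens, text_tokens):
        # img_tokens: (B, N_voxels, dim); text_tokens: (B, N_words, dim)
        fused, _ = self.attn(query=img_tokens, key=text_tokens, value=text_tokens)
        return self.norm(img_tokens + fused)   # residual connection
```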

Shibagaki Y, Oka H, Imanishi R, Shimada S, Nakau K, Takahashi S

pubmed · Jun 20, 2025
Pulmonary valve regurgitation after repair of Tetralogy of Fallot (TOF) or double-outlet right ventricle (DORV) causes hypertrophy and papillary muscle enlargement. Cardiac magnetic resonance imaging (CMR) can evaluate right ventricular (RV) dilatation, but the effect of excluding trabecular and papillary muscle (TPM) from RV volume on TOF or DORV reoperation decisions is unclear. Twenty-three patients with repaired TOF or DORV and 19 healthy controls, all aged ≥15 years, underwent CMR between 2012 and 2022. TPM volume was measured using artificial intelligence. Reoperation was considered when the RV end-diastolic volume index (RVEDVI) exceeded 150 mL/m² or the RV end-systolic volume index (RVESVI) exceeded 80 mL/m². RV volumes were higher in the disease group than in controls (P < 0.001), as were RV mass and TPM volumes (P < 0.001). The reduction in RV volume from excluding TPM was 6.3% (2.1-10.5) in controls, 11.7% (6.9-13.8) in the volume load group, and 13.9% (9.5-19.4) in the volume + pressure load group. TPM/RV volume ratios were highest in the volume + pressure load group (control: 0.07 g/mL, volume: 0.14 g/mL, volume + pressure: 0.17 g/mL) and correlated with QRS duration (R = 0.77). In 3 patients in the volume + pressure load group, RV volume including TPM met the criteria for reoperation, but after TPM exclusion the reduced RV volume no longer indicated reoperation. RV volume measurements that include TPM in volume + pressure loaded ventricles may therefore help determine appropriate volume thresholds for reoperation.
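
A worked example (with illustrative numbers) shows how TPM exclusion can flip the decision under the paper's thresholds (RVEDVI > 150 mL/m² or RVESVI > 80 mL/m²), using the ~13.9% median reduction reported for the volume + pressure load group.

```python
def needs_reoperation(rvedvi, rvesvi):
    """Paper's criteria: RVEDVI > 150 mL/m^2 or RVESVI > 80 mL/m^2."""
    return rvedvi > 150 or rvesvi > 80

rvedvi_with_tpm = 155.0        # mL/m^2, TPM included in the blood pool (illustrative)
reduction = 0.139              # median reduction, volume + pressure load group
rvedvi_excl = rvedvi_with_tpm * (1 - reduction)   # ~133.5 mL/m^2

print(needs_reoperation(rvedvi_with_tpm, 70.0))   # True  -> reoperation indicated
print(needs_reoperation(rvedvi_excl, 70.0))       # False -> no longer indicated
```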

Minmin Yang, Huantao Ren, Senem Velipasalar

arxiv preprint · Jun 20, 2025
Cone-beam computed tomography (CBCT) using only a few X-ray projection views enables faster scans with lower radiation dose, but the resulting severe under-sampling causes strong artifacts and poor spatial coverage. We address these challenges in a unified framework. First, we replace conventional UNet/ResNet encoders with TransUNet, a hybrid CNN-Transformer model. Convolutional layers capture local details, while self-attention layers enhance global context. We adapt TransUNet to CBCT by combining multi-scale features, querying view-specific features per 3D point, and adding a lightweight attenuation-prediction head. This yields Trans-CBCT, which surpasses prior baselines by 1.17 dB PSNR and 0.0163 SSIM on the LUNA16 dataset with six views. Second, we introduce a neighbor-aware Point Transformer to enforce volumetric coherence. This module uses 3D positional encoding and attention over k-nearest neighbors to improve spatial consistency. The resulting model, Trans²-CBCT, provides an additional gain of 0.63 dB PSNR and 0.0117 SSIM. Experiments on LUNA16 and ToothFairy show consistent gains from six to ten views, validating the effectiveness of combining CNN-Transformer features with point-based geometry reasoning for sparse-view CBCT reconstruction.
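
The neighbor-aware attention can be sketched as follows: each 3D point aggregates features from its k nearest neighbors, with relative positions feeding a learned positional encoding. Shapes and names below are assumptions for illustration, not the paper's code.

```python
import torch
import torch.nn as nn

class KNNPointAttention(nn.Module):
    """Attention over k-nearest neighbors with 3D relative positional encoding."""
    def __init__(self, dim=64, k=16):
        super().__init__()
        self.k = k
        self.pos_mlp = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.to_qkv = nn.Linear(dim, dim * 3)

    def forward(self, xyz, feats):
        # xyz: (N, 3) point coordinates; feats: (N, dim) per-point features
        dists = torch.cdist(xyz, xyz)                      # pairwise distances (N, N)
        idx = dists.topk(self.k, largest=False).indices    # (N, k) neighbor indices
        q, k_, v = self.to_qkv(feats).chunk(3, dim=-1)
        k_n, v_n = k_[idx], v[idx]                         # gather neighbors: (N, k, dim)
        pos = self.pos_mlp(xyz[idx] - xyz[:, None, :])     # 3D positional encoding
        attn = ((q[:, None, :] * (k_n + pos)).sum(-1)
                / k_n.shape[-1] ** 0.5).softmax(-1)        # (N, k) attention weights
        return (attn[..., None] * (v_n + pos)).sum(1)      # (N, dim) fused features
```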

Shreeram Athreya, Carlos Olivares, Ameera Ismail, Kambiz Nael, William Speier, Corey Arnold

arxiv preprint · Jun 20, 2025
Following successful large-vessel recanalization via endovascular thrombectomy (EVT) for acute ischemic stroke (AIS), some patients experience a complication known as no-reflow, defined by persistent microvascular hypoperfusion that undermines tissue recovery and worsens clinical outcomes. Although prompt identification is crucial, standard clinical practice relies on perfusion magnetic resonance imaging (MRI) within 24 hours post-procedure, delaying intervention. In this work, we introduce the first-ever machine learning (ML) framework to predict no-reflow immediately after EVT by leveraging previously unexplored intra-procedural digital subtraction angiography (DSA) sequences and clinical variables. Our retrospective analysis included AIS patients treated at UCLA Medical Center (2011-2024) who achieved favorable mTICI scores (2b-3) and underwent pre- and post-procedure MRI. No-reflow was defined as persistent hypoperfusion (Tmax > 6 s) on post-procedural imaging. From DSA sequences (AP and lateral views), we extracted statistical and temporal perfusion features from the target downstream territory to train ML classifiers for predicting no-reflow. Our novel method significantly outperformed a clinical-features baseline (AUC: 0.7703 ± 0.12 vs. 0.5728 ± 0.12; accuracy: 0.8125 ± 0.10 vs. 0.6331 ± 0.09), demonstrating that real-time DSA perfusion dynamics encode critical insights into microvascular integrity. This approach establishes a foundation for immediate, accurate no-reflow prediction, enabling clinicians to proactively manage high-risk patients without reliance on delayed imaging.
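
In outline, the pipeline reduces each DSA sequence to a feature vector and fits a standard classifier. The sketch below uses an assumed feature set and scikit-learn; the abstract does not enumerate the exact statistical and temporal features or the classifier used.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def dsa_features(frames: np.ndarray, roi: np.ndarray) -> np.ndarray:
    """frames: (T, H, W) DSA sequence; roi: (H, W) bool downstream territory."""
    curve = frames[:, roi].mean(axis=1)        # mean ROI intensity per frame
    return np.array([
        curve.mean(), curve.std(),             # statistical features
        curve.argmax(),                        # time to peak (in frames)
        curve.max() - curve.min(),             # enhancement amplitude
        np.trapz(curve),                       # area under the curve
    ])

# X: stacked feature vectors per patient; y: 1 = no-reflow on 24 h MRI
# clf = RandomForestClassifier(n_estimators=200).fit(X, y)
```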

Zhou H, Luo Y, Li S, Zhang G, Zeng X

pubmed · Jun 20, 2025
This study aims to explore research hotspots and development trends in molecular imaging of glioma from 2014 to 2024. A total of 2957 publications indexed in the Web of Science Core Collection (WoSCC) were analyzed using bibliometric techniques. To visualize the research landscape, co-citation clustering, keyword analysis, and technological trend mapping were performed using CiteSpace and Excel. Publication output peaked in 2021. Emerging research trends included the integration of radiomics and artificial intelligence and the application of novel imaging modalities such as positron emission tomography and magnetic resonance spectroscopy. Significant progress was observed in blood-brain barrier disruption techniques and the development of molecular probes, especially those targeting IDH and MGMT mutations. Molecular imaging has been pivotal in advancing glioma research, contributing to improved diagnostic accuracy and personalized treatment strategies. However, challenges such as clinical translation and standardization remain. Future studies should focus on integrating advanced technologies into routine clinical practice to enhance patient care.

Budi Susilo, Y. K., Yuliana, D., Mahadi, M., Abdul Rahman, S., Ariffin, A. E.

medrxiv preprint · Jun 20, 2025
This review explores the transformative role of artificial intelligence (AI) in the early detection and prognosis prediction of diabetic retinopathy (DR), a leading cause of vision loss in diabetic patients. AI, particularly deep learning and convolutional neural networks (CNNs), has demonstrated remarkable accuracy in analyzing retinal images, identifying early-stage DR with high sensitivity and specificity. These advancements address critical challenges such as intergrader variability in manual screening and the limited availability of specialists, especially in underserved regions. The integration of AI with telemedicine has further enhanced accessibility, enabling remote screening through portable devices and smartphone-based imaging. Economically, AI-based systems reduce healthcare costs by optimizing resource allocation and minimizing unnecessary referrals. Key findings highlight the dominance of Medicine (819 documents) and Computer Science (613 documents) in research output, reflecting the interdisciplinary nature of this field. Geographically, China, the United States, and India lead in contributions, underscoring global efforts to combat DR. Despite these successes, challenges such as algorithmic bias, data privacy, and the need for explainable AI (XAI) remain. Future research should focus on multi-center validation, diverse AI methodologies, and clinician-friendly tools to ensure equitable adoption. By addressing these gaps, AI can revolutionize DR management, reducing the global burden of diabetes-related blindness through early intervention and scalable solutions.