Page 343 of 6636627 results

Kunal Kawadkar

arXiv preprint · Jul 24, 2025
The emergence of Vision Transformers (ViTs) has revolutionized computer vision, yet their effectiveness compared to traditional Convolutional Neural Networks (CNNs) in medical imaging remains under-explored. This study presents a comprehensive comparative analysis of CNN and ViT architectures across three critical medical imaging tasks: chest X-ray pneumonia detection, brain tumor classification, and skin cancer melanoma detection. We evaluated four state-of-the-art models (ResNet-50, EfficientNet-B0, ViT-Base, and DeiT-Small) across datasets totaling 8,469 medical images. Our results demonstrate task-specific model advantages: ResNet-50 achieved 98.37% accuracy on chest X-ray classification, DeiT-Small excelled at brain tumor detection with 92.16% accuracy, and EfficientNet-B0 led skin cancer classification at 81.84% accuracy. These findings provide crucial insights for practitioners selecting architectures for medical AI applications, highlighting the importance of task-specific architecture selection in clinical decision support systems.

Sutton, C., Prowse, J., Elshehaly, M., Randell, R.

medRxiv preprint · Jul 24, 2025
Background: Advancements in imaging technology, alongside increasing longevity and co-morbidities, have led to heightened demand for diagnostic radiology services. However, there is a shortfall of radiology and radiography staff to acquire, read, and report on such imaging examinations. Artificial intelligence (AI) has been identified, notably by AI developers, as a potential solution for positively impacting the workload of diagnostic radiology services and addressing this staffing shortfall.
Methods: A rapid review, complemented with data from interviews with UK radiology service stakeholders, was undertaken. The ArXiv, Cochrane Library, Embase, Medline, and Scopus databases were searched for publications in English published between 2007 and 2022. Following screening, 110 full texts were included. Interviews with 15 radiology service managers, clinicians, and academics were carried out between May and September 2022.
Results: Most literature was published in 2021 and 2022, with a distinct focus on AI for diagnosis of lung and chest disease (n = 25), notably COVID-19 and respiratory system cancers, closely followed by AI for breast screening (n = 23). AI contributions to streamlining the workload of radiology services were categorised as autonomous, augmentative, and assistive. However, percentage estimates of workload reduction varied considerably, with the most significant reductions identified in national screening programmes. AI was also recognised as aiding radiology services by providing a second opinion, assisting in the prioritisation of images for reading, and improving quantification in diagnostics. Stakeholders saw AI as having the potential to remove some of the laborious work and contribute to service resilience.
Conclusions: This review has shown that there is limited data on real-world experience with implementing AI in clinical production in radiology services. Autonomous, augmentative, and assistive AI can decrease workload and aid reading and reporting; however, the governance surrounding these advancements lags behind.

Aljawarneh S, Ray I

PubMed · Jul 24, 2025
This investigation addresses the diagnosis of COVID-19 from X-ray images by way of an effective deep learning model. Current methods for assessing COVID-19 diagnosis models tend to focus on the accuracy rate while neglecting several significant assessment parameters: precision, sensitivity, specificity, F1-score, and ROC-AUC, all of which significantly influence a model's performance. In this paper, we improve InceptionResNet with restructured parameters, termed the "Enhanced InceptionResNet," which incorporates depth-wise separable convolutions to enhance the efficiency of feature extraction and minimize the consumption of computational resources. For this investigation, three residual network (ResNet) models (ResNet, InceptionResNet, and the Enhanced InceptionResNet with restructured parameters) were employed for a medical image classification task. The performance of each model was evaluated on a balanced dataset of 2600 X-ray images, and the models were assessed for accuracy and loss as well as subjected to a confusion matrix analysis. The Enhanced InceptionResNet consistently outperformed ResNet and InceptionResNet in validation and testing accuracy (99.0% and 98.35%, respectively), recall, precision, F1-score, and ROC-AUC, demonstrating a superior capacity for identifying pertinent information in the data and suggesting enhanced feature extraction capabilities. The Enhanced InceptionResNet excelled in COVID-19 diagnosis from chest X-rays, surpassing ResNet and the default InceptionResNet in accuracy, precision, and sensitivity. Despite its computational demands, it shows promise for medical image classification. Future work should leverage larger datasets, cloud platforms, and hyperparameter optimisation to improve performance, especially for distinguishing normal and pneumonia cases.
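The assessment parameters the abstract lists beyond accuracy have standard definitions from binary confusion-matrix counts. A minimal pure-Python sketch (the counts below are hypothetical, not from the study):

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)          # recall / true-positive rate
    specificity = tn / (tn + fp)          # true-negative rate
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"accuracy": accuracy, "precision": precision,
            "sensitivity": sensitivity, "specificity": specificity, "f1": f1}

def roc_auc(scores_pos, scores_neg):
    """Rank-based ROC-AUC: the probability that a random positive case
    outscores a random negative case (ties count as 0.5)."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical counts for illustration only:
metrics = classification_metrics(tp=90, fp=10, tn=85, fn=15)
auc = roc_auc([0.9, 0.8], [0.1, 0.8])
```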

Pascal Spiegler, Taha Koleilat, Arash Harirpoush, Corey S. Miller, Hassan Rivaz, Marta Kersten-Oertel, Yiming Xiao

arXiv preprint · Jul 24, 2025
Pancreatic cancer carries a poor prognosis and relies on endoscopic ultrasound (EUS) for targeted biopsy and radiotherapy. However, the speckle noise, low contrast, and unintuitive appearance of EUS make segmentation of pancreatic tumors with fully supervised deep learning (DL) models both error-prone and dependent on large, expert-curated annotation datasets. To address these challenges, we present TextSAM-EUS, a novel, lightweight, text-driven adaptation of the Segment Anything Model (SAM) that requires no manual geometric prompts at inference. Our approach leverages text prompt learning (context optimization) through the BiomedCLIP text encoder in conjunction with a LoRA-based adaptation of SAM's architecture to enable automatic pancreatic tumor segmentation in EUS, tuning only 0.86% of the total parameters. On the public Endoscopic Ultrasound Database of the Pancreas, TextSAM-EUS with automatic prompts attains 82.69% Dice and 85.28% normalized surface distance (NSD), and with manual geometric prompts reaches 83.10% Dice and 85.70% NSD, outperforming both existing state-of-the-art (SOTA) supervised DL models and foundation models (e.g., SAM and its variants). As the first attempt to incorporate prompt learning in SAM-based medical image segmentation, TextSAM-EUS offers a practical option for efficient and robust automatic EUS segmentation. Code is available at https://github.com/HealthX-Lab/TextSAM-EUS.
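The "tuning only 0.86% of parameters" figure comes from LoRA-style adaptation: the pretrained weight is frozen and only a low-rank residual is trained. A generic numpy sketch of the idea (dimensions, rank, and scaling are illustrative assumptions, not TextSAM-EUS's actual configuration):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r = 256, 256, 4                 # hypothetical dims; rank r << d
alpha = 8.0                                  # hypothetical LoRA scaling factor

W = rng.standard_normal((d_out, d_in))       # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01    # trainable down-projection
B = np.zeros((d_out, r))                     # trainable up-projection; zero init
                                             # means no change at the start

def lora_forward(x):
    # y = W x + (alpha / r) * B (A x); only A and B receive gradients.
    return W @ x + (alpha / r) * (B @ (A @ x))

trainable = A.size + B.size                  # 2 * r * d = 2048
total = W.size + trainable
print(f"trainable fraction: {trainable / total:.4%}")  # → 3.0303%
```

With realistic transformer widths and a small rank the trainable fraction drops to well under a percent, which is how a figure like 0.86% is reached.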

Matthew UO, Rosa RL, Saadi M, Rodriguez DZ

PubMed · Jul 23, 2025
The integration of artificial intelligence (AI) into medical image analysis has transformed healthcare, offering unprecedented precision in diagnosis, treatment planning, and disease monitoring. However, its adoption within the Internet of Medical Things (IoMT) raises significant challenges related to transparency, trustworthiness, and security. This paper introduces a novel Explainable AI (XAI) framework tailored for Medical Cyber-Physical Systems (MCPS), addressing these challenges by combining deep neural networks with symbolic knowledge reasoning to deliver clinically interpretable insights. The framework incorporates an Enhanced Dynamic Confidence-Weighted Attention (Enhanced DCWA) mechanism, which improves interpretability and robustness by dynamically refining attention maps through adaptive normalization and multi-level confidence weighting. Additionally, a Resilient Observability and Detection Engine (RODE) leverages sparse observability principles to detect and mitigate adversarial threats, ensuring reliable performance in dynamic IoMT environments. Evaluations conducted on benchmark datasets, including CheXpert, RSNA Pneumonia Detection Challenge, and NIH Chest X-ray Dataset, demonstrate significant advancements in classification accuracy, adversarial robustness, and explainability. The framework achieves a 15% increase in lesion classification accuracy, a 30% reduction in robustness loss, and a 20% improvement in the Explainability Index compared to state-of-the-art methods.
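The abstract does not give the Enhanced DCWA formulation, but the general idea of refining an attention map with per-location confidence weights and renormalizing can be sketched in numpy (the function names and weighting scheme here are illustrative assumptions, not the paper's mechanism):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def confidence_weighted_attention(attn_logits, confidence):
    """Illustrative confidence-weighted attention refinement: scale each
    attention logit by a confidence score in [0, 1], then renormalize so
    the refined map is still a probability distribution."""
    weighted = attn_logits * confidence      # down-weight low-confidence spots
    return softmax(weighted, axis=-1)

attn = np.array([[2.0, 1.0, 0.5]])           # raw attention logits (toy values)
conf = np.array([[1.0, 0.2, 0.9]])           # per-location confidence (toy values)
refined = confidence_weighted_attention(attn, conf)
```

After refinement, the low-confidence middle location receives less attention mass than under plain softmax, while the map still sums to one.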

Lan G, Zhu Y, Xiao S, Iqbal M, Yang J

PubMed · Jul 23, 2025
The rapid advancement of artificial intelligence (AI) in healthcare has accelerated innovations in medical algorithms, yet its broader adoption faces critical ethical and technical barriers. A key challenge lies in algorithmic bias stemming from heterogeneous medical data across institutions, equipment, and workflows, which may perpetuate disparities in AI-driven diagnoses and exacerbate inequities in patient care. While AI's ability to extract deep features from large-scale data offers transformative potential, its effectiveness heavily depends on standardized, high-quality datasets. Current standardization gaps not only limit model generalizability but also raise concerns about reliability and fairness in real-world clinical settings, particularly for marginalized populations. Addressing these urgent issues, this paper proposes an ethical AI framework centered on a novel self-supervised medical image standardization method. By integrating self-supervised image style conversion, channel attention mechanisms, and contrastive learning-based loss functions, our approach enhances structural and style consistency in diverse datasets while preserving patient privacy through decentralized learning paradigms. Experiments across multi-institutional medical image datasets demonstrate that our method significantly improves AI generalizability without requiring centralized data sharing. By bridging the data standardization gap, this work advances technical foundations for trustworthy AI in healthcare.
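The contrastive-learning component described above can be illustrated with a generic InfoNCE-style loss, where embeddings of the same scan under two style renderings form positive pairs (a numpy sketch under that assumption; it is not the paper's exact loss function):

```python
import numpy as np

def info_nce(z1, z2, tau=0.1):
    """Generic InfoNCE-style contrastive loss: z1[i] and z2[i] are embeddings
    of the same image under two style conversions. Matched pairs (the diagonal
    of the similarity matrix) are pulled together; mismatched pairs are pushed
    apart. tau is a temperature hyperparameter."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)   # cosine similarity
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = (z1 @ z2.T) / tau
    sim = sim - sim.max(axis=1, keepdims=True)            # stable log-softmax
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                    # positives on diagonal
```

A sanity check on the behaviour: perfectly aligned embedding pairs should give a much lower loss than misaligned ones.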

Sun J, Yao H, Han T, Wang Y, Yang L, Hao X, Wu S

PubMed · Jul 23, 2025
Clinical evaluation of the Artificial Intelligence (AI)-based Precise Image (PI) algorithm in brain imaging remains limited. PI is a deep-learning reconstruction (DLR) technique that reduces image noise while maintaining a familiar Filtered Back Projection (FBP)-like appearance at low doses. This study aims to compare PI, Iterative Reconstruction (IR), and FBP in improving image quality and enhancing lesion detection in 1.0 mm thin-slice brain computed tomography (CT) images. A retrospective analysis was conducted on brain non-contrast CT scans acquired from August to September 2024 at our institution. Each scan was reconstructed using four methods: routine 5.0 mm FBP (Group A), thin-slice 1.0 mm FBP (Group B), thin-slice 1.0 mm IR (Group C), and thin-slice 1.0 mm PI (Group D). Subjective image quality was assessed by two radiologists using a 4- or 5-point Likert scale. Objective metrics included contrast-to-noise ratio (CNR), signal-to-noise ratio (SNR), and image noise across designated regions of interest (ROIs). A total of 60 patients (mean age 65.47 ± 18.40 years; 29 males and 31 females) were included. Among these, 39 patients had lesions, primarily low-density lacunar infarcts. Thin-slice PI images demonstrated the lowest image noise and artifacts, alongside the highest CNR and SNR values (p < 0.001), compared to Groups A, B, and C. Subjective assessments revealed that both PI and IR provided significantly improved image quality over routine FBP (p < 0.05). Specifically, Group D (PI) achieved superior lesion conspicuity and diagnostic confidence, with a 100% detection rate for lacunar lesions, outperforming Groups B and A. PI reconstruction significantly enhances image quality and lesion detectability in thin-slice brain CT compared with IR and FBP, suggesting its potential as a new clinical standard.
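The objective metrics above follow common ROI-based conventions; a minimal numpy sketch (one common definition of CNR is used here, with noise taken from the background ROI — the study's exact definitions are not stated in the abstract, and the HU samples below are made up):

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio of one ROI: mean attenuation / standard deviation."""
    return roi.mean() / roi.std()

def cnr(roi_lesion, roi_background):
    """Contrast-to-noise ratio between two ROIs, with image noise estimated
    from the background ROI's standard deviation (one common convention)."""
    return abs(roi_lesion.mean() - roi_background.mean()) / roi_background.std()

lesion = np.array([38.0, 40.0, 42.0, 40.0])       # hypothetical HU samples
background = np.array([28.0, 30.0, 32.0, 30.0])
print(f"SNR={snr(lesion):.2f}, CNR={cnr(lesion, background):.2f}")
# → SNR=28.28, CNR=7.07
```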

Wintersperger BJ, Alkadhi H, Wildberger JE

PubMed · Jul 23, 2025
Published on the 60th anniversary of Investigative Radiology, a journal dedicated to cutting-edge imaging technology, this article discusses key historical milestones in CT and MRI technology, as well as the ongoing advancement of contrast agent development for cardiovascular imaging over the past decades. It specifically highlights recent developments and the current state of the art, including photon-counting detector CT and artificial intelligence, which will further push the boundaries of cardiovascular imaging. What were once ideas and visions have become today's clinical reality for the benefit of patients, and imaging technology will continue to evolve and transform modern medicine.

He J, Xu J, Chen W, Cao M, Zhang J, Yang Q, Li E, Zhang R, Tong Y, Zhang Y, Gao C, Zhao Q, Xu Z, Wang L, Cheng X, Zheng G, Pan S, Hu C

PubMed · Jul 23, 2025
Early detection and precise preoperative staging of early gastric cancer (EGC) are critical. This study therefore aims to develop a deep learning model using portal venous phase CT images to accurately identify EGC without lymph node metastasis. The study included 3164 patients with gastric cancer (GC) who underwent radical surgery at two medical centers in China from 2006 to 2019. 2.5D radiomic data and multi-instance learning (MIL) were novel approaches applied in this study. With feature selection based on 2.5D radiomic data and MIL, the ResNet101 model combined with the XGBoost model achieved satisfactory performance in diagnosing pT1N0 GC. Furthermore, the 2.5D MIL-based model demonstrated markedly superior predictive performance compared with traditional radiomics models and clinical models. We constructed the first deep learning prediction model based on 2.5D radiomics and MIL for effectively diagnosing pT1N0 GC, providing valuable information for individualized treatment selection.
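The two ingredients named above have simple generic forms: a 2.5D representation stacks each CT slice with its neighbours as channels, and MIL scores a patient (a "bag" of slice instances) from its instances. A numpy sketch of both under common conventions (the abstract does not spell out the paper's exact 2.5D construction or MIL aggregator; max-pooling is one standard choice):

```python
import numpy as np

def make_25d_instances(volume, k=1):
    """Turn a CT volume of shape (slices, H, W) into 2.5D instances: each
    slice becomes an instance with its k neighbours stacked as channels.
    Edge slices are clamped (repeated) at the volume borders."""
    n = volume.shape[0]

    def clamp(i):
        return min(max(i, 0), n - 1)

    instances = [np.stack([volume[clamp(j)] for j in range(i - k, i + k + 1)])
                 for i in range(n)]
    return np.stack(instances)               # (slices, 2k+1, H, W)

def mil_max_pool(instance_scores):
    """Max-pooling MIL aggregation: the patient-level (bag) score is the
    score of the most suspicious instance."""
    return float(np.max(instance_scores))
```

For example, a 4-slice volume with k=1 yields four 3-channel instances, and a bag whose instance scores are [0.1, 0.7, 0.3] is scored 0.7.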

Bellini D, Ferrari R, Vicini S, Rengo M, Saletti CL, Carbone I

PubMed · Jul 23, 2025
This review explores the integration of ChatGPT, a generative AI model developed by OpenAI, into radiological practice, focusing on its potential to enhance the operational efficiency of radiologists. ChatGPT is built on the GPT architecture and uses advanced machine learning techniques, including unsupervised pre-training and reinforcement learning, to generate human-like text responses. While AI applications in radiology predominantly focus on image acquisition, reconstruction, and interpretation (and are commonly embedded directly within hardware), the accessibility and functional breadth of ChatGPT make it a unique tool. This interview-based review is not intended as a detailed evaluation of all ChatGPT features; instead, it tests ChatGPT's utility in everyday radiological tasks through real-world examples. ChatGPT demonstrated strong capabilities in structuring radiology reports according to international guidelines (e.g., PI-RADS, CT reporting for diverticulitis), designing a complete research protocol, and performing advanced statistical analysis on Excel datasets, including ROC curve generation and intergroup comparison. Although not capable of directly interpreting DICOM images, ChatGPT provided meaningful assistance in image post-processing and interpretation when images were converted to standard formats. These findings highlight its current strengths and limitations as a supportive tool for radiologists.
