Page 61 of 2252246 results

Artificial intelligence for age-related macular degeneration diagnosis in Australia: A Novel Qualitative Interview Study.

Ly A, Herse S, Williams MA, Stapleton F

PubMed · Jun 14 2025
Artificial intelligence (AI) systems for age-related macular degeneration (AMD) diagnosis abound but are not yet widely implemented. AI implementation is complex, requiring the involvement of multiple, diverse stakeholders including technology developers, clinicians, patients, health networks, public hospitals, private providers and payers. There is a pressing need to investigate how AI might be adopted to improve patient outcomes. The purpose of this first study of its kind was to use the AI translation extended version of the non-adoption, abandonment, scale-up, spread and sustainability of healthcare technologies framework to explore stakeholder experiences, attitudes, enablers, barriers and possible futures of digital diagnosis using AI for AMD and eyecare in Australia. Semi-structured, online interviews were conducted with 37 stakeholders (12 clinicians, 10 healthcare leaders, 8 patients and 7 developers) from September 2022 to March 2023. The interviews were audio-recorded, transcribed and analysed using directed and summative content analysis. Technological features influencing implementation were most frequently discussed, followed by the context or wider system, value proposition, adopters, organisations, the condition and finally embedding and adaptation over time. Patients preferred to focus on the condition, while healthcare leaders elaborated on organisational factors. Overall, stakeholders supported a portable, device-independent clinical decision support tool that could be integrated with existing diagnostic equipment and patient management systems. Opportunities for AI to drive new models of healthcare, patient education and outreach, and the importance of maintaining equity across population groups were consistently emphasised. This is the first investigation to report numerous, interacting perspectives on the adoption of digital diagnosis for AMD in Australia, incorporating an intentionally diverse stakeholder group and the patient voice. It provides a series of practical considerations for the implementation of AI and digital diagnosis into existing care for people with AMD.

Multi-class transformer-based segmentation of pancreatic ductal adenocarcinoma and surrounding structures in CT imaging: a multi-center evaluation.

Wen S, Xiao X

PubMed · Jun 14 2025
Accurate segmentation of pancreatic ductal adenocarcinoma (PDAC) and surrounding anatomical structures is critical for diagnosis, treatment planning, and outcome assessment. This study proposes a deep learning-based framework to automate multi-class segmentation in CT images, comparing the performance of four state-of-the-art architectures. This retrospective multi-center study included 3265 patients from six institutions. Four deep learning models-UNet, nnU-Net, UNETR, and Swin-UNet-were trained using five-fold cross-validation on data from five centers and tested independently on a sixth center (n = 569). Preprocessing included intensity normalization, voxel resampling, and standardized annotation for six structures: PDAC lesion, pancreas, veins, arteries, pancreatic duct, and common bile duct. Evaluation metrics included Dice Similarity Coefficient (DSC), Intersection over Union (IoU), directed Hausdorff Distance (dHD), Average Symmetric Surface Distance (ASSD), and Volume Overlap Error (VOE). Statistical comparisons were made using Wilcoxon signed-rank tests with Bonferroni correction. Swin-UNet outperformed all models with a mean validation DSC of 92.4% and test DSC of 90.8%, showing minimal overfitting. It also achieved the lowest dHD (4.3 mm), ASSD (1.2 mm), and VOE (6.0%) in cross-validation. Per-class DSCs for Swin-UNet were consistently higher across all anatomical targets, including challenging structures like the pancreatic duct (91.0%) and bile duct (91.8%). Statistical analysis confirmed the superiority of Swin-UNet (p < 0.001). All models showed generalization capability, but Swin-UNet provided the most accurate and robust segmentation across datasets. Transformer-based architectures, particularly Swin-UNet, enable precise and generalizable multi-class segmentation of PDAC and surrounding anatomy. This framework has potential for clinical integration in PDAC diagnosis, staging, and therapy planning.
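The overlap metrics above (DSC, IoU, VOE) have compact set-based definitions. As a minimal sketch on binary masks in plain NumPy (the surface-distance metrics dHD and ASSD require distance transforms and are omitted here):

```python
import numpy as np

def dice(pred, gt):
    """Dice Similarity Coefficient: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

def iou(pred, gt):
    """Intersection over Union: |A∩B| / |A∪B|."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

def voe(pred, gt):
    """Volume Overlap Error, conventionally 1 - IoU."""
    return 1.0 - iou(pred, gt)
```

For multi-class output such as the six structures evaluated here, these are typically computed per class on one-hot masks and then averaged.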

Qualitative evaluation of automatic liver segmentation in computed tomography images for clinical use in radiation therapy.

Khalal DM, Slimani S, Bouraoui ZE, Azizi H

PubMed · Jun 14 2025
Segmentation of target volumes and organs at risk on computed tomography (CT) images constitutes an important step in the radiotherapy workflow. Artificial intelligence-based methods have significantly improved organ segmentation in medical images. Automatic segmentations are frequently evaluated using geometric metrics. Before a clinical implementation in the radiotherapy workflow, automatic segmentations must also be evaluated by clinicians. The aim of this study was to investigate the correlation between geometric metrics used for segmentation evaluation and the assessment performed by clinicians. In this study, we used the U-Net model to segment the liver in CT images from a publicly available dataset. The model's performance was evaluated using two geometric metrics: the Dice similarity coefficient and the Hausdorff distance. Additionally, a qualitative evaluation was performed by clinicians who reviewed the automatic segmentations to rate their clinical acceptability for use in the radiotherapy workflow. The correlation between the geometric metrics and the clinicians' evaluations was studied. The results showed that while the Dice coefficient and Hausdorff distance are reliable indicators of segmentation accuracy, they do not always align with clinicians' assessments. In some cases, segmentations with high Dice scores still required clinician corrections before clinical use in the radiotherapy workflow. This study highlights the need for more comprehensive evaluation metrics beyond geometric measures to assess the clinical acceptability of artificial intelligence-based segmentation. Although the deep learning model provided promising segmentation results, the present study shows that standardized validation methodologies are crucial for ensuring the clinical viability of automatic segmentation systems.
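The abstract does not name the correlation statistic used to compare geometric metrics with clinician ratings; Spearman's rank correlation is a common choice when one variable is an ordinal acceptability score. A minimal sketch (tied ranks are ignored for brevity; a production version should use midranks, as scipy.stats.spearmanr does):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    Assumes no tied values (argsort-of-argsort ranking)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx ** 2).sum() * (ry ** 2).sum()))
```

Applied to, say, per-case Dice scores against clinician ratings, a rho well below 1 would quantify the misalignment the study reports.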

A multimodal fusion system predicting survival benefits of immune checkpoint inhibitors in unresectable hepatocellular carcinoma.

Xu J, Wang T, Li J, Wang Y, Zhu Z, Fu X, Wang J, Zhang Z, Cai W, Song R, Hou C, Yang LZ, Wang H, Wong STC, Li H

PubMed · Jun 14 2025
Early identification of unresectable hepatocellular carcinoma (HCC) patients who may benefit from immune checkpoint inhibitors (ICIs) is crucial for optimizing outcomes. Here, we developed a multimodal fusion (MMF) system integrating CT-derived deep learning features and clinical data to predict overall survival (OS) and progression-free survival (PFS). Using retrospective multicenter data (n = 859), the MMF combining an ensemble deep learning (Ensemble-DL) model with clinical variables achieved strong external validation performance (C-index: OS = 0.74, PFS = 0.69), outperforming radiomics (29.8% OS improvement), mRECIST (27.6% OS improvement), clinical benchmarks (C-index: OS = 0.67, p = 0.0011; PFS = 0.65, p = 0.033), and Ensemble-DL (C-index: OS = 0.69, p = 0.0028; PFS = 0.66, p = 0.044). The MMF system effectively stratified patients across clinical subgroups and demonstrated interpretability through activation maps and radiomic correlations. Differential gene expression analysis revealed enrichment of the PI3K/Akt pathway in patients identified by the MMF system. The MMF system provides an interpretable, clinically applicable approach to guide personalized ICI treatment in unresectable HCC.
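The C-index values reported above measure how well predicted risk ranks observed survival. A minimal O(n²) sketch of Harrell's concordance index (a reference implementation for illustration, not the authors' code):

```python
def c_index(risk, time, event):
    """Harrell's concordance index. A pair (i, j) is comparable when
    patient i fails before time j is reached (time[i] < time[j] and
    event[i] observed); it is concordant when the earlier-failing
    patient was assigned the higher predicted risk."""
    concordant, comparable = 0.0, 0
    n = len(risk)
    for i in range(n):
        for j in range(n):
            if time[i] < time[j] and event[i]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5  # ties in risk count half
    return concordant / comparable
```

A C-index of 0.5 is chance-level ranking, so the reported OS value of 0.74 indicates a clearly informative, though imperfect, ordering of patients.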

FDTooth: Intraoral Photographs and CBCT Images for Fenestration and Dehiscence Detection.

Liu K, Elbatel M, Chu G, Shan Z, Sum FHKMH, Hung KF, Zhang C, Li X, Yang Y

PubMed · Jun 14 2025
Fenestration and dehiscence (FD) pose significant challenges in dental treatments as they adversely affect oral health. Although cone-beam computed tomography (CBCT) provides precise diagnostics, its extensive time requirements and radiation exposure limit its routine use for monitoring. Currently, there is no public dataset that combines intraoral photographs and corresponding CBCT images; this limits the development of deep learning algorithms for the automated detection of FD and other potential diseases. In this paper, we present FDTooth, a dataset that includes both intraoral photographs and CBCT images of 241 patients aged between 9 and 55 years. FDTooth contains 1,800 precise bounding boxes annotated on intraoral photographs, with gold-standard ground truth extracted from CBCT. We developed a baseline model for automated FD detection in intraoral photographs. The developed dataset and model can serve as valuable resources for research on interdisciplinary dental diagnostics, offering clinicians an efficient, non-invasive method for early FD screening.
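The abstract does not detail how the baseline detector is scored against the annotated bounding boxes; matching predictions to ground-truth boxes by intersection-over-union (IoU) is the standard convention in object detection. A minimal sketch:

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0
```

A predicted box is then typically counted as a true positive when its IoU with an annotated FD box exceeds a threshold such as 0.5.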

Optimizing stroke detection with genetic algorithm-based feature selection in deep learning models.

Nayak GS, Mallick PK, Sahu DP, Kathi A, Reddy R, Viyyapu J, Pabbisetti N, Udayana SP, Sanapathi H

PubMed · Jun 14 2025
Brain stroke is a leading cause of disability and mortality worldwide, necessitating the development of accurate and efficient diagnostic models. In this study, we explore the integration of Genetic Algorithm (GA)-based feature selection with three state-of-the-art deep learning architectures (InceptionV3, VGG19, and MobileNetV2) to enhance stroke detection from neuroimaging data. GA is employed to optimize feature selection, reducing redundancy and improving model performance. The selected features are subsequently fed into the respective deep learning models for classification. The dataset used in this study comprises neuroimages categorized into "Normal" and "Stroke" classes. Experimental results demonstrate that incorporating GA improves classification accuracy while reducing computational complexity. A comparative analysis of the three architectures reveals their effectiveness in stroke detection, with MobileNetV2 achieving the highest accuracy of 97.21%. Notably, the integration of Genetic Algorithms with MobileNetV2 for feature selection represents a novel contribution, setting this study apart from prior approaches that rely solely on traditional CNN pipelines. Owing to its lightweight design and low computational demands, MobileNetV2 also offers significant advantages for real-time clinical deployment, making it highly applicable for use in emergency care settings where rapid diagnosis is critical. Additionally, performance metrics such as precision, recall, F1-score, and Receiver Operating Characteristic (ROC) curves are evaluated to provide comprehensive insights into model efficacy. This research underscores the potential of genetic algorithm-driven optimization in enhancing deep learning-based medical image classification, paving the way for more efficient and reliable stroke diagnosis.
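The abstract does not specify the GA's chromosome encoding or fitness function, so the sketch below illustrates only the general mechanics under stated assumptions: binary chromosomes over feature columns, a toy correlation-based fitness with a per-feature penalty (the study would instead use classifier performance), binary tournament selection, one-point crossover, and bit-flip mutation.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    """Toy fitness: mean |correlation| of selected features with the
    label, minus a small penalty per selected feature (assumption)."""
    if mask.sum() == 0:
        return -1.0
    corrs = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in np.flatnonzero(mask)]
    return float(np.mean(corrs)) - 0.01 * mask.sum()

def ga_select(X, y, pop=20, gens=30, p_mut=0.05):
    """Evolve binary feature masks; return the fittest mask found."""
    n_feat = X.shape[1]
    population = rng.integers(0, 2, size=(pop, n_feat))
    for _ in range(gens):
        scores = np.array([fitness(m, X, y) for m in population])
        children = []
        for _ in range(pop):
            # binary tournament: keep the fitter of two random parents
            a, b = rng.integers(0, pop, 2)
            p1 = population[a] if scores[a] >= scores[b] else population[b]
            a, b = rng.integers(0, pop, 2)
            p2 = population[a] if scores[a] >= scores[b] else population[b]
            cut = rng.integers(1, n_feat)            # one-point crossover
            child = np.concatenate([p1[:cut], p2[cut:]])
            child ^= rng.random(n_feat) < p_mut      # bit-flip mutation
            children.append(child)
        population = np.array(children)
    scores = np.array([fitness(m, X, y) for m in population])
    return population[scores.argmax()]
```

With an informative feature planted in the data, the evolved mask reliably retains it while pruning noise columns, which is the redundancy-reduction effect the study exploits.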

Application of Machine Learning to Breast MR Imaging.

Lo Gullo R, van Veldhuizen V, Roa T, Kapetas P, Teuwen J, Pinker K

PubMed · Jun 14 2025
The demand for breast imaging services continues to grow, driven by expanding indications in breast cancer diagnosis and treatment. This increasing demand underscores the potential role of artificial intelligence (AI) to enhance workflow efficiency as well as to further unlock the abundant imaging data to achieve improvements along the breast cancer pathway. Although AI has made significant advancements in mammography and digital breast tomosynthesis, with commercially available computer-aided detection (CAD) systems widely used for breast cancer screening and detection, its adoption in breast MRI has been slower. This lag is primarily attributed to the inherent complexity of breast MRI examinations and, consequently, the more limited availability of large, well-annotated public breast MRI datasets. Despite these challenges, interest in AI implementation in breast MRI remains strong, fueled by the expanding use and indications for breast MRI. This article explores the implementation of AI in breast MRI across the breast cancer care pathway, highlighting its potential to revolutionize the way we detect and manage breast cancer. By addressing current challenges and examining emerging AI applications, we aim to provide a comprehensive overview of how AI is reshaping breast MRI and improving outcomes for patients.

A review: Lightweight architecture model in deep learning approach for lung disease identification.

Maharani DA, Utaminingrum F, Husnina DNN, Sukmaningrum B, Rahmania FN, Handani F, Chasanah HN, Arrahman A, Febrianto F

PubMed · Jun 14 2025
Lung disease is one of the leading causes of death worldwide, so early detection is a critical step toward more effective treatment. Lung diseases can be classified from medical image data such as X-ray or CT scans. Deep learning methods have been widely used to recognize complex patterns in medical images, but they typically require large, varied datasets and substantial computing resources. Lightweight deep learning architectures address these constraints, offering a more efficient solution in terms of parameter count and computation time, and can run on devices with low-specification processors, such as mobile phones. This article presents a comprehensive review of 23 research studies published between 2020 and 2025, focusing on various lightweight architectures and optimization techniques aimed at improving the accuracy of lung disease detection. The results show that these models significantly reduce parameter counts, yielding faster computation while maintaining accuracy competitive with traditional deep learning architectures. Among the reviewed work, SqueezeNet applied to public COVID-19 datasets is the best base architecture, achieving high accuracy with only 570 thousand parameters. By contrast, UNet (31.07 million parameters) and SegNet (29.45 million parameters), trained on CT images from the Italian Society of Medical and Interventional Radiology and Radiopaedia, are far less efficient. Among combined methods, EfficientNetV2 with an Extreme Learning Machine (ELM) achieves the highest accuracy at 98.20% while significantly reducing the parameter count. The worst performance is shown by VGG and UNet, with accuracy decreasing from 91.05% to 87% and the number of parameters increasing. It can be concluded that lightweight architectures enable fast, efficient medical image classification for lung disease diagnosis on devices with limited specifications.

FFLUNet: Feature Fused Lightweight UNet for brain tumor segmentation.

Kundu S, Dutta S, Mukhopadhyay J, Chakravorty N

PubMed · Jun 14 2025
Brain tumors, particularly glioblastoma multiforme, are considered one of the most threatening types of tumors in neuro-oncology. Segmenting brain tumors is a crucial part of medical imaging. It plays a key role in diagnosing conditions, planning treatments, and keeping track of patients' progress. This paper presents a novel lightweight deep convolutional neural network (CNN) model specifically designed for accurate and efficient brain tumor segmentation from magnetic resonance imaging (MRI) scans. Our model leverages a streamlined architecture that reduces computational complexity while maintaining high segmentation accuracy. We have introduced several novel approaches, including optimized convolutional layers that capture both local and global features with minimal parameters. A layerwise adaptive weighting feature fusion technique is implemented that enhances comprehensive feature representation. By incorporating shifted windowing, the model achieves better generalization across data variations. Dynamic weighting is introduced in skip connections that allows backpropagation to determine the ideal balance between semantic and positional features. To evaluate our approach, we conducted experiments on publicly available MRI datasets and compared our model against state-of-the-art segmentation methods. Our lightweight model has an efficient architecture with 1.45 million parameters - 95% fewer than nnUNet (30.78M), 91% fewer than standard UNet (16.21M), and 85% fewer than a lightweight hybrid CNN-transformer network (Liu et al., 2024) (9.9M). Coupled with a 4.9× faster GPU inference time (0.904 ± 0.002 s vs. nnUNet's 4.416 ± 0.004 s), the design enables real-time deployment on resource-constrained devices while maintaining competitive segmentation accuracy. Code is available at: FFLUNet.
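The exact formulation of FFLUNet's dynamically weighted skip connection is not given in the abstract; one plausible minimal sketch (an assumption, not the paper's implementation) is a learnable scalar gate squashed through a sigmoid, which lets backpropagation find the balance between encoder (positional) and decoder (semantic) features:

```python
import numpy as np

def fuse_skip(encoder_feat, decoder_feat, alpha):
    """Hypothetical dynamically weighted skip connection: a learnable
    scalar `alpha` is passed through a sigmoid so the gate stays in
    (0, 1), then blends the encoder and decoder feature maps."""
    w = 1.0 / (1.0 + np.exp(-alpha))  # sigmoid gate
    return w * encoder_feat + (1.0 - w) * decoder_feat
```

In a real network `alpha` would be a trainable parameter (one per skip level, or per channel) updated by gradient descent along with the convolution weights.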

Utility of Thin-slice Single-shot T2-weighted MR Imaging with Deep Learning Reconstruction as a Protocol for Evaluating Pancreatic Cystic Lesions.

Ozaki K, Hasegawa H, Kwon J, Katsumata Y, Yoneyama M, Ishida S, Iyoda T, Sakamoto M, Aramaki S, Tanahashi Y, Goshima S

PubMed · Jun 14 2025
To assess the effects of industry-developed deep learning reconstruction with super resolution (DLR-SR) on single-shot turbo spin-echo (SshTSE) images with a thickness of 2 mm with DLR (SshTSE<sup>2mm</sup>) relative to those of images with a thickness of 5 mm with DLR (SshTSE<sup>5mm</sup>) in patients with pancreatic cystic lesions. Thirty consecutive patients who underwent abdominal MRI examinations because of pancreatic cystic lesions under observation between June 2024 and July 2024 were enrolled. We qualitatively and quantitatively evaluated the image quality of SshTSE<sup>2mm</sup> and SshTSE<sup>5mm</sup> with and without DLR-SR. The SNRs of the pancreas, spleen, paraspinal muscle, peripancreatic fat, and pancreatic cystic lesions of SshTSE<sup>2mm</sup> with and without DLR-SR did not decrease compared with those of SshTSE<sup>5mm</sup> with and without DLR-SR. There were no significant differences in the contrast-to-noise ratios (CNRs) of the pancreas to cystic lesions and fat between the four image types. SshTSE<sup>2mm</sup> with DLR-SR had the highest image quality in terms of pancreas edge sharpness, perceived coarseness, pancreatic duct clarity, noise, artifacts, overall image quality, and diagnostic confidence for cystic lesions, followed by SshTSE<sup>2mm</sup> without DLR-SR and SshTSE<sup>5mm</sup> with and without DLR-SR (P < 0.0001). SshTSE<sup>2mm</sup> with DLR-SR images had better quality than the other images without decreased SNRs and CNRs. Thin-slice SshTSE with DLR-SR may be feasible and clinically useful for evaluating patients with pancreatic cystic lesions.
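SNR and CNR conventions vary between MRI studies; the sketch below uses one common convention (ROI mean over ROI standard deviation, and mean-signal difference over pooled noise), which is an assumption rather than this paper's exact definition:

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio of a region of interest: mean signal
    divided by the standard deviation within the ROI."""
    return float(roi.mean() / roi.std())

def cnr(roi_a, roi_b):
    """Contrast-to-noise ratio between two tissues: absolute
    difference of mean signals over the pooled noise estimate."""
    noise = np.sqrt((roi_a.std() ** 2 + roi_b.std() ** 2) / 2.0)
    return float(abs(roi_a.mean() - roi_b.mean()) / noise)
```

ROIs would be drawn on, e.g., pancreas and cystic lesion, and the resulting values compared across the four acquisition/reconstruction combinations.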
