Page 1 of 42411 results

Deep learning for differential diagnosis of parotid tumors based on 2.5D magnetic resonance imaging.

Mai W, Fan X, Zhang L, Li J, Chen L, Hua X, Zhang D, Li H, Cai M, Shi C, Liu X

PubMed · Dec 1 2025
Accurate preoperative diagnosis of parotid gland tumors (PGTs) is crucial for surgical planning, since malignant tumors require more extensive excision. Although fine-needle aspiration biopsy is the diagnostic gold standard, its sensitivity in detecting malignancies is limited. While deep learning (DL) models based on magnetic resonance imaging (MRI) are common in medicine, they are less studied for parotid gland tumors. This study used a 2.5D imaging approach (incorporating inter-slice information) to train a DL model to differentiate between benign and malignant PGTs. This retrospective study included 122 parotid tumor patients, using MRI and clinical features to build predictive models. In the traditional model, univariate analysis identified statistically significant features, which were then used in multivariate logistic regression to determine independent predictors. The model was built using four-fold cross-validation. The deep learning model was trained using 2D and 2.5D imaging approaches, with a transformer-based architecture employed for transfer learning. The model's performance was evaluated using the area under the receiver operating characteristic curve (AUC) and confusion matrix metrics. In the traditional model, boundary and peritumoral invasion were identified as independent predictors for PGTs, and the model was constructed based on these features. The model achieved an AUC of 0.79 but demonstrated low sensitivity (0.54). In contrast, the DL model based on 2.5D T2 fat-suppressed images showed superior performance, with an AUC of 0.86 and a sensitivity of 0.78. The 2.5D imaging technique, when integrated with a transformer-based transfer learning model, demonstrates significant efficacy in differentiating between benign and malignant PGTs.
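A 2.5D approach like the one described typically feeds each MRI slice to the network together with its neighboring slices stacked as extra channels, so the model sees inter-slice context without full 3D convolution. A minimal numpy sketch of that input construction (the function name, context width, and edge-clamping policy are illustrative assumptions, not details from the paper):

```python
import numpy as np

def make_2_5d_input(volume, center_idx, n_context=1):
    """Stack a center slice with its neighbors along a channel axis.

    volume: (num_slices, H, W) array. Indices beyond the volume are
    clamped to the first/last slice so the output always has
    2*n_context + 1 channels.
    """
    num_slices = volume.shape[0]
    idxs = [min(max(center_idx + d, 0), num_slices - 1)
            for d in range(-n_context, n_context + 1)]
    return np.stack([volume[i] for i in idxs], axis=0)
```

Each stacked array can then be passed to a 2D backbone exactly like a multi-channel image.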

TFKT V2: task-focused knowledge transfer from natural images for computed tomography perceptual image quality assessment.

Rifa KR, Ahamed MA, Zhang J, Imran A

PubMed · Sep 1 2025
The accurate assessment of computed tomography (CT) image quality is crucial for ensuring diagnostic reliability while minimizing radiation dose. Radiologists' evaluations are time-consuming and labor-intensive. Existing automated approaches often require large CT datasets with predefined image quality assessment (IQA) scores, which do not align well with clinical evaluations. We aim to develop a reference-free, automated method for CT IQA that closely reflects radiologists' evaluations, reducing the dependency on large annotated datasets. We propose Task-Focused Knowledge Transfer (TFKT), a deep learning-based IQA method leveraging knowledge transfer from task-similar natural image datasets. TFKT incorporates a hybrid convolutional neural network-transformer model, enabling accurate quality predictions by learning from natural image distortions with human-annotated mean opinion scores. The model is pre-trained on natural image datasets and fine-tuned on low-dose computed tomography perceptual image quality assessment data to ensure task-specific adaptability. Extensive evaluations demonstrate that the proposed TFKT method effectively predicts IQA scores aligned with radiologists' assessments on in-domain datasets and generalizes well to out-of-domain clinical pediatric CT exams. The model achieves robust performance without requiring high-dose reference images. Our model is capable of assessing the quality of approximately 30 CT image slices per second. The proposed TFKT approach provides a scalable, accurate, and reference-free solution for CT IQA. The model bridges the gap between traditional and deep learning-based IQA, offering clinically relevant and computationally efficient assessments applicable to real-world clinical settings.
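Alignment between predicted quality scores and radiologist opinion scores is conventionally quantified with correlation coefficients in the IQA literature; the abstract does not name its metric, so the Pearson linear correlation below is an illustrative assumption:

```python
import numpy as np

def pearson_corr(pred, target):
    """Pearson linear correlation between predicted quality scores
    and reference (e.g., radiologist) opinion scores."""
    pred = np.asarray(pred, dtype=float)
    target = np.asarray(target, dtype=float)
    pred = pred - pred.mean()
    target = target - target.mean()
    denom = np.sqrt((pred ** 2).sum() * (target ** 2).sum())
    return float((pred * target).sum() / denom)
```

A value near 1.0 indicates the model's ranking of image quality tracks the human ratings linearly.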

MED-NCA: Bio-inspired medical image segmentation.

Kalkhof J, Ihm N, Köhler T, Gregori B, Mukhopadhyay A

PubMed · Jul 1 2025
The reliance on computationally intensive U-Net and Transformer architectures significantly limits their accessibility in low-resource environments, creating a technological divide that hinders global healthcare equity, especially in medical diagnostics and treatment planning. This divide is most pronounced in low- and middle-income countries, primary care facilities, and conflict zones. We introduce MED-NCA, Neural Cellular Automata (NCA)-based segmentation models characterized by their low parameter count, robust performance, and inherent quality control mechanisms. These features drastically lower the barriers to high-quality medical image analysis in resource-constrained settings, allowing the models to run efficiently on hardware as minimal as a Raspberry Pi or a smartphone. Building upon the foundation laid by MED-NCA, this paper extends its validation across eight distinct anatomies, including the hippocampus and prostate (MRI, 3D), liver and spleen (CT, 3D), heart and lung (X-ray, 2D), breast tumor (Ultrasound, 2D), and skin lesion (Image, 2D). Our comprehensive evaluation demonstrates the broad applicability and effectiveness of MED-NCA in various medical imaging contexts, matching the performance of U-Net models two orders of magnitude larger. Additionally, we introduce NCA-VIS, a visualization tool that gives insight into the inference process of MED-NCA and allows users to test its robustness by applying various artifacts. This combination of efficiency, broad applicability, and enhanced interpretability makes MED-NCA a transformative solution for medical image analysis, fostering greater global healthcare equity by making advanced diagnostics accessible in even the most resource-limited environments.
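A Neural Cellular Automaton segments by iterating a tiny local update rule over a grid of cell states, which is why the parameter count stays so small. The sketch below shows one generic NCA step in numpy (Sobel perception, a two-layer MLP, stochastic firing); the specific layer sizes and fire rate are assumptions, not the MED-NCA architecture:

```python
import numpy as np

def nca_step(state, w1, b1, w2, rng, fire_rate=0.5):
    """One Neural Cellular Automaton update on a (H, W, C) state grid.

    Perception: each cell sees its own channels plus Sobel-filtered
    neighborhood gradients; a small two-layer MLP (w1, b1, w2) produces
    a residual update, applied stochastically per cell.
    """
    sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    sobel_y = sobel_x.T
    H, W, C = state.shape
    padded = np.pad(state, ((1, 1), (1, 1), (0, 0)), mode="constant")
    gx = np.zeros_like(state)
    gy = np.zeros_like(state)
    for i in range(3):            # 3x3 depthwise convolution, unrolled
        for j in range(3):
            patch = padded[i:i + H, j:j + W, :]
            gx += sobel_x[i, j] * patch
            gy += sobel_y[i, j] * patch
    perception = np.concatenate([state, gx, gy], axis=-1)   # (H, W, 3C)
    hidden = np.maximum(perception @ w1 + b1, 0.0)          # ReLU
    update = hidden @ w2                                    # (H, W, C)
    mask = rng.random((H, W, 1)) < fire_rate                # stochastic firing
    return state + update * mask
```

Because the same few weights are reused at every cell and every step, the whole model fits comfortably on a Raspberry Pi-class device.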

Adoption of artificial intelligence in healthcare: survey of health system priorities, successes, and challenges.

Poon EG, Lemak CH, Rojas JC, Guptill J, Classen D

PubMed · Jul 1 2025
The US healthcare system faces significant challenges, including clinician burnout, operational inefficiencies, and concerns about patient safety. Artificial intelligence (AI), particularly generative AI, has the potential to address these challenges, but its adoption, effectiveness, and barriers to implementation are not well understood. To evaluate the current state of AI adoption in US healthcare systems, we assess successes and barriers to implementation during the early generative AI era. This cross-sectional survey was conducted in Fall 2024 and included 67 health systems, members of the Scottsdale Institute, a collaborative of US non-profit healthcare organizations. Forty-three health systems completed the survey (64% response rate). Respondents provided data on the deployment status and perceived success of 37 AI use cases across 10 categories. The primary outcomes were the extent of AI use case development, piloting, or deployment; the degree of reported success for AI use cases; and the most significant barriers to adoption. Across the 43 responding health systems, AI adoption and perceptions of success varied significantly. Ambient Notes, a generative AI tool for clinical documentation, was the only use case with 100% of respondents reporting adoption activities, and 53% reported a high degree of success with using AI for clinical documentation. Imaging and radiology emerged as the most widely deployed clinical AI use case, with 90% of organizations reporting at least partial deployment, although successes with diagnostic use cases were limited. Similarly, many organizations have deployed AI for clinical risk stratification such as early sepsis detection, but only 38% report high success in this area. Immature AI tools were identified as a significant barrier to adoption, cited by 77% of respondents, followed by financial concerns (47%) and regulatory uncertainty (40%).
Ambient Notes is rapidly advancing in US healthcare systems and demonstrating early success. Other AI use cases show varying degrees of adoption and success, constrained by barriers such as immature AI tools, financial concerns, and regulatory uncertainty. Addressing these challenges through robust evaluations, shared strategies, and governance models will be essential to ensure effective integration and adoption of AI into healthcare practice.

[A deep learning method for differentiating nasopharyngeal carcinoma and lymphoma based on MRI].

Tang Y, Hua H, Wang Y, Tao Z

PubMed · Jul 1 2025
<b>Objective:</b>To develop a deep learning (DL) model based on conventional MRI for automatic segmentation and differential diagnosis of nasopharyngeal carcinoma (NPC) and nasopharyngeal lymphoma (NPL). <b>Methods:</b>The retrospective study included 142 patients with NPL and 292 patients with NPC who underwent conventional MRI at Renmin Hospital of Wuhan University from June 2012 to February 2023. MRI from 80 patients were manually segmented to train the segmentation model. The automatically segmented regions of interest (ROIs) formed four datasets: T1-weighted images (T1WI), T2-weighted images (T2WI), T1-weighted contrast-enhanced images (T1CE), and a combination of T1WI and T2WI. The ImageNet-pretrained ResNet101 model was fine-tuned for the classification task. Statistical analysis was conducted using SPSS 22.0. The Dice coefficient loss was used to evaluate performance on the segmentation task. Diagnostic performance was assessed using receiver operating characteristic (ROC) curves. Gradient-weighted class activation mapping (Grad-CAM) was used to visualize the model's decision-making. <b>Results:</b>The Dice score of the segmentation model reached 0.876 in the testing set. The AUC values of the classification models in the testing set were as follows: T1WI: 0.78 (95% <i>CI</i> 0.67-0.81), T2WI: 0.75 (95% <i>CI</i> 0.72-0.86), T1CE: 0.84 (95% <i>CI</i> 0.76-0.87), and T1WI+T2WI: 0.93 (95% <i>CI</i> 0.85-0.94). The AUC values for the two clinicians were 0.77 (95% <i>CI</i> 0.72-0.82) for the junior and 0.84 (95% <i>CI</i> 0.80-0.89) for the senior. Grad-CAM analysis revealed that the central region of the tumor was highly correlated with the model's classification decisions, while the correlation was lower in the peripheral regions. <b>Conclusion:</b>The deep learning model performed well in differentiating NPC from NPL based on conventional MRI. The T1WI+T2WI combination model exhibited the best performance.
The model can assist in the early diagnosis of NPC and NPL, facilitating timely and standardized treatment, which may improve patient prognosis.
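The Dice coefficient mentioned in the Methods measures overlap between a predicted mask and the ground truth; its soft-loss form is standard in segmentation training and evaluation. A minimal numpy sketch (the epsilon smoothing term is a common convention, assumed here rather than taken from the paper):

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss: 1 - Dice coefficient between a predicted
    probability mask and a binary ground-truth mask. 0 means perfect
    overlap, values near 1 mean no overlap."""
    pred = np.asarray(pred, dtype=float).ravel()
    target = np.asarray(target, dtype=float).ravel()
    intersection = (pred * target).sum()
    return 1.0 - (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

A reported Dice score of 0.876 corresponds to a Dice loss of about 0.124 on the test set.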

Improve robustness to mismatched sampling rate: An alternating deep low-rank approach for exponential function reconstruction and its biomedical magnetic resonance applications.

Huang Y, Wang Z, Zhang X, Cao J, Tu Z, Lin M, Li L, Jiang X, Guo D, Qu X

PubMed · Jul 1 2025
Undersampling accelerates signal acquisition at the expense of introducing artifacts. Removing these artifacts is a fundamental problem in signal processing, a task also called signal reconstruction. By modeling signals as superimposed exponential functions, deep learning has achieved fast and high-fidelity signal reconstruction by training a mapping from the undersampled exponentials to the fully sampled ones. However, a mismatch between the training and target data, such as in undersampling rates (25% vs. 50%), anatomical region (knee vs. brain), or contrast configuration (PDw vs. T<sub>2</sub>w), will heavily compromise the reconstruction. To overcome this limitation, we propose Alternating Deep Low-Rank (ADLR), which combines deep learning solvers and classic optimization solvers. Experimental validation on the reconstruction of synthetic and real-world biomedical magnetic resonance signals demonstrates that ADLR can effectively alleviate the mismatch issue and achieve lower reconstruction errors than state-of-the-art methods.
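The classic optimization side of such a method exploits the fact that a sum of R exponentials yields a Hankel matrix of rank at most R. A minimal numpy sketch of that low-rank projection step, via truncated SVD followed by anti-diagonal averaging (the ADLR alternation itself is not reproduced here; this is only the generic low-rank building block):

```python
import numpy as np

def hankel(signal, rows):
    """Hankel matrix of a 1D signal: entry (i, j) is signal[i + j]."""
    n = len(signal)
    cols = n - rows + 1
    return np.array([signal[i:i + cols] for i in range(rows)])

def low_rank_project(signal, rows, rank):
    """Truncate the Hankel SVD to `rank`, then average anti-diagonals
    back into a 1D signal (the standard Cadzow-style projection)."""
    H = hankel(signal, rows)
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    H_lr = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    n = len(signal)
    out = np.zeros(n)
    counts = np.zeros(n)
    for i in range(H_lr.shape[0]):
        for j in range(H_lr.shape[1]):
            out[i + j] += H_lr[i, j]
            counts[i + j] += 1
    return out / counts
```

Alternating this projection with a learned denoising step is the general pattern that a deep-plus-classic hybrid like ADLR follows.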

Use of Artificial Intelligence and Machine Learning in Critical Care Ultrasound.

Peck M, Conway H

PubMed · Jul 1 2025
This article explores the transformative potential of artificial intelligence (AI) in critical care ultrasound. AI technologies, notably deep learning and convolutional neural networks, now assist in image acquisition, interpretation, and quality assessment, streamlining workflow and reducing operator variability. By automating routine tasks, AI enhances diagnostic accuracy and bridges training gaps, potentially democratizing advanced ultrasound techniques. Furthermore, AI's integration into tele-ultrasound systems shows promise in extending expert-level diagnostics to underserved areas, significantly broadening access to quality care. The article highlights the ongoing need for explainable AI systems to gain clinician trust and facilitate broader adoption.

SegQC: a segmentation network-based framework for multi-metric segmentation quality control and segmentation error detection in volumetric medical images.

Specktor-Fadida B, Ben-Sira L, Ben-Bashat D, Joskowicz L

PubMed · Jul 1 2025
Quality control (QC) of structures segmentation in volumetric medical images is important for identifying segmentation errors in clinical practice and for facilitating model development by enhancing network performance in semi-supervised and active learning scenarios. This paper introduces SegQC, a novel framework for segmentation quality estimation and segmentation error detection. SegQC computes an estimate measure of the quality of a segmentation in volumetric scans and in their individual slices and identifies possible segmentation error regions within a slice. The key components of SegQC include: 1) SegQCNet, a deep network that inputs a scan and its segmentation mask and outputs segmentation error probabilities for each voxel in the scan; 2) three new segmentation quality metrics computed from the segmentation error probabilities; 3) a new method for detecting possible segmentation errors in scan slices computed from the segmentation error probabilities. We introduce a novel evaluation scheme to measure segmentation error discrepancies based on an expert radiologist's corrections of automatically produced segmentations that yields smaller observer variability and is closer to actual segmentation errors. We demonstrate SegQC on three fetal structures in 198 fetal MRI scans - fetal brain, fetal body and the placenta. To assess the benefits of SegQC, we compare it to the unsupervised Test Time Augmentation (TTA)-based QC and to supervised autoencoder (AE)-based QC. Our studies indicate that SegQC outperforms TTA-based quality estimation for whole scans and individual slices in terms of Pearson correlation and MAE for fetal body and fetal brain structures segmentation as well as for volumetric overlap metrics estimation of the placenta structure. Compared to both unsupervised TTA and supervised AE methods, SegQC achieves lower MAE for both 3D and 2D Dice estimates and higher Pearson correlation for volumetric Dice. 
Our segmentation error detection method achieved recall and precision rates of 0.77 and 0.48 for fetal body, and 0.74 and 0.55 for fetal brain segmentation error detection, respectively. Rankings derived from the estimated metrics surpass rankings based on entropy for TTA and on the sum for SegQCNet estimations. SegQC provides high-quality metrics estimation for both 2D and 3D medical images as well as error localization within slices, offering important improvements to segmentation QC.
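Once a network such as SegQCNet outputs a voxelwise segmentation-error probability map, per-slice quality scores reduce to simple aggregations over that map. A minimal numpy sketch (the two aggregations and the 0.5 threshold are illustrative assumptions, not SegQC's published metrics):

```python
import numpy as np

def slice_error_scores(error_probs, threshold=0.5):
    """Per-slice scores from a (D, H, W) map of voxelwise
    segmentation-error probabilities: the mean probability and the
    fraction of voxels flagged as likely errors."""
    flat = error_probs.reshape(error_probs.shape[0], -1)
    mean_prob = flat.mean(axis=1)
    error_frac = (flat > threshold).mean(axis=1)
    return mean_prob, error_frac
```

Sorting slices by such scores gives the kind of ranking the abstract compares against entropy-based baselines.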

CALIMAR-GAN: An unpaired mask-guided attention network for metal artifact reduction in CT scans.

Scardigno RM, Brunetti A, Marvulli PM, Carli R, Dotoli M, Bevilacqua V, Buongiorno D

PubMed · Jul 1 2025
High-quality computed tomography (CT) scans are essential for accurate diagnostic and therapeutic decisions, but the presence of metal objects within the body can produce distortions that lower image quality. Deep learning (DL) approaches using image-to-image translation for metal artifact reduction (MAR) show promise over traditional methods but often introduce secondary artifacts. Additionally, most rely on paired simulated data due to limited availability of real paired clinical data, restricting evaluation on clinical scans to qualitative analysis. This work presents CALIMAR-GAN, a generative adversarial network (GAN) model that employs a guided attention mechanism and the linear interpolation algorithm to reduce artifacts using unpaired simulated and clinical data for targeted artifact reduction. Quantitative evaluations on simulated images demonstrated superior performance, achieving a PSNR of 31.7, SSIM of 0.877, and Fréchet inception distance (FID) of 22.1, outperforming state-of-the-art methods. On real clinical images, CALIMAR-GAN achieved the lowest FID (32.7), validated as a valuable complement to qualitative assessments through correlation with pixel-based metrics (r=-0.797 with PSNR, p<0.01; r=-0.767 with MS-SSIM, p<0.01). This work advances DL-based artifact reduction into clinical practice with high-fidelity reconstructions that enhance diagnostic accuracy and therapeutic outcomes. Code is available at https://github.com/roberto722/calimar-gan.
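The linear interpolation algorithm that CALIMAR-GAN builds on is the classic metal artifact reduction baseline: detector bins corrupted by the metal trace in each sinogram row are replaced by linearly interpolating from the nearest clean bins. A minimal numpy sketch of that step alone (the function name is an assumption; the GAN and attention components are not reproduced):

```python
import numpy as np

def li_inpaint_sinogram(sinogram, metal_trace):
    """Linear-interpolation MAR: for each projection row, replace
    detector bins inside the metal trace by interpolating between the
    nearest unaffected bins on either side."""
    out = sinogram.astype(float).copy()
    n_bins = sinogram.shape[1]
    bins = np.arange(n_bins)
    for r in range(sinogram.shape[0]):
        bad = metal_trace[r].astype(bool)
        if bad.any() and not bad.all():
            out[r, bad] = np.interp(bins[bad], bins[~bad], out[r, ~bad])
    return out
```

Reconstructing from the inpainted sinogram removes the bright streaks but blurs detail near the metal, which is the gap the learned, mask-guided refinement is meant to close.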

Rethinking boundary detection in deep learning-based medical image segmentation.

Lin Y, Zhang D, Fang X, Chen Y, Cheng KT, Chen H

PubMed · Jul 1 2025
Medical image segmentation is a pivotal task within the realms of medical image analysis and computer vision. While current methods have shown promise in accurately segmenting major regions of interest, the precise segmentation of boundary areas remains challenging. In this study, we propose a novel network architecture named CTO, which combines Convolutional Neural Networks (CNNs), Vision Transformer (ViT) models, and explicit edge detection operators to tackle this challenge. CTO surpasses existing methods in terms of segmentation accuracy and strikes a better balance between accuracy and efficiency, without the need for additional data inputs or label injections. Specifically, CTO adheres to the canonical encoder-decoder network paradigm, with a dual-stream encoder network comprising a mainstream CNN stream for capturing local features and an auxiliary StitchViT stream for integrating long-range dependencies. Furthermore, to enhance the model's ability to learn boundary areas, we introduce a boundary-guided decoder network that employs binary boundary masks generated by dedicated edge detection operators to provide explicit guidance during the decoding process. We validate the performance of CTO through extensive experiments conducted on seven challenging medical image segmentation datasets, namely ISIC 2016, PH2, ISIC 2018, CoNIC, LiTS17, BraTS, and BTCV. Our experimental results unequivocally demonstrate that CTO achieves state-of-the-art accuracy on these datasets while maintaining competitive model complexity. The codes have been released at: CTO.