
Optimizing breast lesions diagnosis and decision-making with a deep learning fusion model integrating ultrasound and mammography: a dual-center retrospective study.

Xu Z, Zhong S, Gao Y, Huo J, Xu W, Huang W, Huang X, Zhang C, Zhou J, Dan Q, Li L, Jiang Z, Lang T, Xu S, Lu J, Wen G, Zhang Y, Li Y

PubMed · May 14 2025
This study aimed to develop a BI-RADS network (DL-UM) that integrates ultrasound (US) and mammography (MG) images and to explore its performance in improving breast lesion diagnosis and management when collaborating with radiologists, particularly in cases with discordant US and MG Breast Imaging Reporting and Data System (BI-RADS) classifications. We retrospectively collected image data from 1283 women with breast lesions who underwent both US and MG within one month at two medical centres and categorised them into concordant and discordant BI-RADS classification subgroups. We developed a DL-UM network integrating US and MG images, as well as DL networks using US (DL-U) or MG (DL-M) alone. The performance of the DL-UM network for breast lesion diagnosis was evaluated using ROC curves and compared to the DL-U and DL-M networks in the external testing dataset. The diagnostic performance of radiologists with different levels of experience assisted by the DL-UM network was also evaluated. In the external testing dataset, DL-UM outperformed DL-M in sensitivity (0.962 vs. 0.833, P = 0.016) and DL-U in specificity (0.667 vs. 0.526, P = 0.030). In the discordant BI-RADS classification subgroup, DL-UM achieved an AUC of 0.910. The diagnostic performance of four radiologists improved when collaborating with the DL-UM network: AUCs increased from 0.674-0.772 to 0.889-0.910, specificities from 52.1%-75.0% to 81.3%-87.5%, and unnecessary biopsies were reduced by 16.1%-24.6%, with the largest gains for junior radiologists. DL-UM outputs and heatmaps also enhanced radiologists' trust and improved interobserver agreement between US and MG, with the weighted kappa increasing from 0.048 to 0.713 (P < 0.05). The DL-UM network, integrating complementary US and MG features, assisted radiologists in improving breast lesion diagnosis and management, potentially reducing unnecessary biopsies.
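As a concrete illustration of the late-fusion design this abstract describes, here is a minimal PyTorch sketch of a dual-branch US + MG classifier: one encoder per modality, with features concatenated before a shared classification head. The class name DLUM, the ResNet-18 backbones, and all layer sizes are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class DLUM(nn.Module):
    """Hypothetical dual-branch US + MG fusion classifier (not the authors' code)."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Assumption: one ResNet-18 encoder per modality (pretrained weights omitted for brevity).
        self.us_encoder = models.resnet18(weights=None)
        self.mg_encoder = models.resnet18(weights=None)
        feat_dim = self.us_encoder.fc.in_features
        self.us_encoder.fc = nn.Identity()
        self.mg_encoder.fc = nn.Identity()
        # Late fusion: concatenate per-modality features, then classify.
        self.classifier = nn.Sequential(
            nn.Linear(2 * feat_dim, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, num_classes),
        )

    def forward(self, us_img: torch.Tensor, mg_img: torch.Tensor) -> torch.Tensor:
        f_us = self.us_encoder(us_img)   # (B, feat_dim)
        f_mg = self.mg_encoder(mg_img)   # (B, feat_dim)
        return self.classifier(torch.cat([f_us, f_mg], dim=1))

# Usage: logits = DLUM()(us_batch, mg_batch) with two 3-channel image tensors.
```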

Whole-body CT-to-PET synthesis using a customized transformer-enhanced GAN.

Xu B, Nie Z, He J, Li A, Wu T

PubMed · May 14 2025
Positron emission tomography with 2-deoxy-2-[fluorine-18]fluoro-D-glucose integrated with computed tomography (18F-FDG PET-CT) is a multi-modality medical imaging technique widely used for the screening and diagnosis of lesions and tumors, in which CT provides detailed anatomical structures while PET shows metabolic activity. Nevertheless, it has disadvantages such as long scanning times, high cost, and relatively high radiation doses.
Purpose: We propose a deep learning model for the whole-body CT-to-PET synthesis task, generating high-quality synthetic PET images that are comparable to real ones in both clinical relevance and diagnostic value.
Material: We collected 102 pairs of 3D CT and PET scans, which were sliced into 27,240 pairs of 2D CT and PET images (training: 21,855 pairs; validation: 2,810 pairs; testing: 2,575 pairs).
Methods: We propose CPGAN, a Transformer-enhanced Generative Adversarial Network (GAN) for the whole-body CT-to-PET synthesis task. The CPGAN model uses residual blocks and Fully Connected Transformer Residual (FCTR) blocks to capture both local features and global contextual information. A customized loss function incorporating structural consistency is designed to improve the quality of the synthesized PET images.
Results: Both quantitative and qualitative evaluations demonstrate the effectiveness of the CPGAN model. The mean and standard deviation of the NRMSE, PSNR, and SSIM values on the test set are (16.90 ± 12.27) × 10⁻⁴, 28.71 ± 2.67, and 0.926 ± 0.033, respectively, outperforming seven other state-of-the-art models. Three radiologists independently and blindly evaluated and gave subjective scores to 100 randomly chosen PET images (50 real and 50 synthetic). By the Wilcoxon signed-rank test, there were no statistically significant differences between the synthetic PET images and the real ones.
Conclusions: Despite the inherent limitations of CT images in directly reflecting the biological information of metabolic tissues, the CPGAN model effectively synthesizes convincing PET images from CT scans, which has potential for reducing reliance on actual PET-CT scans.
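The abstract names a customized loss with a structural-consistency term but does not spell it out. The sketch below shows one plausible form, combining an adversarial term, an L1 term, and a simplified (window-free) SSIM term; the weights lambda_l1 and lambda_ssim and the SSIM simplification are assumptions, not the paper's definition.

```python
import torch
import torch.nn.functional as F

def ssim_global(x: torch.Tensor, y: torch.Tensor,
                c1: float = 0.01 ** 2, c2: float = 0.03 ** 2) -> torch.Tensor:
    """Simplified per-image SSIM over the whole image (no sliding window)."""
    mu_x, mu_y = x.mean(dim=(1, 2, 3)), y.mean(dim=(1, 2, 3))
    var_x, var_y = x.var(dim=(1, 2, 3)), y.var(dim=(1, 2, 3))
    cov = ((x - mu_x[:, None, None, None]) * (y - mu_y[:, None, None, None])).mean(dim=(1, 2, 3))
    num = (2 * mu_x * mu_y + c1) * (2 * cov + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return (num / den).mean()

def generator_loss(d_fake_logits: torch.Tensor, fake_pet: torch.Tensor, real_pet: torch.Tensor,
                   lambda_l1: float = 100.0, lambda_ssim: float = 10.0) -> torch.Tensor:
    """Hypothetical CT-to-PET generator objective: adversarial + L1 + structural consistency."""
    adv = F.binary_cross_entropy_with_logits(d_fake_logits, torch.ones_like(d_fake_logits))
    l1 = F.l1_loss(fake_pet, real_pet)
    struct = 1.0 - ssim_global(fake_pet, real_pet)  # structural-consistency term
    return adv + lambda_l1 * l1 + lambda_ssim * struct
```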

DCSNet: A Lightweight Knowledge Distillation-Based Model with Explainable AI for Lung Cancer Diagnosis from Histopathological Images

Sadman Sakib Alif, Nasim Anzum Promise, Fiaz Al Abid, Aniqua Nusrat Zereen

arXiv preprint · May 14 2025
Lung cancer is a leading cause of cancer-related deaths globally, where early detection and accurate diagnosis are critical for improving survival rates. While deep learning, particularly convolutional neural networks (CNNs), has revolutionized medical image analysis by detecting subtle patterns indicative of early-stage lung cancer, its adoption faces challenges. These models are often computationally expensive and require significant resources, making them unsuitable for resource-constrained environments. Additionally, their lack of transparency hinders trust and broader adoption in sensitive fields like healthcare. Knowledge distillation addresses these challenges by transferring knowledge from large, complex models (teachers) to smaller, lightweight models (students). We propose a knowledge distillation-based approach for lung cancer detection, incorporating explainable AI (XAI) techniques to enhance model transparency. Eight CNNs, including ResNet50, EfficientNetB0, EfficientNetB3, and VGG16, are evaluated as teacher models. We developed and trained a lightweight student model, the Distilled Custom Student Network (DCSNet), using ResNet50 as the teacher. This approach not only ensures high diagnostic performance in resource-constrained settings but also addresses transparency concerns, facilitating the adoption of AI-driven diagnostic tools in healthcare.
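The distillation objective itself is standard and worth making concrete: the student is trained against both the teacher's temperature-softened output distribution and the ground-truth labels. The sketch below is the usual Hinton-style formulation; the temperature and mixing weight are placeholders, not values from the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      labels: torch.Tensor,
                      temperature: float = 4.0,
                      alpha: float = 0.7) -> torch.Tensor:
    """Hinton-style KD loss; temperature and alpha are assumed, not from the paper."""
    # Soft targets: match the student's softened distribution to the teacher's.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)  # rescale so gradients keep the same magnitude across temperatures
    # Hard targets: ordinary cross-entropy on the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```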

Clinical utility of ultrasound and MRI in rheumatoid arthritis: An expert review.

Kellner DA, Morris NT, Lee SM, Baker JF, Chu P, Ranganath VK, Kaeley GS, Yang HH

PubMed · May 14 2025
Musculoskeletal ultrasound (MSUS) and magnetic resonance imaging (MRI) are advanced imaging techniques that are increasingly important in the diagnosis and management of rheumatoid arthritis (RA) and have significantly enhanced the rheumatologist's ability to assess RA disease activity and progression. This review serves as a five-year update to our previous publication on the contemporary role of imaging in RA, emphasizing the continued importance of MSUS and MRI in clinical practice and their expanding utility. The review examines the role of MSUS in diagnosing RA, differentiating RA from mimickers, scoring systems and quality control measures, novel longitudinal approaches to disease monitoring, and patient populations that may benefit most from MSUS. It also examines the role of MRI in diagnosing pre-clinical and early RA, disease activity monitoring, research and clinical trials, and development of alternative scoring approaches utilizing artificial intelligence. Finally, the role of MRI in RA diagnosis and management is summarized, and selected practice points offer key tips for integrating MSUS and MRI into clinical practice.

[Radiosurgery of benign intracranial lesions: indications, results, and perspectives].

Danthez N, De Cournuaud C, Pistocchi S, Aureli V, Giammattei L, Hottinger AF, Schiappacasse L

PubMed · May 14 2025
Stereotactic radiosurgery (SRS) is a non-invasive technique that is transforming the management of benign intracranial lesions through its precision and preservation of healthy tissues. It is effective for meningiomas, trigeminal neuralgia (TN), pituitary adenomas, vestibular schwannomas, and arteriovenous malformations. SRS ensures high tumor control rates, particularly for Grade I meningiomas and vestibular schwannomas. For refractory TN, it achieves initial pain relief rates above 80%. The advent of technologies such as PET-MRI, hypofractionation, and artificial intelligence is further improving treatment precision, but challenges remain, including the management of late side effects and the standardization of practice.

A survey of deep-learning-based radiology report generation using multimodal inputs.

Wang X, Figueredo G, Li R, Zhang WE, Chen W, Chen X

PubMed · May 13 2025
Automatic radiology report generation can alleviate the workload for physicians and minimize regional disparities in medical resources, therefore becoming an important topic in the medical image analysis field. It is a challenging task, as the computational model needs to mimic physicians to obtain information from multi-modal input data (i.e., medical images, clinical information, medical knowledge, etc.), and produce comprehensive and accurate reports. Recently, numerous works have emerged to address this issue using deep-learning-based methods, such as transformers, contrastive learning, and knowledge-base construction. This survey summarizes the key techniques developed in the most recent works and proposes a general workflow for deep-learning-based report generation with five main components, including multi-modality data acquisition, data preparation, feature learning, feature fusion and interaction, and report generation. The state-of-the-art methods for each of these components are highlighted. Additionally, we summarize the latest developments in large model-based methods and model explainability, along with public datasets, evaluation methods, current challenges, and future directions in this field. We have also conducted a quantitative comparison between different methods in the same experimental setting. This is the most up-to-date survey that focuses on multi-modality inputs and data fusion for radiology report generation. The aim is to provide comprehensive and rich information for researchers interested in automatic clinical report generation and medical image analysis, especially when using multimodal inputs, and to assist them in developing new algorithms to advance the field.
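To make the surveyed workflow concrete, here is a minimal PyTorch skeleton covering components (3) through (5): visual feature learning, fusion/interaction via cross-attention, and autoregressive report generation. Components (1) and (2), data acquisition and preparation, happen upstream of the model. Every module choice and dimension here is an assumption for illustration (positional encodings are omitted for brevity), not a method prescribed by the survey.

```python
import torch
import torch.nn as nn

class ReportGenerator(nn.Module):
    """Illustrative skeleton of a multimodal report-generation model (all choices assumed)."""
    def __init__(self, vocab_size: int = 10000, d_model: int = 512):
        super().__init__()
        # (3) Feature learning: project precomputed 2048-d image patch features.
        self.visual_proj = nn.Linear(2048, d_model)
        # (4) Feature fusion/interaction: cross-attention via a transformer decoder.
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=3)
        self.embed = nn.Embedding(vocab_size, d_model)
        # (5) Report generation: token-by-token language-modelling head.
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, patch_feats: torch.Tensor, report_tokens: torch.Tensor) -> torch.Tensor:
        memory = self.visual_proj(patch_feats)   # (B, P, d_model) visual context
        tgt = self.embed(report_tokens)          # (B, T, d_model) report prefix
        # Causal mask so each token attends only to earlier report tokens.
        mask = nn.Transformer.generate_square_subsequent_mask(tgt.size(1))
        out = self.decoder(tgt, memory, tgt_mask=mask)
        return self.lm_head(out)                 # next-token logits
```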

A Deep Learning-Driven Framework for Inhalation Injury Grading Using Bronchoscopy Images

Yifan Li, Alan W Pang, Jo Woon Chong

arXiv preprint · May 13 2025
The clinical diagnosis and grading of inhalation injuries remain challenging due to the limitations of traditional methods, such as the Abbreviated Injury Score (AIS), which rely on subjective assessment and correlate weakly with clinical outcomes. This study introduces a novel deep learning-based framework for grading inhalation injuries from bronchoscopy images, using the duration of mechanical ventilation as an objective metric. To address the scarcity of medical imaging data, we propose Enhanced StarGAN, a generative model that integrates Patch Loss and SSIM Loss to improve the quality and clinical relevance of synthetic images. The augmented dataset generated by Enhanced StarGAN significantly improved classification performance when evaluated with a Swin Transformer, achieving an accuracy of 77.78%, an 11.11% improvement over the original dataset. Image quality was assessed using the Fréchet Inception Distance (FID), where Enhanced StarGAN achieved the lowest FID of 30.06, outperforming baseline models. Burn surgeons confirmed the realism and clinical relevance of the generated images, particularly the preservation of bronchial structures and color distribution. These results highlight the potential of Enhanced StarGAN for addressing data limitations and improving classification accuracy in inhalation injury grading.
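The abstract names a Patch Loss without defining it here; one common reading in GAN image synthesis is a PatchGAN-style critic that scores each local patch separately rather than the whole image. The sketch below shows that variant; the architecture and all hyperparameters are assumptions, not the authors' design.

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """PatchGAN-style critic: outputs a grid of real/fake logits, one per receptive-field patch.
    One plausible reading of the paper's 'Patch Loss'; details are assumptions."""
    def __init__(self, in_ch: int = 3, base: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1),
            nn.InstanceNorm2d(base * 2),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base * 2, 1, 4, stride=1, padding=1),  # patch-level logits map
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)  # (B, 1, H', W'): each cell judges one image patch

# The generator is then trained so every patch of a synthetic bronchoscopy image
# fools the critic, while an SSIM term preserves global structure.
```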

Enhancing Liver Fibrosis Measurement: Deep Learning and Uncertainty Analysis Across Multi-Centre Cohorts

Wojciechowska, M. K., Malacrino, S., Windell, D., Culver, E., Dyson, J., UK-AIH Consortium, Rittscher, J.

medRxiv preprint · May 13 2025
[Graphical abstract: Figure 1]

Highlights:
- A retrospective cohort of liver biopsies collected from over 20 healthcare centres has been assembled.
- The cohort is characterized on the basis of the collagen staining used for liver fibrosis assessment.
- A computational pipeline for the quantification of collagen from liver histology slides has been developed and applied to the described cohorts.
- Uncertainty estimation is evaluated as a method to build trust in deep-learning-based collagen predictions.

The introduction of digital pathology has revolutionised the way in which histology-based measurements can support large, multi-centre studies. However, pooling data from various centres often reveals significant differences in specimen quality, particularly regarding histological staining protocols. These variations present challenges in reliably quantifying features from stained tissue sections using image analysis. In this study, we investigate the statistical variation of measuring fibrosis across a liver cohort composed of four individual studies from 20 clinical sites across Europe and North America. In a first step, we apply colour consistency measurements to analyse staining variability across this diverse cohort. Subsequently, a learnt segmentation model is used to quantify the collagen proportionate area (CPA), and uncertainty mapping is employed to evaluate the quality of the segmentations. Our analysis highlights a lack of standardisation in PicroSirius Red (PSR) staining practices, revealing significant variability in staining protocols across institutions. Deconvolution of the staining of the digitised slides identified differing numbers and types of counterstains in use, leading to potentially incomparable results. These findings underline the need for standardised staining protocols to ensure reliable collagen quantification in liver biopsies. The tools and methodologies presented here can be applied to perform slide colour quality control in digital pathology studies, thus enhancing the comparability and reproducibility of fibrosis assessment in the liver and other tissues.
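The collagen proportionate area itself is a simple ratio once segmentation masks exist. A minimal sketch, assuming binary collagen and tissue masks already produced by the learnt segmentation model (which is the paper's actual contribution):

```python
import numpy as np

def collagen_proportionate_area(collagen_mask: np.ndarray, tissue_mask: np.ndarray) -> float:
    """CPA = collagen-positive area / total tissue area, from binary masks.
    The masks are assumed inputs; this sketch only shows the ratio itself."""
    tissue = tissue_mask.astype(bool)
    if not tissue.any():
        raise ValueError("tissue mask is empty")
    collagen = collagen_mask.astype(bool) & tissue  # count collagen only inside tissue
    return float(collagen.sum()) / float(tissue.sum())
```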

Cardiovascular imaging techniques for electrophysiologists.

Rogers AJ, Reynbakh O, Ahmed A, Chung MK, Charate R, Yarmohammadi H, Gopinathannair R, Khan H, Lakkireddy D, Leal M, Srivatsa U, Trayanova N, Wan EY

PubMed · May 13 2025
Rapid technological advancements in noninvasive and invasive imaging, including echocardiography, computed tomography, magnetic resonance imaging, and positron emission tomography, have allowed for improved anatomical visualization and precise measurement of cardiac structure and function. These imaging modalities allow for evaluation of how changes in the cardiac substrate, such as myocardial wall thickness, fibrosis, scarring, and chamber enlargement and/or dilation, play an important role in arrhythmia initiation and perpetuation. Here, we review the various imaging techniques and modalities used by clinical and basic electrophysiologists to study cardiac arrhythmia mechanisms, periprocedural planning, risk stratification, and precise delivery of ablation therapy. We also review the use of artificial intelligence and machine learning to improve the identification of areas of triggered activity and isthmuses in reentrant arrhythmias, which may be favorable ablation targets.

Unsupervised Out-of-Distribution Detection in Medical Imaging Using Multi-Exit Class Activation Maps and Feature Masking

Yu-Jen Chen, Xueyang Li, Yiyu Shi, Tsung-Yi Ho

arXiv preprint · May 13 2025
Out-of-distribution (OOD) detection is essential for ensuring the reliability of deep learning models in medical imaging applications. This work is motivated by the observation that class activation maps (CAMs) for in-distribution (ID) data typically emphasize regions that are highly relevant to the model's predictions, whereas OOD data often lacks such focused activations. By masking input images with inverted CAMs, the feature representations of ID data undergo more substantial changes than those of OOD data, offering a robust criterion for differentiation. In this paper, we introduce a novel unsupervised OOD detection framework, Multi-Exit Class Activation Map (MECAM), which leverages multi-exit CAMs and feature masking. By utilizing multi-exit networks that combine CAMs from varying resolutions and depths, our method captures both global and local feature representations, thereby enhancing the robustness of OOD detection. We evaluate MECAM on multiple ID datasets, including ISIC19 and PathMNIST, and test its performance against three medical OOD datasets (RSNA Pneumonia, COVID-19, and HeadCT) and one natural-image OOD dataset (iSUN). Comprehensive comparisons with state-of-the-art OOD detection methods validate the effectiveness of our approach. Our findings emphasize the potential of multi-exit networks and feature masking for advancing unsupervised OOD detection in medical imaging, paving the way for more reliable and interpretable models in clinical practice.
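The core scoring idea, masking out the CAM-highlighted evidence and measuring how much the feature representation shifts, can be sketched in a few lines. The version below is single-exit (MECAM aggregates several exits) and assumes user-supplied compute_cam and extract_features hooks; it is an illustration of the principle, not the authors' code.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def cam_masking_score(model, extract_features, compute_cam, x: torch.Tensor) -> torch.Tensor:
    """Single-exit sketch of the CAM-masking criterion.
    `compute_cam` and `extract_features` are assumed hooks, not library calls."""
    cam = compute_cam(model, x)           # (B, 1, H, W), normalised to [0, 1]
    masked = x * (1.0 - cam)              # suppress the regions the model relied on
    f_orig = extract_features(model, x)   # (B, D) penultimate features
    f_mask = extract_features(model, masked)
    # ID inputs lose their decisive evidence, so their features shift strongly;
    # a small shift therefore suggests the input is out-of-distribution.
    shift = 1.0 - F.cosine_similarity(f_orig, f_mask, dim=1)
    return shift  # higher = more ID-like under this criterion
```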