Data Augmentation for Medical Image Classification Based on Gaussian Laplacian Pyramid Blending With a Similarity Measure.

Kumar A, Sharma A, Singh AK, Singh SK, Saxena S

PubMed | Jun 1, 2025
Breast cancer is a devastating disease that affects women worldwide, and computer-aided algorithms have shown potential in automating cancer diagnosis. Recently, Generative Artificial Intelligence (GenAI) has opened new possibilities for addressing the challenges of labeled-data scarcity and accurate prediction in critical applications. However, a lack of diversity, as well as unrealistic and unreliable data, has a detrimental impact on performance. This study therefore proposes an augmentation scheme to address the scarcity of labeled data and data imbalance in medical datasets. The approach integrates the Gaussian-Laplacian pyramid and pyramid blending with similarity measures. To maintain the structural properties of images and capture the inter-patient variability of images of the same category, similarity-metric-based intermixing is introduced, which helps preserve the overall quality and integrity of the dataset. Subsequently, a substantially modified deep learning approach that leverages transfer learning through concatenated pre-trained models is applied to classify breast cancer histopathological images. The effectiveness of the proposal, including the impact of data augmentation, is demonstrated through a detailed analysis of three different medical datasets, showing significant performance improvement over baseline models. The proposal has the potential to contribute to the development of a more accurate and reliable approach to breast cancer diagnosis.
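
The abstract does not give implementation details, but Gaussian-Laplacian pyramid blending itself is a classical technique. The sketch below shows one way such an augmentation step might look in Python with OpenCV, with an SSIM check standing in for the paper's unspecified similarity measure; the `ssim_gate` helper and its threshold are illustrative assumptions, not the authors' method.

```python
import cv2
import numpy as np
from skimage.metrics import structural_similarity as ssim

def laplacian_pyramid(img, levels):
    """Build a Laplacian pyramid: band-pass residuals plus the coarsest Gaussian level."""
    gp = [img.astype(np.float32)]
    for _ in range(levels):
        gp.append(cv2.pyrDown(gp[-1]))
    lp = []
    for i in range(levels):
        up = cv2.pyrUp(gp[i + 1], dstsize=(gp[i].shape[1], gp[i].shape[0]))
        lp.append(gp[i] - up)
    lp.append(gp[-1])  # coarsest level carries the low-frequency content
    return lp

def blend_pair(img_a, img_b, alpha=0.5, levels=4):
    """Blend two same-class images level by level to synthesize a new training sample."""
    lp_a = laplacian_pyramid(img_a, levels)
    lp_b = laplacian_pyramid(img_b, levels)
    blended = [alpha * a + (1 - alpha) * b for a, b in zip(lp_a, lp_b)]
    out = blended[-1]
    for lap in reversed(blended[:-1]):  # collapse the pyramid back to full resolution
        out = cv2.pyrUp(out, dstsize=(lap.shape[1], lap.shape[0])) + lap
    return np.clip(out, 0, 255).astype(np.uint8)

def ssim_gate(img_a, img_b, threshold=0.35):
    """Only blend sufficiently similar same-class images (threshold is illustrative)."""
    score = ssim(cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY),
                 cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY))
    return score >= threshold
```

Gating the blend on a similarity score is one plausible reading of the "similarity-metric-based intermixing" described above: it avoids mixing structurally dissimilar images, which would produce the unrealistic samples the abstract warns about.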

CNS-CLIP: Transforming a Neurosurgical Journal Into a Multimodal Medical Model.

Alyakin A, Kurland D, Alber DA, Sangwon KL, Li D, Tsirigos A, Leuthardt E, Kondziolka D, Oermann EK

PubMed | Jun 1, 2025
Classical biomedical data science models are trained on a single modality and aimed at one specific task. However, the exponential increase in the size and capabilities of foundation models inside and outside medicine reflects a shift toward task-agnostic models trained on large-scale, often internet-based, data. Recent research into smaller foundation models trained on specific literature, such as programming textbooks, demonstrated that they can display capabilities similar or superior to those of large generalist models, suggesting a potential middle ground between small task-specific and large foundation models. This study introduces a domain-specific multimodal model, Congress of Neurological Surgeons (CNS)-Contrastive Language-Image Pretraining (CLIP), developed for neurosurgical applications and leveraging data exclusively from Neurosurgery Publications. We constructed a multimodal data set of articles from Neurosurgery Publications through PDF data collection and figure-caption extraction, using an artificial intelligence pipeline for quality control. Our final data set included 24,021 figure-caption pairs. We then developed a fine-tuning protocol for the OpenAI CLIP model. The model was evaluated on tasks including neurosurgical information retrieval, computed tomography imaging classification, and zero-shot ImageNet classification. CNS-CLIP demonstrated superior performance in neurosurgical information retrieval, with a Top-1 accuracy of 24.56% compared with 8.61% for the baseline. The average area under the receiver operating characteristic curve across 6 neuroradiology tasks achieved by CNS-CLIP was 0.95, slightly superior to OpenAI's CLIP at 0.94 and significantly outperforming a vanilla vision transformer at 0.62. In generalist classification, CNS-CLIP reached a Top-1 accuracy of 47.55%, a decrease from the baseline of 52.37%, demonstrating catastrophic forgetting. This study presents a pioneering effort to build a domain-specific multimodal model using data from a medical society publication. The results indicate that domain-specific models, while less globally versatile, can offer advantages in specialized contexts. This emphasizes the importance of tailored data and domain-focused development in training foundation models for neurosurgery and general medicine.
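
The paper's exact fine-tuning protocol is not spelled out in the abstract, but contrastive fine-tuning of a pretrained CLIP checkpoint on figure-caption pairs follows a standard pattern; the minimal sketch below uses the Hugging Face `transformers` CLIP API, and the dataset contents, learning rate, and training loop are illustrative assumptions.

```python
import torch
from transformers import CLIPModel, CLIPProcessor

# Start from the same public checkpoint the study fine-tunes from.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def training_step(images, captions):
    """One contrastive step over a batch of (figure, caption) pairs.

    `images` is a list of PIL images and `captions` a list of strings;
    return_loss=True makes CLIPModel compute the symmetric InfoNCE loss
    that pulls matched figure/caption embeddings together.
    """
    inputs = processor(text=captions, images=images, return_tensors="pt",
                       padding=True, truncation=True)
    outputs = model(**inputs, return_loss=True)
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return outputs.loss.item()
```

After fine-tuning, retrieval amounts to embedding a text query and ranking figure embeddings by cosine similarity, which is how a Top-1 retrieval accuracy like the 24.56% reported above would be scored.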

Enhancing diagnostic accuracy of thyroid nodules: integrating self-learning and artificial intelligence in clinical training.

Kim D, Hwang YA, Kim Y, Lee HS, Lee E, Lee H, Yoon JH, Park VY, Rho M, Yoon J, Lee SE, Kwak JY

PubMed | Jun 1, 2025
This study explores a self-learning method as an auxiliary approach in residency training for distinguishing between benign and malignant thyroid nodules. Conducted from March to December 2022, the study had internal medicine residents undergo three repeated learning sessions with a "learning set" comprising 3000 thyroid nodule images. Diagnostic performance of the internal medicine residents was assessed before the study and after every learning session, and that of radiology residents before and after one-on-one education, using a "test set" comprising 120 thyroid nodule images. Finally, all residents repeated the same test using artificial intelligence computer-assisted diagnosis (AI-CAD). Twenty-one internal medicine and eight radiology residents participated. Initially, internal medicine residents had a lower area under the receiver operating characteristic curve (AUROC) than radiology residents (0.578 vs. 0.701, P < 0.001), improving post-learning (0.578 to 0.709, P < 0.001) to a level comparable with radiology residents (0.709 vs. 0.735, P = 0.17). Further improvement occurred with AI-CAD for both groups (0.709 to 0.755, P < 0.001; 0.735 to 0.768, P = 0.03). The proposed iterative self-learning method using a large volume of ultrasonographic images can assist beginners in thyroid imaging, such as residents, in differentiating benign and malignant thyroid nodules. Additionally, AI-CAD can improve diagnostic performance across varied levels of experience in thyroid imaging.
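
For readers unfamiliar with the metric driving these comparisons, reader AUROC is computed from ordinal confidence ratings against ground-truth labels; the snippet below is a toy illustration with hypothetical ratings, not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical reader study: 1-5 suspicion ratings per nodule vs. pathology labels.
labels = np.array([0, 1, 1, 0, 1, 0, 0, 1])          # 0 = benign, 1 = malignant
pre_learning = np.array([2, 3, 2, 4, 3, 1, 3, 4])    # ratings before self-learning
post_learning = np.array([1, 4, 3, 2, 5, 1, 2, 5])   # ratings after three sessions

print(f"AUROC before: {roc_auc_score(labels, pre_learning):.3f}")
print(f"AUROC after:  {roc_auc_score(labels, post_learning):.3f}")
```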

The role of deep learning in diagnostic imaging of spondyloarthropathies: a systematic review.

Omar M, Watad A, McGonagle D, Soffer S, Glicksberg BS, Nadkarni GN, Klang E

PubMed | Jun 1, 2025
Diagnostic imaging is an integral part of identifying spondyloarthropathies (SpA), yet the interpretation of these images can be challenging. This review evaluated the use of deep learning models to enhance the diagnostic accuracy of SpA imaging. Following PRISMA guidelines, we systematically searched major databases up to February 2024, focusing on studies that applied deep learning to SpA imaging. Performance metrics, model types, and diagnostic tasks were extracted and analyzed. Study quality was assessed using QUADAS-2. We analyzed 21 studies employing deep learning in SpA imaging diagnosis across MRI, CT, and X-ray modalities. These models, particularly advanced CNNs and U-Nets, demonstrated high accuracy in diagnosing SpA, differentiating arthritis forms, and assessing disease progression. Reported performance frequently surpassed traditional methods, with some models achieving AUCs up to 0.98 and matching expert radiologist performance. This systematic review underscores the effectiveness of deep learning in SpA imaging diagnostics across MRI, CT, and X-ray modalities. The studies reviewed demonstrated high diagnostic accuracy. However, the small sample sizes in some studies highlight the need for more extensive datasets and further prospective and external validation to enhance the generalizability of these AI models.

Question: How can deep learning models improve diagnostic accuracy in imaging for spondyloarthropathies (SpA), addressing challenges in early detection and differentiation from other forms of arthritis?
Findings: Deep learning models, especially CNNs and U-Nets, showed high accuracy in SpA imaging across MRI, CT, and X-ray, often matching or surpassing expert radiologists.
Clinical relevance: Deep learning models can enhance diagnostic precision in SpA imaging, potentially reducing diagnostic delays and improving treatment decisions, but further validation on larger datasets is required for clinical integration.

Parapharyngeal Space: Diagnostic Imaging and Intervention.

Vogl TJ, Burck I, Stöver T, Helal R

PubMed | Jun 1, 2025
Diagnosis of lesions of the parapharyngeal space (PPS) often poses a diagnostic and therapeutic challenge due to its deep location. As a result of the topographical relationship to nearby neck spaces, a very precise differential diagnosis is possible based on imaging criteria. When in doubt, image-guided (usually CT-guided) biopsy and even drainage remain options. Through a precise analysis of the literature, including the most recent publications, this review describes the basic and most recent imaging applications for various PPS pathologies and the differential diagnostic scheme for assigning the respective lesions, in addition to the possibilities of interventional radiology. The different pathologies of the PPS, from congenital malformations and inflammation to tumors, are discussed according to frequency. Characteristic criteria and, more recently, the use of advanced imaging procedures and the introduction of artificial intelligence (AI) allow a very precise differential diagnosis and support further diagnosis and therapy. After precise access planning, almost all pathologies of the PPS can be biopsied or, if necessary, drained using CT-assisted procedures. Radiological procedures play an important role in the diagnosis and treatment planning of PPS pathologies.
· Lesions of the PPS account for about 1-2% of all pathologies of the head and neck region. The majority are benign lesions and inflammatory processes.
· If differential diagnostic questions remain unanswered, material can be obtained via CT-guided biopsy if necessary. Exclusion criteria are hypervascularized processes, especially paragangliomas and angiomas.
· The use of artificial intelligence (AI) in head and neck imaging of various pathologies, such as tumor segmentation, pathological TNM classification, detection of lymph node metastases, and extranodal extension, has significantly increased in recent years.
· Citation: Vogl TJ, Burck I, Stöver T et al. Parapharyngeal Space: Diagnostic Imaging and Intervention. Rofo 2025; 197: 638-646.

Phenotyping atherosclerotic plaque and perivascular adipose tissue: signalling pathways and clinical biomarkers in atherosclerosis.

Grodecki K, Geers J, Kwiecinski J, Lin A, Slipczuk L, Slomka PJ, Dweck MR, Nerlekar N, Williams MC, Berman D, Marwick T, Newby DE, Dey D

PubMed | Jun 1, 2025
Computed tomography coronary angiography provides a non-invasive evaluation of coronary artery disease that includes phenotyping of atherosclerotic plaques and the surrounding perivascular adipose tissue (PVAT). Image analysis techniques have been developed to quantify atherosclerotic plaque burden and morphology as well as the associated PVAT attenuation, and emerging radiomic approaches can add further contextual information. PVAT attenuation might provide a novel measure of vascular health that could be indicative of the pathogenetic processes implicated in atherosclerosis, such as inflammation, fibrosis, or increased vascularity. Bidirectional signalling between the coronary artery and adjacent PVAT has been hypothesized to contribute to coronary artery disease progression and to provide a potential novel measure of the risk of future cardiovascular events. However, despite the development of more advanced radiomic and artificial intelligence-based algorithms, studies involving large datasets suggest that the measurement of PVAT attenuation contributes only modest additional predictive discrimination to standard cardiovascular risk scores. In this Review, we explore the pathobiology of coronary atherosclerotic plaques and PVAT, describe their phenotyping with computed tomography coronary angiography, and discuss potential future applications in clinical risk prediction and patient management.
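
For context, PVAT attenuation is conventionally quantified as the mean CT attenuation of adipose-range voxels (a window of roughly -190 to -30 Hounsfield units is commonly used) within a region around the vessel wall. The sketch below illustrates that computation on a precomputed mask; the HU window default and the `pvat_mask` input are stated assumptions, not a specific published pipeline.

```python
import numpy as np

def pvat_attenuation(ct_volume: np.ndarray, pvat_mask: np.ndarray,
                     hu_window=(-190, -30)) -> float:
    """Mean attenuation of adipose-range voxels inside a perivascular mask.

    ct_volume : CT volume in Hounsfield units (HU).
    pvat_mask : boolean mask of the perivascular region (assumed precomputed,
                e.g. voxels within a fixed distance of the coronary wall).
    hu_window : HU range conventionally used to select adipose tissue.
    """
    lo, hi = hu_window
    voxels = ct_volume[pvat_mask]
    fat = voxels[(voxels >= lo) & (voxels <= hi)]  # keep only adipose-range voxels
    if fat.size == 0:
        raise ValueError("No adipose-range voxels in the perivascular mask")
    return float(fat.mean())
```

Higher (less negative) mean attenuation is the signal interpreted as reflecting inflammation in the surrounding fat, which is why this single scalar has been studied as a risk marker.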

Computer-Aided Detection (CADe) and Segmentation Methods for Breast Cancer Using Magnetic Resonance Imaging (MRI).

Jannatdoust P, Valizadeh P, Saeedi N, Valizadeh G, Salari HM, Saligheh Rad H, Gity M

PubMed | Jun 1, 2025
Breast cancer continues to be a major health concern, and early detection is vital for enhancing survival rates. Magnetic resonance imaging (MRI) is a key tool due to its substantial sensitivity for invasive breast cancers. Computer-aided detection (CADe) systems enhance the effectiveness of MRI by identifying potential lesions, aiding radiologists in focusing on areas of interest, extracting quantitative features, and integrating with computer-aided diagnosis (CADx) pipelines. This review aims to provide a comprehensive overview of the current state of CADe systems in breast MRI, focusing on the technical details of pipelines and segmentation models, including classical intensity-based methods, supervised and unsupervised machine learning (ML) approaches, and the latest deep learning (DL) architectures. It highlights recent advancements from traditional algorithms to sophisticated DL models such as U-Nets, emphasizing CADe implementations for multi-parametric MRI acquisitions. Despite these advancements, CADe systems face challenges such as variable false-positive and false-negative rates, complexity in interpreting extensive imaging data, variability in system performance, and a lack of large-scale studies and multicentric models, limiting generalizability and suitability for clinical implementation. Technical issues, including image artefacts and the need for reproducible and explainable detection algorithms, remain significant hurdles. Future directions emphasize developing more robust and generalizable algorithms, integrating explainable AI to improve transparency and trust among clinicians, developing multi-purpose AI systems, and incorporating large language models to enhance diagnostic reporting and patient management. Additionally, efforts to standardize and streamline MRI protocols aim to increase accessibility and reduce costs, optimizing the use of CADe systems in clinical practice. Level of Evidence: NA. Technical Efficacy: Stage 2.
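
The "classical intensity-based methods" the review contrasts with DL models are essentially thresholding-plus-morphology pipelines. A minimal sketch of such a candidate detector on a contrast-enhanced subtraction image follows; the percentile threshold and minimum lesion size are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def detect_candidates(subtraction_image: np.ndarray,
                      percentile: float = 99.0,
                      min_voxels: int = 20):
    """Classical intensity-based CADe: threshold strongly enhancing voxels,
    then keep connected components above a minimum size as lesion candidates."""
    threshold = np.percentile(subtraction_image, percentile)
    binary = subtraction_image > threshold
    labeled, n = ndimage.label(binary)  # connected-component analysis
    sizes = ndimage.sum(binary, labeled, range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_voxels]
    # Candidate lesion locations, to be flagged for radiologist review
    return ndimage.center_of_mass(binary, labeled, keep)
```

The variable false-positive rates discussed above follow directly from this design: every bright artefact or vessel segment above the threshold becomes a candidate, which is the problem the supervised ML and DL stages are meant to solve.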

Empowering PET imaging reporting with retrieval-augmented large language models and reading reports database: a pilot single center study.

Choi H, Lee D, Kang YK, Suh M

PubMed | Jun 1, 2025
Large Language Models (LLMs) have the potential to enhance a variety of natural language tasks in clinical fields, including medical imaging reporting. This pilot study examines the efficacy of a retrieval-augmented generation (RAG) LLM system, which leverages the zero-shot learning capability of LLMs and is integrated with a comprehensive database of PET reading reports, in improving reference to prior reports and decision making. We developed a custom LLM framework with retrieval capabilities, leveraging a database of over 10 years of PET imaging reports from a single center. The system uses vector space embedding to facilitate similarity-based retrieval. Queries prompt the system to generate context-based answers and identify similar cases or differential diagnoses. For routine clinical PET readings, experienced nuclear medicine physicians evaluated the performance of the system in terms of the relevance of queried similar cases and the appropriateness score of suggested potential diagnoses. The system efficiently organized embedded vectors from PET reports, showing that imaging reports were accurately clustered within the embedded vector space according to diagnosis or PET study type. Based on this system, a proof-of-concept chatbot was developed that showed the framework's potential for referencing reports of previous similar cases and identifying exemplary cases for various purposes. In routine clinical PET readings, 84.1% of the cases retrieved relevant similar cases, as agreed upon by all three readers. Using the RAG system, the appropriateness score of the suggested potential diagnoses was significantly better than that of the LLM without RAG. Additionally, the system demonstrated the capability to offer differential diagnoses, leveraging the vast database to enhance the completeness and precision of generated reports. The integration of a RAG LLM with a large database of PET imaging reports suggests the potential to support the clinical practice of nuclear medicine imaging reading through various AI tasks, including finding similar cases and deriving potential diagnoses from them. This study underscores the potential of advanced AI tools in transforming medical imaging reporting practices.
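
The core retrieval step described here, embedding prior reports into a vector space and ranking them by similarity to a new query, fits in a few lines. In the sketch below, the embedding model, the dummy report strings, and the prompt format are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Any text-embedding model would do; this choice is an assumption for illustration.
encoder = SentenceTransformer("all-MiniLM-L6-v2")

reports = [  # hypothetical stand-ins for a decade of institutional PET reports
    "FDG PET/CT: hypermetabolic right upper lobe nodule, SUVmax 8.2.",
    "FDG PET/CT: diffuse skeletal marrow uptake, consider marrow activation.",
]
report_vecs = encoder.encode(reports, normalize_embeddings=True)

def retrieve_similar(query_report: str, k: int = 5):
    """Rank stored reports by cosine similarity (dot product of unit vectors)."""
    q = encoder.encode([query_report], normalize_embeddings=True)[0]
    scores = report_vecs @ q
    top = np.argsort(scores)[::-1][:k]
    return [(reports[i], float(scores[i])) for i in top]

def build_rag_prompt(query_report: str, k: int = 2) -> str:
    """Assemble retrieved cases as context for the LLM to suggest differentials."""
    context = "\n---\n".join(r for r, _ in retrieve_similar(query_report, k))
    return (f"Prior similar cases:\n{context}\n\n"
            f"Current reading:\n{query_report}\n\n"
            f"Suggest potential diagnoses with reference to the prior cases.")

print(build_rag_prompt("FDG PET/CT: solitary hypermetabolic lung lesion."))
```

Grounding the prompt in retrieved institutional cases is what separates the RAG condition from the plain-LLM baseline whose appropriateness scores it outperformed.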

Semiautomated Extraction of Research Topics and Trends From National Cancer Institute Funding in Radiological Sciences From 2000 to 2020.

Nguyen MH, Beidler PG, Tsai J, Anderson A, Chen D, Kinahan PE, Kang J

PubMed | Jun 1, 2025
Investigators and funding organizations desire knowledge of topics and trends in publicly funded research, but current efforts at manual categorization have been limited in breadth and depth of understanding. We present a semiautomated analysis of 21 years of R-type National Cancer Institute (NCI) grants to departments of radiation oncology and radiology using natural language processing. We selected all non-education R-type NCI grants from 2000 to 2020 awarded to departments of radiation oncology/radiology with affiliated schools of medicine. We used pretrained word embedding vectors to represent each grant abstract. A sequential clustering algorithm assigned each grant to 1 of 60 clusters representing research topics; we repeated the same workflow with 15 clusters for comparison. Each cluster was then manually named using the top words and the documents closest to each cluster centroid. The interpretability of document embeddings was evaluated by projecting them onto 2 dimensions. Changes in clusters over time were used to examine temporal funding trends. We included 5874 grants totaling $1.9 billion of NCI funding over 21 years. The human-model agreement was similar to the human interrater agreement. Two-dimensional projections of grant clusters showed 2 dominant axes: physics-biology and therapeutic-diagnostic. Therapeutic and physics clusters have grown faster over time than diagnostic and biology clusters. The 3 topics with the largest funding increases were imaging biomarkers, informatics, and radiopharmaceuticals, each with a mean annual growth of >$218,000. The 3 topics with the largest funding decreases were cellular stress response, advanced imaging hardware technology, and improving the performance of breast cancer computer-aided detection, each with a mean annual decrease of >$110,000. We developed a semiautomated natural language processing approach to analyze research topics and funding trends, and applied it to NCI funding in the radiological sciences to extract both the domains of research being funded and temporal trends.
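
A workflow of this shape, embed each abstract, cluster the embeddings, and project to 2D for inspection, can be sketched as follows. The grant count and 60-cluster setting mirror the abstract; k-means stands in for the paper's unspecified sequential clustering algorithm, and the random data and PCA choice are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Assume each grant abstract is already represented as the mean of its
# pretrained word-embedding vectors; random data stands in for real embeddings.
rng = np.random.default_rng(0)
abstract_vecs = rng.normal(size=(5874, 300))

# Assign each grant to 1 of 60 topic clusters, which are then manually named
# from top words and the documents nearest each centroid.
topics = KMeans(n_clusters=60, n_init=10, random_state=0).fit_predict(abstract_vecs)

# Project embeddings to 2D to look for interpretable axes
# (the paper found physics-biology and therapeutic-diagnostic).
coords = PCA(n_components=2).fit_transform(abstract_vecs)
print(coords[:3], topics[:3])
```

Tracking each cluster's total award dollars per year then yields the temporal funding trends reported above.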

An explainable transformer model integrating PET and tabular data for histologic grading and prognosis of follicular lymphoma: a multi-institutional digital biopsy study.

Jiang C, Jiang Z, Zhang Z, Huang H, Zhou H, Jiang Q, Teng Y, Li H, Xu B, Li X, Xu J, Ding C, Li K, Tian R

PubMed | Jun 1, 2025
Pathological grade is a critical determinant of clinical outcomes and decision-making in follicular lymphoma (FL). This study aimed to develop a deep learning model as a digital biopsy for the non-invasive identification of FL grade. The study retrospectively included 513 FL patients from five independent hospital centers, randomly divided into training, internal validation, and external validation cohorts. A multimodal fusion Transformer model was developed that integrates 3D PET tumor images with tabular data to predict FL grade. Additionally, the model is equipped with explainability modules, including Gradient-weighted Class Activation Mapping (Grad-CAM) for PET images, SHapley Additive exPlanations (SHAP) analysis for tabular data, and the calculation of predictive contribution ratios for both modalities, to enhance clinical interpretability and reliability. Predictive performance was evaluated using the area under the receiver operating characteristic curve (AUC) and accuracy, and the model's prognostic value was also assessed. The Transformer model demonstrated high accuracy in grading FL, with AUCs of 0.964-0.985 and accuracies of 90.2-96.7% in the training cohort, and similar performance in the validation cohorts (AUCs: 0.936-0.971, accuracies: 86.4-97.0%). Ablation studies confirmed that the fusion model outperformed the single-modality models (AUCs: 0.974 vs. 0.956, accuracies: 89.8% vs. 85.8%). Interpretability analysis revealed that PET images contributed 81-89% of the predictive value, and Grad-CAM highlighted the tumor and peri-tumor regions. The model also effectively stratified patients by survival risk (P < 0.05), highlighting its prognostic value. Our study developed an explainable multimodal fusion Transformer model for accurate grading and prognosis of FL, with the potential to aid clinical decision-making.
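
A fusion architecture of the kind described, a Transformer ingesting 3D PET image tokens alongside a tabular-data token, might look roughly like the PyTorch sketch below; the layer sizes, the 3D-CNN patch encoder, and the token layout are illustrative assumptions rather than the authors' architecture.

```python
import torch
import torch.nn as nn

class MultimodalFusionTransformer(nn.Module):
    """Sketch: fuse 3D PET image tokens with a tabular-data token for grading."""

    def __init__(self, tabular_dim: int = 12, d_model: int = 128, n_classes: int = 2):
        super().__init__()
        # 3D CNN stem turns the PET volume into a grid of spatial tokens.
        self.stem = nn.Sequential(
            nn.Conv3d(1, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, d_model, kernel_size=3, stride=2, padding=1),
        )
        self.tab_proj = nn.Linear(tabular_dim, d_model)  # tabular row -> one token
        self.cls = nn.Parameter(torch.zeros(1, 1, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, pet: torch.Tensor, tabular: torch.Tensor) -> torch.Tensor:
        b = pet.shape[0]
        img_tokens = self.stem(pet).flatten(2).transpose(1, 2)   # (B, N, d_model)
        tab_token = self.tab_proj(tabular).unsqueeze(1)          # (B, 1, d_model)
        tokens = torch.cat([self.cls.expand(b, -1, -1), img_tokens, tab_token], dim=1)
        fused = self.encoder(tokens)  # self-attention mixes the two modalities
        return self.head(fused[:, 0])                            # classify from CLS token

model = MultimodalFusionTransformer()
logits = model(torch.randn(2, 1, 32, 32, 32), torch.randn(2, 12))  # dummy inputs
```

Because both modalities enter as tokens of one sequence, attention weights and per-modality ablations give natural handles for the contribution-ratio analysis the abstract describes.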