
Mitosis detection in domain shift scenarios: a Mamba-based approach

Gennaro Percannella, Mattia Sarno, Francesco Tortorella, Mario Vento

arXiv preprint, Aug 28 2025
Mitosis detection in histopathology images plays a key role in tumor assessment. Although machine learning algorithms could be exploited to aid physicians in accurately performing such a task, these algorithms suffer from a significant performance drop when evaluated on images coming from domains different from the training ones. In this work, we propose a Mamba-based approach for mitosis detection under domain shift, inspired by the promising performance demonstrated by Mamba in medical image segmentation tasks. Specifically, our approach exploits a VM-UNet architecture for carrying out the addressed task, together with stain augmentation operations for further improving model robustness against domain shift. Our approach has been submitted to track 1 of the MItosis DOmain Generalization (MIDOG) challenge. Preliminary experiments, conducted on the MIDOG++ dataset, show large room for improvement for the proposed method.
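
The abstract does not specify how the stain augmentation is implemented; a common choice for H&E histopathology is HED-space jitter in the style of Tellez et al., sketched below in Python. The perturbation strengths `sigma` and `bias` are illustrative assumptions, not values from the paper.

```python
import numpy as np
from skimage.color import rgb2hed, hed2rgb  # scikit-image stain deconvolution

def hed_stain_jitter(image, sigma=0.05, bias=0.05, rng=None):
    """Randomly scale/shift the Haematoxylin-Eosin-DAB channels of an RGB patch.

    `image` is a float RGB array in [0, 1]; `sigma` and `bias` control the
    per-channel perturbation strength (illustrative defaults, not from the paper).
    """
    rng = rng or np.random.default_rng()
    hed = rgb2hed(image)                                   # RGB -> HED stain space
    alpha = rng.uniform(1 - sigma, 1 + sigma, size=3)      # per-stain scaling
    beta = rng.uniform(-bias, bias, size=3)                # per-stain offset
    return np.clip(hed2rgb(hed * alpha + beta), 0.0, 1.0)  # back to RGB
```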

DECODE: An open-source cloud-based platform for the noninvasive management of peripheral artery disease.

AboArab MA, Anić M, Potsika VT, Saeed H, Zulfiqar M, Skalski A, Stretti E, Kostopoulos V, Psarras S, Pennati G, Berti F, Spahić L, Benolić L, Filipović N, Fotiadis DI

PubMed, Aug 28 2025
Peripheral artery disease (PAD) is a progressive vascular condition affecting >237 million individuals worldwide. Accurate diagnosis and patient-specific treatment planning are critical but are often hindered by limited access to advanced imaging tools and real-time analytical support. This study presents DECODE, an open-source, cloud-based platform that integrates artificial intelligence, interactive 3D visualization, and computational modeling to improve the noninvasive management of PAD. The DECODE platform was designed as a modular backend (Django) and frontend (React) architecture that combines deep learning-based segmentation, real-time volume rendering, and finite element simulations. Peripheral artery and intima-media thickness segmentation were implemented via convolutional neural networks, including extended U-Net and nnU-Net architectures. Centreline extraction algorithms provide quantitative vascular geometry analysis. Balloon angioplasty simulations were conducted via nonlinear finite element models calibrated with experimental data. Usability was evaluated via the System Usability Scale (SUS), and user acceptance was assessed via the Technology Acceptance Model (TAM). Peripheral artery segmentation achieved an average Dice coefficient of 0.91 and a 95th percentile Hausdorff distance of 1.0 mm across 22 computed tomography datasets. Intima-media segmentation, evaluated on 300 intravascular optical coherence tomography images, demonstrated Dice scores of 0.992 for the lumen boundaries and 0.980 for the intima boundaries, with corresponding Hausdorff distances of 0.056 mm and 0.101 mm, respectively. Finite element simulations successfully reproduced the mechanical interactions between balloon and artery models in both idealized and subject-specific geometries, identifying pressure and stress distributions relevant to treatment outcomes. The platform received an average SUS score of 87.5, indicating excellent usability, and an overall TAM score of 4.21 out of 5, reflecting high user acceptance. DECODE provides an automated, cloud-integrated solution for PAD diagnosis and intervention planning, combining deep learning, computational modeling, and high-fidelity visualization. The platform enables precise vascular analysis, real-time procedural simulation, and interactive clinical decision support. By streamlining image processing, enhancing segmentation accuracy, and enabling in-silico trials, DECODE offers a scalable infrastructure for personalized vascular care and sets a new benchmark in digital health technologies for PAD.
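
For reference, the two segmentation metrics reported above can be computed from binary masks as in the following sketch; this is a simplified implementation under stated assumptions, not DECODE's own evaluation code.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(pred, target, eps=1e-7):
    """Dice overlap between two boolean masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def hd95(pred, target, spacing=1.0):
    """95th-percentile symmetric surface distance (HD95), in mm when
    `spacing` gives the voxel size; assumes both masks are non-empty."""
    pred, target = pred.astype(bool), target.astype(bool)
    sp = pred & ~binary_erosion(pred)      # surface voxels of the prediction
    st = target & ~binary_erosion(target)  # surface voxels of the ground truth
    d_to_t = distance_transform_edt(~st, sampling=spacing)  # dist to target surface
    d_to_p = distance_transform_edt(~sp, sampling=spacing)  # dist to pred surface
    return np.percentile(np.concatenate([d_to_t[sp], d_to_p[st]]), 95)
```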

Privacy-preserving federated transfer learning for enhanced liver lesion segmentation in PET-CT imaging.

Kumar R, Zeng S, Kumar J, Mao X

PubMed, Aug 28 2025
Positron Emission Tomography-Computed Tomography (PET-CT) is critical for liver lesion diagnosis. However, data scarcity, privacy concerns, and cross-institutional imaging heterogeneity impede the deployment of accurate deep learning models. We propose a Federated Transfer Learning (FTL) framework that integrates federated learning's privacy-preserving collaboration with transfer learning's pre-trained model adaptation, enhancing liver lesion segmentation in PET-CT imaging. By leveraging a Feature Co-learning Block (FCB) and privacy-enhancing technologies, our approach ensures robust segmentation without sharing sensitive patient data. Our contributions are: (1) a privacy-preserving FTL framework combining federated learning and adaptive transfer learning; (2) a multi-modal FCB for improved PET-CT feature integration; and (3) an extensive evaluation across diverse institutions using privacy-enhancing technologies such as Differential Privacy (DP) and Homomorphic Encryption (HE). Experiments on simulated multi-institutional PET-CT datasets demonstrate superior performance compared to baselines, with robust privacy guarantees. The FTL framework reduces data requirements and enhances generalizability, advancing liver lesion diagnostics.
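
As a rough illustration of the privacy-preserving federation described above (not the authors' actual framework), one FedAvg-style round with clipped, Gaussian-noised client updates might look like the PyTorch sketch below; the clipping norm and noise multiplier are hypothetical.

```python
import copy
import torch

def dp_fedavg_round(global_model, client_loaders, loss_fn,
                    lr=1e-3, clip=1.0, noise_mult=0.1):
    """One simplified FedAvg round with DP-style update clipping and noise."""
    deltas = []
    for loader in client_loaders:
        local = copy.deepcopy(global_model)
        opt = torch.optim.SGD(local.parameters(), lr=lr)
        for x, y in loader:                      # one local epoch per client
            opt.zero_grad()
            loss_fn(local(x), y).backward()
            opt.step()
        # Flatten the client update, clip its L2 norm, then add Gaussian noise.
        delta = torch.cat([(lp - gp).flatten()
                           for lp, gp in zip(local.parameters(),
                                             global_model.parameters())])
        delta = delta * min(1.0, clip / (delta.norm().item() + 1e-12))
        delta = delta + torch.randn_like(delta) * noise_mult * clip
        deltas.append(delta)
    mean_delta = torch.stack(deltas).mean(0)     # server-side aggregation
    offset = 0
    with torch.no_grad():
        for p in global_model.parameters():      # apply the averaged update
            n = p.numel()
            p += mean_delta[offset:offset + n].view_as(p)
            offset += n
```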

PET/CT radiomics for non-invasive prediction of immunotherapy efficacy in cervical cancer.

Du T, Li C, Grzegozek M, Huang X, Rahaman M, Wang X, Sun H

PubMed, Aug 28 2025
Purpose: The prediction of immunotherapy efficacy in cervical cancer patients remains a critical clinical challenge. This study aims to develop and validate a deep learning-based automatic tumor segmentation method on PET/CT images, extract texture features from the tumor regions of cervical cancer patients, investigate their correlation with PD-L1 expression, and construct a predictive model for immunotherapy efficacy. Methods: We retrospectively collected data from 283 pathologically confirmed cervical cancer patients who underwent ¹⁸F-FDG PET/CT examinations, divided into three subsets. Subset-I (n = 97) was used to develop a deep learning-based segmentation model using Attention-UNet and region-growing methods on co-registered PET/CT images. Subset-II (n = 101) was used to explore correlations between radiomic features and PD-L1 expression. Subset-III (n = 85) was used to construct and validate a radiomic model for predicting immunotherapy response. Results: The segmentation model developed on Subset-I achieved optimal performance at the 94th epoch, with an IoU of 0.746 in the validation set, and manual evaluation confirmed accurate tumor localization. Sixteen features demonstrated excellent reproducibility (ICC > 0.75). In Subset-II, 183 features showed significant correlations with PD-L1 expression (P < 0.05). Using these features in Subset-III, the SVM-based radiomic model achieved the best predictive performance, with an AUC of 0.935. Conclusion: We validated, respectively in Subset-I, Subset-II, and Subset-III, that deep learning models incorporating medical prior knowledge can accurately and automatically segment cervical cancer lesions, that texture features extracted from ¹⁸F-FDG PET/CT are significantly associated with PD-L1 expression, and that predictive models based on these features can effectively predict the efficacy of PD-L1 immunotherapy. This approach offers a non-invasive, efficient, and cost-effective tool for guiding individualized immunotherapy in cervical cancer patients and may help reduce patient burden and accelerate treatment planning.
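
The final modeling step maps naturally onto a standard scikit-learn pipeline; the sketch below assumes a feature matrix `X` (the PD-L1-correlated radiomic features) and binary response labels `y`, with hyperparameters chosen for illustration rather than taken from the paper.

```python
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# X: (n_patients, n_features) radiomic feature matrix; y: 0/1 response labels.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
aucs = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {aucs.mean():.3f} +/- {aucs.std():.3f}")
```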

Evaluating the Quality and Understandability of Radiology Report Summaries Generated by ChatGPT: Survey Study.

Sunshine A, Honce GH, Callen AL, Zander DA, Tanabe JL, Pisani Petrucci SL, Lin CT, Honce JM

PubMed, Aug 27 2025
Radiology reports convey critical medical information to health care providers and patients. Unfortunately, they are often difficult for patients to comprehend, causing confusion and anxiety, thereby limiting patient engagement in health care decision-making. Large language models (LLMs) like ChatGPT (OpenAI) can create simplified, patient-friendly report summaries to increase accessibility, albeit with errors. We evaluated the accuracy and clarity of ChatGPT-generated summaries compared to original radiologist-assessed radiology reports, assessed patients' understanding and satisfaction with the summaries compared to the original reports, and compared the readability of the original reports and summaries using validated readability metrics. We anonymized 30 radiology reports created by neuroradiologists at our institution (6 brain magnetic resonance imaging, 6 brain computed tomography, 6 head and neck computed tomography angiography, 6 neck computed tomography, and 6 spine computed tomography). These anonymized reports were processed by ChatGPT to produce patient-centric summaries. Four board-certified neuroradiologists evaluated the ChatGPT-generated summaries on quality and accuracy compared to the original reports, and 4 patient volunteers separately evaluated the reports and summaries on perceived understandability and satisfaction. Readability was assessed using word count and validated readability scales. After reading the summary, patient confidence in understanding (98%, 116/118 vs 26%, 31/118) and satisfaction regarding the level of jargon/terminology (91%, 107/118 vs 8%, 9/118) and time taken to understand the content (97%, 115/118 vs 23%, 27/118) substantially improved. Ninety-two percent (108/118) of responses indicated the summary clarified patients' questions about the report, and 98% (116/118) of responses indicated patients would use the summary if available, with 67% (79/118) of responses indicating they would want access to both the report and summary, while 26% (31/118) of responses indicated only wanting the summary. Eighty-three percent (100/120) of radiologist responses indicated the summary represented the original report "extremely well" or "very well," with only 5% (6/120) of responses indicating it did so "slightly well" or "not well at all." Five percent (6/120) of responses indicated there was missing relevant medical information in the summary, 12% (14/120) reported instances of overemphasis of nonsignificant findings, and 18% (22/120) reported instances of underemphasis of significant findings. No fabricated findings were identified. Overall, 83% (99/120) of responses indicated that the summary would definitely/probably not lead patients to incorrect conclusions about the original report, with 10% (12/120) of responses indicating the summaries may do so. ChatGPT-generated summaries could significantly improve perceived comprehension and satisfaction while accurately reflecting most key information from original radiology reports. Instances of minor omissions and under-/overemphasis were noted in some summaries, underscoring the need for ongoing validation and oversight. Overall, these artificial intelligence-generated, patient-centric summaries hold promise for enhancing patient-centered communication in radiology.
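
The abstract does not name the validated readability scales used; the Flesch Reading Ease score is a typical choice and is straightforward to compute once words, sentences, and syllables are counted. The syllable counter in this sketch is a crude vowel-group heuristic, offered only as an assumption-laden illustration.

```python
import re

def flesch_reading_ease(text):
    """Flesch Reading Ease: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    # Rough syllable estimate: count groups of consecutive vowels per word.
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
    n = max(1, len(words))
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)
```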

A Systematic Review on the Generative AI Applications in Human Medical Genomics

Anton Changalidis, Yury Barbitoff, Yulia Nasykhova, Andrey Glotov

arXiv preprint, Aug 27 2025
Although traditional statistical techniques and machine learning methods have contributed significantly to genetics and, in particular, inherited disease diagnosis, they often struggle with complex, high-dimensional data, a challenge now addressed by state-of-the-art deep learning models. Large language models (LLMs), based on transformer architectures, have excelled in tasks requiring contextual comprehension of unstructured medical data. This systematic review examines the role of LLMs in the genetic research and diagnostics of both rare and common diseases. An automated keyword-based search was conducted in PubMed, bioRxiv, medRxiv, and arXiv, targeting studies on LLM applications in diagnostics and education within genetics; irrelevant studies and outdated models were excluded. A total of 172 studies were analyzed, highlighting applications in genomic variant identification, annotation, and interpretation, as well as medical imaging advancements through vision transformers. Key findings indicate that while transformer-based models significantly advance disease and risk stratification, variant interpretation, medical imaging analysis, and report generation, major challenges persist in integrating multimodal data (genomic sequences, imaging, and clinical records) into unified, clinically robust pipelines, which still face limitations in generalizability and practical clinical implementation. This review provides a comprehensive classification and assessment of the current capabilities and limitations of LLMs in transforming hereditary disease diagnostics and supporting genetic education, serving as a guide to navigate this rapidly evolving field.
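
An automated keyword search of the kind described can be scripted against NCBI's E-utilities, for example via Biopython; the query string and `retmax` below are placeholders, not the review's actual search strategy.

```python
from Bio import Entrez  # Biopython wrapper around NCBI E-utilities

Entrez.email = "you@example.org"  # NCBI requires a contact address
query = '("large language model" OR transformer) AND (genetics OR genomics)'
handle = Entrez.esearch(db="pubmed", term=query, retmax=200)
record = Entrez.read(handle)
handle.close()
pmids = record["IdList"]  # PubMed IDs matching the keyword query
print(len(pmids), "candidate studies")
```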

Is the medical image segmentation problem solved? A survey of current developments and future directions

Guoping Xu, Jayaram K. Udupa, Jax Luo, Songlin Zhao, Yajun Yu, Scott B. Raymond, Hao Peng, Lipeng Ning, Yogesh Rathi, Wei Liu, You Zhang

arXiv preprint, Aug 27 2025
Medical image segmentation has advanced rapidly over the past two decades, largely driven by deep learning, which has enabled accurate and efficient delineation of cells, tissues, organs, and pathologies across diverse imaging modalities. This progress raises a fundamental question: to what extent have current models overcome persistent challenges, and what gaps remain? In this work, we provide an in-depth review of medical image segmentation, tracing its progress and key developments over the past decade. We examine core principles, including multiscale analysis, attention mechanisms, and the integration of prior knowledge, across the encoder, bottleneck, skip connections, and decoder components of segmentation networks. Our discussion is organized around seven key dimensions: (1) the shift from supervised to semi-/unsupervised learning, (2) the transition from organ segmentation to lesion-focused tasks, (3) advances in multi-modality integration and domain adaptation, (4) the role of foundation models and transfer learning, (5) the move from deterministic to probabilistic segmentation, (6) the progression from 2D to 3D and 4D segmentation, and (7) the trend from model invocation to segmentation agents. Together, these perspectives provide a holistic overview of the trajectory of deep learning-based medical image segmentation and aim to inspire future innovation. To support ongoing research, we maintain a continually updated repository of relevant literature and open-source resources at https://github.com/apple1986/medicalSegReview

Two stage large language model approach enhancing entity classification and relationship mapping in radiology reports.

Shin C, Eom D, Lee SM, Park JE, Kim K, Lee KH

PubMed, Aug 27 2025
Large language models (LLMs) hold transformative potential for medical image labeling in radiology, addressing challenges posed by linguistic variability in reports. We developed a two-stage natural language processing pipeline that combines Bidirectional Encoder Representations from Transformers (BERT) and an LLM to analyze radiology reports. In the first stage (Entity Key Classification), a BERT model identifies and classifies clinically relevant entities mentioned in the text. In the second stage (Relationship Mapping), the extracted entities are passed to the LLM, which infers relationships between entity pairs while accounting for whether each entity is actually present. The pipeline targets lesion-location mapping in chest CT and diagnosis-episode mapping in brain MRI, both of which are clinically important for structuring radiologic findings and capturing temporal patterns of disease progression. Using over 400,000 reports from Seoul Asan Medical Center, our pipeline achieved a macro F1-score of 77.39 for chest CT and 70.58 for brain MRI. These results highlight the effectiveness of integrating BERT with an LLM to enhance diagnostic accuracy in radiology report analysis.
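
In outline, the two stages can be wired together with Hugging Face transformers plus any chat-style LLM client; the checkpoint name and prompt below are hypothetical stand-ins for the authors' fine-tuned models, sketched only to show the data flow.

```python
from transformers import pipeline

# Stage 1 (Entity Key Classification): a fine-tuned BERT tagger.
# "radiology-bert-ner" is a hypothetical checkpoint name.
ner = pipeline("token-classification", model="radiology-bert-ner",
               aggregation_strategy="simple")

def map_relationships(report_text, llm_generate):
    """Stage 2 (Relationship Mapping): feed extracted entities to an LLM.

    `llm_generate` is any callable that takes a prompt string and returns
    the model's text response (API client, local model, etc.).
    """
    entities = ner(report_text)  # [{'entity_group': ..., 'word': ...}, ...]
    ent_list = ", ".join(f"{e['word']} ({e['entity_group']})" for e in entities)
    prompt = (
        "Radiology report:\n" + report_text + "\n\n"
        "Entities: " + ent_list + "\n"
        "For each lesion entity, state its anatomical location and whether "
        "it is actually present, as (lesion, location, present) triples."
    )
    return llm_generate(prompt)
```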

HONeYBEE: Enabling Scalable Multimodal AI in Oncology Through Foundation Model-Driven Embeddings

Tripathi, A. G., Waqas, A., Schabath, M. B., Yilmaz, Y., Rasool, G.

medRxiv preprint, Aug 27 2025
HONeYBEE (Harmonized ONcologY Biomedical Embedding Encoder) is an open-source framework that integrates multimodal biomedical data for oncology applications. It processes clinical data (structured and unstructured), whole-slide images, radiology scans, and molecular profiles to generate unified patient-level embeddings using domain-specific foundation models and fusion strategies. These embeddings enable survival prediction, cancer-type classification, patient similarity retrieval, and cohort clustering. Evaluated on 11,400+ patients across 33 cancer types from The Cancer Genome Atlas (TCGA), clinical embeddings showed the strongest single-modality performance with 98.5% classification accuracy and 96.4% precision@10 in patient retrieval. They also achieved the highest survival prediction concordance indices across most cancer types. Multimodal fusion provided complementary benefits for specific cancers, improving overall survival prediction beyond clinical features alone. Comparative evaluation of four large language models revealed that general-purpose models like Qwen3 outperformed specialized medical models for clinical text representation, though task-specific fine-tuning improved performance on heterogeneous data such as pathology reports.
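
The patient-retrieval metric quoted above (precision@10) has a compact definition over the embedding matrix; below is a minimal NumPy sketch, assuming unit-normalizable embeddings and an integer array of cancer-type labels.

```python
import numpy as np

def precision_at_k(embeddings, labels, k=10):
    """Mean fraction of each patient's k nearest neighbours (cosine
    similarity, self excluded) that share the patient's cancer type.

    embeddings: (n_patients, dim) float array; labels: (n_patients,) int array.
    """
    x = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = x @ x.T
    np.fill_diagonal(sim, -np.inf)           # never retrieve the query itself
    nn = np.argsort(-sim, axis=1)[:, :k]     # indices of the top-k neighbours
    return float((labels[nn] == labels[:, None]).mean())
```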

Beyond the norm: Exploring the diverse facets of adrenal lesions.

Afif S, Mahmood Z, Zaheer A, Azadi JR

PubMed, Aug 26 2025
Radiological diagnosis of adrenal lesions can be challenging due to the overlap between benign and malignant imaging features. The primary challenge in managing adrenal lesions is to accurately identify and characterize them to minimize unnecessary diagnostic examinations and interventions; nevertheless, there are substantial risks of underdiagnosis and misdiagnosis. This review article provides a comprehensive overview of typical, atypical, and overlapping imaging features of both common and rare adrenal lesions. It also explores emerging applications of artificial intelligence-powered analysis of CT and MRI, which could play a pivotal role in distinguishing benign from malignant and functioning from non-functioning adrenal lesions with significant diagnostic accuracy, thereby enhancing diagnostic confidence and potentially reducing unnecessary interventions.