Prediction of OncotypeDX recurrence score using H&E stained WSI images

Cohen, S., Shamai, G., Sabo, E., Cretu, A., Barshack, I., Goldman, T., Bar-Sela, G., Pearson, A. T., Huo, D., Howard, F. M., Kimmel, R., Mayer, C.

medRxiv preprint · Jul 21 2025
The OncotypeDX 21-gene assay is a widely adopted tool for estimating recurrence risk and informing chemotherapy decisions in early-stage, hormone receptor-positive, HER2-negative breast cancer. Although informative, its high cost and long turnaround time limit accessibility and delay treatment in low- and middle-income countries, creating a need for alternative solutions. This study presents a deep learning-based approach for predicting OncotypeDX recurrence scores directly from hematoxylin and eosin-stained whole slide images. Our approach leverages a deep learning foundation model pre-trained on 171,189 slides via self-supervised learning, which is fine-tuned for our task. The model was developed and validated using five independent cohorts, three of which are external. On the two external cohorts that include OncotypeDX scores, the model achieved AUCs of 0.825 and 0.817, and identified 21.9% and 25.1% of the patients as low-risk with sensitivities of 0.97 and 0.95 and negative predictive values of 0.97 and 0.96, showing strong generalizability despite variations in staining protocols and imaging devices. Kaplan-Meier analysis demonstrated that patients classified as low-risk by the model had a significantly better prognosis than those classified as high-risk, with hazard ratios of 4.1 (P<0.001) and 2.0 (P<0.01) on the two external cohorts that include patient outcomes. This artificial intelligence-driven solution offers a rapid, cost-effective, and scalable alternative to genomic testing, with the potential to enhance personalized treatment planning, especially in resource-constrained settings.
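As a rough illustration of the risk-stratification step described above, the sketch below picks the most permissive low-risk cutoff that still meets a target sensitivity for high-risk cases, then reports the low-risk fraction and negative predictive value. Function and variable names are illustrative, not from the paper.

```python
# Minimal sketch, assuming binary labels (1 = high recurrence score) and
# continuous model scores; names and the target sensitivity are illustrative.
import numpy as np

def low_risk_cutoff(y_true, y_score, target_sensitivity=0.97):
    """Largest cutoff below which patients are called low-risk while keeping
    sensitivity for high-risk cases >= target_sensitivity."""
    best = None
    for t in np.unique(y_score):               # candidate cutoffs, ascending
        pred_high = y_score >= t               # patients flagged high-risk
        tp = np.sum(pred_high & (y_true == 1))
        fn = np.sum(~pred_high & (y_true == 1))
        if tp / (tp + fn) >= target_sensitivity:
            best = t                           # keep raising the cutoff
    low_risk = y_score < best
    npv = np.mean(y_true[low_risk] == 0) if low_risk.any() else float("nan")
    return best, low_risk.mean(), npv
```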

DREAM: A framework for discovering mechanisms underlying AI prediction of protected attributes

Gadgil, S. U., DeGrave, A. J., Janizek, J. D., Xu, S., Nwandu, L., Fonjungo, F., Lee, S.-I., Daneshjou, R.

medRxiv preprint · Jul 21 2025
Recent advances in Artificial Intelligence (AI) have started disrupting the healthcare industry, especially medical imaging, and AI devices are increasingly being deployed into clinical practice. Such classifiers have previously demonstrated the ability to discern a range of protected demographic attributes (like race, age, sex) from medical images with unexpectedly high performance, a sensitive task which is difficult even for trained physicians. In this study, we motivate and introduce a general explainable AI (XAI) framework called DREAM (DiscoveRing and Explaining AI Mechanisms) for interpreting how AI models trained on medical images predict protected attributes. Focusing on two modalities, radiology and dermatology, we successfully train high-performing classifiers for predicting race from chest x-rays (ROC-AUC score of ~0.96) and sex from dermoscopic lesions (ROC-AUC score of ~0.78). We highlight how incorrect use of these demographic shortcuts can have a detrimental effect on the performance of a clinically relevant downstream task like disease diagnosis under a domain shift. Further, we employ various XAI techniques to identify specific signals which can be leveraged to predict sex. Finally, we propose a technique, which we call "removal via balancing", to quantify how much a signal contributes to the classification performance. Using this technique and the signals identified, we are able to explain ~15% of the total performance for radiology and ~42% of the total performance for dermatology. We envision DREAM to be broadly applicable to other modalities and demographic attributes. This analysis not only underscores the importance of cautious AI application in healthcare but also opens avenues for improving the transparency and reliability of AI-driven diagnostic tools.
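One reading of the "removal via balancing" idea is: re-score the attribute classifier on a test set resampled so a candidate signal no longer co-varies with the attribute, and credit the resulting AUC drop to that signal. A minimal sketch under those assumptions (all names illustrative, not the authors' code):

```python
# Minimal sketch, assuming a binary protected attribute and a discretised
# candidate signal; this is an interpretation of the abstract.
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_after_balancing(y_attr, y_score, signal_bins, seed=0):
    """AUC on a subsample in which each signal bin contains equal numbers of
    both attribute values, breaking the signal-attribute association."""
    rng = np.random.default_rng(seed)
    keep = []
    for b in np.unique(signal_bins):
        idx0 = np.flatnonzero((signal_bins == b) & (y_attr == 0))
        idx1 = np.flatnonzero((signal_bins == b) & (y_attr == 1))
        n = min(len(idx0), len(idx1))
        if n == 0:
            continue                            # bin has only one class
        keep.extend(rng.choice(idx0, n, replace=False))
        keep.extend(rng.choice(idx1, n, replace=False))
    keep = np.asarray(keep)
    return roc_auc_score(y_attr[keep], y_score[keep])

# The signal's share of performance could then be estimated as
# (auc_full - auc_balanced) / (auc_full - 0.5).
```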

Transfer Learning for Automated Two-class Classification of Pulmonary Tuberculosis in Chest X-Ray Images.

Nayyar A, Shrivastava R, Jain S

PubMed paper · Jul 21 2025
Early and precise diagnosis is essential for effectively treating and managing pulmonary tuberculosis. The purpose of this research is to leverage artificial intelligence (AI), specifically convolutional neural networks (CNNs), to expedite the diagnosis of tuberculosis (TB) using chest X-ray (CXR) images. Mycobacterium tuberculosis, an aerobic bacterium, is the causative agent of TB. The disease remains a global health challenge, particularly in densely populated countries. Early detection via chest X-rays is crucial, but limited medical expertise hampers timely diagnosis. This study explores the application of CNNs, a highly efficient method, for automated TB detection, especially in areas with limited medical expertise. Pretrained models, specifically VGG-16, VGG-19, ResNet-50, and Inception v3, were fine-tuned and evaluated on the data. Each model's distinct design and capabilities facilitate effective feature extraction and classification in medical image analysis, especially in TB diagnosis. VGG-16 and VGG-19 are very good at identifying minute distinctions and hierarchical characteristics in CXR images; ResNet-50, on the other hand, avoids overfitting while retaining both low- and high-level features. Inception v3, with its capacity to extract multi-scale features, is particularly useful for examining complex patterns in a CXR image. Inception v3 outperformed the other models, attaining 97.60% accuracy without pre-processing and 98.78% with pre-processing. The proposed model shows promising results as a tool for improving TB diagnosis and reducing the global impact of the disease, but further validation with larger and more diverse datasets is needed.
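For context, a minimal transfer-learning setup of the kind the study describes might look as follows in Keras, with Inception v3 as a frozen ImageNet backbone; the head architecture and hyperparameters are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch: ImageNet-pretrained Inception v3 with a small binary head
# for TB vs. normal CXR classification. Hyperparameters are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3

base = InceptionV3(weights="imagenet", include_top=False,
                   input_shape=(299, 299, 3))
base.trainable = False  # freeze the pretrained features; train the head only

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # P(TB)
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC()])
```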

CXR-TFT: Multi-Modal Temporal Fusion Transformer for Predicting Chest X-ray Trajectories

Mehak Arora, Ayman Ali, Kaiyuan Wu, Carolyn Davis, Takashi Shimazui, Mahmoud Alwakeel, Victor Moas, Philip Yang, Annette Esper, Rishikesan Kamaleswaran

arXiv preprint · Jul 19 2025
In intensive care units (ICUs), patients with complex clinical conditions require vigilant monitoring and prompt interventions. Chest X-rays (CXRs) are a vital diagnostic tool, providing insights into clinical trajectories, but their irregular acquisition limits their utility. Existing tools for CXR interpretation are constrained by cross-sectional analysis, failing to capture temporal dynamics. To address this, we introduce CXR-TFT, a novel multi-modal framework that integrates temporally sparse CXR imaging and radiology reports with high-frequency clinical data, such as vital signs, laboratory values, and respiratory flow sheets, to predict the trajectory of CXR findings in critically ill patients. CXR-TFT leverages latent embeddings from a vision encoder that are temporally aligned with hourly clinical data through interpolation. A transformer model is then trained to predict CXR embeddings at each hour, conditioned on previous embeddings and clinical measurements. In a retrospective study of 20,000 ICU patients, CXR-TFT demonstrated high accuracy in forecasting abnormal CXR findings up to 12 hours before they became radiographically evident. This predictive capability holds significant potential for enhancing the management of time-sensitive conditions like acute respiratory distress syndrome, where early intervention is crucial and diagnoses are often delayed. By providing distinctive temporal resolution in prognostic CXR analysis, CXR-TFT offers actionable 'whole patient' insights that can directly improve clinical outcomes.
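The temporal-alignment step, interpolating sparse CXR embeddings onto the hourly grid of the clinical data, could look roughly like the following; shapes and names are illustrative, and since the abstract says only "interpolation", the linear scheme is an assumption.

```python
# Minimal sketch, assuming linear interpolation of each embedding dimension
# onto the hourly clinical grid; names and shapes are illustrative.
import numpy as np

def align_embeddings(cxr_times, cxr_embeds, hourly_times):
    """cxr_times: (k,) acquisition hours; cxr_embeds: (k, d) vision-encoder
    embeddings; hourly_times: (T,) clinical timestamps. Returns (T, d)."""
    return np.stack(
        [np.interp(hourly_times, cxr_times, cxr_embeds[:, j])
         for j in range(cxr_embeds.shape[1])],
        axis=1,
    )
```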

A clinically relevant morpho-molecular classification of lung neuroendocrine tumours

Sexton-Oates, A., Mathian, E., Candeli, N., Lim, Y., Voegele, C., Di Genova, A., Mange, L., Li, Z., van Weert, T., Hillen, L. M., Blazquez-Encinas, R., Gonzalez-Perez, A., Morrison, M. L., Lauricella, E., Mangiante, L., Bonheme, L., Moonen, L., Absenger, G., Altmuller, J., Degletagne, C., Brustugun, O. T., Cahais, V., Centonze, G., Chabrier, A., Cuenin, C., Damiola, F., de Montpreville, V. T., Deleuze, J.-F., Dingemans, A.-M. C., Fadel, E., Gadot, N., Ghantous, A., Graziano, P., Hofman, P., Hofman, V., Ibanez-Costa, A., Lacomme, S., Lopez-Bigas, N., Lund-Iversen, M., Milione, M., Muscarella, L

medRxiv preprint · Jul 18 2025
Lung neuroendocrine tumours (NETs, also known as carcinoids) are rapidly rising in incidence worldwide but have unknown aetiology and limited therapeutic options beyond surgery. We conducted multi-omic analyses on over 300 lung NETs including whole-genome sequencing (WGS), transcriptome profiling, methylation arrays, spatial RNA sequencing, and spatial proteomics. The integration of multi-omic data provides definitive proof of the existence of four strikingly different molecular groups that vary in patient characteristics, genomic and transcriptomic profiles, microenvironment, and morphology as much as distinct diseases do. Among these, we identify a new molecular group, enriched for highly aggressive supra-carcinoids, that displays an immune-rich microenvironment linked to tumour-macrophage crosstalk, and we uncover an undifferentiated cell population within supra-carcinoids, explaining their molecular and behavioural link to high-grade lung neuroendocrine carcinomas. Deep learning models accurately identified the Ca A1, Ca A2, and Ca B groups based on morphology alone, outperforming current histological criteria. The characteristic tumour microenvironment of supra-carcinoids and the validation of a panel of immunohistochemistry markers for the other three molecular groups demonstrate that these groups can be accurately identified based solely on morphological features, facilitating their implementation in the clinical setting. Our proposed morpho-molecular classification highlights group-specific therapeutic opportunities, including DLL3, FGFR, TERT, and BRAF inhibitors. Overall, our findings unify previously proposed molecular classifications and refine the lung cancer map by revealing novel tumour types and potential treatments, with significant implications for prognosis and treatment decision-making.

Detecting Fifth Metatarsal Fractures on Radiographs through the Lens of Smartphones: A FIXUS AI Algorithm

Taseh, A., Shah, A., Eftekhari, M., Flaherty, A., Ebrahimi, A., Jones, S., Nukala, V., Nazarian, A., Waryasz, G., Ashkani-Esfahani, S.

medRxiv preprint · Jul 18 2025
Background: Fifth metatarsal (5MT) fractures are common but challenging to diagnose, particularly with limited expertise or subtle fractures. Deep learning shows promise but faces limitations due to image quality requirements. This study develops a deep learning model to detect 5MT fractures from smartphone-captured radiograph images, enhancing the accessibility of diagnostic tools. Methods: A retrospective study included patients aged >18 with 5MT fractures (n=1240) and controls (n=1224). Radiographs (AP, oblique, lateral) from Electronic Health Records (EHR) were obtained and photographed using a smartphone, creating a new dataset (SP). Models using ResNet152V2 were trained on the EHR, SP, and combined datasets, then evaluated on a separate smartphone test dataset (SP-test). Results: On validation, the SP model achieved optimal performance (AUROC: 0.99). On the SP-test dataset, the EHR model's performance decreased (AUROC: 0.83), whereas the SP and combined models maintained high performance (AUROC: 0.99). Conclusions: Smartphone-specific deep learning models effectively detect 5MT fractures, suggesting their practical utility in resource-limited settings.

Multi-Centre Validation of a Deep Learning Model for Scoliosis Assessment

Šimon Kubov, Simon Klíčník, Jakub Dandár, Zdeněk Straka, Karolína Kvaková, Daniel Kvak

arXiv preprint · Jul 18 2025
Scoliosis affects roughly 2 to 4 percent of adolescents, and treatment decisions depend on precise Cobb angle measurement. Manual assessment is time-consuming and subject to inter-observer variation. We conducted a retrospective, multi-centre evaluation of a fully automated deep learning software (Carebot AI Bones, Spine Measurement functionality; Carebot s.r.o.) on 103 standing anteroposterior whole-spine radiographs collected from ten hospitals. Two musculoskeletal radiologists independently measured each study and served as reference readers. Agreement between the AI and each radiologist was assessed with Bland-Altman analysis, mean absolute error (MAE), root mean squared error (RMSE), Pearson correlation coefficient, and Cohen's kappa for four-grade severity classification. Against Radiologist 1, the AI achieved an MAE of 3.89 degrees (RMSE 4.77 degrees) with a bias of 0.70 degrees and limits of agreement from -8.59 to +9.99 degrees. Against Radiologist 2, the AI achieved an MAE of 3.90 degrees (RMSE 5.68 degrees) with a bias of 2.14 degrees and limits from -8.23 to +12.50 degrees. Pearson correlations were r = 0.906 and r = 0.880 (inter-reader r = 0.928), while Cohen's kappa for severity grading reached 0.51 and 0.64 (inter-reader kappa 0.59). These results demonstrate that the proposed software reproduces expert-level Cobb angle measurements and categorical grading across multiple centres, suggesting its utility for streamlining scoliosis reporting and triage in clinical workflows.
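For reference, the agreement statistics quoted above can be computed as in the sketch below; the Cobb-angle severity bin edges are an assumption for illustration, not taken from the paper.

```python
# Minimal sketch of Bland-Altman bias/limits, MAE, RMSE, Pearson r, and
# Cohen's kappa on a four-grade severity scale; bin edges are assumed.
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

def agreement(ai_deg, reader_deg):
    diff = ai_deg - reader_deg
    bias = diff.mean()
    sd = diff.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # Bland-Altman limits
    mae = np.abs(diff).mean()
    rmse = np.sqrt((diff ** 2).mean())
    r, _ = pearsonr(ai_deg, reader_deg)
    return bias, loa, mae, rmse, r

def severity_kappa(ai_deg, reader_deg, edges=(10, 25, 40)):  # assumed edges
    grade = lambda x: np.digitize(x, edges)      # 0..3 severity grades
    return cohen_kappa_score(grade(ai_deg), grade(reader_deg))
```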

Using Convolutional Neural Networks for the Classification of Suboptimal Chest Radiographs.

Liu EH, Carrion D, Badawy MK

PubMed paper · Jul 18 2025
Chest X-rays (CXR) rank among the most frequently conducted X-ray examinations. They often require repeat imaging due to inadequate quality, leading to increased radiation exposure and delays in patient care and diagnosis. This research assesses the efficacy of the DenseNet121 and YOLOv8 neural networks in detecting suboptimal CXRs, which may minimise delays and enhance patient outcomes. The study included 3587 patients with a median age of 67 (range 0-102). It utilised an initial dataset comprising 10,000 CXRs randomly divided into a training subset (4000 optimal and 4000 suboptimal) and a validation subset (400 optimal and 400 suboptimal). The test subset (25 optimal and 25 suboptimal) was curated from the remaining images to provide adequate variation. DenseNet121 and YOLOv8 were chosen for their capabilities in image classification: DenseNet121 is a robust, well-tested model in the medical field with high accuracy in object recognition, while YOLOv8 is a cutting-edge commercial model targeted at all industries. Their performance was assessed via the area under the receiver operating characteristic curve (AUROC) and compared to radiologist classification using the chi-squared test. DenseNet121 attained an AUROC of 0.97, while YOLOv8 recorded a score of 0.95, indicating a strong capability in differentiating between optimal and suboptimal CXRs. Agreement between the radiologists and the models varied, partly due to the lack of clinical indications; however, the difference in performance was not statistically significant. Both AI models effectively classified chest X-ray quality, demonstrating the potential to provide radiographers with feedback to improve image quality. Notably, this was the first study to include both PA and lateral CXRs as well as paediatric cases, and the first to evaluate YOLOv8 for this application.
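The model-versus-radiologist comparison mentioned above amounts to a chi-squared test on a contingency table of quality calls; a minimal sketch with made-up counts (not the study's data):

```python
# Minimal sketch: chi-squared test comparing two raters' optimal/suboptimal
# calls on the 50-image test subset. The counts below are illustrative.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[23, 27],    # model:       optimal, suboptimal
                  [25, 25]])   # radiologist: optimal, suboptimal
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # p > 0.05: no significant difference
```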

Pixel Perfect MegaMed: A Megapixel-Scale Vision-Language Foundation Model for Generating High Resolution Medical Images

Zahra TehraniNasab, Amar Kumar, Tal Arbel

arXiv preprint · Jul 17 2025
Medical image synthesis presents unique challenges due to the inherent complexity and high-resolution detail required in clinical contexts. Traditional generative architectures such as Generative Adversarial Networks (GANs) or Variational Auto-Encoders (VAEs) have shown great promise for high-resolution image generation but struggle to preserve the fine-grained details that are key for accurate diagnosis. To address this issue, we introduce Pixel Perfect MegaMed, the first vision-language foundation model to synthesize images at a resolution of 1024x1024. Our method deploys a multi-scale transformer architecture designed specifically for ultra-high-resolution medical image generation, enabling the preservation of both global anatomical context and local image-level details. By leveraging vision-language alignment techniques tailored to medical terminology and imaging modalities, Pixel Perfect MegaMed bridges the gap between textual descriptions and visual representations at unprecedented resolution levels. We apply our model to the CheXpert dataset and demonstrate its ability to generate clinically faithful chest X-rays from text prompts. Beyond visual quality, these high-resolution synthetic images prove valuable for downstream tasks such as classification, showing measurable performance gains when used for data augmentation, particularly in low-data regimes. Our code is accessible through the project website: https://tehraninasab.github.io/pixelperfect-megamed.

Exploring ChatGPT's potential in diagnosing oral and maxillofacial pathologies: a study of 123 challenging cases.

Tassoker M

PubMed paper · Jul 17 2025
This study aimed to evaluate the diagnostic performance of ChatGPT-4o, a large language model developed by OpenAI, in challenging cases of oral and maxillofacial diseases presented in the Clinicopathologic Conference section of the journal Oral Surgery, Oral Medicine, Oral Pathology, Oral Radiology. A total of 123 diagnostically challenging oral and maxillofacial cases published in the aforementioned journal were retrospectively collected. The case presentations, which included detailed clinical, radiographic, and sometimes histopathologic descriptions, were input into ChatGPT-4o. The model was prompted to provide a single most likely diagnosis for each case. These outputs were then compared to the final diagnoses established by expert consensus in each original case report. The accuracy of ChatGPT-4o was calculated based on exact diagnostic matches. ChatGPT-4o correctly diagnosed 96 out of 123 cases, achieving an overall diagnostic accuracy of 78%. Nevertheless, even in cases where the exact diagnosis was not provided, the model often suggested one of the clinically reasonable differential diagnoses. ChatGPT-4o demonstrates a promising ability to assist in the diagnostic process of complex maxillofacial conditions, with a relatively high accuracy rate in challenging cases. While it is not a replacement for expert clinical judgment, large language models may offer valuable decision support in oral and maxillofacial radiology, particularly in educational or consultative contexts.
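A study of this kind could be scripted against the OpenAI API roughly as below; the prompt wording and parameters are illustrative assumptions, and the paper does not state whether the web interface or the API was used.

```python
# Minimal sketch, assuming the openai>=1.0 Python SDK and an OPENAI_API_KEY
# in the environment; prompt wording is illustrative.
from openai import OpenAI

client = OpenAI()
case_text = "..."  # clinical, radiographic, and histopathologic description

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "You are an oral and maxillofacial pathology consultant."},
        {"role": "user",
         "content": "Give the single most likely diagnosis for this case:\n"
                    + case_text},
    ],
)
print(response.choices[0].message.content)  # compare against the CPC diagnosis
```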