Page 354 of 6646636 results

Nayyar A, Shrivastava R, Jain S

PubMed · Jul 21, 2025
Early and precise diagnosis is essential for effectively treating and managing pulmonary tuberculosis. The purpose of this research is to leverage artificial intelligence (AI), specifically convolutional neural networks (CNNs), to expedite the diagnosis of tuberculosis (TB) using chest X-ray (CXR) images. Mycobacterium tuberculosis, an aerobic bacterium, is the causative agent of TB. The disease remains a global health challenge, particularly in densely populated countries. Early detection via chest X-rays is crucial, but limited medical expertise hampers timely diagnosis. This study explores the application of CNNs, a highly efficient approach, for automated TB detection, especially in areas with limited medical expertise. Pre-trained models (VGG-16, VGG-19, ResNet-50, and Inception v3) were evaluated on the data. Each model's distinct design and capabilities facilitate effective feature extraction and classification in medical image analysis, particularly in TB diagnosis. VGG-16 and VGG-19 excel at identifying minute distinctions and hierarchical characteristics in CXR images, while ResNet-50 resists overfitting while retaining both low- and high-level features. Inception v3, with its capacity to extract multi-scale features, is well suited to examining the varied, complex patterns in a CXR image. Inception v3 outperformed the other models, attaining 97.60% accuracy without pre-processing and 98.78% with pre-processing. The proposed model shows promising results as a tool for improving TB diagnosis and reducing the global impact of the disease, but further validation with larger and more diverse datasets is needed.
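The abstract does not specify which pre-processing lifted accuracy from 97.60% to 98.78%; histogram equalization is a common contrast-enhancement choice for CXR images and serves here as an illustrative stand-in (the tiny "image" below is hypothetical):

```python
def histogram_equalize(img, levels=256):
    """Classic histogram equalization: remap gray levels so the cumulative
    distribution of intensities becomes approximately uniform."""
    flat = [p for row in img for p in row]
    n = len(flat)
    # Histogram of gray levels.
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    # Cumulative distribution function (CDF).
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    # Standard equalization formula, rounded to the nearest gray level.
    def remap(p):
        return round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
    return [[remap(p) for p in row] for row in img]

# Toy 2x2 image with a narrow intensity range gets stretched to full range.
print(histogram_equalize([[52, 55], [61, 59]]))  # → [[0, 85], [255, 170]]
```

In practice the equalized images would then be resized and fed to the pre-trained CNN backbone.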

Jassar S, Zhou Z, Leonard S, Youssef A, Probyn L, Kulasegaram K, Adams SJ

PubMed · Jul 21, 2025
The integration of artificial intelligence (AI) in radiology may necessitate refinement of the competencies expected of radiologists. There is currently a lack of understanding of what AI-related competencies radiology residency programs should ensure their graduates attain. This study aimed to identify what knowledge, skills, and attitudes are important for radiologists to use AI safely and effectively in clinical practice. Following Arksey and O'Malley's methodology, a scoping review was conducted by searching electronic databases (PubMed, Embase, Scopus, and ERIC) for articles published between 2010 and 2024. Two reviewers independently screened articles by title and abstract and subsequently by full-text review. Data were extracted using a standardized form to identify the knowledge, skills, and attitudes surrounding AI that may be important for its safe and effective use. Of 5920 articles screened, 49 met the inclusion criteria. Core competencies were related to AI model development, evaluation, clinical implementation, algorithm bias and handling discrepancies, regulation, ethics, medicolegal issues, and the economics of AI. While some papers proposed competencies focused on the technical development of AI algorithms, others centered competencies on the clinical implementation and use of AI. Current AI educational programming in radiology demonstrates substantial heterogeneity, with no consensus on the knowledge, skills, and attitudes needed for the safe and effective use of AI in radiology. Further research is needed to develop consensus on these core competencies to support the integration of AI training and assessment into residency programs.

Gong W, Li M, Wang S, Jiang Y, Wu J, Li X, Ma C, Luo H, Zhou H

PubMed · Jul 21, 2025
To validate the diagnostic performance of a B-mode ultrasound-based deep learning (DL) model in distinguishing benign from malignant cervical lymphadenopathy (CLP). A total of 210 CLPs with conclusive pathological results were retrospectively included and randomly split into training (n = 169) and test (n = 41) cohorts at a 4:1 ratio. A DL model integrating a convolutional neural network, a deformable convolution network, and an attention mechanism was developed. Three diagnostic models were compared: (a) Model I: CLPs with at least one suspicious B-mode ultrasound feature (ratio of longitudinal to short diameter < 2, irregular margin, hyper-echogenicity, hilus absence, cystic necrosis, or calcification) were deemed malignant; (b) Model II: a total risk score of B-mode ultrasound features obtained by multivariate logistic regression; and (c) Model III: CLPs with a positive DL output were deemed malignant. Diagnostic utility was assessed by the area under the receiver operating characteristic curve (AUC) and the corresponding sensitivity and specificity. Multivariate analysis indicated that a positive DL result was the factor most strongly associated with malignant CLPs [odds ratio (OR) = 39.05, p < 0.001], followed by hilus absence (OR = 6.01, p = 0.001) in the training cohort. In the test cohort, the AUC of the DL model (0.871) was significantly higher than that of Model I (AUC = 0.681, p = 0.04) and Model II (AUC = 0.679, p = 0.03). In addition, Model III achieved 93.3% specificity, significantly higher than Model I (40.0%, p = 0.002) and Model II (60.0%, p = 0.03). Although Model I had the highest sensitivity, the difference from Model III was not significant (96.2% vs. 80.8%, p = 0.083). B-mode ultrasound-based DL is a potentially robust tool for the differential diagnosis of benign and malignant CLPs.
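The Model I rule and the sensitivity/specificity figures above follow standard definitions; a minimal sketch with hypothetical node records and labels (1 = malignant):

```python
# Suspicious B-mode feature flags, mirroring the list in the abstract
# (field names here are hypothetical shorthand).
SUSPICIOUS = ["ls_ratio_lt_2", "irregular_margin", "hyper_echo",
              "hilus_absent", "cystic_necrosis", "calcification"]

def model_one(node):
    """Model I rule: deem a node malignant (1) if ANY suspicious feature is present."""
    return int(any(node.get(f, False) for f in SUSPICIOUS))

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical nodes: one with an irregular margin, one with no findings.
preds = [model_one({"irregular_margin": True}), model_one({})]  # [1, 0]
```

Model I's any-feature rule explains its high sensitivity but poor specificity: a single benign-looking node with one incidental feature is already called malignant.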

Ramschütz C, Kloth C, Vogele D, Baum T, Rühling S, Beer M, Jansen JU, Schlager B, Wilke HJ, Kirschke JS, Sollmann N

PubMed · Jul 21, 2025
To investigate lumbar vertebral volumetric bone mineral density (vBMD) from ex vivo opportunistic multi-detector computed tomography (MDCT) scans using different protocols, and compare it to dedicated quantitative CT (QCT) values from the same specimens. Cadavers from two female donors (ages 62 and 68 years) were scanned (L1-L5) using six different MDCT protocols and one dedicated QCT scan. Opportunistic vBMD was extracted using an artificial intelligence-based algorithm. The vBMD measurements from the six MDCT protocols, which varied in peak tube voltage (80-140 kVp), tube load (72-200 mAs), slice thickness (0.75-1 mm), and/or slice increment (0.5-0.75 mm), were compared to those obtained from dedicated QCT. A strong positive correlation was observed between vBMD from opportunistic MDCT and reference QCT (ρ = 0.869, p < 0.01). Agreement between vBMD measurements from MDCT protocols and the QCT reference standard according to the intraclass correlation coefficient (ICC) was 0.992 (95% confidence interval [CI]: 0.982-0.998). Bland-Altman analysis showed biases ranging from −12.66 to 8.00 mg/cm³ across the six MDCT protocols, with all data points falling within the respective limits of agreement (LOA) for both cadavers. Opportunistic vBMD measurements of lumbar vertebrae demonstrated reliable consistency ex vivo across various scan parameters when compared to dedicated QCT.
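The Bland-Altman bias and 95% limits of agreement used above can be sketched as follows (the paired measurements below are hypothetical, not the study's data):

```python
from statistics import mean, stdev

def bland_altman(method_a, method_b):
    """Bias (mean difference) and 95% limits of agreement (bias ± 1.96·SD)
    between two measurement methods, e.g. opportunistic MDCT vs. QCT vBMD."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical vBMD pairs in mg/cm^3.
bias, (lo, hi) = bland_altman([100.0, 110.0, 120.0], [98.0, 111.0, 119.0])
```

A point outside (lo, hi) would flag a vertebra where the two methods disagree beyond expected variation; the study reports all points fell within the LOA.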

Zhang, S.

medRxiv preprint · Jul 21, 2025
Classification between first-episode psychosis (FEP) patients and healthy controls is of particular interest to the study of schizophrenia. However, predicting psychosis from cognitive assessments alone is prone to human error and often lacks biological evidence to support the findings. In this work, we combined structural MRI and cognitive data in a multimodal machine learning approach to the detection of first-episode psychosis. For this purpose, we proposed a robust detection pipeline that explores the variables in a high-order feature space. We applied the pipeline to the Human Connectome Project for Early Psychosis (HCP-EP) dataset, with 108 participants with early psychosis and 47 controls. The pipeline demonstrated strong performance, with 74.67% balanced accuracy on this task. Further feature analysis shows that the model can identify verified causative biological factors for the occurrence of psychosis from volumetric MRI measurements, suggesting the potential of data-driven approaches in the search for neuroimaging biomarkers in future studies.
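Balanced accuracy, the headline metric above, is the mean of per-class recall, which matters here given the 108 vs. 47 class imbalance (plain accuracy would reward always predicting the majority class); a minimal sketch with hypothetical labels:

```python
def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recall; insensitive to class imbalance."""
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        idx = [i for i, t in enumerate(y_true) if t == c]
        correct = sum(1 for i in idx if y_pred[i] == c)
        recalls.append(correct / len(idx))
    return sum(recalls) / len(recalls)

# Imbalanced toy set: 4 patients (1) vs. 2 controls (0).
score = balanced_accuracy([1, 1, 1, 1, 0, 0], [1, 1, 1, 0, 0, 1])  # 0.625
```

Here plain accuracy would be 4/6 ≈ 0.667, but the weaker recall on the minority class pulls balanced accuracy down to 0.625.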

Perkonigg M, Bastati N, Ba-Ssalamah A, Mesenbrink P, Goehler A, Martic M, Zhou X, Trauner M, Langs G

PubMed · Jul 21, 2025
Quantifiable image patterns associated with disease progression and treatment response are critical tools for guiding individual treatment, and for developing novel therapies. Here, we show that unsupervised machine learning can identify a pattern vocabulary of liver tissue in magnetic resonance images that quantifies treatment response in diffuse liver disease. Deep clustering networks simultaneously encode and cluster patches of medical images into a low-dimensional latent space to establish a tissue vocabulary. The resulting tissue types capture differential tissue change and its location in the liver associated with treatment response. We demonstrate the utility of the vocabulary in a randomized controlled trial cohort of patients with nonalcoholic steatohepatitis. First, we use the vocabulary to compare longitudinal liver change in a placebo and a treatment cohort. Results show that the method identifies specific liver tissue change pathways associated with treatment and enables a better separation between treatment groups than established non-imaging measures. Moreover, we show that the vocabulary can predict biopsy derived features from non-invasive imaging data. We validate the method in a separate replication cohort to demonstrate the applicability of the proposed method.
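The paper's deep clustering networks encode and cluster jointly in a learned latent space; as a simplified stand-in for the clustering half alone, plain Lloyd's k-means over pre-computed patch embeddings illustrates how patches get assigned to a tissue "vocabulary" (the 2-D points and k below are hypothetical):

```python
def kmeans(points, k, iters=10):
    """Plain Lloyd's k-means; the first k points seed the centroids."""
    centroids = [list(p) for p in points[:k]]
    for _ in range(iters):
        # Assign each point to its nearest centroid (squared Euclidean distance).
        labels = [min(range(k),
                      key=lambda c: sum((a - b) ** 2
                                        for a, b in zip(p, centroids[c])))
                  for p in points]
        # Move each centroid to the mean of its assigned points.
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = [sum(dim) / len(members) for dim in zip(*members)]
    return labels, centroids

# Two well-separated "tissue types" in a toy 2-D embedding space.
labels, cents = kmeans([(0, 0), (0, 1), (10, 10), (10, 11)], k=2)  # labels [0, 0, 1, 1]
```

In the paper the encoder and the cluster assignments are optimized together, so the latent space itself adapts to make tissue types separable; this sketch only shows the assignment step.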

Guarnier, G., Reinelt, J., Molloy, E. N., Mihai, P. G., Einaliyan, P., Valk, S., Modestino, A., Ugolini, M., Mueller, K., Wu, Q., Babayan, A., Castellaro, M., Villringer, A., Scherf, N., Thierbach, K., Schroeter, M. L., Alzheimer's Disease Neuroimaging Initiative, Australian Imaging Biomarkers and Lifestyle Flagship Study of Ageing, Frontotemporal Lobar Degeneration Neuroimaging Initiative

medRxiv preprint · Jul 21, 2025
Dementia is a complex condition whose multifaceted nature poses significant challenges in the diagnosis, prognosis, and treatment of patients. Despite the availability of large open-source datasets fueling a wealth of promising research, effective translation of preclinical findings to clinical practice remains difficult. This barrier is largely due to the complexity of unstructured and disparate preclinical and clinical data, which traditional analytical methods struggle to handle. Novel analytical techniques involving Deep Learning (DL), however, are gaining significant traction in this regard. Here, we investigated the potential of a cascaded multimodal DL-based system (TelDem) to integrate and analyze a large, heterogeneous dataset (n = 7,159 patients), applied to three clinically relevant use cases. Using a Cascaded Multi-Modal Mixing Transformer (CMT), we assessed TelDem's validity, and model explainability via a Cross-Modal Fusion Norm (CMFN), in (i) differential diagnosis between healthy individuals, AD, and three subtypes of frontotemporal lobar degeneration, (ii) disease staging from healthy cognition to mild cognitive impairment (MCI) and AD, and (iii) predicting progression from MCI to AD. Our findings show that the CMT enhances diagnostic and prognostic accuracy when incorporating multimodal data compared to unimodal modeling, and that cerebrospinal fluid (CSF) biomarkers play a key role in accurate model decision-making. These results reinforce the power of DL in tapping deeper into existing data, thereby accelerating preclinical dementia research by using clinically relevant information to disentangle complex dementia pathophysiology.
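The Cross-Modal Fusion Norm is the authors' technique and its exact formulation is not given here; as a loose illustration of norm-based modality attribution, one can compare the L2 norm each modality contributes to a fused representation (the modality names and embedding values below are hypothetical):

```python
def modality_norms(embeddings):
    """Per-modality L2 norm of fused embedding vectors -- a crude proxy for
    how strongly each modality contributes to the fused representation."""
    return {m: sum(x * x for x in v) ** 0.5 for m, v in embeddings.items()}

# Hypothetical fused contributions: a CSF branch dominating an MRI branch
# would be consistent with the paper's finding that CSF biomarkers drive
# model decisions.
norms = modality_norms({"mri": [3.0, 4.0], "csf": [6.0, 8.0]})  # mri 5.0, csf 10.0
```

A relative comparison (e.g. each norm over the sum of norms) then yields per-modality contribution fractions.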

Cohen, S., Shamai, G., Sabo, E., Cretu, A., Barshack, I., Goldman, T., Bar-Sela, G., Pearson, A. T., Huo, D., Howard, F. M., Kimmel, R., Mayer, C.

medRxiv preprint · Jul 21, 2025
The OncotypeDX 21-gene assay is a widely adopted tool for estimating recurrence risk and informing chemotherapy decisions in early-stage, hormone receptor-positive, HER2-negative breast cancer. Although informative, its high cost and long turnaround time limit accessibility and delay treatment in low- and middle-income countries, creating a need for alternative solutions. This study presents a deep learning-based approach for predicting OncotypeDX recurrence scores directly from hematoxylin and eosin-stained whole slide images. Our approach leverages a deep learning foundation model pre-trained on 171,189 slides via self-supervised learning, which is fine-tuned for our task. The model was developed and validated using five independent cohorts, three of them external. On the two external cohorts that include OncotypeDX scores, the model achieved AUCs of 0.825 and 0.817, and identified 21.9% and 25.1% of patients as low-risk with sensitivities of 0.97 and 0.95 and negative predictive values of 0.97 and 0.96, showing strong generalizability despite variations in staining protocols and imaging devices. Kaplan-Meier analysis demonstrated that patients classified as low-risk by the model had a significantly better prognosis than those classified as high-risk, with hazard ratios of 4.1 (P < 0.001) and 2.0 (P < 0.01) on the two external cohorts that include patient outcomes. This artificial intelligence-driven solution offers a rapid, cost-effective, and scalable alternative to genomic testing, with the potential to enhance personalized treatment planning, especially in resource-constrained settings.
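A low-risk triage rule like the one evaluated above picks a score cutoff that keeps sensitivity for high-risk cases at a target level, then reports what fraction of patients can be triaged low-risk and the NPV in that group. A sketch with hypothetical scores and labels (note that in practice the cutoff must be fixed on a tuning set, not the evaluation set, to avoid optimism):

```python
def low_risk_triage(scores, labels, target_sens=0.95):
    """Find the largest cutoff t such that calling scores <= t 'low-risk'
    keeps sensitivity for true high-risk cases (label 1) >= target_sens.
    Returns (cutoff, fraction triaged low-risk, sensitivity, NPV)."""
    best = None
    positives = sum(labels)
    for t in sorted(set(scores)):
        low = [(s, l) for s, l in zip(scores, labels) if s <= t]
        missed = sum(1 for _, l in low if l == 1)   # high-risk cases triaged away
        sens = 1 - missed / positives
        if sens >= target_sens:
            npv = sum(1 for _, l in low if l == 0) / len(low)
            best = (t, len(low) / len(scores), sens, npv)
    return best

# Hypothetical model scores and ground-truth risk labels.
result = low_risk_triage([0.1, 0.2, 0.3, 0.8, 0.9], [0, 0, 1, 1, 1])
```

With these toy numbers the rule triages 40% of patients low-risk at perfect sensitivity and NPV; the study's external-cohort figures (21.9%/25.1% triaged, NPV 0.97/0.96) follow the same logic.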

Gadgil, S. U., DeGrave, A. J., Janizek, J. D., Xu, S., Nwandu, L., Fonjungo, F., Lee, S.-I., Daneshjou, R.

medRxiv preprint · Jul 21, 2025
Recent advances in Artificial Intelligence (AI) have begun to disrupt the healthcare industry, especially medical imaging, and AI devices are increasingly being deployed in clinical practice. Such classifiers have previously demonstrated the ability to discern a range of protected demographic attributes (such as race, age, and sex) from medical images with unexpectedly high performance, a task that is difficult even for trained physicians. In this study, we motivate and introduce a general explainable AI (XAI) framework called DREAM (DiscoveRing and Explaining AI Mechanisms) for interpreting how AI models trained on medical images predict protected attributes. Focusing on two modalities, radiology and dermatology, we successfully train high-performing classifiers for predicting race from chest X-rays (ROC-AUC of ~0.96) and sex from dermoscopic lesions (ROC-AUC of ~0.78). We highlight how incorrect use of these demographic shortcuts can have a detrimental effect on the performance of a clinically relevant downstream task such as disease diagnosis under a domain shift. Further, we employ various XAI techniques to identify specific signals which can be leveraged to predict sex. Finally, we propose a technique, which we call removal via balancing, to quantify how much a signal contributes to classification performance. Using this technique and the signals identified, we are able to explain ~15% of the total performance for radiology and ~42% for dermatology. We envision DREAM being broadly applicable to other modalities and demographic attributes. This analysis not only underscores the importance of cautious AI application in healthcare but also opens avenues for improving the transparency and reliability of AI-driven diagnostic tools.
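Removal via balancing is the authors' technique and its exact construction is not given in the abstract; one plausible reading, sketched here under that assumption, is to compare classifier AUC on the full evaluation set against a subset in which the candidate signal is balanced within each class, so the signal carries no information about the label (scores, labels, and signal flags below are hypothetical):

```python
def auc(scores, labels):
    """Rank-based ROC-AUC: probability a positive outscores a negative (ties count 0.5)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def removal_via_balancing(scores, labels, signal):
    """AUC on the full set vs. a subset where a binary signal is balanced
    within each class; the drop estimates how much that signal contributes."""
    full = auc(scores, labels)
    balanced = []
    for cls in (0, 1):
        with_sig = [i for i, (l, g) in enumerate(zip(labels, signal)) if l == cls and g == 1]
        without = [i for i, (l, g) in enumerate(zip(labels, signal)) if l == cls and g == 0]
        m = min(len(with_sig), len(without))
        balanced += with_sig[:m] + without[:m]
    bal = auc([scores[i] for i in balanced], [labels[i] for i in balanced])
    return full, bal, full - bal

# Toy data where the classifier score partly tracks the signal.
full, bal, drop = removal_via_balancing(
    [0.9, 0.8, 0.4, 0.2, 0.3, 0.7], [1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 0, 1])
```

When the classifier leans on the signal as a shortcut, AUC drops on the balanced subset; the drop relative to full performance gives the "explained" fraction the abstract reports.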

Tanjin Taher Toma, Tejas Sudharshan Mathai, Bikash Santra, Pritam Mukherjee, Jianfei Liu, Wesley Jong, Darwish Alabyad, Vivek Batheja, Abhishek Jha, Mayank Patel, Darko Pucar, Jayadira del Rivero, Karel Pacak, Ronald M. Summers

arXiv preprint · Jul 21, 2025
Accurate segmentation of pheochromocytoma (PCC) in abdominal CT scans is essential for tumor burden estimation, prognosis, and treatment planning. It may also help infer genetic clusters, reducing reliance on expensive testing. This study systematically evaluates anatomical priors to identify configurations that improve deep learning-based PCC segmentation. We employed the nnU-Net framework to evaluate eleven annotation strategies for accurate 3D segmentation of pheochromocytoma, introducing a set of novel multi-class schemes based on organ-specific anatomical priors. These priors were derived from adjacent organs commonly surrounding adrenal tumors (e.g., liver, spleen, kidney, aorta, adrenal gland, and pancreas), and were compared against a broad body-region prior used in previous work. The framework was trained and tested on 105 contrast-enhanced CT scans from 91 patients at the NIH Clinical Center. Performance was measured using Dice Similarity Coefficient (DSC), Normalized Surface Distance (NSD), and instance-wise F1 score. Among all strategies, the Tumor + Kidney + Aorta (TKA) annotation achieved the highest segmentation accuracy, significantly outperforming the previously used Tumor + Body (TB) annotation across DSC (p = 0.0097), NSD (p = 0.0110), and F1 score (25.84% improvement at an IoU threshold of 0.5), measured on a 70-30 train-test split. The TKA model also showed superior tumor burden quantification (R^2 = 0.968) and strong segmentation across all genetic subtypes. In five-fold cross-validation, TKA consistently outperformed TB across IoU thresholds (0.1 to 0.5), reinforcing its robustness and generalizability. These findings highlight the value of incorporating relevant anatomical context into deep learning models to achieve precise PCC segmentation, offering a valuable tool to support clinical assessment and longitudinal disease monitoring in PCC patients.
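The Dice Similarity Coefficient reported above measures voxel overlap between a predicted mask and the reference annotation; a minimal sketch on flattened binary masks (the masks below are hypothetical):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|), with 1.0 for two empty masks by convention."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2 * inter / total if total else 1.0

# Two 4-voxel masks overlapping in one voxel.
score = dice([1, 1, 0, 0], [1, 0, 1, 0])  # 0.5
```

NSD and the instance-wise F1 score complement Dice by measuring boundary agreement and per-tumor detection, respectively, which voxel overlap alone does not capture.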