Page 2 of 2252246 results

Sureness of classification of breast cancers as pure ductal carcinoma <i>in situ</i> or with invasive components on dynamic contrast-enhanced magnetic resonance imaging: application of likelihood assurance metrics for computer-aided diagnosis.

Whitney HM, Drukker K, Edwards A, Giger ML

pubmed logopapers · Nov 1, 2025
Breast cancer may persist within milk ducts (ductal carcinoma <i>in situ</i>, DCIS) or advance into surrounding breast tissue (invasive ductal carcinoma, IDC). Occasionally, invasiveness is underestimated at biopsy, leading to adjustments in the treatment plan based on unexpected surgical findings. Artificial intelligence/computer-aided diagnosis (AI/CADx) techniques in medical imaging may have the potential to predict whether a lesion is purely DCIS or exhibits a mixture of IDC and DCIS components, serving as a valuable supplement to biopsy findings. Assessing variability on a lesion-by-lesion basis via likelihood assurance measures could add value to the evaluation of AI/CADx performance. We evaluated performance in the task of distinguishing between pure DCIS and mixed IDC/DCIS breast cancers using computer-extracted radiomic features from dynamic contrast-enhanced magnetic resonance imaging, applying 0.632+ bootstrapping (2000 folds) to 550 lesions (135 pure DCIS, 415 mixed IDC/DCIS). Lesion-based likelihood assurance was measured using a sureness metric based on the 95% confidence interval of the classifier output for each lesion. The median [95% CI] of the 0.632+-corrected area under the receiver operating characteristic curve for classifying lesions as pure DCIS or mixed IDC/DCIS was 0.81 [0.75, 0.86]. The sureness metric varied across the dataset, ranging from 0.0002 (low sureness) to 0.96 (high sureness), with some lesions showing every combination of high or low classifier output with high or low sureness. Sureness metrics can provide additional insight into the ability of CADx algorithms to preoperatively predict whether a lesion is invasive.
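The abstract describes the sureness metric only as "based on the 95% confidence interval of the classifier output for each lesion." A minimal sketch of one way such a score could be computed from bootstrap outputs follows; the function name and the 1-minus-interval-width definition are illustrative assumptions, not the authors' exact formula:

```python
import numpy as np

def lesion_sureness(bootstrap_outputs, alpha=0.05):
    """Per-lesion sureness from the spread of classifier outputs across
    bootstrap folds: 1 minus the width of the central (1 - alpha)
    interval, so a tight interval gives a sureness near 1."""
    lo = np.percentile(bootstrap_outputs, 100 * alpha / 2, axis=0)
    hi = np.percentile(bootstrap_outputs, 100 * (1 - alpha / 2), axis=0)
    return 1.0 - (hi - lo)

# toy data: 2000 bootstrap outputs (rows) for 3 lesions (columns)
rng = np.random.default_rng(0)
outputs = np.clip(np.column_stack([
    rng.normal(0.80, 0.01, 2000),  # high output, tight -> high sureness
    rng.normal(0.50, 0.20, 2000),  # ambiguous, wide -> low sureness
    rng.normal(0.20, 0.05, 2000),  # low output, fairly tight
]), 0.0, 1.0)
print(np.round(lesion_sureness(outputs), 2))
```

Under this definition, classifier output and sureness are independent axes, which matches the abstract's observation that lesions fall into all four high/low combinations.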

Artificial intelligence in medical imaging diagnosis: are we ready for its clinical implementation?

Ramos-Soto O, Aranguren I, Carrillo M M, Oliva D, Balderas-Mata SE

pubmed logopapers · Nov 1, 2025
We examine the transformative potential of artificial intelligence (AI) in medical imaging diagnosis, focusing on improving diagnostic accuracy and efficiency through advanced algorithms, and we address the significant challenges preventing immediate clinical adoption of AI from technical, ethical, and legal perspectives. The aim is to highlight the current state of AI in medical imaging and outline the steps necessary to ensure safe, effective, and ethically sound clinical implementation. We conduct a comprehensive discussion, with special emphasis on the technical requirements for robust AI models, the ethical frameworks needed for responsible deployment, and the legal implications, including data privacy and regulatory compliance. Explainable artificial intelligence (XAI) is examined as a means to increase transparency and build trust among healthcare professionals and patients. The analysis reveals key challenges to AI integration in clinical settings, including the need for extensive high-quality datasets, model reliability, advanced infrastructure, and compliance with regulatory standards. The lack of explainability in AI outputs remains a barrier, with XAI identified as crucial for meeting transparency standards and enhancing trust among end users. Overcoming these barriers requires a collaborative, multidisciplinary approach to integrating AI into clinical practice responsibly. Addressing technical, ethical, and legal issues will support a smoother transition, fostering a more accurate, efficient, and patient-centered healthcare system in which AI augments traditional medical practices.

TFKT V2: task-focused knowledge transfer from natural images for computed tomography perceptual image quality assessment.

Rifa KR, Ahamed MA, Zhang J, Imran A

pubmed logopapers · Sep 1, 2025
The accurate assessment of computed tomography (CT) image quality is crucial for ensuring diagnostic reliability while minimizing radiation dose. Radiologists' evaluations are time-consuming and labor-intensive, and existing automated approaches often require large CT datasets with predefined image quality assessment (IQA) scores that do not align well with clinical evaluations. We aim to develop a reference-free, automated method for CT IQA that closely reflects radiologists' evaluations, reducing the dependency on large annotated datasets. We propose Task-Focused Knowledge Transfer (TFKT), a deep learning-based IQA method leveraging knowledge transfer from task-similar natural image datasets. TFKT incorporates a hybrid convolutional neural network-transformer model, enabling accurate quality predictions by learning from natural image distortions with human-annotated mean opinion scores. The model is pre-trained on natural image datasets and fine-tuned on low-dose CT perceptual image quality assessment data to ensure task-specific adaptability. Extensive evaluations demonstrate that TFKT effectively predicts IQA scores aligned with radiologists' assessments on in-domain datasets and generalizes well to out-of-domain clinical pediatric CT exams. The model achieves robust performance without requiring high-dose reference images and can assess the quality of <math xmlns="http://www.w3.org/1998/Math/MathML"><mrow><mo>∼</mo> <mn>30</mn></mrow> </math> CT image slices per second. TFKT thus provides a scalable, accurate, and reference-free solution for CT IQA, bridging the gap between traditional and deep learning-based IQA and offering clinically relevant, computationally efficient assessments applicable to real-world clinical settings.

Role of Brain Age Gap as a Mediator in the Relationship Between Cognitive Impairment Risk Factors and Cognition.

Tan WY, Huang X, Huang J, Robert C, Cui J, Chen CPLH, Hilal S

pubmed logopapers · Jul 22, 2025
Cerebrovascular disease (CeVD) and cognitive impairment risk factors contribute to cognitive decline, but the role of brain age gap (BAG) in mediating this relationship remains unclear, especially in Southeast Asian populations. This study investigated the influence of cognitive impairment risk factors on cognition and examined how BAG mediates this relationship, particularly in individuals with varying CeVD burden. This cross-sectional study analyzed Singaporean community and memory clinic participants. Cognitive impairment risk factors were assessed using the Cognitive Impairment Scoring System (CISS), encompassing 11 sociodemographic and vascular factors. Cognition was assessed through a neuropsychological battery evaluating global cognition and 6 cognitive domains: executive function, attention, memory, language, visuomotor speed, and visuoconstruction. Brain age was derived from structural MRI features using an ensemble machine learning model. Propensity score matching balanced risk profiles between the model training sample and the remaining sample. Structural equation modeling examined the mediation effect of BAG on the CISS-cognition relationship, stratified by CeVD burden (high: CeVD+; low: CeVD-). The study included 1,437 individuals without dementia, with 646 in the matched sample (mean age 66.4 ± 6.0 years, 47% female, 60% with no cognitive impairment). Higher CISS was consistently associated with poorer cognitive performance across all domains, with the strongest negative associations in visuomotor speed (β = -2.70, <i>p</i> < 0.001) and visuoconstruction (β = -3.02, <i>p</i> < 0.001). In the CeVD+ group, BAG significantly mediated the relationship between CISS and global cognition (proportion mediated: 19.95%, <i>p</i> = 0.01), with the strongest mediation effects in executive function (34.1%, <i>p</i> = 0.03) and language (26.6%, <i>p</i> = 0.008). BAG also mediated the relationship between CISS and memory (21.1%) and visuoconstruction (14.4%) in the CeVD+ group, but these effects diminished after statistical adjustments. Our findings suggest that BAG is a key intermediary linking cognitive impairment risk factors to cognitive function, particularly in individuals with high CeVD burden. This mediation effect is domain-specific, with executive function, language, and visuoconstruction being most vulnerable to accelerated brain aging. Limitations of this study include the cross-sectional design, which precludes causal inference, and the focus on Southeast Asian populations, which limits generalizability. Future longitudinal studies should verify these relationships and explore additional factors not captured in our model.
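The "proportion mediated" figures reported here come from standard mediation analysis: the indirect effect through the mediator divided by the total effect. A toy sketch of the arithmetic, with path coefficients invented purely for illustration (they are not estimates from the study):

```python
def proportion_mediated(a, b, c_direct):
    """Proportion mediated in a simple X -> M -> Y model:
    indirect effect (a*b) over total effect (a*b + c_direct).
    a: X -> M path; b: M -> Y path given X; c_direct: X -> Y path given M."""
    indirect = a * b
    return indirect / (indirect + c_direct)

# hypothetical paths: CISS -> BAG = 0.4, BAG -> cognition = -0.5,
# direct CISS -> cognition = -1.0
print(round(proportion_mediated(0.4, -0.5, -1.0), 3))  # -> 0.167
```

In practice, structural equation modeling packages estimate these paths jointly and bootstrap the confidence interval of the ratio, but the reported percentage reduces to this quotient.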

Deep Learning for Detecting and Subtyping Renal Cell Carcinoma on Contrast-Enhanced CT Scans Using 2D Neural Network with Feature Consistency Techniques.

Gupta A, Dhanakshirur RR, Jain K, Garg S, Yadav N, Seth A, Das CJ

pubmed logopapers · Jul 1, 2025
<b>Objective</b>  The aim of this study was to explore an innovative approach to developing a deep learning (DL) algorithm for renal cell carcinoma (RCC) detection and subtyping on computed tomography (CT): clear cell RCC (ccRCC) versus non-ccRCC, using a two-dimensional (2D) neural network architecture and feature consistency modules. <b>Materials and Methods</b>  This retrospective study included baseline CT scans from 196 histopathologically proven RCC patients: 143 ccRCCs and 53 non-ccRCCs. Manual tumor annotations were performed on axial slices of corticomedullary phase images, serving as ground truth. After image preprocessing, the dataset was divided into training, validation, and testing subsets. The study tested multiple 2D DL architectures, with FocalNet-DINO demonstrating the highest effectiveness in detecting and classifying RCC. The study further incorporated spatial and class consistency modules to enhance prediction accuracy. Model performance was evaluated using free-response receiver operating characteristic curves, recall rates, specificity, accuracy, F1 scores, and area under the curve (AUC) scores. <b>Results</b>  The FocalNet-DINO architecture achieved the highest recall rate of 0.823 at 0.025 false positives per image (FPI) for RCC detection. The integration of spatial and class consistency modules into the architecture led to a 0.2% increase in recall rate at 0.025 FPI, along with improvements of 0.1% in both accuracy and AUC scores for RCC classification. These enhancements allowed detection of cancer in an additional 21 slices and reduced false positives in 126 slices. <b>Conclusion</b>  This study demonstrates high performance in RCC detection and classification using a DL algorithm that leverages 2D neural networks with spatial and class consistency modules, offering a novel, computationally simpler, and accurate approach to RCC characterization.
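"Recall at 0.025 FPI" is an operating point on the free-response ROC (FROC) curve: sensitivity at the score threshold whose false-positive count per image stays within the budget. A small sketch of how such a point could be computed from ranked detections (function and variable names are illustrative, not from the paper's code):

```python
import numpy as np

def recall_at_fpi(scores, is_tp, n_images, n_lesions, target_fpi):
    """Operating point on a FROC curve: sort detections by score and
    report sensitivity (TP / all true lesions) at the largest cutoff
    whose false-positive count per image stays <= target_fpi."""
    order = np.argsort(-scores)
    tp = np.cumsum(is_tp[order])
    fp = np.cumsum(~is_tp[order])
    ok = (fp / n_images) <= target_fpi
    return float(tp[ok][-1]) / n_lesions if ok.any() else 0.0

# toy example: 4 detections over 40 images, 5 true lesions in total
scores = np.array([0.9, 0.8, 0.7, 0.6])
is_tp = np.array([True, True, False, True])
print(recall_at_fpi(scores, is_tp, n_images=40, n_lesions=5,
                    target_fpi=0.025))  # -> 0.6
```

Note the denominator is all ground-truth lesions, not all detections, so missed lesions correctly depress recall.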

Establishment and evaluation of an automatic multi-sequence MRI segmentation model of primary central nervous system lymphoma based on the nnU-Net deep learning network method.

Wang T, Tang X, Du J, Jia Y, Mou W, Lu G

pubmed logopapers · Jul 1, 2025
Accurate quantitative assessment using gadolinium contrast-enhanced magnetic resonance imaging (MRI) is crucial in therapy planning, surveillance, and prognostic assessment of primary central nervous system lymphoma (PCNSL). The present study aimed to develop a multimodal artificial intelligence deep learning segmentation model to address the challenges associated with traditional 2D measurements and manual volume assessments in MRI. Data from 49 pathologically confirmed patients with PCNSL from six Chinese medical centers were analyzed, and regions of interest were manually segmented on contrast-enhanced T1-weighted and T2-weighted MRI scans for each patient, followed by fully automated voxel-wise segmentation of tumor components using a 3-dimensional convolutional deep neural network. The efficiency of the model was evaluated using practical indicators, and its consistency and accuracy were compared with traditional methods. Model performance was assessed using the Dice similarity coefficient (DSC). The Mann-Whitney U test was used to compare continuous clinical variables, and the χ<sup>2</sup> test was used for comparisons between categorical clinical variables. T1WI sequences exhibited the best performance (training Dice: 0.923, testing Dice: 0.830, outer validation Dice: 0.801), while T2WI performed relatively poorly (training Dice: 0.761, testing Dice: 0.647, outer validation Dice: 0.643). In conclusion, the automatic multi-sequence MRI segmentation model for PCNSL displayed a high spatial overlap ratio and similar tumor volumes relative to routine manual segmentation, indicating significant potential.
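The Dice similarity coefficient used to score these segmentations can be sketched in a few lines; this is the generic definition (2·|A∩B| / (|A|+|B|)) on binary masks, not the authors' implementation:

```python
import numpy as np

def dice(pred, gt, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|); eps guards against empty masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

# toy 2D masks: ground truth covers 4 pixels, prediction covers 6,
# and they overlap on 4 -> Dice = 2*4 / (6+4) = 0.8
gt = np.zeros((4, 4)); gt[1:3, 1:3] = 1
pred = np.zeros((4, 4)); pred[1:3, 1:4] = 1
print(round(dice(pred, gt), 3))  # -> 0.8
```

The same formula applies voxel-wise to 3D volumes, which is how the training/testing/validation Dice values above would be computed.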

[A deep learning method for differentiating nasopharyngeal carcinoma and lymphoma based on MRI].

Tang Y, Hua H, Wang Y, Tao Z

pubmed logopapers · Jul 1, 2025
<b>Objective:</b>To develop a deep learning (DL) model based on conventional MRI for automatic segmentation and differential diagnosis of nasopharyngeal carcinoma (NPC) and nasopharyngeal lymphoma (NPL). <b>Methods:</b>This retrospective study included 142 patients with NPL and 292 patients with NPC who underwent conventional MRI at Renmin Hospital of Wuhan University from June 2012 to February 2023. MRI from 80 patients were manually segmented to train the segmentation model. The automatically segmented regions of interest (ROIs) formed four datasets: T1-weighted images (T1WI), T2-weighted images (T2WI), T1-weighted contrast-enhanced images (T1CE), and a combination of T1WI and T2WI. The ImageNet-pretrained ResNet101 model was fine-tuned for the classification task. Statistical analysis was conducted using SPSS 22.0. The Dice coefficient loss was used to evaluate the performance of the segmentation task. Diagnostic performance was assessed using receiver operating characteristic (ROC) curves. Gradient-weighted class activation mapping (Grad-CAM) was used to visualize the model's decisions. <b>Results:</b>The Dice score of the segmentation model reached 0.876 in the testing set. The AUC values of the classification models in the testing set were as follows: T1WI, 0.78 (95%<i>CI</i> 0.67-0.81); T2WI, 0.75 (95%<i>CI</i> 0.72-0.86); T1CE, 0.84 (95%<i>CI</i> 0.76-0.87); and T1WI+T2WI, 0.93 (95%<i>CI</i> 0.85-0.94). The AUC values for the two clinicians were 0.77 (95%<i>CI</i> 0.72-0.82) for the junior and 0.84 (95%<i>CI</i> 0.80-0.89) for the senior. Grad-CAM analysis revealed that the central region of the tumor was highly correlated with the model's classification decisions, while the correlation was lower in the peripheral regions. <b>Conclusion:</b>The deep learning model performed well in differentiating NPC from NPL based on conventional MRI, with the T1WI+T2WI combination model exhibiting the best performance. The model can assist in the early diagnosis of NPC and NPL, facilitating timely and standardized treatment, which may improve patient prognosis.
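The AUC values compared above (model sequences versus junior and senior clinicians) all rest on the ROC curve. A compact sketch of AUC via the Mann-Whitney rank identity, on toy data rather than the study's predictions:

```python
import numpy as np

def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney rank identity:
    the probability that a random positive outscores a random negative
    (ties count half). Pairwise version; fine for toy-sized data."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

labels = [1, 1, 0, 0, 1, 0]          # 1 = NPC, 0 = NPL (toy)
scores = [0.9, 0.5, 0.6, 0.3, 0.8, 0.4]
print(round(auc(labels, scores), 3))  # -> 0.889
```

An AUC of 0.93 for the T1WI+T2WI model versus 0.84 for the senior clinician means the model ranks a random NPC above a random NPL more often than the clinician does.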

Predicting Primary Graft Dysfunction in Systemic Sclerosis Lung Transplantation Using Machine-Learning and CT Features.

Singh J, Meng X, Leader JK, Ryan J, Pu L, Deitz R, Chan EG, Shigemura N, Hage CA, Sanchez PG, Pu J

pubmed logopapers · Jul 1, 2025
Primary graft dysfunction (PGD) is a significant barrier to survival in lung transplant (LTx) recipients. PGD in patients with systemic sclerosis (SSc) remains especially underrepresented in research. We investigated 92 SSc recipients (mean age, 51 ± 10 years) who underwent bilateral LTx between 2007 and 2020. PGD was defined as grade 3 PGD at 72 h post-LTx. A comprehensive set of CT image features was automatically computed from recipient chest CT scans using deep learning algorithms. Volumetric analysis of recipients' lungs and chest cavity was used to estimate lung-size matching. Four machine learning (ML) algorithms were developed to predict PGD: multivariate logistic regression, support vector machine (SVM), random forest classifier (RFC), and multilayer perceptron (MLP). PGD was significantly associated with BMI >30 kg/m<sup>2</sup> (p = 0.009), African American race (p = 0.011), lower Preop FEV1 (p = 0.002) and FVC (p = 0.004), longer waitlist time (p = 0.014), higher lung allocation score (LAS) (p = 0.028), and interstitial lung disease (p = 0.050). From CT analysis, PGD was significantly associated with decreased lung volume (p < 0.001), increased heart-chest cavity volume ratio (p < 0.001), epicardial (p = 0.033) and total heart (p = 0.049) adipose tissue, and five cardiopulmonary features (p < 0.050). Oversized donor allografts estimated using CT analysis were significantly associated with PGD (p < 0.050). The MLP model achieved a maximum AUROC of 0.85 (95% CI: 0.81-0.88) in predicting PGD with four features: Preop FEV1, heart-chest cavity volume ratio, waitlist time, and donor to recipient chest cavity volume ratio. CT-derived features are significantly associated with PGD, and models incorporating these features can predict PGD in SSc recipients.
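The volumetric size-matching features above reduce to ratios of segmented volumes. A hedged sketch of two of them, heart-to-chest-cavity ratio and donor-to-recipient chest-cavity ratio; `VOXEL_ML` and the function names are hypothetical, and in practice the voxel volume comes from the scan's spacing metadata:

```python
import numpy as np

VOXEL_ML = 0.5  # hypothetical voxel volume in mL; depends on scan spacing

def volume_ml(mask, voxel_ml=VOXEL_ML):
    """Volume of a binary segmentation mask in millilitres."""
    return float(mask.sum()) * voxel_ml

def size_match_features(heart_mask, recipient_cavity_mask, donor_cavity_ml):
    """Two CT-derived features the abstract links to PGD:
    heart-to-chest-cavity volume ratio, and donor-to-recipient
    chest-cavity volume ratio (>1 suggests an oversized allograft)."""
    cavity_ml = volume_ml(recipient_cavity_mask)
    return {
        "heart_cavity_ratio": volume_ml(heart_mask) / cavity_ml,
        "donor_recipient_ratio": donor_cavity_ml / cavity_ml,
    }

# toy masks: heart fills 1000 voxels, recipient cavity 8000 voxels
heart = np.zeros((20, 20, 20)); heart[:10, :10, :10] = 1
cavity = np.ones((20, 20, 20))
print(size_match_features(heart, cavity, donor_cavity_ml=4800.0))
```

Scalar features like these can then be fed alongside clinical variables into the logistic regression, SVM, RFC, or MLP classifiers the study compares.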
