
Artificial intelligence applications for the diagnosis of pulmonary nodules.

Ost DE

PubMed · May 6, 2025
This review evaluates the role of artificial intelligence (AI) in diagnosing solitary pulmonary nodules (SPNs), focusing on clinical applications and limitations in pulmonary medicine. It explores AI's utility in imaging and blood/tissue-based diagnostics, emphasizing practical challenges over the technical details of deep learning methods. AI enhances computed tomography (CT)-based computer-aided diagnosis (CAD) through steps such as nodule detection, false-positive reduction, segmentation, and classification, leveraging convolutional neural networks and machine learning. Segmentation achieves Dice similarity coefficients of 0.70-0.92, while malignancy classification yields areas under the curve (AUC) of 0.86-0.97. AI-driven blood tests, incorporating RNA sequencing and clinical data, report AUCs up to 0.907 for distinguishing benign from malignant nodules. However, most models lack prospective, multi-institutional validation, risking overfitting and limited generalizability. The "black box" nature of AI, coupled with inputs (e.g., nodule size, smoking history) that overlap with physician assessments, complicates integration into clinical workflows and precludes standard Bayesian analysis. AI shows promise for SPN diagnosis but requires rigorous validation in diverse populations and better clinician training for effective use. Rather than replacing judgment, AI should serve as a second opinion, with its reported performance metrics understood as study-specific and not directly applicable at the bedside because of double-counting issues.
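
Two of the headline metrics quoted here, the Dice similarity coefficient for segmentation and the area under the ROC curve for malignancy classification, can be computed directly. A minimal Python sketch with toy arrays (illustrative only, not data from the review):

    import numpy as np
    from sklearn.metrics import roc_auc_score

    def dice(pred, truth):
        """Dice similarity coefficient between two binary masks."""
        pred, truth = pred.astype(bool), truth.astype(bool)
        inter = np.logical_and(pred, truth).sum()
        return 2.0 * inter / (pred.sum() + truth.sum())

    # Toy 2D nodule masks (1 = nodule pixel)
    truth = np.zeros((8, 8), dtype=int); truth[2:6, 2:6] = 1
    pred = np.zeros((8, 8), dtype=int); pred[3:7, 3:7] = 1
    print(f"Dice = {dice(pred, truth):.2f}")   # 0.56 for this toy overlap

    # Malignancy classification AUC from predicted probabilities (1 = malignant)
    y_true = [0, 0, 1, 1, 1, 0, 1]
    y_prob = [0.1, 0.4, 0.8, 0.7, 0.9, 0.3, 0.6]
    print(f"AUC  = {roc_auc_score(y_true, y_prob):.2f}")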

Corticospinal tract reconstruction with tumor by using a novel direction filter based tractography method.

Zeng Q, Xia Z, Huang J, Xie L, Zhang J, Huang S, Xing Z, Zhuge Q, Feng Y

PubMed · May 6, 2025
The corticospinal tract (CST) is the primary neural pathway responsible for voluntary motor functions, and preoperative CST reconstruction is crucial for preserving nerve function during neurosurgery. Diffusion magnetic resonance imaging-based tractography is the only noninvasive method for preoperative CST reconstruction in clinical practice. However, for a large bundle with complex fiber geometry (fanning fibers) such as the CST, reconstructing the full extent remains challenging for locally derived methods that do not incorporate global information, especially in the presence of tumors, where the mass effect and partial volume effect produce abnormal diffusion signals. In this work, a CST reconstruction tractography method based on a novel direction filter was proposed, designed to ensure robust CST reconstruction in clinical datasets with tumors. A direction filter based on a fourth-order differential equation was introduced for global direction estimation. By enforcing spatial consistency and leveraging anatomical prior knowledge, the direction filter was computed by minimizing the energy between the target directions and the initial fiber directions. On the basis of the CST directions obtained by the direction filter, fiber tracking was performed to reconstruct the fiber trajectories. Additionally, a deep learning-based method combined with tractography template prior information was employed to generate the regions of interest (ROIs) and initial fiber directions. Experimental results showed that the proposed method yields more valid connections, fewer no-connections, and the fewest broken and short-connected fibers. The proposed method offers an effective tool to improve CST-related surgical outcomes by optimizing tumor resection while preserving the CST.
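
The direction filter described here minimizes an energy that trades fidelity to the initial fiber directions against spatial consistency. The heavily simplified sketch below substitutes a second-order (Laplacian) smoothness term for the paper's fourth-order differential operator and omits the anatomical priors; it illustrates the energy-minimization idea only, not the authors' method:

    import numpy as np

    def smooth_direction_field(v0, lam=0.5, n_iter=100):
        """Denoise a grid of unit direction vectors by gradient descent on
        E(v) = ||v - v0||^2 + lam * ||grad v||^2 (spatial-consistency term).
        Simplified: a Laplacian stands in for the paper's fourth-order operator."""
        v = v0.copy()
        for _ in range(n_iter):
            # 6-neighbor discrete Laplacian over the three spatial axes
            lap = (-6 * v
                   + np.roll(v, 1, 0) + np.roll(v, -1, 0)
                   + np.roll(v, 1, 1) + np.roll(v, -1, 1)
                   + np.roll(v, 1, 2) + np.roll(v, -1, 2))
            v += 0.1 * ((v0 - v) + lam * lap)               # fidelity + smoothness step
            v /= np.linalg.norm(v, axis=-1, keepdims=True)  # re-project to unit length
        return v

    # Toy field: mostly-superior directions plus noise, shape (X, Y, Z, 3)
    rng = np.random.default_rng(0)
    v0 = np.tile([0.0, 0.0, 1.0], (10, 10, 10, 1)) + 0.3 * rng.standard_normal((10, 10, 10, 3))
    v0 /= np.linalg.norm(v0, axis=-1, keepdims=True)
    v = smooth_direction_field(v0)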

Enhancing Breast Cancer Detection Through Optimized Thermal Image Analysis Using PRMS-Net Deep Learning Approach.

Khan M, Su'ud MM, Alam MM, Karimullah S, Shaik F, Subhan F

PubMed · May 6, 2025
Breast cancer remains one of the most frequent and life-threatening cancers in females globally, underscoring the need for better early-stage diagnostics to improve therapy effectiveness and survival. This work enhances breast cancer assessment by employing progressive residual networks (PRN) and ResNet-50 within the framework of the Progressive Residual Multi-Class Support Vector Machine-Net (PRMS-Net). Built on deep learning concepts, this integration optimizes feature extraction and raises classification effectiveness, reaching a near-perfect 99.63% accuracy on our tests. These findings indicate that PRMS-Net can serve as an efficient and reliable diagnostic tool for early breast cancer detection, aiding radiologists in improving diagnostic accuracy and reducing false positives. Fivefold cross-validation, in which the data are partitioned into separate segments, was used to establish the architecture's reliability. The spread of precision, recall, and F1 scores depicted in the box plot further supports the model's sensitivity and specificity, both essential for limiting false-positive and false-negative cases in real clinical practice. The evaluation of the error distribution further validates the model's practical applicability in medical image processing. The combination of sensitive feature extraction and a sophisticated classification stage makes PRMS-Net a powerful tool for improving early breast cancer detection and subsequent patient prognosis.
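
The fivefold cross-validation protocol described here is standard and easy to reproduce. A hedged scikit-learn sketch, with a generic RBF-SVM standing in for PRMS-Net's classification stage and synthetic features standing in for deep features from thermal images (assumptions, not the authors' code):

    from sklearn.datasets import make_classification
    from sklearn.model_selection import StratifiedKFold, cross_validate
    from sklearn.svm import SVC

    # Placeholder features/labels; in the paper these would be PRN/ResNet-50
    # features extracted from breast thermal images.
    X, y = make_classification(n_samples=300, n_features=64, random_state=0)

    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    scores = cross_validate(
        SVC(kernel="rbf", probability=True),   # SVM stage stand-in
        X, y, cv=cv,
        scoring=("precision", "recall", "f1"),
    )
    for metric in ("test_precision", "test_recall", "test_f1"):
        print(metric, scores[metric].round(3))  # the per-fold values behind the box plot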

Deep Learning for Classification of Solid Renal Parenchymal Tumors Using Contrast-Enhanced Ultrasound.

Bai Y, An ZC, Du LF, Li F, Cai YY

PubMed · May 6, 2025
The purpose of this study is to assess the ability of deep learning models to classify different subtypes of solid renal parenchymal tumors using contrast-enhanced ultrasound (CEUS) images and to compare their classification performance. A retrospective study was conducted using CEUS images of 237 kidney tumors, including 46 angiomyolipomas (AML), 118 clear cell renal cell carcinomas (ccRCC), 48 papillary RCCs (pRCC), and 25 chromophobe RCCs (chRCC), collected from January 2017 to December 2019. Two deep learning models, based on the ResNet-18 and RepVGG architectures, were trained and validated to distinguish between these subtypes. The models' performance was assessed using sensitivity, specificity, positive predictive value, negative predictive value, F1 score, Matthews correlation coefficient, accuracy, area under the receiver operating characteristic curve (AUC), and confusion matrix analysis. Class activation mapping (CAM) was applied to visualize the specific regions that contributed to the models' predictions. The ResNet-18 and RepVGG-A0 models achieved overall accuracies of 76.7% and 84.5%, respectively, across the four subtypes. The AUCs for AML, ccRCC, pRCC, and chRCC were 0.832, 0.829, 0.806, and 0.795 for the ResNet-18 model, compared with 0.906, 0.911, 0.840, and 0.827 for the RepVGG-A0 model, respectively. The deep learning models could reliably differentiate between the histological subtypes of renal tumors using CEUS images in an objective and non-invasive manner.
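
Per-subtype AUCs such as those reported for AML, ccRCC, pRCC, and chRCC are typically computed one-vs-rest from the model's softmax outputs. A minimal sketch with toy labels and probabilities (not the study's data):

    import numpy as np
    from sklearn.metrics import roc_auc_score

    classes = ["AML", "ccRCC", "pRCC", "chRCC"]
    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 4, size=200)           # toy subtype labels
    y_prob = rng.dirichlet(np.ones(4), size=200)    # toy softmax outputs, shape (200, 4)

    # One-vs-rest AUC per subtype, as commonly reported for multi-class CEUS models
    for k, name in enumerate(classes):
        auc = roc_auc_score((y_true == k).astype(int), y_prob[:, k])
        print(f"{name}: AUC = {auc:.3f}")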

Deep Learning-Based CT-Less Cardiac Segmentation of PET Images: A Robust Methodology for Multi-Tracer Nuclear Cardiovascular Imaging.

Salimi Y, Mansouri Z, Nkoulou R, Mainta I, Zaidi H

PubMed · May 6, 2025
Quantitative cardiovascular PET/CT imaging is useful in the diagnosis of multiple cardiac perfusion and motion pathologies. The common approach to cardiac segmentation consists of using co-registered CT images and publicly available deep learning (DL)-based segmentation models. However, the mismatch between structural CT images and PET uptake limits the usefulness of these approaches, and the performance of DL models is not consistent on the low-dose or ultra-low-dose CT images commonly used in clinical PET/CT imaging. In this work, we developed a DL-based methodology to tackle this issue by directly segmenting cardiac PET images. This study included 406 cardiac PET images from 146 patients (43 ¹⁸F-FDG, 329 ¹³N-NH₃, and 37 ⁸²Rb images). Using nnU-Net models previously trained in our group, we segmented the whole heart and its three main components, namely the left myocardium (LM), left ventricle cavity (LV), and right ventricle (RV), on co-registered CT images. The segmentations were resampled to PET resolution and edited through a combination of automated image processing and manual correction. The corrected segmentation masks and SUV PET images were fed to an nnU-Net V2 pipeline trained with a fivefold data-split strategy on two tasks: task #1, whole-heart segmentation, and task #2, segmentation of the three cardiac components. Fifteen cardiac images were used as an external validation set. The DL-delineated masks were compared with standard-of-reference masks using the Dice coefficient, Jaccard distance, mean surface distance, and segment volume relative error (%). The task #1 average Dice coefficient in internal fivefold validation was 0.932 ± 0.033; the average Dice on the 15 external cases was comparable, reaching 0.941 ± 0.018. The task #2 average Dice in fivefold validation was 0.88 ± 0.063, 0.828 ± 0.091, and 0.876 ± 0.062 for the LM, LV, and RV, respectively. There was no statistically significant difference among the Dice coefficients, either between images acquired with the three radiotracers or between the different folds (P-values > 0.05). The overall average volume prediction error in cardiac component segmentation was less than 2%. We developed an automated DL-based segmentation pipeline that segments the whole heart and cardiac components with acceptable accuracy and robust performance on the external test set and across the three radiotracers used in nuclear cardiovascular imaging. The proposed methodology can overcome unreliable segmentations performed on CT images.
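
The comparison metrics named here (Dice, Jaccard, and segment volume relative error) can all be computed from a pair of binary masks in a few lines; mean surface distance is omitted for brevity. A sketch with toy masks (the study's masks and voxel spacings are not reproduced):

    import numpy as np

    def seg_metrics(pred, truth):
        """Overlap and volume-error metrics comparing a DL mask with the
        standard of reference (binary 3D arrays)."""
        pred, truth = pred.astype(bool), truth.astype(bool)
        inter = np.logical_and(pred, truth).sum()
        union = np.logical_or(pred, truth).sum()
        dice = 2.0 * inter / (pred.sum() + truth.sum())
        jaccard = inter / union
        vol_err = 100.0 * (pred.sum() - truth.sum()) / truth.sum()  # % relative error
        return dice, jaccard, vol_err

    truth = np.zeros((32, 32, 32)); truth[8:24, 8:24, 8:24] = 1   # toy LV cavity
    pred = np.zeros((32, 32, 32)); pred[9:24, 8:24, 8:24] = 1     # slightly under-segmented
    d, j, e = seg_metrics(pred, truth)
    print(f"Dice={d:.3f}  Jaccard={j:.3f}  volume error={e:+.1f}%")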

Stacking classifiers based on integrated machine learning model: fusion of CT radiomics and clinical biomarkers to predict lymph node metastasis in locally advanced gastric cancer patients after neoadjuvant chemotherapy.

Ling T, Zuo Z, Huang M, Ma J, Wu L

PubMed · May 6, 2025
The early prediction of lymph node positivity (LN+) after neoadjuvant chemotherapy (NAC) is crucial for optimizing individualized treatment strategies. This study aimed to integrate radiomic features and clinical biomarkers through machine learning (ML) approaches to enhance prediction accuracy in patients with locally advanced gastric cancer (LAGC). We retrospectively enrolled 277 patients with LAGC and randomly divided them into training (n = 193) and validation (n = 84) sets at a 7:3 ratio. In total, 1,130 radiomics features were extracted from pre-treatment portal venous phase computed tomography scans. These features were linearly combined into a radiomics score (rad score) through feature engineering. Then, using the rad score and clinical biomarkers as input features, we applied simple statistical strategies (relying on a single ML model) and integrated statistical strategies (classification-model integration techniques such as hard voting, soft voting, and stacking) to predict LN+ post-NAC. The diagnostic performance of the models was assessed using receiver operating characteristic curves with corresponding areas under the curve (AUC). Of all the ML models, the stacking classifier, an integrated statistical strategy, performed best, achieving an AUC of 0.859 for predicting LN+ in patients with LAGC. This predictive model was transformed into a publicly available online risk calculator. We developed a stacking classifier that integrates radiomics and clinical biomarkers to predict LN+ in patients with LAGC undergoing surgical resection, providing personalized treatment insights.
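
The winning strategy, a stacking classifier over the rad score and clinical biomarkers, maps directly onto scikit-learn's StackingClassifier. A sketch with synthetic features; the abstract does not name the base learners, so the three below are illustrative assumptions:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier, StackingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    # Toy stand-in for [rad score + clinical biomarkers] as input features
    X, y = make_classification(n_samples=277, n_features=8, random_state=0)
    X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

    stack = StackingClassifier(
        estimators=[("lr", LogisticRegression(max_iter=1000)),
                    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                    ("svc", SVC(probability=True, random_state=0))],
        final_estimator=LogisticRegression(max_iter=1000),  # meta-learner
        cv=5,
    )
    stack.fit(X_tr, y_tr)
    print("AUC:", roc_auc_score(y_va, stack.predict_proba(X_va)[:, 1]).round(3))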

Multi-task learning for joint prediction of breast cancer histological indicators in dynamic contrast-enhanced magnetic resonance imaging.

Sun R, Li X, Han B, Xie Y, Nie S

PubMed · May 6, 2025
Efficient analysis of multiple pathological indicators is of great significance for breast cancer prognosis and therapeutic decision-making. In this study, we explore a deep multi-task learning (MTL) framework for the collaborative prediction of histological grade and proliferation marker (Ki-67) status in breast cancer using multi-phase dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). In the proposed hybrid multi-task architecture (HMT-Net), co-representative features are explicitly distilled by a feature extraction backbone. A customized prediction network then performs soft-parameter sharing between the two correlated tasks: task-common and task-specific knowledge is transmitted into tower layers for informative interactions. Furthermore, low-level feature maps containing tumor edges and texture details are recaptured by a hard-parameter-sharing branch and incorporated into the tower layer of each subtask. Finally, the probabilities of the two histological indicators, predicted from the multi-phase DCE-MRI, are separately fused using a decision-level fusion strategy. Experimental results demonstrate that the proposed HMT-Net achieves the best discriminative performance among recent MTL architectures and deep models based on a single image series, with areas under the receiver operating characteristic curve of 0.908 for tumor grade and 0.694 for Ki-67 status. HMT-Net thus demonstrates strong robustness and flexibility in the collaborative prediction of breast biomarkers, and multi-phase DCE-MRI is expected to contribute valuable dynamic information for non-invasive breast cancer pathological assessment.
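
The core HMT-Net ideas, a shared backbone feeding one prediction head per indicator plus decision-level fusion across DCE-MRI phases, can be sketched compactly in PyTorch. This is a drastically simplified stand-in (no soft-parameter-sharing towers, no low-level feature branch), with toy tensors in place of real images:

    import torch
    import torch.nn as nn

    class TwoHeadNet(nn.Module):
        """Shared encoder with one head per histological indicator; a heavily
        simplified stand-in for HMT-Net's multi-task design."""
        def __init__(self):
            super().__init__()
            self.backbone = nn.Sequential(          # co-representative features
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.grade_head = nn.Linear(16, 2)      # tumor grade (low/high)
            self.ki67_head = nn.Linear(16, 2)       # Ki-67 status (+/-)

        def forward(self, x):
            z = self.backbone(x)
            return self.grade_head(z), self.ki67_head(z)

    net = TwoHeadNet()
    phases = [torch.randn(1, 1, 64, 64) for _ in range(3)]  # toy multi-phase DCE-MRI
    # Decision-level fusion: average per-phase softmax probabilities per task
    grade_p = torch.stack([net(p)[0].softmax(-1) for p in phases]).mean(0)
    ki67_p = torch.stack([net(p)[1].softmax(-1) for p in phases]).mean(0)
    print(grade_p, ki67_p)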

Real-time brain tumour diagnoses using a novel lightweight deep learning model.

Alnageeb MHO, M H S

PubMed · May 6, 2025
Brain tumours continue to be a primary cause of death worldwide, highlighting the critical need for effective and accurate diagnostic tools. This article presents MK-YOLOv8, a lightweight deep learning framework developed for the real-time detection and categorization of brain tumours from MRI images. Based on the YOLOv8 architecture, the proposed model incorporates Ghost Convolution, the C3Ghost module, and the SPPELAN module to improve feature extraction and substantially decrease computational complexity. An x-small object detection layer has been added, enabling precise detection of small and x-small tumours, which is crucial for early diagnosis. Trained on the Figshare Brain Tumour (FBT) dataset of 3,064 MRI images, MK-YOLOv8 achieved a mean Average Precision (mAP) of 99.1% at IoU 0.50 and 88.4% at IoU 0.50-0.95, outperforming YOLOv8 (98% and 78.8%, respectively). Glioma recall improved by 26%, underscoring enhanced sensitivity to challenging tumour types. With a computational footprint of only 96.9 GFLOPs (37.5% of YOLOv8x's GFLOPs) and 12.6 million parameters (a mere 18.5% of YOLOv8x's), MK-YOLOv8 delivers high efficiency with reduced resource demands. It was also trained on the Br35H dataset (801 images) to verify robustness and generalization, achieving a mAP of 98.6% at IoU 0.50. The model runs at 62 frames per second (FPS) and is suited to real-time clinical workflows. These developments establish MK-YOLOv8 as an innovative framework that overcomes the challenges of small-tumour identification and provides a generalizable, adaptable, and precise detection approach for brain tumour diagnostics in clinical settings.
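
MK-YOLOv8 itself is the authors' custom architecture and is not assumed to be publicly available; the sketch below uses a stock YOLOv8 checkpoint from the ultralytics package to show the detection call and the FPS-measurement pattern behind figures like the reported 62 FPS. The input path is hypothetical:

    import time
    from ultralytics import YOLO

    # Stock YOLOv8 nano checkpoint standing in for the authors' MK-YOLOv8
    model = YOLO("yolov8n.pt")

    img = "brain_mri_slice.png"                 # hypothetical input image path
    model(img)                                  # warm-up run

    n = 50
    t0 = time.perf_counter()
    for _ in range(n):
        results = model(img, verbose=False)     # per-slice tumour detection
    fps = n / (time.perf_counter() - t0)
    print(f"{fps:.1f} FPS")                     # compare with the paper's 62 FPS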

From manual clinical criteria to machine learning algorithms: Comparing outcome endpoints derived from diverse electronic health record data modalities.

Chappidi S, Belue MJ, Harmon SA, Jagasia S, Zhuge Y, Tasci E, Turkbey B, Singh J, Camphausen K, Krauze AV

PubMed · May 1, 2025
Progression-free survival (PFS) is a critical clinical outcome endpoint during cancer management and treatment evaluation. Yet PFS is often missing from publicly available datasets because generating PFS metrics is currently subjective, expert-driven, and time-intensive. Given emerging research in multi-modal machine learning (ML), we explored the benefits and challenges of mining different electronic health record (EHR) data modalities and automating the extraction of PFS metrics via ML algorithms. We analyzed EHR data from 92 pathology-proven glioblastoma (GBM) patients, obtaining 233 corticosteroid prescriptions, 2080 radiology reports, and 743 brain MRI scans. Three methods were developed to derive clinical PFS: 1) frequency analysis of corticosteroid prescriptions, 2) natural language processing (NLP) of radiology reports, and 3) computer vision (CV) volumetric analysis of imaging. Outputs from these methods were compared with manually annotated clinical-guideline PFS metrics. Employing data-driven methods, standalone progression rates were 63% (prescription), 78% (NLP), and 54% (CV), compared with the 99% progression rate from manually applied clinical guidelines using integrated data sources. The prescription method identified progression an average of 5.2 months later than the clinical standard, while the CV and NLP algorithms identified progression earlier by 2.6 and 6.9 months, respectively. Although lesion growth is a clinical-guideline progression indicator, only half of the patients exhibited increasing contrast-enhancing tumor volumes during scan-based CV analysis. Our results indicate that data-driven algorithms can extract tumor progression outcomes from existing EHR data. However, ML methods are subject to varying availability bias, levels of supporting contextual information, and pre-processing resource burdens that influence the extracted PFS endpoint distributions. Our scan-based CV results also suggest that automating clinical criteria may not align with human intuition. Our findings indicate a need for improved data-source integration, validation, and revisiting of clinical criteria in parallel with multi-modal ML algorithm development.
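
Method 1, frequency analysis of corticosteroid prescriptions, can be prototyped as a rolling-window count over a prescription log. A pandas sketch under assumed column names and an assumed threshold (the study's EHR schema and rule are not specified in the abstract):

    import pandas as pd

    # Hypothetical prescription log; column names are illustrative only
    rx = pd.DataFrame({
        "patient_id": [1, 1, 1, 1, 1],
        "drug": ["dexamethasone"] * 5,
        "date": pd.to_datetime(["2020-01-05", "2020-01-20", "2020-06-01",
                                "2020-06-10", "2020-06-18"]),
    })

    def first_progression_date(df, window="90D", threshold=3):
        """Flag progression at the first date where >= `threshold` corticosteroid
        prescriptions fall within a rolling window (sketch of the idea)."""
        counts = (df.set_index("date").sort_index()
                    .assign(n=1)["n"].rolling(window).sum())
        hits = counts[counts >= threshold]
        return hits.index[0] if len(hits) else None

    print(first_progression_date(rx))   # 2020-06-18 for this toy record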

Artificial intelligence-based echocardiography assessment to detect pulmonary hypertension.

Salehi M, Alabed S, Sharkey M, Maiter A, Dwivedi K, Yardibi T, Selej M, Hameed A, Charalampopoulos A, Kiely DG, Swift AJ

PubMed · May 1, 2025
Tricuspid regurgitation jet velocity (TRJV) on echocardiography is used to screen patients with suspected pulmonary hypertension (PH). Artificial intelligence (AI) tools, such as US2.AI, have been developed for the automated evaluation of echocardiograms and can yield measurements that aid PH detection. This study evaluated the performance and utility of US2.AI in a consecutive cohort of patients with suspected PH. 1031 patients investigated for suspected PH between 2009 and 2021 were retrospectively identified from the ASPIRE registry. All patients had undergone echocardiography and right heart catheterisation (RHC). Based on RHC results, 771 (75%) patients with a mean pulmonary arterial pressure >20 mmHg were classified as having PH (as per the 2022 European guidelines). Echocardiograms were evaluated manually and by the US2.AI tool to yield TRJV measurements. The AI tool demonstrated a high interpretation yield, successfully measuring TRJV in 87% of echocardiograms. Manually and automatically derived TRJV values showed excellent agreement (intraclass correlation coefficient 0.94, 95% CI 0.94-0.95) with minimal bias on Bland-Altman analysis. Automated TRJV measurements showed diagnostic accuracy for PH equal to that of manual measurements (area under the curve 0.88, 95% CI 0.84-0.90 versus 0.88, 95% CI 0.86-0.91). These findings demonstrate that automated measurement of TRJV on echocardiography is feasible, accurate, and reliable, and they support the implementation of AI-based approaches to echocardiogram evaluation and diagnostic imaging for PH.
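
The agreement statistics reported here (bias and limits of agreement from Bland-Altman analysis) are simple to compute from paired manual and automated TRJV values. A numpy sketch with simulated measurements (illustrative, not registry data):

    import numpy as np

    # Toy paired TRJV measurements (m/s): manual vs AI-derived
    rng = np.random.default_rng(1)
    manual = rng.normal(3.2, 0.6, 100)
    auto = manual + rng.normal(0.0, 0.1, 100)    # small simulated disagreement

    # Bland-Altman agreement statistics
    diff = auto - manual
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)                # 95% limits of agreement
    print(f"bias = {bias:+.3f} m/s, LoA = [{bias - loa:.3f}, {bias + loa:.3f}]")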
