
Chen Y, Dong M, Sun J, Meng Z, Yang Y, Muhetaier A, Li C, Qin J

PubMed · Sep 10, 2025
Despite the standardized approach provided by the Coronary Artery Disease Reporting and Data System (CAD-RADS), radiologists continue to favor free-text reports. This preference creates significant challenges for data extraction and analysis in longitudinal studies, potentially limiting large-scale research and quality assessment initiatives. The aim was to evaluate the ability of the generative pre-trained transformer (GPT)-4o model to convert real-world coronary computed tomography angiography (CCTA) free-text reports into structured data and to automatically identify CAD-RADS categories and P categories. This retrospective study analyzed CCTA reports produced between January 2024 and July 2024. A subset of 25 reports was used for prompt engineering to instruct the large language model (LLM) to extract CAD-RADS categories, P categories, and the presence of myocardial bridges and noncalcified plaques. Reports were processed using the GPT-4o API (application programming interface) and custom Python scripts. The ground truth was established by radiologists based on the CAD-RADS 2.0 guidelines. Model performance was assessed using accuracy, sensitivity, specificity, and F1-score; intrarater reliability was assessed using the Cohen κ coefficient. Among 999 patients (median age 66 y, range 58-74; 650 males), CAD-RADS categorization showed accuracy of 0.98-1.00 (95% CI 0.9730-1.0000), sensitivity of 0.95-1.00 (95% CI 0.9191-1.0000), specificity of 0.98-1.00 (95% CI 0.9669-1.0000), and F1-score of 0.96-1.00 (95% CI 0.9253-1.0000). P categories demonstrated accuracy of 0.97-1.00 (95% CI 0.9569-0.9990), sensitivity of 0.90-1.00 (95% CI 0.8085-1.0000), specificity of 0.97-1.00 (95% CI 0.9533-1.0000), and F1-score of 0.91-0.99 (95% CI 0.8377-0.9967). Myocardial bridge detection achieved an accuracy of 0.98 (95% CI 0.9680-0.9870), and noncalcified plaque detection likewise achieved an accuracy of 0.98 (95% CI 0.9680-0.9870). Cohen κ values for all classifications exceeded 0.98. The GPT-4o model efficiently and accurately converts CCTA free-text reports into structured data, excelling in CAD-RADS classification, plaque burden assessment, and detection of myocardial bridges and noncalcified plaques.
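As a concrete illustration of this kind of report-to-structure pipeline, the sketch below calls the GPT-4o chat completions API from Python and forces JSON output. It is a minimal sketch under stated assumptions, not the study's actual materials: the prompt wording, field names, and sample report are all illustrative.

```python
# Illustrative sketch of free-text CCTA report -> structured CAD-RADS fields.
# The prompt, JSON schema, and sample report are assumptions for illustration.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You extract structured data from coronary CTA reports. "
    "Return JSON with keys: cad_rads (string, e.g. '3'), p_category "
    "(string or null, e.g. 'P2'), myocardial_bridge (bool), "
    "noncalcified_plaque (bool). Use null when a field is not stated."
)

def extract_fields(report_text: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},  # force parseable JSON output
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": report_text},
        ],
        temperature=0,  # keep extraction as deterministic as possible
    )
    return json.loads(response.choices[0].message.content)

print(extract_fields("CCTA: 50-69% stenosis in the proximal LAD, "
                     "noncalcified plaque, myocardial bridging of the mid LAD."))
```

Pinning the temperature to 0 and requesting JSON-object output keeps the extraction step machine-parseable, which is what downstream scoring against radiologist ground truth requires.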

Seo H, Lee JI, Park JU, Sung IY

PubMed · Sep 10, 2025
This study aimed to develop a deep-learning model for the automatic classification of mandibular fractures using panoramic radiographs. A pretrained convolutional neural network (CNN) was used to classify fractures based on a novel, clinically relevant classification system. The dataset comprised 800 panoramic radiographs obtained from patients with facial trauma. The model demonstrated robust classification performance across 8 fracture categories, achieving consistently high accuracy and F1-scores. Performance was evaluated using standard metrics, including accuracy, precision, recall, and F1-score. To enhance interpretability and clinical applicability, two explainable AI techniques, Gradient-weighted Class Activation Mapping (Grad-CAM) and Local Interpretable Model-Agnostic Explanations (LIME), were used to visualize the model's decision-making process. These findings suggest that the proposed deep-learning framework is a reliable and efficient tool for classifying mandibular fractures on panoramic radiographs. Its application may help reduce diagnostic time and improve decision-making in maxillofacial trauma care. Further validation using larger, multi-institutional datasets is recommended to ensure generalizability.
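For readers unfamiliar with Grad-CAM, the sketch below shows the core computation on a pretrained CNN in PyTorch: pool the gradients of the target class score over the last convolutional feature maps, then use them to weight those maps. The backbone, target layer, and 8-class head are assumptions for illustration, not the authors' architecture.

```python
# Minimal Grad-CAM sketch for a pretrained CNN classifier.
# Backbone, layer choice, and the 8-class head are illustrative assumptions.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights="IMAGENET1K_V2")
model.fc = torch.nn.Linear(model.fc.in_features, 8)  # 8 fracture categories
model.eval()

feats, grads = {}, {}
layer = model.layer4  # last convolutional stage

layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

x = torch.randn(1, 3, 224, 224)        # stand-in for a panoramic radiograph
logits = model(x)
logits[0, logits.argmax()].backward()  # gradient of the top class score

w = grads["a"].mean(dim=(2, 3), keepdim=True)  # global-average-pool the gradients
cam = F.relu((w * feats["a"]).sum(dim=1))      # gradient-weighted sum of feature maps
cam = F.interpolate(cam[None], size=(224, 224), mode="bilinear")[0, 0]
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
```

The normalized map is then overlaid on the radiograph to show which regions drove the predicted fracture class.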

Sirka Kacafírková K, Poll A, Jacobs A, Cardone A, Ventura JJ

PubMed · Sep 10, 2025
Breast cancer is the most common cancer among women and a leading cause of mortality in Europe. Early detection through screening reduces mortality, yet participation in mammography-based programs remains suboptimal due to discomfort, radiation exposure, and accessibility issues. Thermography, particularly when driven by artificial intelligence (AI), is being explored as a noninvasive, radiation-free alternative. However, its acceptance, reliability, and impact on the screening experience remain underexplored. This study aimed to explore women's perceptions of AI-enhanced thermography (ThermoBreast) as an alternative to mammography, to identify barriers and motivators related to breast cancer screening, and to assess how ThermoBreast might improve the screening experience. A mixed methods approach was adopted, combining an online survey with follow-up focus groups. The survey captured women's knowledge, attitudes, and experiences related to breast cancer screening and was used to recruit participants for qualitative exploration. After the focus groups, the survey was relaunched to include additional respondents. Quantitative data were analyzed using SPSS (IBM Corp), and qualitative data were analyzed in MAXQDA (VERBI Software). Findings from both strands were synthesized to redesign the breast cancer screening journey. A total of 228 valid survey responses were analyzed. Of these, 154 women (68%) had previously undergone mammography, while 74 (32%) had not. The most reported motivators were belief in prevention (69/154, 45%), invitations from screening programs (68/154, 44%), and doctor recommendations (45/154, 29%). Among nonscreeners, key barriers included no recommendation from a doctor (39/74, 53%), absence of symptoms (27/74, 36%), and perceived age ineligibility (17/74, 23%). Pain, long appointment waits, and fear of radiation were also mentioned. In total, 18 women (mean age 45.3 years, SD 13.6) participated in 6 focus groups. Participants emphasized the importance of respectful and empathetic interactions with medical staff, clear communication, and emotional comfort, factors they perceived as more influential than the screening technology itself. ThermoBreast was positively received for being contactless, radiation-free, and potentially more comfortable; participants described it as "less traumatic," "easier," and "a game changer." However, concerns were raised regarding its novelty, lack of clinical validation, and data privacy. Some participants expressed the need for human oversight in AI-supported procedures and requested more information on how AI is used. Based on these insights, an updated screening journey was developed, highlighting improvements in preparation, appointment booking, privacy, and communication of results. While AI-driven thermography shows promise as a noninvasive, user-friendly alternative to mammography, its adoption depends on trust, clinical validation, and effective communication from health care professionals. It may expand screening access for populations underserved by mammography, such as younger and immobile women, but it does not eliminate all participation barriers. Long-term studies and direct comparisons between mammography and thermography are needed to assess diagnostic accuracy, patient experience, and impact on screening participation and outcomes.

Moorthy DK, Nagaraj P

PubMed · Sep 10, 2025
Alzheimer's disease (AD) is a neurodegenerative disease associated with cognitive deficits and dementia, and its early detection should be a high priority. Here, images undergo a pre-processing phase that combines image resizing with median filtering, after which the processed images are subjected to data augmentation. Features extracted by a WOA-based ResNet, together with convolutional neural network (CNN) features extracted from the pre-processed images, are used to train the proposed deep learning model to classify AD. Classification is performed by the proposed Attention Gated-VGG model. When tested, the proposed method outperformed standard methodologies, achieving an accuracy of 96.7%, a sensitivity of 97.8%, and a specificity of 96.3%. These results indicate that the Attention Gated-VGG model is a promising technique for classifying AD.
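The pre-processing stage described here (resize, median filter, then augmentation) is straightforward to express in code. The sketch below is one plausible rendering with OpenCV; the target size, kernel size, and augmentation parameters are assumptions, not the paper's settings.

```python
# Plausible sketch of the described pre-processing (resize + median filter)
# and simple augmentation; sizes and parameters are illustrative assumptions.
import cv2
import numpy as np

def preprocess(img: np.ndarray, size=(224, 224), ksize=3) -> np.ndarray:
    img = cv2.resize(img, size, interpolation=cv2.INTER_AREA)
    img = cv2.medianBlur(img, ksize)       # median filter suppresses speckle noise
    return img.astype(np.float32) / 255.0  # scale intensities to [0, 1]

def augment(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    if rng.random() < 0.5:
        img = cv2.flip(img, 1)             # random horizontal flip
    angle = float(rng.uniform(-10, 10))    # small random rotation
    h, w = img.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    return cv2.warpAffine(img, M, (w, h), borderMode=cv2.BORDER_REFLECT)

rng = np.random.default_rng(0)
scan = (np.random.rand(256, 256) * 255).astype(np.uint8)  # stand-in MRI slice
batch = [augment(preprocess(scan), rng) for _ in range(4)]
```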

Huang F, Chen N, Qiu A

PubMed · Sep 10, 2025
Vision Transformer (ViT) applied to structural magnetic resonance images (sMRI) has demonstrated success in the diagnosis of Alzheimer's disease (AD) and mild cognitive impairment (MCI). However, three key challenges have yet to be well addressed: 1) ViT requires a large labeled dataset to mitigate overfitting, while most current AD-related sMRI datasets fall short in sample size. 2) ViT neglects within-patch feature learning, e.g., local brain atrophy, which is crucial for AD diagnosis. 3) While ViT can better capture local features by reducing the patch size and increasing the number of patches, its computational complexity increases quadratically with the number of patches, leading to prohibitive overhead. To this end, this paper proposes a 3D-convolutional neural network (CNN) Enhanced Multiscale Progressive ViT (3D-CNN-MPVT). First, a 3D-CNN is pre-trained on sMRI data to extract detailed local image features and alleviate overfitting. Second, an MPVT module is proposed with an inner CNN module to explicitly characterize the within-patch interactions that are conducive to AD diagnosis. Third, a stitch operation is proposed to merge cross-patch features and progressively reduce the number of patches. The inner CNN alongside the stitch operation in the MPVT module enhances local feature characterization while mitigating computational costs. Evaluations using the Alzheimer's Disease Neuroimaging Initiative dataset with 6610 scans and the Open Access Series of Imaging Studies-3 with 1866 scans demonstrated superior performance: with minimal preprocessing, the approach achieved 90% accuracy in AD classification and 80% in MCI conversion prediction, surpassing recent baselines.
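The abstract describes the stitch operation only at a high level. One plausible reading, merging each 2×2 neighborhood of patch tokens into a single token so the token count quarters per stage (in the spirit of Swin-style patch merging), could look like the sketch below; this is an assumption for illustration, not the paper's code.

```python
# One plausible reading of the "stitch" step: merge each 2x2 neighborhood of
# patch tokens into one token, quartering the token count per stage.
# Swin-style patch merging used as a stand-in; an assumption, not the paper's code.
import torch
import torch.nn as nn

class Stitch2x2(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.reduce = nn.Linear(4 * dim, dim)  # project concatenated quad back to dim

    def forward(self, tokens: torch.Tensor, h: int, w: int) -> torch.Tensor:
        # tokens: (batch, h*w, dim), laid out row-major on an h x w patch grid
        b, n, d = tokens.shape
        x = tokens.view(b, h, w, d)
        quad = torch.cat(
            [x[:, 0::2, 0::2], x[:, 0::2, 1::2],
             x[:, 1::2, 0::2], x[:, 1::2, 1::2]], dim=-1)  # (b, h/2, w/2, 4d)
        return self.reduce(quad).flatten(1, 2)             # (b, h*w/4, d)

tokens = torch.randn(2, 16 * 16, 96)    # 256 patch tokens of width 96
merged = Stitch2x2(96)(tokens, 16, 16)  # -> (2, 64, 96)
```

Reducing the token count this way is what keeps attention cost manageable while the patch grid stays fine early on, matching the motivation given in challenge 3.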

Marzi S, Vidiri A, Ianiro A, Parrino C, Ruggiero S, Trobiani C, Teodoli L, Vallati G, Trillò G, Ciolina M, Sperati F, Scarinci A, Virdis M, Busset MDD, Stecca T, Massani M, Morana G, Grazi GL

PubMed · Sep 10, 2025
To build computed tomography (CT)-based radiomics models, with independent external validation, to predict recurrence and disease-specific mortality in patients with colorectal liver metastases (CRLM) who underwent liver resection. A total of 113 patients were included in this retrospective study: the internal training cohort comprised 66 patients, and the external validation cohort comprised 47. All patients underwent a CT study before surgery. Up to five visible metastases, the whole liver volume, and the surrounding disease-free liver parenchyma were separately delineated on the portal venous phase of CT. Both radiomic features and baseline clinical parameters were considered in model building, using different families of machine learning (ML) algorithms. The Support Vector Machine and Naive Bayes ML classifiers provided the best predictive performance. A relevant role of second-order and higher-order texture features emerged from the largest lesion and the residual liver parenchyma. The prediction models for recurrence showed good accuracy, ranging from 70% to 78% in the training set and from 66% to 70% in the validation set. Models for predicting disease-related mortality performed worse, with accuracies ranging from 67% to 73% in the training set and from 60% to 64% in the validation set. CT-based radiomics, alone or in combination with baseline clinical data, allowed the prediction of recurrence and disease-specific mortality in patients with CRLM, with fair to good accuracy after validation in an external cohort. Further investigations with a larger patient population for training and validation are needed to corroborate these analyses.
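A minimal sketch of the modeling stage follows: L1-penalized feature screening feeding a Support Vector Machine, one of the classifier families named above. Feature extraction itself (e.g., with pyradiomics) is omitted; X and y are placeholders for the radiomic feature matrix and recurrence labels, and all hyperparameters are assumptions.

```python
# Sketch of a radiomics modeling stage: L1-penalized screening + SVM classifier.
# X and y are placeholder stand-ins for radiomic features and recurrence labels.
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(66, 200))   # 66 training patients, 200 radiomic features
y = rng.integers(0, 2, size=66)  # recurrence yes/no (placeholder labels)

model = make_pipeline(
    StandardScaler(),
    SelectFromModel(                      # L1-penalized ("LASSO-style") screening
        LogisticRegression(penalty="l1", solver="liblinear", C=0.1)),
    SVC(kernel="rbf", probability=True),  # final Support Vector Machine classifier
)
print(cross_val_score(model, X, y, cv=5).mean())
```

Wrapping scaling, selection, and the classifier in one pipeline keeps feature selection inside each cross-validation fold, which avoids the optimistic bias that plagues small radiomics cohorts.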

Shen Z, Chen L, Wang L, Dong S, Wang F, Pan Y, Zhou J, Wang Y, Xu X, Chong H, Lin H, Li W, Li R, Ma H, Ma J, Yu Y, Du L, Wang X, Zhang S, Yan F

PubMed · Sep 10, 2025
<i>"Just Accepted" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Purpose To assess the effectiveness of an explainable deep learning (DL) model, developed using multiparametric MRI (mpMRI) features, in improving diagnostic accuracy and efficiency of radiologists for classification of focal liver lesions (FLLs). Materials and Methods FLLs ≥ 1 cm in diameter at mpMRI were included in the study. nn-Unet and Liver Imaging Feature Transformer (LIFT) models were developed using retrospective data from one hospital (January 2018-August 2023). nnU-Net was used for lesion segmentation and LIFT for FLL classification. External testing was performed on data from three hospitals (January 2018-December 2023), with a prospective test set obtained from January 2024 to April 2024. Model performance was compared with radiologists and impact of model assistance on junior and senior radiologist performance was assessed. Evaluation metrics included the Dice similarity coefficient (DSC) and accuracy. Results A total of 2131 individuals with FLLs (mean age, 56 ± [SD] 12 years; 1476 female) were included in the training, internal test, external test, and prospective test sets. Average DSC values for liver and tumor segmentation across the three test sets were 0.98 and 0.96, respectively. Average accuracy for features and lesion classification across the three test sets were 93% and 97%, respectively. LIFT-assisted readings improved diagnostic accuracy (average 5.3% increase, <i>P</i> < .001), reduced reading time (average 34.5 seconds decrease, <i>P</i> < .001), and enhanced confidence (average 0.3-point increase, <i>P</i> < .001) of junior radiologists. Conclusion The proposed DL model accurately detected and classified FLLs, improving diagnostic accuracy and efficiency of junior radiologists. ©RSNA, 2025.

Shangguan Q, Lian Y, Liao Z, Chen J, Song Y, Yao L, Jiang C, Lu Z, Lin Z

PubMed · Sep 10, 2025
Hand gesture recognition (HGR) is a key technology in human-computer interaction and human communication. This paper presents a lightweight, parameter-free attention convolutional neural network (LPA-CNN) approach that leverages the Gramian Angular Field (GAF) transformation of A-mode ultrasound signals for HGR. First, the paper maps 1-dimensional (1D) A-mode ultrasound signals, collected from the forearm muscles of 10 healthy participants, into 2-dimensional (2D) images. Second, GAF is selected owing to its higher sensitivity for HGR compared with the Markov Transition Field (MTF) and Recurrence Plot (RP) encodings. Third, a novel LPA-CNN is proposed, consisting of four components: a convolution-pooling block, an attention mechanism, an inverted residual block, and a classification block. The convolution-pooling block consists of convolutional and pooling layers, the attention mechanism generates 3D weights, the inverted residual block consists of multiple channel-shuffling units, and classification is performed through fully connected layers. Fourth, comparative experiments were conducted on GoogLeNet, MobileNet, and LPA-CNN to validate the effectiveness of the proposed method. Experimental results show that, compared with GoogLeNet and MobileNet, LPA-CNN has a smaller model size and better recognition performance, achieving a classification accuracy of 0.98 ± 0.02. This work achieves efficient, high-accuracy HGR by encoding A-mode ultrasound signals into 2D images and integrating them with the LPA-CNN model, providing a new technological approach for HGR based on ultrasonic signals.
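The GAF encoding used here maps a 1D signal to a 2D image by rescaling the series to [-1, 1], converting each sample to an angle, and taking pairwise angular cosines. The sketch below implements the summation variant (GASF); the window length and test signal are illustrative assumptions.

```python
# Gramian Angular Summation Field (GASF) encoding of a 1D signal as a 2D image.
import numpy as np

def gasf(x: np.ndarray) -> np.ndarray:
    # rescale the series to [-1, 1], then encode samples as angular cosines
    x = np.asarray(x, dtype=float)
    x = 2 * (x - x.min()) / (x.max() - x.min() + 1e-12) - 1
    phi = np.arccos(np.clip(x, -1, 1))
    return np.cos(phi[:, None] + phi[None, :])  # G[i, j] = cos(phi_i + phi_j)

signal = np.sin(np.linspace(0, 4 * np.pi, 128))  # stand-in for an A-mode trace
image = gasf(signal)                             # (128, 128) 2D encoding
print(image.shape, image.min().round(2), image.max().round(2))
```

Each pixel G[i, j] preserves the temporal correlation between samples i and j, which is what lets a 2D CNN pick up patterns that a 1D model would have to learn across long receptive fields.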

Kuwajima S, Oura D

PubMed · Sep 10, 2025
In lung CT imaging, motion artifacts caused by cardiac motion and respiration are common. Recently, CLEAR Motion, a deep learning-based reconstruction method that applies motion correction technology, has been developed. This study aims to quantitatively evaluate the clinical usefulness of CLEAR Motion. A total of 129 lung CT examinations were analyzed, and the heart rate, height, weight, and body mass index (BMI) of all patients were obtained from medical records. Images with and without CLEAR Motion were reconstructed, and quantitative evaluation was performed using the variance of the Laplacian (VL) and PSNR. The difference in VL (DVL) between the two reconstruction methods was used to evaluate in which part of the lung field (upper, middle, or lower) CLEAR Motion is most effective. To evaluate the effect of motion correction in relation to patient characteristics, the correlations of BMI and heart rate with DVL were determined. Visual assessment of motion artifacts was performed using paired comparisons by 9 radiological technologists. With the exception of one case, VL was higher with CLEAR Motion. In almost all cases (110 of 129), the largest DVL occurred in the lower lung field. BMI showed a positive correlation with DVL (r = 0.55, p < 0.05), while no differences in DVL were observed based on heart rate. The average PSNR was 35.8 ± 0.92 dB. Visual assessments indicated that CLEAR Motion was preferred in most cases, with an average preference score of 0.96 (p < 0.05). Using CLEAR Motion allows images with fewer motion artifacts to be obtained in lung CT.
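The sharpness metric used here, variance of the Laplacian, takes only a few lines: higher VL means more high-frequency detail (less motion blur). The sketch below computes DVL for a pair of reconstructions; the file names are hypothetical placeholders.

```python
# Variance of the Laplacian (VL) as a sharpness metric, and the DVL difference
# between two reconstructions. File names are hypothetical placeholders.
import cv2
import numpy as np

def variance_of_laplacian(img: np.ndarray) -> float:
    return float(cv2.Laplacian(img, cv2.CV_64F).var())

clear = cv2.imread("slice_clear_motion.png", cv2.IMREAD_GRAYSCALE)
standard = cv2.imread("slice_standard.png", cv2.IMREAD_GRAYSCALE)
if clear is not None and standard is not None:
    dvl = variance_of_laplacian(clear) - variance_of_laplacian(standard)
    print(f"DVL (CLEAR Motion minus standard): {dvl:.1f}")
```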

Cao X, Lv Z, Li Y, Li M, Hu Y, Liang M, Deng J, Tan X, Wang S, Geng W, Xu J, Luo P, Zhou M, Xiao W, Guo M, Liu J, Huang Q, Hu S, Sun Y, Lan X, Jin Y

PubMed · Sep 10, 2025
Precise preoperative discrimination of invasive lung adenocarcinoma (IA) from preinvasive lesions (adenocarcinoma in situ [AIS]/minimally invasive adenocarcinoma [MIA]) and prediction of high-risk histopathological features are critical for optimizing resection strategies in early-stage lung adenocarcinoma (LUAD). In this multicenter study, 813 LUAD patients (tumors ≤3 cm) formed the training cohort. A total of 1709 radiomic features were extracted from the PET/CT images. Feature selection was performed using the max-relevance and min-redundancy (mRMR) algorithm and the least absolute shrinkage and selection operator (LASSO). Hybrid machine learning models integrating [18F]FDG PET/CT radiomics and clinical-radiological features were developed using H2O.ai AutoML. Models were validated in a prospective internal cohort (N = 256, 2021-2022) and an external multicenter cohort (N = 418). Performance was assessed via AUC, calibration, decision curve analysis (DCA), and survival analysis. The hybrid model achieved AUCs of 0.93 (95% CI: 0.90-0.96) for distinguishing IA from AIS/MIA in internal testing and 0.92 (0.90-0.95) in external testing. For predicting high-risk histopathological features (grade III, lymphatic/pleural/vascular/nerve invasion, STAS), AUCs were 0.82 (0.77-0.88) and 0.85 (0.81-0.89) in the internal and external sets, respectively. DCA confirmed superior net benefit over the CT model. The model stratified progression-free (P = 0.002) and overall survival (P = 0.017) in the TCIA cohort. PET/CT radiomics-based models enable accurate, non-invasive prediction of invasiveness and high-risk pathology in early-stage LUAD, guiding optimal surgical resection.
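The mRMR-then-LASSO chain described above can be sketched compactly: a greedy mRMR pass scores candidates by mutual-information relevance minus correlation redundancy, and LASSO then shrinks the surviving subset. This is a simplified stand-in under stated assumptions (placeholder data, a hand-rolled greedy mRMR rather than the authors' implementation, and LASSO run as regression on 0/1 labels, a common radiomics shortcut).

```python
# Sketch of mRMR (mutual-information relevance, correlation redundancy)
# followed by LASSO shrinkage. X and y are placeholders, not study data.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
X = rng.normal(size=(813, 300))   # 813 training patients, 300 of 1709 features
y = rng.integers(0, 2, size=813)  # IA vs AIS/MIA (placeholder labels)

def mrmr(X, y, k=30):
    relevance = mutual_info_classif(X, y, random_state=0)
    corr = np.abs(np.corrcoef(X, rowvar=False))
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        redundancy = corr[:, selected].mean(axis=1)
        score = relevance - redundancy  # max relevance, min redundancy
        score[selected] = -np.inf       # never re-pick a selected feature
        selected.append(int(np.argmax(score)))
    return selected

idx = mrmr(X, y, k=30)
lasso = LassoCV(cv=5).fit(X[:, idx], y)  # shrink the mRMR subset further
kept = [i for i, c in zip(idx, lasso.coef_) if abs(c) > 1e-6]
print(f"mRMR kept {len(idx)} features; LASSO retained {len(kept)}")
```

The surviving features would then feed the AutoML model search, with clinical-radiological variables appended as extra columns.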
