Page 92 of 3143139 results

Multimodal Deep Learning Model Based on Ultrasound and Cytological Images Predicts Risk Stratification of cN0 Papillary Thyroid Carcinoma.

He F, Chen S, Liu X, Yang X, Qin X

PubMed · Jul 14 2025
Accurately assessing the risk stratification of cN0 papillary thyroid carcinoma (PTC) preoperatively aids in making treatment decisions. We integrated preoperative ultrasound and cytological images to develop and validate a multimodal deep learning (DL) model for non-invasive assessment of cN0 PTC risk stratification before surgery. In this retrospective multicenter study, we developed a comprehensive DL model based on ultrasound and cytological images. The model was trained and validated on 890 PTC patients who underwent thyroidectomy and lymph node dissection across five medical centers. The testing group included 107 patients from one medical center. We analyzed the model's performance, including the area under the receiver operating characteristic curve, accuracy, sensitivity, and specificity. The combined DL model demonstrated strong performance, with an area under the curve (AUC) of 0.922 (0.866-0.979) in the internal validation group and an AUC of 0.845 (0.794-0.895) in the testing group. The diagnostic performance of the combined DL model surpassed that of clinical models. Image region heatmaps assisted in interpreting the risk-stratification diagnosis. The multimodal DL model based on ultrasound and cytological images can accurately determine the risk stratification of cN0 PTC and guide treatment decisions.
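The AUCs above summarize how well the model separates high-risk from low-risk cases. As a minimal sketch of what that number means, the empirical AUC equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative one (the Mann-Whitney interpretation); the scores below are hypothetical, not the study's outputs.

```python
# Empirical AUC via pairwise comparison (Mann-Whitney interpretation).
# All scores below are hypothetical model outputs, not study data.

def auc(scores_pos, scores_neg):
    """Fraction of (positive, negative) pairs ranked correctly; ties count half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

pos = [0.9, 0.8, 0.75, 0.6]    # hypothetical scores for high-risk cases
neg = [0.4, 0.85, 0.55, 0.2]   # hypothetical scores for low-risk cases
print(auc(pos, neg))
```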

Deep Learning-Accelerated Prostate MRI: Improving Speed, Accuracy, and Sustainability.

Reschke P, Koch V, Gruenewald LD, Bachir AA, Gotta J, Booz C, Alrahmoun MA, Strecker R, Nickel D, D'Angelo T, Dahm DM, Konrad P, Solim LA, Holzer M, Al-Saleh S, Scholtz JE, Sommer CM, Hammerstingl RM, Eichler K, Vogl TJ, Leistner DM, Haberkorn SM, Mahmoudi S

PubMed · Jul 14 2025
This study aims to evaluate the effectiveness of a deep learning (DL)-enhanced four-fold parallel acquisition technique (P4) in improving prostate MR image quality while optimizing scan efficiency compared to the traditional two-fold parallel acquisition technique (P2). Patients undergoing prostate MRI with DL-enhanced acquisitions were analyzed from January 2024 to July 2024. The participants prospectively received T2-weighted sequences in all imaging planes using both P2 and P4. Three independent readers assessed image quality, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR). Significant differences in contrast and gray-level properties between P2 and P4 were identified through radiomics analysis (p <.05). A total of 51 participants (mean age 69.4 years ± 10.5 years) underwent P2 and P4 imaging. P4 demonstrated higher CNR and SNR values compared to P2 (p <.001). P4 was consistently rated superior to P2, demonstrating enhanced image quality and greater diagnostic precision across all evaluated categories (p <.001). Furthermore, radiomics analysis confirmed that P4 significantly altered structural and textural differentiation in comparison to P2. The P4 protocol reduced T2w scan times by 50.8%, from 11:48 min to 5:48 min (p <.001). In conclusion, P4 imaging enhances diagnostic quality and reduces scan times, improving workflow efficiency, and potentially contributing to a more patient-centered and sustainable radiology practice.
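The SNR and CNR figures above are typically "apparent" ROI-based measures: mean signal in a tissue region divided by the noise standard deviation, and the absolute difference of two tissue means divided by that same noise SD. A minimal sketch under those common definitions, with entirely hypothetical pixel values:

```python
from statistics import mean, stdev

# Apparent SNR/CNR as commonly defined in image-quality studies:
# SNR = mean(tissue ROI) / SD(noise ROI); CNR = |mean_A - mean_B| / SD(noise ROI).
# All ROI values below are hypothetical, not taken from this study.

def apparent_snr(roi, noise):
    return mean(roi) / stdev(noise)

def apparent_cnr(roi_a, roi_b, noise):
    return abs(mean(roi_a) - mean(roi_b)) / stdev(noise)

prostate_roi = [410, 420, 415, 405]   # hypothetical T2w intensities
muscle_roi = [210, 220, 205, 215]     # hypothetical reference tissue
background = [10, 12, 9, 11, 13, 8]   # hypothetical noise ROI outside the body

print(round(apparent_snr(prostate_roi, background), 2))
print(round(apparent_cnr(prostate_roi, muscle_roi, background), 2))
```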

STF: A Spherical Transformer for Versatile Cortical Surfaces Applications.

Cheng J, Zhao F, Wu Z, Yuan X, Wang L, Gilmore JH, Lin W, Zhang X, Li G

PubMed · Jul 14 2025
Inspired by the remarkable success of attention mechanisms in various applications, there is a growing need to adapt the Transformer architecture from conventional Euclidean domains to non-Euclidean spaces commonly encountered in medical imaging. Structures such as brain cortical surfaces, represented by triangular meshes, exhibit spherical topology and present unique challenges. To address this, we propose the Spherical Transformer (STF), a versatile backbone that leverages self-attention for analyzing cortical surface data. Our approach involves mapping cortical surfaces onto a sphere, dividing them into overlapping patches, and tokenizing both patches and vertices. By performing self-attention at patch and vertex levels, the model simultaneously captures global dependencies and preserves fine-grained contextual information within each patch. Overlapping regions between neighboring patches naturally enable efficient cross-patch information sharing. To handle longitudinal cortical surface data, we introduce the spatiotemporal self-attention mechanism, which jointly captures spatial context and temporal developmental patterns within a single layer. This innovation enhances the representational power of the model, making it well-suited for dynamic surface data. We evaluate the Spherical Transformer on key tasks, including cognition prediction at the surface level and two vertex-level tasks: cortical surface parcellation and cortical property map prediction. Across these applications, our model consistently outperforms state-of-the-art methods, demonstrating its ability to effectively model global dependencies and preserve detailed spatial information. The results highlight its potential as a general-purpose framework for cortical surface analysis.
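The two attention granularities described above can be illustrated with a toy sketch: plain scaled dot-product self-attention applied once across pooled patch tokens (global dependencies) and once across the vertex tokens inside each patch (fine-grained context). This is not the STF implementation; token counts, feature dimensions, and the mean-pooling step are illustrative assumptions.

```python
from math import exp, sqrt

def self_attention(tokens):
    """Scaled dot-product self-attention over a list of d-dim token vectors."""
    d = len(tokens[0])
    out = []
    for q in tokens:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / sqrt(d) for k in tokens]
        m = max(scores)                      # subtract max for numerical stability
        weights = [exp(s - m) for s in scores]
        z = sum(weights)
        out.append([sum(w * k[j] for w, k in zip(weights, tokens)) / z
                    for j in range(d)])
    return out

# Three toy "patches", each holding a few vertex feature vectors
# (hypothetical 2-D features standing in for cortical properties).
patches = [
    [[0.1, 0.2], [0.3, 0.1], [0.2, 0.4], [0.0, 0.3]],
    [[0.5, 0.1], [0.4, 0.2], [0.6, 0.3], [0.5, 0.4]],
    [[0.9, 0.8], [0.7, 0.9], [0.8, 0.7], [0.9, 0.9]],
]

# Vertex-level attention inside each patch preserves local context.
local_out = [self_attention(p) for p in patches]

# Patch-level attention over mean-pooled patch tokens captures global context.
pool = lambda p: [sum(v[j] for v in p) / len(p) for j in range(len(p[0]))]
global_out = self_attention([pool(p) for p in patches])
```

In the paper's design the overlapping patches additionally share information across patch borders; this sketch only shows the two attention levels themselves.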

Deep-learning reconstruction for noise reduction in respiratory-triggered single-shot phase sensitive inversion recovery myocardial delayed enhancement cardiac magnetic resonance.

Tang M, Wang H, Wang S, Wali E, Gutbrod J, Singh A, Landeras L, Janich MA, Mor-Avi V, Patel AR, Patel H

PubMed · Jul 14 2025
Phase-sensitive inversion recovery late gadolinium enhancement (LGE) improves tissue contrast; however, it is challenging to combine with a free-breathing acquisition. Deep-learning (DL) algorithms have growing applications in cardiac magnetic resonance imaging (CMR) to improve image quality. We compared a novel combination of a free-breathing single-shot phase-sensitive LGE with respiratory triggering (FB-PS) sequence with DL noise reduction reconstruction algorithm to a conventional segmented phase-sensitive LGE acquired during breath holding (BH-PS). 61 adult subjects (29 male, age 51 ± 15) underwent clinical CMR (1.5 T) with the FB-PS sequence and the conventional BH-PS sequence. DL noise reduction was incorporated into the image reconstruction pipeline. Qualitative metrics included image quality, artifact severity, and diagnostic confidence. Quantitative metrics included septal-blood border sharpness, LGE sharpness, blood-myocardium apparent contrast-to-noise ratio (CNR), LGE-myocardium CNR, LGE apparent signal-to-noise ratio (SNR), and LGE burden. The sequences were compared via paired t-tests. 27 subjects had positive LGE. Average time to acquire a slice for FB-PS was 4-12 s versus ~32-38 s for BH-PS (including breathing instructions and rest time between breath-holds). FB-PS with medium DL noise reduction had better image quality (FB-PS 3.0 ± 0.7 vs. BH-PS 1.5 ± 0.6, p < 0.0001), less artifact (4.8 ± 0.5 vs. 3.4 ± 1.1, p < 0.0001), and higher diagnostic confidence (4.0 ± 0.6 vs. 2.6 ± 0.8, p < 0.0001). Septum sharpness in FB-PS with DL reconstruction versus BH-PS was not significantly different. There was no significant difference in LGE sharpness or LGE burden. FB-PS had superior blood-myocardium CNR (17.2 ± 6.9 vs. 16.4 ± 6.0, p = 0.040), LGE-myocardium CNR (12.1 ± 7.2 vs. 10.4 ± 6.6, p = 0.054), and LGE SNR (59.8 ± 26.8 vs. 31.2 ± 24.1, p < 0.001); these metrics further improved with DL noise reduction.
An FB-PS sequence shortens scan time more than five-fold and reduces motion artifact. Combined with a DL noise-reduction algorithm, FB-PS provides image quality better than or comparable to BH-PS. This is a promising solution for patients who cannot hold their breath.
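The sequence comparisons above use paired t-tests, i.e. each subject contributes one FB-PS and one BH-PS measurement and the test is run on the per-subject differences. A minimal sketch of the paired t statistic on hypothetical per-subject CNR values (not study data):

```python
from math import sqrt
from statistics import mean, stdev

# Paired t statistic: mean of per-subject differences divided by its
# standard error. The per-subject CNR values below are hypothetical.

def paired_t(a, b):
    diffs = [x - y for x, y in zip(a, b)]
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

fb_ps = [17.0, 18.5, 16.2, 17.9, 16.8]   # hypothetical FB-PS CNR per subject
bh_ps = [16.1, 17.2, 16.0, 16.8, 16.3]   # hypothetical BH-PS CNR per subject
print(round(paired_t(fb_ps, bh_ps), 3))
```

The p-values reported in the abstract would then come from comparing this statistic against a t distribution with n-1 degrees of freedom.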

Comparing large language models and text embedding models for automated classification of textual, semantic, and critical changes in radiology reports.

Lindholz M, Burdenski A, Ruppel R, Schulze-Weddige S, Baumgärtner GL, Schobert I, Haack AM, Eminovic S, Milnik A, Hamm CA, Frisch A, Penzkofer T

PubMed · Jul 14 2025
Radiology reports can change during workflows, especially when residents draft preliminary versions that attending physicians finalize. We explored how large language models (LLMs) and embedding techniques can categorize these changes into textual, semantic, or clinically actionable types. We evaluated 400 adult CT reports drafted by residents against finalized versions by attending physicians. Changes were rated on a five-point scale from no changes to critical ones. We examined open-source LLMs alongside traditional metrics like normalized word differences, Levenshtein and Jaccard similarity, and text embedding similarity. Model performance was assessed using quadratic weighted Cohen's kappa (κ), (balanced) accuracy, F1, precision, and recall. Inter-rater reliability among evaluators was excellent (κ = 0.990). Of the reports analyzed, 1.3% contained critical changes. The tested methods showed significant performance differences (P < 0.001). The Qwen3-235B-A22B model, using a zero-shot prompt, aligned most closely with human assessments of changes in clinical reports, achieving a κ of 0.822 (SD 0.031). The best conventional metric, word difference, had a κ of 0.732 (SD 0.048); the difference between the two was statistically significant in unadjusted post-hoc tests (P = 0.038) but lost significance after adjusting for multiple testing (P = 0.064). Embedding models underperformed compared to LLMs and classical methods, showing statistical significance in most cases. Large language models like Qwen3-235B-A22B demonstrated moderate to strong alignment with expert evaluations of the clinical significance of changes in radiology reports. LLMs outperformed embedding methods and traditional string and word approaches, achieving statistical significance in most instances. This demonstrates their potential as tools to support peer review.
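Quadratic weighted Cohen's kappa, the study's agreement metric, penalizes disagreements by the squared distance between ordinal ratings, so confusing "no change" with "critical" costs far more than an adjacent-category miss. A minimal sketch on the paper's five-point scale; the example rating pairs are hypothetical.

```python
# Quadratic weighted Cohen's kappa for ordinal ratings on a 1..n_classes scale.
# The example model/human ratings below are hypothetical.

def quadratic_weighted_kappa(rater_a, rater_b, n_classes=5):
    # Quadratic disagreement weights: 0 on the diagonal, 1 at maximal distance.
    w = [[((i - j) ** 2) / ((n_classes - 1) ** 2) for j in range(n_classes)]
         for i in range(n_classes)]
    # Observed confusion matrix and per-rater marginal histograms (1-based ratings).
    obs = [[0] * n_classes for _ in range(n_classes)]
    hist_a = [0] * n_classes
    hist_b = [0] * n_classes
    for a, b in zip(rater_a, rater_b):
        obs[a - 1][b - 1] += 1
        hist_a[a - 1] += 1
        hist_b[b - 1] += 1
    n = len(rater_a)
    observed = sum(w[i][j] * obs[i][j]
                   for i in range(n_classes) for j in range(n_classes))
    expected = sum(w[i][j] * hist_a[i] * hist_b[j] / n
                   for i in range(n_classes) for j in range(n_classes))
    return 1.0 - observed / expected

model = [1, 2, 2, 3, 5, 4, 1, 2]   # hypothetical model ratings
human = [1, 2, 3, 3, 5, 4, 1, 1]   # hypothetical reference ratings
print(round(quadratic_weighted_kappa(model, human), 3))
```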

Early breast cancer detection via infrared thermography using a CNN enhanced with particle swarm optimization.

Alzahrani RM, Sikkandar MY, Begum SS, Babetat AFS, Alhashim M, Alduraywish A, Prakash NB, Ng EYK

PubMed · Jul 13 2025
Breast cancer remains the most prevalent cause of cancer-related mortality among women worldwide, with an estimated incidence exceeding 500,000 new cases annually. Timely diagnosis is vital for enhancing therapeutic outcomes and increasing survival probabilities. Although conventional diagnostic tools such as mammography are widely used and generally effective, they are often invasive, costly, and exhibit reduced efficacy in patients with dense breast tissue. Infrared thermography, by contrast, offers a non-invasive and economical alternative; however, its clinical adoption has been limited, largely due to difficulties in accurate thermal image interpretation and the suboptimal tuning of machine learning algorithms. To overcome these limitations, this study proposes an automated classification framework that employs convolutional neural networks (CNNs) for distinguishing between malignant and benign thermographic breast images. An Enhanced Particle Swarm Optimization (EPSO) algorithm is integrated to automatically fine-tune CNN hyperparameters, thereby minimizing manual effort and enhancing computational efficiency. The methodology also incorporates advanced image preprocessing techniques, including Mamdani fuzzy logic-based edge detection, Contrast-Limited Adaptive Histogram Equalization (CLAHE) for contrast enhancement, and median filtering for noise suppression, to bolster classification performance. The proposed model achieves a superior classification accuracy of 98.8%, significantly outperforming conventional CNN implementations in terms of both computational speed and predictive accuracy. These findings suggest that the developed system holds substantial potential for early, reliable, and cost-effective breast cancer screening in real-world clinical environments.
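The hyperparameter search described above builds on particle swarm optimization. As a hedged sketch (plain PSO, not the paper's enhanced EPSO variant), the snippet below tunes two continuous hyperparameters against a stand-in objective with a known optimum; in practice the objective would be a CNN's validation loss and is assumed here.

```python
import random

# Plain particle swarm optimization over two continuous hyperparameters.
# The quadratic objective is a stand-in for CNN validation loss.

def pso(objective, bounds, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5):
    rng = random.Random(42)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                     # per-particle best positions
    pbest_val = [objective(p) for p in pos]
    gbest = pbest[min(range(n_particles), key=pbest_val.__getitem__)][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # Inertia + cognitive pull (own best) + social pull (swarm best).
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < objective(gbest):
                    gbest = pos[i][:]
    return gbest

# Stand-in "validation loss" with its minimum at (-3.0, 0.5),
# e.g. log10(learning rate) and dropout rate as hypothetical axes.
loss = lambda p: (p[0] + 3.0) ** 2 + (p[1] - 0.5) ** 2
best = pso(loss, bounds=[(-5.0, -1.0), (0.0, 1.0)])
print(best)
```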

Impact of three-dimensional prostate models during robot-assisted radical prostatectomy on surgical margins and functional outcomes.

Khan N, Prezzi D, Raison N, Shepherd A, Antonelli M, Byrne N, Heath M, Bunton C, Seneci C, Hyde E, Diaz-Pinto A, Macaskill F, Challacombe B, Noel J, Brown C, Jaffer A, Cathcart P, Ciabattini M, Stabile A, Briganti A, Gandaglia G, Montorsi F, Ourselin S, Dasgupta P, Granados A

PubMed · Jul 13 2025
Robot-assisted radical prostatectomy (RARP) is the standard surgical procedure for the treatment of prostate cancer. RARP requires a trade-off between performing a wider resection to reduce the risk of positive surgical margins (PSMs) and performing minimal resection of the nerve bundles that determine functional outcomes, such as continence and potency, which affect patients' quality of life. Achieving favourable outcomes therefore requires a precise understanding of the three-dimensional (3D) anatomy of the prostate, nerve bundles and tumour lesion. This is the protocol for a single-centre feasibility study comprising a prospective two-arm interventional group (a 3D virtual and a 3D printed prostate model), a retrospective control group and a prospective control group. The primary endpoint will be PSM status and the secondary endpoints will be functional outcomes, including continence and sexual function. The study will include a total of 270 patients: 54 in each of the interventional groups (3D virtual, 3D printed models), 54 in the retrospective control group and 108 in the prospective control group. Automated segmentation of the prostate gland and lesions will be conducted on multiparametric magnetic resonance imaging (mpMRI) using the 'AutoProstate' and 'AutoLesion' deep learning approaches, while manual annotation of the neurovascular bundles, urethra and external sphincter will be conducted on mpMRI by a radiologist. The resulting masks will be post-processed to generate 3D printed/virtual models. Patients will be allocated to either interventional arm, and the surgeon will be given either a 3D printed or a 3D virtual model at the start of the RARP procedure. At the 6-week follow-up, the surgeon will meet with the patient to present PSM status and capture functional outcomes via questionnaires. We will capture these measures as endpoints for analysis.
These questionnaires will be re-administered at 3, 6 and 12 months postoperatively.

An improved U-NET3+ with transformer and adaptive attention map for lung segmentation.

Joseph Raj V, Christopher P

PubMed · Jul 13 2025
Accurate segmentation of lung regions from CT scan images is critical for diagnosing and monitoring respiratory diseases. This study introduces a novel hybrid architecture, Adaptive Attention U-Net (U-NetAA), which combines the strengths of U-Net3+ and Transformer-based attention mechanisms for high-precision lung segmentation. The U-Net3+ module effectively segments the lung region by leveraging its deep convolutional network with nested skip connections, ensuring rich multi-scale feature extraction. A key innovation is the adaptive attention mechanism within the Transformer module, which dynamically adjusts the focus on critical regions of the image based on local and global contextual relationships. This adaptive attention mechanism addresses variations in lung morphology, image artifacts, and low-contrast regions, leading to improved segmentation accuracy. The combined convolutional and attention-based architecture enhances robustness and precision. Experimental results on benchmark CT datasets demonstrate that the proposed model achieves an IoU of 0.984, a Dice coefficient of 0.989, a mIoU of 0.972, and an HD95 of 1.22 mm, surpassing state-of-the-art methods. These results establish U-NetAA as a superior tool for clinical lung segmentation, with enhanced accuracy, sensitivity, and generalization capability.
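The IoU and Dice figures above are standard overlap metrics between a predicted mask and the ground truth. A minimal sketch on flattened binary masks (the tiny masks below are illustrative, not CT data):

```python
# Overlap metrics on flat binary masks: IoU = |A∩B| / |A∪B|,
# Dice = 2|A∩B| / (|A| + |B|). The masks below are illustrative only.

def iou(pred, truth):
    inter = sum(p & t for p, t in zip(pred, truth))
    union = sum(p | t for p, t in zip(pred, truth))
    return inter / union

def dice(pred, truth):
    inter = sum(p & t for p, t in zip(pred, truth))
    return 2 * inter / (sum(pred) + sum(truth))

pred  = [0, 1, 1, 1, 0, 1, 0, 0]   # hypothetical predicted lung mask
truth = [0, 1, 1, 0, 0, 1, 1, 0]   # hypothetical ground-truth mask
print(iou(pred, truth), dice(pred, truth))
```

HD95, the third metric reported, is a boundary-distance measure (95th percentile of surface distances) and needs mask coordinates rather than flat overlap counts.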

Central Obesity-related Brain Alterations Predict Cognitive Impairments in First Episode of Psychosis.

Kolenič M, McWhinney SR, Selitser M, Šafářová N, Franke K, Vochoskova K, Burdick K, Španiel F, Hajek T

PubMed · Jul 13 2025
Cognitive impairment is a key contributor to disability and poor outcomes in schizophrenia, yet it is not adequately addressed by currently available treatments. Thus, it is important to search for preventable or treatable risk factors for cognitive impairment. Here, we hypothesized that obesity-related neurostructural alterations would be associated with worse cognitive outcomes in people with first episode of psychosis (FEP). This observational study presents cross-sectional data from the Early-Stage Schizophrenia Outcome project. We acquired T1-weighted 3D MRI scans in 440 participants with FEP at the time of the first hospitalization and in 257 controls. Metabolic assessments included body mass index (BMI), waist-to-hip ratio (WHR), and serum concentrations of triglycerides, cholesterol, glucose, insulin, and hs-CRP. We chose the machine learning-derived brain age gap estimate (BrainAGE) as our measure of neurostructural changes and assessed attention, working memory and verbal learning using Digit Span and the Auditory Verbal Learning Test. Among obesity/metabolic markers, only WHR significantly predicted both higher BrainAGE (t(281) = 2.53, P = .012) and worse verbal learning (t(290) = -2.51, P = .026). The association between FEP and verbal learning was partially mediated by BrainAGE (average causal mediated effect, ACME = -0.04 [-0.10, -0.01], P = .022) and the higher BrainAGE in FEP was partially mediated by higher WHR (ACME = 0.08 [0.02, 0.15], P = .006). Central obesity-related brain alterations were linked with worse cognitive performance early in the course of psychosis. These structure-function links suggest that preventing or treating central obesity could target brain and cognitive impairments in FEP.
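The ACME values above come from a mediation analysis. As a hedged sketch of the underlying product-of-coefficients logic only (not the study's estimator, which also produces bootstrap intervals): path a is the effect of the predictor on the mediator, path b is the effect of the mediator on the outcome adjusting for the predictor, and the mediated effect is their product. All data below are simulated.

```python
from statistics import mean

# Product-of-coefficients mediation sketch on simulated data:
# ACME ≈ a * b, with a = slope(mediator ~ predictor) and b the partial
# slope of the outcome on the mediator, adjusting for the predictor
# via Frisch-Waugh residualization. Not the study's estimator.

def slope(x, y):
    mx, my = mean(x), mean(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

def residual(x, y):
    """OLS residuals of y after regressing on x."""
    b, mx, my = slope(x, y), mean(x), mean(y)
    return [yi - (my + b * (xi - mx)) for xi, yi in zip(x, y)]

def acme(x, m, y):
    a = slope(x, m)                                  # path a: x -> mediator
    b = slope(residual(x, m), residual(x, y))        # path b: mediator -> y | x
    return a * b

# Simulated toy data: WHR-like predictor, BrainAGE-like mediator,
# memory-score-like outcome (true a ≈ 2, true b ≈ -1.5).
em = [0.1, -0.1, 0.0, 0.1, -0.1, 0.0, 0.1, -0.1]
ey = [0.05, 0.05, -0.05, -0.05, 0.05, -0.05, 0.0, 0.0]
x = [0.8, 0.9, 1.0, 1.1, 1.2, 0.85, 0.95, 1.05]
m = [2 * xi + e for xi, e in zip(x, em)]
y = [-1.5 * mi + e for mi, e in zip(m, ey)]
print(round(acme(x, m, y), 2))
```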

Establishing an AI-based diagnostic framework for pulmonary nodules in computed tomography.

Jia R, Liu B, Ali M

PubMed · Jul 12 2025
Pulmonary nodules seen on computed tomography (CT) can be benign or malignant, and early detection is important for optimal management. Existing manual methods of identifying nodules have limitations, such as being time-consuming and error-prone. This study aims to develop an artificial intelligence (AI) diagnostic scheme that improves the performance of identifying and categorizing pulmonary nodules on CT scans. The proposed deep learning framework used convolutional neural networks, and the image database totaled 1,056 3D-DICOM CT images. The framework comprised preprocessing, including lung segmentation, followed by nodule detection and classification. Nodule detection was done using the Retina-UNet model, while the features were classified using a support vector machine (SVM). Performance measures, including accuracy, sensitivity, specificity, and the AUROC, were used to evaluate the model during training and validation. Overall, the developed AI model achieved an AUROC of 0.9058. The diagnostic accuracy was 90.58%, with an overall positive predictive value of 89% and an overall negative predictive value of 86%. The algorithm effectively handled the CT images at the preprocessing stage, and the deep learning model performed well in detecting and classifying nodules. The new AI-based diagnostic framework increased diagnostic accuracy compared with the traditional approach. It also provides high reliability for detecting pulmonary nodules and classifying lesions, thus minimizing intra-observer differences and improving clinical outcomes. Future work may include increasing the size of the annotated dataset and fine-tuning the model to address detection issues with non-solitary nodules.
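The accuracy, PPV, and NPV figures above all derive from a binary confusion matrix. A minimal sketch of those definitions; the counts below are hypothetical (chosen only so the ratios echo the reported 89%/86% values, not reconstructed from the study).

```python
# Standard binary confusion-matrix metrics. The counts are hypothetical.

def confusion_metrics(tp, fp, tn, fn):
    return {
        "sensitivity": tp / (tp + fn),          # true positive rate
        "specificity": tn / (tn + fp),          # true negative rate
        "ppv": tp / (tp + fp),                  # positive predictive value
        "npv": tn / (tn + fn),                  # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

m = confusion_metrics(tp=89, fp=11, tn=86, fn=14)
print(m)
```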