
XVertNet: Unsupervised Contrast Enhancement of Vertebral Structures with Dynamic Self-Tuning Guidance and Multi-Stage Analysis.

Eidlin E, Hoogi A, Rozen H, Badarne M, Netanyahu NS

PubMed · Jul 25, 2025
Chest X-ray is one of the main diagnostic tools in emergency medicine, yet its limited ability to capture fine anatomical details can result in missed or delayed diagnoses. To address this, we introduce XVertNet, a novel deep-learning framework designed to significantly enhance vertebral structure visualization in X-ray images. Our framework introduces two key innovations: (1) an unsupervised learning architecture that eliminates reliance on manually labeled training data, a persistent bottleneck in medical imaging, and (2) a dynamic self-tuned internal guidance mechanism featuring an adaptive feedback loop for real-time image optimization. Extensive validation across four major public datasets revealed that XVertNet outperforms state-of-the-art enhancement methods, as demonstrated by improvements in evaluation measures such as entropy, the Tenengrad criterion, LPC-SI, TMQI, and PIQE. Furthermore, clinical validation conducted by two board-certified clinicians confirmed that the enhanced images enabled more sensitive examination of vertebral structural changes. The unsupervised nature of XVertNet facilitates immediate clinical deployment without requiring additional training overhead. This innovation represents a transformative advancement in emergency radiology, providing a scalable and time-efficient solution to enhance diagnostic accuracy in high-pressure clinical environments.
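
Editor's illustration: the abstract evaluates enhancement with no-reference quality measures such as entropy and the Tenengrad criterion. Below is a minimal sketch of how two of those measures can be computed for a grayscale radiograph with NumPy/SciPy; the function names and the stand-in "enhanced" image are illustrative and not taken from XVertNet.

```python
import numpy as np
from scipy.ndimage import sobel

def shannon_entropy(img: np.ndarray, bins: int = 256) -> float:
    """Histogram (Shannon) entropy of a grayscale image, in bits."""
    hist, _ = np.histogram(img, bins=bins, range=(img.min(), img.max()))
    p = hist / hist.sum()
    p = p[p > 0]                               # ignore empty bins
    return float(-(p * np.log2(p)).sum())

def tenengrad(img: np.ndarray) -> float:
    """Tenengrad sharpness: mean squared Sobel gradient magnitude."""
    g = img.astype(np.float64)
    gx, gy = sobel(g, axis=0), sobel(g, axis=1)
    return float(np.mean(gx**2 + gy**2))

# Toy comparison of an "original" and a stand-in "enhanced" image
rng = np.random.default_rng(0)
original = rng.random((256, 256))
enhanced = np.clip(original * 1.5, 0, 1)       # placeholder for an enhanced output
print(shannon_entropy(original), shannon_entropy(enhanced))
print(tenengrad(original), tenengrad(enhanced))
```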

3D-WDA-PMorph: Efficient 3D MRI/TRUS Prostate Registration using Transformer-CNN Network and Wavelet-3D-Depthwise-Attention.

Mahmoudi H, Ramadan H, Riffi J, Tairi H

PubMed · Jul 25, 2025
Multimodal image registration is crucial in medical imaging, particularly for aligning Magnetic Resonance Imaging (MRI) and Transrectal Ultrasound (TRUS) data, which are widely used in prostate cancer diagnosis and treatment planning. However, this task presents significant challenges due to the inherent differences between these imaging modalities, including variations in resolution, contrast, and noise. Conventional Convolutional Neural Network (CNN)-based registration methods, while effective at extracting local features, often struggle to capture global contextual information and fail to adapt to complex deformations in multimodal data. Conversely, Transformer-based methods excel at capturing long-range dependencies and hierarchical features but face difficulties in integrating fine-grained local details, which are essential for accurate spatial alignment. To address these limitations, we propose a novel 3D image registration framework that combines the strengths of both paradigms. Our method employs a Swin Transformer (ST)-CNN encoder-decoder architecture, with a key innovation focusing on enhancing the skip connection stages. Specifically, we introduce an innovative module named Wavelet-3D-Depthwise-Attention (WDA). The WDA module leverages an attention mechanism that integrates wavelet transforms for multi-scale spatial-frequency representation and 3D-Depthwise convolution to improve computational efficiency and modality fusion. Experimental evaluations on clinical MRI/TRUS datasets confirm that the proposed method achieves a median Dice score of 0.94 and a target registration error of 0.85, indicating an improvement in registration accuracy and robustness over existing state-of-the-art (SOTA) methods. The WDA-enhanced skip connections significantly empower the registration network to preserve critical anatomical details, making our method a promising advancement in prostate multimodal registration. Furthermore, the proposed framework shows strong potential for generalization to other image registration tasks.
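
Editor's illustration: a minimal PyTorch sketch of what a 3D depthwise-convolution attention gate on a skip connection might look like. This is an assumption about the general structure only; the paper's WDA module additionally integrates wavelet transforms and sits inside a Swin Transformer-CNN encoder-decoder, which are omitted here.

```python
import torch
import torch.nn as nn

class DepthwiseAttention3D(nn.Module):
    """Toy 3D depthwise attention gate for a skip connection (illustrative,
    not the paper's WDA module; the wavelet branch is omitted)."""
    def __init__(self, channels: int):
        super().__init__()
        # Depthwise 3D conv: one filter per channel (groups=channels)
        self.depthwise = nn.Conv3d(channels, channels, kernel_size=3,
                                   padding=1, groups=channels)
        # Pointwise conv mixes channels and produces the attention map
        self.pointwise = nn.Conv3d(channels, channels, kernel_size=1)
        self.gate = nn.Sigmoid()

    def forward(self, skip: torch.Tensor) -> torch.Tensor:
        attn = self.gate(self.pointwise(self.depthwise(skip)))
        return skip * attn                     # re-weight the skip features

# Example: a skip tensor of shape (batch, channels, D, H, W)
x = torch.randn(1, 16, 8, 32, 32)
print(DepthwiseAttention3D(16)(x).shape)       # torch.Size([1, 16, 8, 32, 32])
```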

Privacy-Preserving Generation of Structured Lymphoma Progression Reports from Cross-sectional Imaging: A Comparative Analysis of Llama 3.3 and Llama 4.

Prucker P, Bressem KK, Kim SH, Weller D, Kader A, Dorfner FJ, Ziegelmayer S, Graf MM, Lemke T, Gassert F, Can E, Meddeb A, Truhn D, Hadamitzky M, Makowski MR, Adams LC, Busch F

PubMed · Jul 25, 2025
Efficient processing of radiology reports for monitoring disease progression is crucial in oncology. Although large language models (LLMs) show promise in extracting structured information from medical reports, privacy concerns limit their clinical implementation. This study evaluates the feasibility and accuracy of two of the most recent Llama models for generating structured lymphoma progression reports from cross-sectional imaging data in a privacy-preserving, real-world clinical setting. This single-center, retrospective study included adult lymphoma patients who underwent cross-sectional imaging and treatment between July 2023 and July 2024. We established a chain-of-thought prompting strategy to leverage the locally deployed Llama-3.3-70B-Instruct and Llama-4-Scout-17B-16E-Instruct models to generate lymphoma disease progression reports across three iterations. Two radiologists independently scored nodal and extranodal involvement, as well as Lugano staging and treatment response classifications. For each LLM and task, we calculated the F1 score, accuracy, recall, precision, and specificity per label, as well as the case-weighted average with 95% confidence intervals (CIs). Both LLMs correctly implemented the template structure for all 65 patients included in this study. Llama-4-Scout-17B-16E-Instruct demonstrated significantly greater accuracy in extracting nodal and extranodal involvement information (nodal: 0.99 [95% CI = 0.98-0.99] vs. 0.97 [95% CI = 0.95-0.96], p < 0.001; extranodal: 0.99 [95% CI = 0.99-1.00] vs. 0.99 [95% CI = 0.98-0.99], p = 0.013). This difference was more pronounced when predicting Lugano stage and treatment response (stage: 0.85 [95% CI = 0.79-0.89] vs. 0.60 [95% CI = 0.53-0.67], p < 0.001; treatment response: 0.88 [95% CI = 0.83-0.92] vs. 0.65 [95% CI = 0.58-0.71], p < 0.001). Neither model produced hallucinations of newly involved nodal or extranodal sites. The highest relative error rates were found when interpreting the level of disease after treatment. In conclusion, privacy-preserving LLMs can effectively extract clinical information from lymphoma imaging reports. While they excel at data extraction, they are limited in their ability to generate new clinical inferences from the extracted information. Our findings suggest their potential utility in streamlining documentation and highlight areas requiring optimization before clinical implementation.
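
Editor's illustration: the study reports F1, accuracy, recall, precision, and specificity per label plus case-weighted averages. A minimal sketch of per-label and pooled metric computation with scikit-learn is shown below; the labels are hypothetical placeholders, and the paper's exact weighting scheme may differ.

```python
import numpy as np
from sklearn.metrics import precision_recall_fscore_support, confusion_matrix

# Hypothetical binary involvement labels (1 = involved), one row per case
y_true = np.array([[1, 0, 1], [0, 0, 1], [1, 1, 0], [0, 0, 0]])
y_pred = np.array([[1, 0, 1], [0, 1, 1], [1, 1, 0], [0, 0, 1]])

for site in range(y_true.shape[1]):
    t, p = y_true[:, site], y_pred[:, site]
    prec, rec, f1, _ = precision_recall_fscore_support(
        t, p, average="binary", zero_division=0)
    tn, fp, fn, tp = confusion_matrix(t, p, labels=[0, 1]).ravel()
    spec = tn / (tn + fp) if (tn + fp) else float("nan")
    print(f"site {site}: F1={f1:.2f} precision={prec:.2f} "
          f"recall={rec:.2f} specificity={spec:.2f}")

# Pooled metrics across all labels (one possible case-weighted summary)
prec, rec, f1, _ = precision_recall_fscore_support(
    y_true.ravel(), y_pred.ravel(), average="binary", zero_division=0)
print(f"pooled F1={f1:.2f}")
```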

Could a New Method of Acromiohumeral Distance Measurement Emerge? Artificial Intelligence vs. Physician.

Dede BT, Çakar İ, Oğuz M, Alyanak B, Bağcıer F

PubMed · Jul 25, 2025
The aim of this study was to evaluate the reliability of ChatGPT-4 measurement of acromiohumeral distance (AHD), a popular assessment in patients with shoulder pain. In this retrospective study, 71 registered shoulder magnetic resonance imaging (MRI) scans were included. AHD measurements were performed on a coronal oblique T1 sequence with a clear view of the acromion and humerus. Measurements were performed by an experienced radiologist twice at 3-day intervals and by ChatGPT-4 twice at 3-day intervals in different sessions. The first, second, and mean values of AHD measured by the physician were 7.6 ± 1.7, 7.5 ± 1.6, and 7.6 ± 1.7, respectively. The first, second, and mean values measured by ChatGPT-4 were 6.7 ± 0.8, 7.3 ± 1.1, and 7.1 ± 0.8, respectively. There was a significant difference between the physician and ChatGPT-4 for the first and mean measurements (p < 0.0001 and p = 0.009, respectively), but no significant difference for the second measurements (p = 0.220). Intrarater reliability for the physician was excellent (ICC = 0.99); intrarater reliability for ChatGPT-4 was poor (ICC = 0.41). Interrater reliability was poor (ICC = 0.45). In conclusion, this study demonstrated that the reliability of ChatGPT-4 in AHD measurements is inferior to that of an experienced radiologist. This study may help inform the future contribution of large language models to medical science.
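
Editor's illustration: the study summarizes agreement with intraclass correlation coefficients (ICCs). A minimal sketch of computing an ICC table with the pingouin package is shown below, assuming a long-format table of targets, raters, and ratings; the measurement values are fabricated placeholders, not the study's data.

```python
import pandas as pd
import pingouin as pg

# Placeholder AHD measurements (mm) for 5 shoulders rated by two raters
df = pd.DataFrame({
    "shoulder": list(range(5)) * 2,
    "rater":    ["physician"] * 5 + ["chatgpt4"] * 5,
    "ahd":      [7.1, 8.0, 6.5, 9.2, 7.8,
                 6.8, 7.5, 6.9, 8.1, 7.0],
})

# Returns a table of ICC variants (single/average rater, consistency/agreement)
icc = pg.intraclass_corr(data=df, targets="shoulder",
                         raters="rater", ratings="ahd")
print(icc[["Type", "ICC", "CI95%"]])
```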

Diagnostic performance of artificial intelligence models for pulmonary nodule classification: a multi-model evaluation.

Herber SK, Müller L, Pinto Dos Santos D, Jorg T, Souschek F, Bäuerle T, Foersch S, Galata C, Mildenberger P, Halfmann MC

PubMed · Jul 25, 2025
Lung cancer is the leading cause of cancer-related mortality. While early detection improves survival, distinguishing malignant from benign pulmonary nodules remains challenging. Artificial intelligence (AI) has been proposed to enhance diagnostic accuracy, but its clinical reliability is still under investigation. Here, we aimed to evaluate the diagnostic performance of AI models in classifying pulmonary nodules. This single-center retrospective study analyzed pulmonary nodules (4-30 mm) detected on CT scans, using three AI software models. Sensitivity, specificity, false-positive and false-negative rates were calculated. Diagnostic accuracy was assessed using the area under the receiver operating characteristic (ROC) curve (AUC), with histopathology serving as the gold standard. Subgroup analyses were based on nodule size and histopathological classification. The impact of imaging parameters was evaluated using regression analysis. A total of 158 nodules (n = 30 benign, n = 128 malignant) were analyzed. One AI model classified most nodules as intermediate risk, preventing further accuracy assessment. The other models demonstrated moderate sensitivity (53.1-70.3%) but low specificity (46.7-66.7%), leading to a high false-positive rate (45.5-52.4%). AUC values, with 95% CIs, fell between 0.5 and 0.6. Subgroup analyses revealed decreased sensitivity (47.8-61.5%) but increased specificity (100%), highlighting inconsistencies. In total, up to 49.0% of the pulmonary nodules were classified as intermediate risk. CT scan type influenced performance (p = 0.03), with better classification accuracy on breath-held CT scans. AI-based software models are not ready for standalone clinical use in pulmonary nodule classification due to low specificity, a high false-negative rate and a high proportion of intermediate-risk classifications.
Question: How accurate are commercially available AI models for the classification of pulmonary nodules compared to the gold standard of histopathology?
Findings: The evaluated AI models demonstrated moderate sensitivity, low specificity and high false-negative rates. Up to 49% of pulmonary nodules were classified as intermediate risk.
Clinical relevance: The high false-negative rates could influence radiologists' decision-making, leading to an increased number of interventions or unnecessary surgical procedures.
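
Editor's illustration: a minimal sketch of how sensitivity, specificity, false-positive rate, and AUC can be computed against a histopathology reference with scikit-learn. The labels, scores, and the 0.5 cut-off below are hypothetical, and vendors may define "false-positive rate" differently than shown here.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

# Hypothetical data: 1 = malignant on histopathology; scores from an AI model
y_true  = np.array([1, 1, 0, 1, 0, 1, 0, 1, 1, 0])
y_score = np.array([0.7, 0.4, 0.6, 0.8, 0.3, 0.2, 0.5, 0.9, 0.6, 0.4])
y_pred  = (y_score >= 0.5).astype(int)          # illustrative decision threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
false_positive_rate = fp / (fp + tn)            # 1 - specificity
auc = roc_auc_score(y_true, y_score)
print(f"sens={sensitivity:.2f} spec={specificity:.2f} "
      f"FPR={false_positive_rate:.2f} AUC={auc:.2f}")
```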

Multimodal prediction based on ultrasound for response to neoadjuvant chemotherapy in triple negative breast cancer.

Lyu M, Yi S, Li C, Xie Y, Liu Y, Xu Z, Wei Z, Lin H, Zheng Y, Huang C, Lin X, Liu Z, Pei S, Huang B, Shi Z

PubMed · Jul 25, 2025
Pathological complete response (pCR) can guide surgical strategy and postoperative treatments in triple-negative breast cancer (TNBC). In this study, we developed a Breast Cancer Response Prediction (BCRP) model to predict pCR in patients with TNBC. The BCRP model integrated multi-dimensional longitudinal quantitative imaging features, clinical factors and features from the Breast Imaging Reporting and Data System (BI-RADS). Multi-dimensional longitudinal quantitative imaging features, including deep learning features and radiomics features, were extracted from multiview B-mode and colour Doppler ultrasound images before and after treatment. The BCRP model achieved areas under the receiver operating characteristic curves (AUCs) of 0.94 [95% confidence interval (CI), 0.91-0.98] and 0.84 [95% CI, 0.75-0.92] in the training and external test cohorts, respectively. Additionally, a low BCRP score was an independent risk factor for event-free survival (P < 0.05). The BCRP model showed promising ability to predict response to neoadjuvant chemotherapy in TNBC and could provide valuable information for survival prediction.
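
Editor's illustration: a toy sketch of the general recipe of fusing deep-learning features, radiomics features, and clinical variables into one pCR classifier by concatenation. Every name, dimension, and the logistic-regression head below are assumptions for illustration; they are not the BCRP model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 120                                              # hypothetical cohort size
deep_feats      = rng.normal(size=(n, 32))           # pre/post-treatment deep features
radiomics_feats = rng.normal(size=(n, 20))           # pre/post-treatment radiomics
clinical        = rng.normal(size=(n, 5))            # clinical + BI-RADS variables
y_pcr           = rng.integers(0, 2, size=n)         # 1 = pathological complete response

X = np.hstack([deep_feats, radiomics_feats, clinical])   # fusion by concatenation
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X, y_pcr)
print("training AUC:", roc_auc_score(y_pcr, model.predict_proba(X)[:, 1]))
```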

Automated characterization of abdominal MRI exams using deep learning.

Kim J, Chae A, Duda J, Borthakur A, Rader DJ, Gee JC, Kahn CE, Witschey WR, Sagreiya H

PubMed · Jul 25, 2025
Advances in magnetic resonance imaging (MRI) have revolutionized disease detection and treatment planning. However, the growing volume and complexity of MRI data, along with heterogeneity in imaging protocols, scanner technology, and labeling practices, create a need for standardized tools to automatically identify and characterize key imaging attributes. Such tools are essential for large-scale, multi-institutional studies that rely on harmonized data to train robust machine learning models. In this study, we developed convolutional neural networks (CNNs) to automatically classify three core attributes of abdominal MRI: pulse sequence type, imaging orientation, and contrast enhancement status. Three distinct CNNs with similar backbone architectures were trained to classify single image slices into one of 12 pulse sequences, 4 orientations, or 2 contrast classes. The models achieved high classification accuracies of 99.51%, 99.87%, and 99.99% for pulse sequence, orientation, and contrast, respectively. We applied Grad-CAM to visualize image regions influencing pulse sequence predictions and highlight relevant anatomical features. To enhance performance, we implemented a majority voting approach to aggregate slice-level predictions, achieving 100% accuracy at the volume level for all tasks. External validation using the Duke Liver Dataset demonstrated strong generalizability; after adjusting for class label mismatch, volume-level accuracies exceeded 96.9% across all classification tasks.
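
Editor's illustration: the slice-to-volume aggregation described above is a simple majority vote. A minimal sketch, with hypothetical per-slice pulse-sequence predictions:

```python
from collections import Counter

def volume_label(slice_predictions: list[str]) -> str:
    """Aggregate slice-level class predictions into one volume-level label
    by majority vote (ties broken by the first-seen class)."""
    return Counter(slice_predictions).most_common(1)[0][0]

# Example: per-slice pulse-sequence predictions for one MRI volume
slices = ["T2", "T2", "T1", "T2", "T2", "DWI", "T2"]
print(volume_label(slices))   # -> "T2"
```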

Enhancing the Characterization of Dural Tears on Photon Counting CT Myelography: An Analysis of Reconstruction Techniques.

Madhavan AA, Kranz PG, Kodet ML, Yu L, Zhou Z, Amrhein TJ

PubMed · Jul 25, 2025
Photon counting detector CT myelography is an effective modality for the localization of spinal CSF leaks. The initial studies describing this technique employed a relatively smooth Br56 kernel. However, subsequent studies have demonstrated that the use of the sharpest quantitative kernel on photon counting CT (Qr89), particularly when denoised with techniques such as quantum iterative reconstruction or convolutional neural networks, enhances detection of CSF-venous fistulas. In this clinical report, we sought to determine whether the Qr89 kernel has utility in patients with dural tears, the other main type of spinal CSF leak. We performed a retrospective review of patients with dural tears diagnosed on photon counting CT myelography, comparing Br56, Qr89 denoised with quantum iterative reconstruction, and Qr89 denoised with a trained convolutional neural network. We specifically assessed spatial resolution, noise level, and diagnostic confidence in eight such cases, finding that the sharper Qr89 kernel outperformed the smoother Br56 kernel. This was particularly true when Qr89 was denoised using a convolutional neural network. Furthermore, in two cases, the dural tear was only seen on the Qr89 reconstructions and missed on the Br56 kernel. Overall, our study demonstrates the potential value of further optimizing post-processing techniques for photon counting CT myelography aimed at localizing dural tears.
ABBREVIATIONS: CNN = convolutional neural network; CVF = CSF-venous fistula; DSM = digital subtraction myelography; EID = energy integrating detector; PCD = photon counting detector; QIR = quantum iterative reconstruction.

Digitalizing English-language CT Interpretation for Positive Haemorrhage Evaluation Reporting: the DECIPHER study.

Bloom B, Haimovich A, Pott J, Williams SL, Cheetham M, Langsted S, Skene I, Astin-Chamberlain R, Thomas SH

PubMed · Jul 25, 2025
Identifying whether there is a traumatic intracranial bleed (ICB+) on head CT is critical for clinical care and research. Free-text CT reports are unstructured and therefore must undergo time-consuming manual review. Existing artificial intelligence classification schemes are not optimised for the emergency department endpoint of classification of ICB+ or ICB-. We sought to assess three methods for classifying CT reports: a text classification (TC) programme, a commercial natural language processing programme (Clinithink) and a generative pretrained transformer large language model (Digitalizing English-language CT Interpretation for Positive Haemorrhage Evaluation Reporting (DECIPHER)-LLM). The primary objective was to determine the diagnostic classification performance of the dichotomous categorisation of each of the three approaches; the secondary objective was to determine whether the LLM could achieve a substantial reduction in CT report review workload while maintaining 100% sensitivity. Anonymised radiology reports of head CT scans performed for trauma were manually labelled as ICB+/-. Training and validation sets were randomly created to train the TC and natural language processing models. Prompts were written to train the LLM. 898 reports were manually labelled. Sensitivity and specificity (95% CI) of TC, Clinithink and DECIPHER-LLM (with probability of ICB set at 10%) were 87.9% (76.7% to 95.0%) and 98.2% (96.3% to 99.3%), 75.9% (62.8% to 86.1%) and 96.2% (93.8% to 97.8%), and 100% (93.8% to 100%) and 97.4% (95.3% to 98.8%), respectively. With the DECIPHER-LLM probability-of-ICB+ threshold set at 10% to identify CT reports requiring manual evaluation, the number of reports requiring manual classification was reduced by an estimated 385/449 cases (85.7% (95% CI 82.1% to 88.9%)) while maintaining 100% sensitivity. DECIPHER-LLM outperformed the other tested free-text classification methods.
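
Editor's illustration: a minimal sketch of the triage logic described above: flag reports whose predicted ICB+ probability meets a threshold for manual review, then check triage sensitivity and workload reduction. The probabilities and labels below are hypothetical placeholders, not DECIPHER data.

```python
import numpy as np

def triage(probs: np.ndarray, labels: np.ndarray, threshold: float = 0.10):
    """Send reports with P(ICB+) >= threshold to manual review.

    Returns the triage sensitivity (fraction of true ICB+ reports flagged)
    and the workload reduction (fraction of reports not needing review).
    """
    flagged = probs >= threshold
    sensitivity = flagged[labels == 1].mean() if (labels == 1).any() else float("nan")
    workload_reduction = 1.0 - flagged.mean()
    return sensitivity, workload_reduction

# Hypothetical model outputs for 10 reports (1 = ICB+ on manual labelling)
probs  = np.array([0.02, 0.40, 0.05, 0.90, 0.01, 0.15, 0.03, 0.70, 0.08, 0.02])
labels = np.array([0,    1,    0,    1,    0,    1,    0,    1,    0,    0])
print(triage(probs, labels, threshold=0.10))   # -> (1.0, 0.6)
```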

A novel approach for breast cancer detection using a Nesterov-accelerated Adam optimizer with an attention mechanism.

Saber A, Emara T, Elbedwehy S, Hassan E

PubMed · Jul 25, 2025
Image-based automatic breast tumor detection has become a significant research focus, driven by recent advancements in machine learning (ML) algorithms. Traditional disease detection methods often involve manual feature extraction from images, a process requiring extensive expertise from specialists and pathologists. This labor-intensive approach is not only time-consuming but also impractical for widespread application. However, advancements in digital technologies and computer vision have enabled convolutional neural networks (CNNs) to learn features automatically, thereby overcoming these challenges. This paper presents a deep neural network model based on the MobileNet-V2 architecture, enhanced with a convolutional block attention mechanism for identifying tumor types in ultrasound images. The attention module improves the MobileNet-V2 model's performance by highlighting disease-affected areas within the images. The proposed model refines features extracted by MobileNet-V2 using the Nesterov-accelerated Adaptive Moment Estimation (Nadam) optimizer. This integration enhances convergence and stability, leading to improved classification accuracy. The proposed approach was evaluated on the BUSI ultrasound image dataset. Experimental results demonstrated strong performance, achieving an accuracy of 99.1%, sensitivity of 99.7%, specificity of 99.5%, precision of 97.7%, and an area under the curve (AUC) of 1.0 using an 80-20 data split. Additionally, under 10-fold cross-validation, the model achieved an accuracy of 98.7%, sensitivity of 99.1%, specificity of 98.3%, precision of 98.4%, F1-score of 98.04%, and an AUC of 0.99.
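
Editor's illustration: a compact PyTorch sketch of the general recipe named in the abstract: a MobileNet-V2 feature extractor, a convolutional block (channel + spatial) attention step, and the NAdam optimizer. The attention design, classifier head, class count, and hyperparameters below are assumptions for illustration and may differ from the paper's implementation.

```python
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2

class ChannelSpatialAttention(nn.Module):
    """CBAM-style attention: channel gating followed by spatial gating."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.spatial = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3), nn.Sigmoid())

    def forward(self, x):
        x = x * self.channel_mlp(x)                       # channel re-weighting
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.amax(dim=1, keepdim=True)], dim=1)
        return x * self.spatial(pooled)                   # spatial re-weighting

backbone = mobilenet_v2(weights=None).features            # pass pretrained weights if desired
model = nn.Sequential(
    backbone,                                             # -> (N, 1280, H/32, W/32)
    ChannelSpatialAttention(1280),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(1280, 3),                                   # e.g. benign / malignant / normal
)
optimizer = torch.optim.NAdam(model.parameters(), lr=1e-3)
print(model(torch.randn(2, 3, 224, 224)).shape)           # torch.Size([2, 3])
```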
