Page 39 of 3433422 results

TFKT V2: task-focused knowledge transfer from natural images for computed tomography perceptual image quality assessment.

Rifa KR, Ahamed MA, Zhang J, Imran A

PubMed · Sep 1, 2025
The accurate assessment of computed tomography (CT) image quality is crucial for ensuring diagnostic reliability while minimizing radiation dose. Radiologists' evaluations are time-consuming and labor-intensive. Existing automated approaches often require large CT datasets with predefined image quality assessment (IQA) scores, which frequently do not align well with clinical evaluations. We aim to develop a reference-free, automated method for CT IQA that closely reflects radiologists' evaluations, reducing the dependency on large annotated datasets. We propose Task-Focused Knowledge Transfer (TFKT), a deep learning-based IQA method leveraging knowledge transfer from task-similar natural image datasets. TFKT incorporates a hybrid convolutional neural network-transformer model, enabling accurate quality predictions by learning from natural image distortions with human-annotated mean opinion scores. The model is pre-trained on natural image datasets and fine-tuned on low-dose computed tomography perceptual image quality assessment data to ensure task-specific adaptability. Extensive evaluations demonstrate that the proposed TFKT method effectively predicts IQA scores aligned with radiologists' assessments on in-domain datasets and generalizes well to out-of-domain clinical pediatric CT exams. The model achieves robust performance without requiring high-dose reference images and is capable of assessing the quality of ~30 CT image slices per second. The proposed TFKT approach provides a scalable, accurate, and reference-free solution for CT IQA. The model bridges the gap between traditional and deep learning-based IQA, offering clinically relevant and computationally efficient assessments applicable to real-world clinical settings.

Analysis of intra- and inter-observer variability in 4D liver ultrasound landmark labeling.

Wulff D, Ernst F

PubMed · Sep 1, 2025
Four-dimensional (4D) ultrasound imaging is widely used in clinics for diagnostics and therapy guidance. Accurate target tracking in 4D ultrasound is crucial for autonomous therapy guidance systems, such as radiotherapy, where precise tumor localization ensures effective treatment. Supervised deep learning approaches rely on reliable ground truth, making accurate labels essential. We investigate the reliability of expert-labeled ground truth data by evaluating intra- and inter-observer variability in landmark labeling for 4D ultrasound imaging in the liver. Eight 4D liver ultrasound sequences were labeled by eight expert observers, each labeling eight landmarks three times. Intra- and inter-observer variability was quantified, and an observer survey and motion analysis were conducted to determine factors influencing labeling accuracy, such as ultrasound artifacts and motion amplitude. The mean intra-observer variability ranged from 1.58 ± 0.90 mm to 2.05 ± 1.22 mm depending on the observer. The inter-observer variability for the two observer groups was 2.68 ± 1.69 mm and 3.06 ± 1.74 mm. The observer survey and motion analysis revealed that ultrasound artifacts significantly affected labeling accuracy due to limited landmark visibility, whereas motion amplitude had no measurable effect. Our measured mean landmark motion was 11.56 ± 5.86 mm. We highlight variability in expert-labeled ground truth data for 4D ultrasound imaging and identify ultrasound artifacts as a major source of labeling inaccuracies. These findings underscore the importance of addressing observer variability and artifact-related challenges to improve the reliability of ground truth data for evaluating target tracking algorithms in 4D ultrasound applications.
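The abstract does not specify exactly how the variability figures were computed; one common convention, sketched below in NumPy, measures each repeated annotation's Euclidean distance to the per-landmark mean position and reports the mean ± standard deviation of those distances (the function name and toy data are illustrative):

```python
import numpy as np

def observer_variability(labels):
    """Mean and std of each annotation's distance to the per-landmark mean.

    labels: array of shape (n_repeats, n_landmarks, 3), coordinates in mm.
    """
    mean_pos = labels.mean(axis=0, keepdims=True)       # (1, n_landmarks, 3)
    dists = np.linalg.norm(labels - mean_pos, axis=-1)  # (n_repeats, n_landmarks)
    return dists.mean(), dists.std()

# Toy example: one landmark labeled three times, spaced 1 mm apart along x
labels = np.array([[[0.0, 0.0, 0.0]],
                   [[1.0, 0.0, 0.0]],
                   [[2.0, 0.0, 0.0]]])
mu, sigma = observer_variability(labels)
```

With real data, `labels` would hold one observer's three repeated annotations (intra-observer) or all observers' annotations (inter-observer) for each landmark.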

Challenges in diagnosis of sarcoidosis.

Bączek K, Piotrowski WJ, Bonella F

PubMed · Sep 1, 2025
Diagnosing sarcoidosis remains challenging. Histology findings and a variable clinical presentation can mimic other infectious, malignant, and autoimmune diseases. This review synthesizes current evidence on histopathology, sampling techniques, imaging modalities, and biomarkers and explores how emerging 'omics' and artificial intelligence tools may sharpen diagnostic accuracy. Within the typical granulomatous lesions, limited or 'burned-out' necrosis is an ancillary finding, present in up to one-third of sarcoid biopsies, that demands a careful differential diagnostic work-up. Endobronchial ultrasound-guided transbronchial needle aspiration of lymph nodes has replaced mediastinoscopy as the first-line sampling tool, while cryobiopsy is still under validation. Volumetric PET metrics such as total lung glycolysis and somatostatin-receptor tracers refine activity assessment; combined FDG PET/MRI improves detection of occult cardiac disease. Advanced bronchoalveolar lavage (BAL) immunophenotyping via flow cytometry and serum, BAL, and genetic biomarkers correlate with inflammatory burden but have low diagnostic value. Multi-omics signatures and PET/CT radiomics, supported by deep-learning algorithms, show promising results for noninvasive diagnostic confirmation, phenotyping, and disease monitoring. No single test is conclusive for diagnosing sarcoidosis; an integrated, multidisciplinary strategy is needed. Large, multicenter, multiethnic studies are essential to translate and validate data from emerging AI tools and -omics research into clinical routine.

Application of deep learning for detection of nasal bone fracture on X-ray nasal bone lateral view.

Mortezaei T, Dalili Kajan Z, Mirroshandel SA, Mehrpour M, Shahidzadeh S

PubMed · Sep 1, 2025
This study aimed to assess the efficacy of deep learning applications for the detection of nasal bone fracture on X-ray nasal bone lateral views. In this retrospective observational study, 2968 X-ray nasal bone lateral views of trauma patients were collected from a radiology centre and randomly divided into training, validation, and test sets. Preprocessing included noise reduction using a Gaussian filter and image resizing. Edge detection was performed with the Canny edge detector. Feature extraction was conducted using the gray-level co-occurrence matrix (GLCM), histogram of oriented gradients (HOG), and local binary pattern (LBP) techniques. Several deep learning models, namely a CNN, VGG16, VGG19, MobileNet, Xception, ResNet50V2, InceptionV3, and a Swin Transformer, were employed to classify images into 2 classes: normal and fracture. Accuracy was highest for VGG16 and the Swin Transformer (0.79), followed by ResNet50V2 and InceptionV3 (0.74), Xception (0.72), and MobileNet (0.71). The AUC was highest for VGG16 (0.86), followed by VGG19 (0.84), MobileNet and Xception (0.83), and the Swin Transformer (0.79). The tested deep learning models were capable of detecting nasal bone fractures on X-ray nasal bone lateral views with high accuracy. VGG16 was the best-performing model.
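Of the handcrafted descriptors named above, the local binary pattern is compact enough to sketch directly. The minimal 8-neighbour variant below is plain NumPy and purely illustrative; production pipelines would typically use an optimized implementation such as scikit-image's `local_binary_pattern`:

```python
import numpy as np

def lbp_8(image):
    """Basic 8-neighbour local binary pattern for a 2D grayscale image.

    Each interior pixel becomes an 8-bit code: bit i is set when the
    i-th neighbour is >= the centre pixel. Border pixels are dropped.
    """
    c = image[1:-1, 1:-1]
    # Neighbour offsets, clockwise from top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    h, w = image.shape
    for bit, (dy, dx) in enumerate(offsets):
        nb = image[1 + dy: h - 1 + dy, 1 + dx: w - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

img = np.array([[1, 2, 3],
                [4, 5, 6],
                [7, 8, 9]], dtype=float)
codes = lbp_8(img)  # one interior pixel -> one code
```

A histogram of such codes over an image (or image patch) then serves as the texture feature vector fed to a classifier.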

Pulmonary Embolism Survival Prediction Using Multimodal Learning Based on Computed Tomography Angiography and Clinical Data.

Zhong Z, Zhang H, Fayad FH, Lancaster AC, Sollee J, Kulkarni S, Lin CT, Li J, Gao X, Collins S, Greineder CF, Ahn SH, Bai HX, Jiao Z, Atalay MK

PubMed · Sep 1, 2025
Pulmonary embolism (PE) is a significant cause of mortality in the United States. The objective of this study is to implement deep learning (DL) models using computed tomography pulmonary angiography (CTPA), clinical data, and PE Severity Index (PESI) scores to predict PE survival. In total, 918 patients (median age 64 y, range 13 to 99 y, 48% male) with 3978 CTPAs were identified via retrospective review across 3 institutions. To predict survival, an AI model was used to extract disease-related imaging features from CTPAs. Imaging features and clinical variables were then incorporated into independent DL models to predict survival outcomes. Cross-modal fusion CoxPH models were used to develop multimodal models from combinations of DL models and calculated PESI scores. Five multimodal models were developed as follows: (1) using CTPA imaging features only, (2) using clinical variables only, (3) using both CTPA and clinical variables, (4) using CTPA and PESI score, and (5) using CTPA, clinical variables, and PESI score. Performance was evaluated using the concordance index (c-index). Kaplan-Meier analysis was performed to stratify patients into high-risk and low-risk groups. Additional factor-risk analysis was conducted to account for right ventricular (RV) dysfunction. For both data sets, the multimodal models incorporating CTPA features, clinical variables, and PESI score achieved higher c-indices than PESI alone. Following the stratification of patients into high-risk and low-risk groups by models, survival outcomes differed significantly (both P <0.001). A strong correlation was found between high-risk grouping and RV dysfunction. Multimodal DL models incorporating CTPA features, clinical data, and PESI achieved higher c-indices than PESI alone for PE survival prediction.
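The c-index used to evaluate these survival models is straightforward to compute. The sketch below implements Harrell's concordance index for right-censored data as a generic illustration, not the authors' evaluation code:

```python
import numpy as np

def concordance_index(times, events, risk_scores):
    """Harrell's c-index for right-censored survival data.

    A pair (i, j) is comparable when the subject with the shorter time
    had an observed event; it is concordant when that subject also has
    the higher predicted risk. Tied risk scores count as 0.5.
    """
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable

times = np.array([2.0, 4.0, 6.0])
events = np.array([1, 1, 1])
scores = np.array([0.9, 0.5, 0.1])  # risk perfectly anti-ordered with time
cindex = concordance_index(times, events, scores)  # -> 1.0
```

A c-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking; libraries such as lifelines provide an equivalent, vectorized version.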

Automatic detection of mandibular fractures on CT scan using deep learning.

Liu Y, Wang X, Tu Y, Chen W, Shi F, You M

PubMed · Sep 1, 2025
This study explores the application of artificial intelligence (AI), specifically deep learning, in the detection and classification of mandibular fractures using CT scans. Data from 459 patients were retrospectively obtained from West China Hospital of Stomatology, Sichuan University, spanning from 2020 to 2023. The CT scans were divided into training, testing, and independent validation sets. This research focuses on training and validating a deep learning model using the nnU-Net segmentation framework for pixel-level accuracy in identifying fracture locations. Additionally, a 3D-ResNet with pre-trained weights was employed to classify fractures into 3 types based on severity. Performance metrics included sensitivity, precision, specificity, and area under the receiver operating characteristic curve (AUC). The study achieved high diagnostic accuracy in mandibular fracture detection, with sensitivity >0.93, precision >0.79, and specificity >0.80. For mandibular fracture classification, accuracies were all above 0.718, with a mean AUC of 0.86. Detection and classification of mandibular fractures in CT images can be significantly enhanced using the nnU-Net segmentation framework, aiding in clinical diagnosis.
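The detection metrics reported above follow directly from confusion-matrix counts. A minimal sketch (the counts here are hypothetical; the actual confusion matrices are not given in the abstract):

```python
def detection_metrics(tp, fp, fn, tn):
    """Sensitivity, precision, and specificity from confusion-matrix counts.

    tp/fp/fn/tn: true positives, false positives, false negatives,
    true negatives of a binary fracture detector.
    """
    sensitivity = tp / (tp + fn)   # recall over actual fractures
    precision = tp / (tp + fp)     # fraction of flagged cases that are real
    specificity = tn / (tn + fp)   # recall over non-fracture cases
    return sensitivity, precision, specificity

# Hypothetical counts chosen to mirror the reported thresholds
sens, prec, spec = detection_metrics(tp=93, fp=20, fn=7, tn=80)
```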

Can super resolution via deep learning improve classification accuracy in dental radiography?

Çelik B, Mikaeili M, Genç MZ, Çelik ME

PubMed · Sep 1, 2025
Deep learning-driven super resolution (SR) aims to enhance the quality and resolution of images, offering potential benefits in dental imaging. Although extensive research has focused on deep learning-based dental classification tasks, the impact of applying SR techniques on classification remains underexplored. This study seeks to address this gap by evaluating and comparing the performance of deep learning classification models on dental images with and without SR enhancement. An open-source dental image dataset was utilized to investigate the impact of SR on image classification performance. SR was applied by 2 models with scaling ratios of 2 and 4, while classification was performed by 4 deep learning models. Performance was evaluated using well-accepted metrics: structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), accuracy, recall, precision, and F1 score. The effect of SR on classification performance was interpreted through 2 different approaches. The 2 SR models yielded average SSIM and PSNR values of 0.904 and 36.71, respectively, across the 2 scaling ratios. Average accuracy and F1 score for classification models trained and tested on SR-generated images were 0.859 and 0.873. In the first comparison approach, accuracy increased in at least half of the cases (8 out of 16) across the different models and scaling ratios, while in the second approach, SR showed significantly higher performance in almost all cases (12 out of 16). This study demonstrated that classification with SR-generated images significantly improved outcomes. For the first time, the classification performance of dental radiographs with resolution improved by SR has been investigated, and significant performance improvement was observed compared to the case without SR.
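Of the image-quality metrics listed, PSNR is simple enough to sketch in a few lines. The implementation below follows the standard definition, 10·log10(MAX²/MSE), and is illustrative rather than the study's code:

```python
import numpy as np

def psnr(reference, test, data_range=255.0):
    """Peak signal-to-noise ratio in dB between two same-shaped images."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

ref = np.zeros((8, 8))
deg = np.full((8, 8), 10.0)  # uniform error of 10 gray levels
val = psnr(ref, deg)         # 10 * log10(255**2 / 100) ≈ 28.13 dB
```

SSIM, the companion metric, additionally compares local luminance, contrast, and structure; scikit-image ships reference implementations of both.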

Evaluating Undersampling Schemes and Deep Learning Reconstructions for High-Resolution 3D Double Echo Steady State Knee Imaging at 7 T: A Comparison Between GRAPPA, CAIPIRINHA, and Compressed Sensing.

Marth T, Marth AA, Kajdi GW, Nickel MD, Paul D, Sutter R, Nanz D, von Deuster C

PubMed · Sep 1, 2025
The 3-dimensional (3D) double echo steady state (DESS) magnetic resonance imaging sequence can image knee cartilage with high, isotropic resolution, particularly at high and ultra-high field strengths. Advanced undersampling techniques with high acceleration factors can provide the short acquisition times required for clinical use. However, the optimal undersampling scheme and its limits are unknown. High-resolution isotropic (reconstructed voxel size: 0.3 × 0.3 × 0.3 mm³) 3D DESS images of 40 knees in 20 volunteers were acquired at 7 T with varying undersampling factors (R = 4-30) and schemes (regular: GRAPPA, CAIPIRINHA; incoherent: compressed sensing [CS]), whereas the remaining imaging parameters were kept constant. All imaging data were reconstructed with deep learning (DL) algorithms. Three readers rated image quality on a 4-point Likert scale. Four-fold accelerated GRAPPA was used as the reference standard. Incidental cartilage lesions were graded on a modified Whole-Organ Magnetic Resonance Imaging Score (WORMS). Friedman's analysis of variance characterized rating differences. The interreader agreement was assessed using κ statistics. The quality of 16-fold accelerated CS images was not rated significantly different from that of 4-fold accelerated GRAPPA and 8-fold accelerated CAIPIRINHA images, although the corresponding data (1:12 min:s) were acquired 4.5 and 2 times faster than the 4-fold accelerated GRAPPA (5:22 min:s) and 8-fold accelerated CAIPIRINHA (2:22 min:s) acquisitions, respectively. Interreader agreement for incidental cartilage lesions was almost perfect for 4-fold accelerated GRAPPA (κ = 0.91), 8-fold accelerated CAIPIRINHA (κ = 0.86), and 8- to 16-fold accelerated CS (κ = 0.91). Our results suggest significant advantages of incoherent versus regular undersampling patterns for high-resolution 3D DESS cartilage imaging with high acceleration factors.
The combination of CS undersampling with DL reconstruction enables fast, isotropic, high-resolution acquisitions without apparent impairment of image quality. Since DESS specific absorption rate values tend to be moderate, CS DESS with DL reconstruction promises potential for high-resolution assessment of cartilage morphology and other musculoskeletal anatomies at 7 T.
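The quoted speedups can be checked directly from the acquisition times. A small sketch converting the min:s strings from the abstract:

```python
def speedup(reference_time, accelerated_time):
    """Acquisition-time speedup between two 'min:s' strings."""
    def seconds(t):
        m, s = t.split(":")
        return int(m) * 60 + int(s)
    return seconds(reference_time) / seconds(accelerated_time)

# Times quoted in the abstract
r1 = speedup("5:22", "1:12")  # GRAPPA x4 vs CS x16      -> ~4.5
r2 = speedup("2:22", "1:12")  # CAIPIRINHA x8 vs CS x16  -> ~2.0
```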

MRI detection and grading of knee osteoarthritis - a pilot study using an AI technique with a novel imaging-based scoring system.

Roy C, Roshan M, Goyal N, Rana P, Ghonge NP, Jena A, Vaishya R, Ghosh S

PubMed · Sep 1, 2025
Precise and rapid identification of knee osteoarthritis (OA) is essential for efficient management and therapy planning. Conventional diagnostic techniques frequently depend on subjective interpretation, which has shortcomings, particularly in the early phases of the disease. In this study, magnetic resonance imaging (MRI) was used to create knee datasets for novel techniques to evaluate knee OA. This methodology utilizes artificial intelligence (AI) algorithms to identify and evaluate important indications of knee osteoarthritis, including osteophytes, eburnation, bone marrow lesions (BMLs), and cartilage thickness. We conducted training and evaluation on multiple deep learning models, including ResNet50, DenseNet121, VGG16, and ResNet101, utilizing annotated MRI data. Through thorough statistical analysis and validation, we demonstrated the efficacy of our models in precisely diagnosing and grading knee OA. This research presents a new grading method, verified by experienced radiologists, that uses eburnation as a significant indicator of the severity of knee OA. This study provides a new method for an AI-powered automated system designed to diagnose knee OA. This system will simplify the diagnostic process, minimize human error, and enhance the effectiveness of clinical treatment. Through the integration of AI-ML (machine learning) technologies, our goal is to improve patient outcomes, optimize the utilization of healthcare resources, and enable personalized knee OA therapy.

Automatic design and optimization of MRI-based neurochemical sensors via reinforcement learning.

Ali Z, Asparin A, Zhang Y, Mettee H, Taha D, Ha Y, Bhanot D, Sarwar K, Kiran H, Wu S, Wei H

PubMed · Sep 1, 2025
Magnetic resonance imaging (MRI) is a cornerstone of medical imaging, celebrated for its non-invasiveness, high spatial and temporal resolution, and exceptional soft tissue contrast, with over 100 million clinical procedures performed annually worldwide. In this field, MRI-based nanosensors have garnered significant interest in biomedical research due to their tunable sensing mechanisms, high permeability, rapid kinetics, and surface functionality. Extensive studies in the field have reported the use of superparamagnetic iron oxide nanoparticles (SPIONs) and proteins as a proof-of-concept for sensing critical neurochemicals via MRI. However, the signal change ratio and response rate of our SPION-protein-based in vitro dopamine and in vivo calcium sensors need to be further enhanced to detect the subtle and transient fluctuations in neurochemical levels associated with neural activities, starting from in vitro diagnostics. In this paper, we present an advanced reinforcement-learning-based computational model that treats sensor design as an optimal decision-making problem by choosing sensor performance as a weighted reward objective function. The adjustments of the SPION's and protein's three-dimensional configuration and magnetic moment establish a set of actions that can autonomously maximize the cumulative reward in the computational environment. Our new model first elucidates the sensor's conformation alteration behind the increment in T2 contrast observed experimentally in MRI in the presence and absence of calcium and dopamine neurochemicals. Additionally, our enhanced machine-learning algorithm can autonomously learn the performance trends of SPION-protein-based sensors and identify their optimal structural parameters. Experimental in vitro validation with TEM and MR relaxometry confirmed the predicted optimal SPION diameters, demonstrating the highest sensing performance at 9 nm for calcium and 11 nm for dopamine detection.
Beginning with in vitro diagnostics, these results demonstrate a versatile modeling platform for the development of MRI-based neurochemical sensors, providing insights into their behavior under operational conditions. This platform also enables the autonomous design of improved sensor sizes and geometries, providing a roadmap for the future optimization of MRI sensors.
