
Rifa KR, Ahamed MA, Zhang J, Imran A

PubMed · Sep 1, 2025
The accurate assessment of computed tomography (CT) image quality is crucial for ensuring diagnostic reliability while minimizing radiation dose. Radiologists' evaluations are time-consuming and labor-intensive, and existing automated approaches often require large CT datasets with predefined image quality assessment (IQA) scores that frequently do not align well with clinical evaluations. We aim to develop a reference-free, automated method for CT IQA that closely reflects radiologists' evaluations, reducing the dependency on large annotated datasets. We propose Task-Focused Knowledge Transfer (TFKT), a deep learning-based IQA method leveraging knowledge transfer from task-similar natural image datasets. TFKT incorporates a hybrid convolutional neural network-transformer model, enabling accurate quality predictions by learning from natural image distortions with human-annotated mean opinion scores. The model is pre-trained on natural image datasets and fine-tuned on low-dose CT perceptual IQA data to ensure task-specific adaptability. Extensive evaluations demonstrate that the proposed TFKT method effectively predicts IQA scores aligned with radiologists' assessments on in-domain datasets and generalizes well to out-of-domain clinical pediatric CT exams. The model achieves robust performance without requiring high-dose reference images and can assess the quality of ~30 CT image slices per second. The proposed TFKT approach provides a scalable, accurate, and reference-free solution for CT IQA, bridging the gap between traditional and deep learning-based IQA and offering clinically relevant, computationally efficient assessments applicable to real-world clinical settings.
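
The pre-train/fine-tune recipe described above can be sketched in PyTorch. The architecture and hyperparameters below are illustrative assumptions rather than the authors' actual TFKT implementation; only the overall flow — mean-opinion-score regression on natural images, then fine-tuning on low-dose CT IQA data — mirrors the abstract.

```python
import torch
import torch.nn as nn

class HybridCNNTransformer(nn.Module):
    """Small CNN feature extractor followed by a transformer encoder."""
    def __init__(self, dim=128, heads=4, layers=2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        enc = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(enc, num_layers=layers)
        self.head = nn.Linear(dim, 1)  # regress a single quality score

    def forward(self, x):
        f = self.cnn(x)                            # (B, dim, H', W')
        tokens = f.flatten(2).transpose(1, 2)      # (B, H'*W', dim) token sequence
        z = self.transformer(tokens).mean(dim=1)   # pool over spatial tokens
        return self.head(z).squeeze(-1)

def fit(model, loader, epochs, lr):
    """One training stage: regress predicted scores toward the targets."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for images, scores in loader:
            opt.zero_grad()
            loss_fn(model(images), scores).backward()
            opt.step()

model = HybridCNNTransformer()
# Stage 1: pre-train on natural images with mean opinion scores (hypothetical loader).
# fit(model, natural_image_mos_loader, epochs=20, lr=1e-4)
# Stage 2: fine-tune on low-dose CT perceptual IQA data (hypothetical loader).
# fit(model, ldct_iqa_loader, epochs=5, lr=1e-5)
```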

Wulff D, Ernst F

PubMed · Sep 1, 2025
Four-dimensional (4D) ultrasound imaging is widely used in clinics for diagnostics and therapy guidance. Accurate target tracking in 4D ultrasound is crucial for autonomous therapy guidance systems, such as radiotherapy, where precise tumor localization ensures effective treatment. Supervised deep learning approaches rely on reliable ground truth, making accurate labels essential. We investigate the reliability of expert-labeled ground truth data by evaluating intra- and inter-observer variability in landmark labeling for 4D ultrasound imaging in the liver. Eight 4D liver ultrasound sequences were labeled by eight expert observers, each labeling eight landmarks three times. Intra- and inter-observer variability was quantified, and an observer survey and motion analysis were conducted to determine factors influencing labeling accuracy, such as ultrasound artifacts and motion amplitude. The mean intra-observer variability ranged from 1.58 mm ± 0.90 mm to 2.05 mm ± 1.22 mm depending on the observer. The inter-observer variability for the two observer groups was 2.68 mm ± 1.69 mm and 3.06 mm ± 1.74 mm. The observer survey and motion analysis revealed that ultrasound artifacts significantly affected labeling accuracy due to limited landmark visibility, whereas motion amplitude had no measurable effect. Our measured mean landmark motion was 11.56 mm ± 5.86 mm. We highlight variability in expert-labeled ground truth data for 4D ultrasound imaging and identify ultrasound artifacts as a major source of labeling inaccuracies. These findings underscore the importance of addressing observer variability and artifact-related challenges to improve the reliability of ground truth data for evaluating target tracking algorithms in 4D ultrasound applications.
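
For illustration, intra-observer variability of the kind reported above can be computed as the mean ± SD of pairwise Euclidean distances between an observer's repeated landmark annotations. This is a minimal sketch with a simulated noise model and an assumed array layout, not the study's protocol:

```python
import numpy as np
from itertools import combinations

def intra_observer_variability(labels):
    """labels: (n_repeats, n_landmarks, 3) positions in mm for one observer."""
    dists = [
        np.linalg.norm(labels[i] - labels[j], axis=-1)  # per-landmark distances
        for i, j in combinations(range(labels.shape[0]), 2)
    ]
    d = np.concatenate(dists)
    return d.mean(), d.std()

rng = np.random.default_rng(0)
truth = rng.uniform(0, 100, size=(8, 3))               # 8 landmark positions in mm
repeats = truth + rng.normal(0, 1.2, size=(3, 8, 3))   # 3 noisy labeling passes
mean_mm, sd_mm = intra_observer_variability(repeats)
print(f"intra-observer variability: {mean_mm:.2f} mm ± {sd_mm:.2f} mm")
```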

Bączek K, Piotrowski WJ, Bonella F

PubMed · Sep 1, 2025
Diagnosing sarcoidosis remains challenging. Histology findings and a variable clinical presentation can mimic other infectious, malignant, and autoimmune diseases. This review synthesizes current evidence on histopathology, sampling techniques, imaging modalities, and biomarkers and explores how emerging 'omics' and artificial intelligence tools may sharpen diagnostic accuracy. Within the typical granulomatous lesions, limited or 'burned-out' necrosis is an ancillary finding that can be present in up to one-third of sarcoid biopsies and demands a careful differential diagnostic work-up. Endobronchial ultrasound-guided transbronchial needle aspiration of lymph nodes has replaced mediastinoscopy as the first-line sampling tool, while cryobiopsy is still under validation. Volumetric PET metrics such as total lung glycolysis and somatostatin-receptor tracers refine activity assessment; combined FDG PET/MRI improves detection of occult cardiac disease. Advanced bronchoalveolar lavage (BAL) immunophenotyping via flow cytometry and serum, BAL, and genetic biomarkers have been shown to correlate with inflammatory burden but have low diagnostic value. Multi-omics signatures and positron emission tomography/computed tomography (PET/CT) radiomics, supported by deep-learning algorithms, show promising results for noninvasive diagnostic confirmation, phenotyping, and disease monitoring. No single test is conclusive for diagnosing sarcoidosis; an integrated, multidisciplinary strategy is needed. Large, multicenter, and multiethnic studies are essential to translate and validate findings from emerging AI tools and omics research into clinical routine.

Hangody G, Szoldán P, Egyed Z, Szabó E, Hangody LR, Hangody L

PubMed · Sep 1, 2025
Transplantation of fresh osteochondral allografts is a possible biological resurfacing option to substitute massive bone loss and provide proper gliding surfaces for extended and deep osteochondral lesions of weight-bearing articular surfaces. Limited chondrocyte survival and technical difficulties may compromise the efficacy of osteochondral transfers. As experimental data suggest that minimizing the time between graft harvest and implantation may improve the chondrocyte survival rate, a donor-to-recipient time of <48 hours was used to repair massive osteochondral defects. For optimal graft congruency, a magnetic resonance-based artificial intelligence algorithm was also developed to provide proper technical support. Based on 3 years of experience, an increased survival rate of transplanted chondrocytes and improved clinical outcomes were observed.

Mortezaei T, Dalili Kajan Z, Mirroshandel SA, Mehrpour M, Shahidzadeh S

PubMed · Sep 1, 2025
This study aimed to assess the efficacy of deep learning applications for the detection of nasal bone fracture on X-ray nasal bone lateral views. In this retrospective observational study, 2968 X-ray nasal bone lateral views of trauma patients were collected from a radiology centre and randomly divided into training, validation, and test sets. Preprocessing included noise reduction using a Gaussian filter and image resizing. Edge detection was performed using the Canny edge detector. Feature extraction was conducted using the gray-level co-occurrence matrix (GLCM), histogram of oriented gradients (HOG), and local binary pattern (LBP) techniques. Several deep learning models, namely CNN, VGG16, VGG19, MobileNet, Xception, ResNet50V2, InceptionV3, and Swin Transformer, were employed for the classification of images into 2 classes: normal and fracture. The accuracy was the highest for VGG16 and Swin Transformer (0.79), followed by ResNet50V2 and InceptionV3 (0.74), Xception (0.72), and MobileNet (0.71). The AUC was the highest for VGG16 (0.86), followed by VGG19 (0.84), MobileNet and Xception (0.83), and Swin Transformer (0.79). The tested deep learning models were capable of detecting nasal bone fractures on X-ray nasal bone lateral views with high accuracy. VGG16 was the best model with successful results.
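
The handcrafted part of this pipeline — Gaussian denoising, resizing, Canny edge detection, and GLCM/HOG/LBP descriptors — maps directly onto scikit-image calls. The parameter values below (sigma, target size, GLCM distances, LBP radius) are illustrative assumptions, not the study's settings:

```python
import numpy as np
from skimage.transform import resize
from skimage.filters import gaussian
from skimage.feature import (canny, graycomatrix, graycoprops, hog,
                             local_binary_pattern)

def extract_features(img):
    """img: 2D grayscale radiograph as a float array in [0, 1]."""
    img = resize(gaussian(img, sigma=1.0), (224, 224), anti_aliasing=True)
    edges = canny(img, sigma=2.0)  # edge map; could serve as an extra input channel
    q = (img * 255).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0], levels=256, normed=True)
    glcm_feats = [graycoprops(glcm, p)[0, 0] for p in ("contrast", "homogeneity", "energy")]
    hog_feats = hog(img, orientations=9, pixels_per_cell=(16, 16), cells_per_block=(2, 2))
    lbp = local_binary_pattern(q, P=8, R=1, method="uniform")  # values 0..9
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([glcm_feats, hog_feats, lbp_hist]), edges

features, edge_map = extract_features(np.random.rand(512, 512))
print(features.shape)  # descriptor vector used alongside the CNN classifiers
```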

Zhong Z, Zhang H, Fayad FH, Lancaster AC, Sollee J, Kulkarni S, Lin CT, Li J, Gao X, Collins S, Greineder CF, Ahn SH, Bai HX, Jiao Z, Atalay MK

PubMed · Sep 1, 2025
Pulmonary embolism (PE) is a significant cause of mortality in the United States. The objective of this study is to implement deep learning (DL) models using computed tomography pulmonary angiography (CTPA), clinical data, and PE Severity Index (PESI) scores to predict PE survival. In total, 918 patients (median age 64 y, range 13 to 99 y, 48% male) with 3978 CTPAs were identified via retrospective review across 3 institutions. To predict survival, an AI model was used to extract disease-related imaging features from CTPAs. Imaging features and clinical variables were then incorporated into independent DL models to predict survival outcomes. Cross-modal fusion CoxPH models were used to develop multimodal models from combinations of DL models and calculated PESI scores. Five models were developed as follows: (1) using CTPA imaging features only, (2) using clinical variables only, (3) using both CTPA and clinical variables, (4) using CTPA and PESI score, and (5) using CTPA, clinical variables, and PESI score. Performance was evaluated using the concordance index (c-index). Kaplan-Meier analysis was performed to stratify patients into high-risk and low-risk groups. Additional factor-risk analysis was conducted to account for right ventricular (RV) dysfunction. For both data sets, the multimodal models incorporating CTPA features, clinical variables, and PESI score achieved higher c-indices than PESI alone. Following the stratification of patients into high-risk and low-risk groups by the models, survival outcomes differed significantly (both P < 0.001). A strong correlation was found between high-risk grouping and RV dysfunction. Multimodal DL models incorporating CTPA features, clinical data, and PESI achieved higher c-indices than PESI alone for PE survival prediction.
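
A cross-modal fusion CoxPH model of the kind described can be sketched by concatenating imaging features, clinical variables, and the PESI score into one design matrix and fitting a Cox proportional hazards model, with the c-index as the evaluation metric. The column names and simulated data below are hypothetical, and lifelines is one common library choice, not necessarily the authors':

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "img_feat": rng.normal(size=n),       # hypothetical DL imaging feature from CTPA
    "age": rng.uniform(13, 99, size=n),   # hypothetical clinical variable
    "pesi": rng.uniform(20, 200, size=n), # PESI score
    "time": rng.exponential(365, size=n), # follow-up time (days)
    "event": rng.integers(0, 2, size=n),  # 1 = death observed, 0 = censored
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")  # all other columns are covariates
# Higher partial hazard means higher risk, so negate it for the c-index.
c = concordance_index(df["time"], -cph.predict_partial_hazard(df), df["event"])
print(f"c-index: {c:.3f}")
```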

Liu Y, Wang X, Tu Y, Chen W, Shi F, You M

PubMed · Sep 1, 2025
This study explores the application of artificial intelligence (AI), specifically deep learning, in the detection and classification of mandibular fractures using CT scans. Data from 459 patients were retrospectively obtained from West China Hospital of Stomatology, Sichuan University, spanning from 2020 to 2023. The CT scans were divided into training, testing, and independent validation sets. This research focuses on training and validating a deep learning model using the nnU-Net segmentation framework for pixel-level accuracy in identifying fracture locations. Additionally, a 3D-ResNet with pre-trained weights was employed to classify fractures into 3 types based on severity. Performance metrics included sensitivity, precision, specificity, and area under the receiver operating characteristic curve (AUC). The study achieved high diagnostic accuracy in mandibular fracture detection, with sensitivity >0.93, precision >0.79, and specificity >0.80. For mandibular fracture classification, accuracies were all above 0.718, with a mean AUC of 0.86. Detection and classification of mandibular fractures in CT images can be significantly enhanced using the nnU-Net segmentation framework, aiding in clinical diagnosis.
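
The reported metrics (sensitivity, precision, specificity, AUC) follow directly from a confusion matrix and the ROC curve of the binary detection output. A minimal sketch with simulated predictions, using scikit-learn:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=500)                   # 1 = fracture present
scores = np.clip(0.6 * y_true + rng.normal(0.3, 0.25, size=500), 0, 1)
y_pred = (scores >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # recall on the fracture class
precision = tp / (tp + fp)
specificity = tn / (tn + fp)
auc = roc_auc_score(y_true, scores)
print(f"sens={sensitivity:.3f} prec={precision:.3f} spec={specificity:.3f} AUC={auc:.3f}")
```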

Çelik B, Mikaeili M, Genç MZ, Çelik ME

PubMed · Sep 1, 2025
Deep learning-driven super resolution (SR) aims to enhance the quality and resolution of images, offering potential benefits in dental imaging. Although extensive research has focused on deep learning-based dental classification tasks, the impact of applying SR techniques on classification remains underexplored. This study seeks to address this gap by evaluating and comparing the performance of deep learning classification models on dental images with and without SR enhancement. An open-source dental image dataset was utilized to investigate the impact of SR on image classification performance. SR was applied by 2 models with scaling ratios of 2 and 4, while classification was performed by 4 deep learning models. Performances were evaluated by well-accepted metrics such as structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), accuracy, recall, precision, and F1 score. The effect of SR on classification performance was interpreted through 2 different approaches. The 2 SR models yielded average SSIM and PSNR values of 0.904 and 36.71 dB across both scaling ratios. Average accuracy and F1 score for classification models trained and tested on SR-generated images were 0.859 and 0.873. In the first comparison approach, accuracy increased in at least half of the cases (8 out of 16) across models and scaling ratios, while in the second approach, SR showed significantly higher performance in almost all cases (12 out of 16). This study demonstrated that classification with SR-generated images significantly improved outcomes. For the first time, the classification performance of dental radiographs with resolution improved by SR has been investigated, and significant performance improvement was observed compared to the case without SR.
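
The fidelity metrics used here, SSIM and PSNR, compare an SR output against a reference image. A minimal sketch using scikit-image, in which bicubic upsampling stands in as a placeholder for the actual SR models:

```python
import numpy as np
from skimage.transform import resize
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

reference = np.random.rand(256, 256)                         # stand-in dental image
low_res = resize(reference, (128, 128), anti_aliasing=True)  # simulated low resolution
upscaled = resize(low_res, (256, 256), order=3)              # bicubic placeholder for an SR model (x2)

ssim = structural_similarity(reference, upscaled, data_range=1.0)
psnr = peak_signal_noise_ratio(reference, upscaled, data_range=1.0)
print(f"SSIM={ssim:.3f}  PSNR={psnr:.2f} dB")
```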

Song Y, Dornisch AM, Dess RT, Margolis DJA, Weinberg EP, Barrett T, Cornell M, Fan RE, Harisinghani M, Kamran SC, Lee JH, Li CX, Liss MA, Rusu M, Santos J, Sonn GA, Vidic I, Woolen SA, Dale AM, Seibert TM

PubMed · Sep 1, 2025
Evaluation of artificial intelligence (AI) algorithms for prostate segmentation is challenging because ground truth is lacking. We aimed to: (1) create a reference standard data set with precise prostate contours by expert consensus, and (2) evaluate various AI tools against this standard. We obtained prostate magnetic resonance imaging cases from six institutions from the Qualitative Prostate Imaging Consortium. A panel of 4 experts (2 genitourinary radiologists and 2 prostate radiation oncologists) meticulously developed consensus prostate segmentations on axial T2-weighted series. We evaluated the performance of 6 AI tools (3 commercially available and 3 academic) using Dice scores, distance from reference contour, and volume error. The panel achieved consensus prostate segmentation on each slice of all 68 patient cases included in the reference data set. We present 2 patient examples to serve as contouring guides. Depending on the AI tool, median Dice scores (across patients) ranged from 0.80 to 0.94 for whole prostate segmentation. For a typical (median) patient, AI tools had a mean error over the prostate surface ranging from 1.3 to 2.4 mm. They maximally deviated 3.0 to 9.4 mm outside the prostate and 3.0 to 8.5 mm inside the prostate for a typical patient. Error in prostate volume measurement for a typical patient ranged from 4.3% to 31.4%. We established an expert consensus benchmark for prostate segmentation. The best-performing AI tools have typical accuracy greater than that reported for radiation oncologists using computed tomography scans (the most common clinical approach for radiation therapy planning). Physician review remains essential to detect occasional major errors.
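
Two of the reported measures, Dice overlap and relative volume error, can be computed directly from binary segmentation masks. A minimal sketch with toy masks and an assumed voxel size (the surface-distance metrics are omitted for brevity):

```python
import numpy as np
from scipy.ndimage import binary_dilation

def dice_score(ref, pred):
    """Dice = 2|A∩B| / (|A| + |B|) for boolean masks."""
    inter = np.logical_and(ref, pred).sum()
    return 2.0 * inter / (ref.sum() + pred.sum())

def volume_error_pct(ref, pred, voxel_mm3=0.5 * 0.5 * 3.0):  # assumed voxel size
    ref_vol, pred_vol = ref.sum() * voxel_mm3, pred.sum() * voxel_mm3
    return 100.0 * abs(pred_vol - ref_vol) / ref_vol

ref = np.zeros((32, 128, 128), dtype=bool)
ref[10:22, 40:90, 40:90] = True            # toy "prostate" reference mask
pred = binary_dilation(ref, iterations=2)  # AI mask that over-segments slightly
print(f"Dice={dice_score(ref, pred):.3f}  volume error={volume_error_pct(ref, pred):.1f}%")
```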

Marth T, Marth AA, Kajdi GW, Nickel MD, Paul D, Sutter R, Nanz D, von Deuster C

PubMed · Sep 1, 2025
The 3-dimensional (3D) double echo steady state (DESS) magnetic resonance imaging sequence can image knee cartilage with high, isotropic resolution, particularly at high and ultra-high field strengths. Advanced undersampling techniques with high acceleration factors can provide the short acquisition times required for clinical use. However, the optimal undersampling scheme and its limits are unknown. High-resolution isotropic (reconstructed voxel size: 0.3 × 0.3 × 0.3 mm³) 3D DESS images of 40 knees in 20 volunteers were acquired at 7 T with varying undersampling factors (R = 4-30) and schemes (regular: GRAPPA, CAIPIRINHA; incoherent: compressed sensing [CS]), whereas the remaining imaging parameters were kept constant. All imaging data were reconstructed with deep learning (DL) algorithms. Three readers rated image quality on a 4-point Likert scale. Four-fold accelerated GRAPPA was used as the reference standard. Incidental cartilage lesions were graded on a modified Whole-Organ Magnetic Resonance Imaging Score (WORMS). Friedman's analysis of variance characterized rating differences. The interreader agreement was assessed using κ statistics. The quality of 16-fold accelerated CS images was not rated significantly different from that of 4-fold accelerated GRAPPA and 8-fold accelerated CAIPIRINHA images, although the corresponding data were acquired in 1:12 min:s, 4.5 and 2 times faster than the 4-fold accelerated GRAPPA (5:22 min:s) and 8-fold accelerated CAIPIRINHA (2:22 min:s) acquisitions, respectively. Interreader agreement for incidental cartilage lesions was almost perfect for 4-fold accelerated GRAPPA (κ = 0.91), 8-fold accelerated CAIPIRINHA (κ = 0.86), and 8- to 16-fold accelerated CS (κ = 0.91). Our results suggest significant advantages of incoherent versus regular undersampling patterns for high-resolution 3D DESS cartilage imaging with high acceleration factors. The combination of CS undersampling with DL reconstruction enables fast, isotropic, high-resolution acquisitions without apparent impairment of image quality. Since specific absorption rate values of DESS tend to be moderate, CS DESS with DL reconstruction promises potential for high-resolution assessment of cartilage morphology and other musculoskeletal anatomies at 7 T.
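
The statistics reported here — Friedman's test across undersampling schemes and κ for interreader agreement — can be reproduced in outline with SciPy and scikit-learn. The rating arrays below are simulated placeholders for the readers' Likert and WORMS grades:

```python
import numpy as np
from scipy.stats import friedmanchisquare
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
# Simulated 4-point Likert quality ratings for 40 knees under three schemes
grappa_r4 = rng.integers(3, 5, size=40)
caipirinha_r8 = rng.integers(2, 5, size=40)
cs_r16 = rng.integers(2, 5, size=40)
stat, p = friedmanchisquare(grappa_r4, caipirinha_r8, cs_r16)
print(f"Friedman chi2={stat:.2f}, p={p:.3f}")

# Agreement between two readers grading incidental cartilage lesions (WORMS-like)
reader1 = rng.integers(0, 3, size=40)
reader2 = np.where(rng.random(40) < 0.9, reader1, rng.integers(0, 3, size=40))
print(f"kappa={cohen_kappa_score(reader1, reader2):.2f}")
```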