Page 548 of 6256241 results

Schmolke SA, Diekhoff T, Mews J, Khayata K, Kotlyarov M

PubMed · May 28, 2025
This study aimed to compare two deep learning reconstruction (DLR) techniques (AiCE mild; AiCE strong) with two established methods, iterative reconstruction (IR) and filtered back projection (FBP), for the detection of monosodium urate (MSU) in dual-energy computed tomography (DECT). An ex vivo bio-phantom and a raster phantom were prepared by inserting syringes containing different MSU concentrations and scanned in a 320-row volume DECT scanner at different tube currents. The scans were reconstructed in a soft tissue kernel using the four reconstruction techniques mentioned above, followed by quantitative assessment of MSU volumes and image quality parameters, i.e., signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR). Both DLR techniques outperformed conventional IR and FBP in terms of volume detection and image quality. Notably, unlike IR and FBP, the two DLR methods showed no positive correlation between the MSU detection rate and the CT dose index (CTDIvol) in the bio-phantom. Our study highlights the potential of DLR for DECT imaging in gout, where it offers enhanced detection sensitivity, improved image contrast, reduced image noise, and lower radiation exposure. Further research is needed to assess the clinical reliability of this approach.
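The abstract reports SNR and CNR without giving formulas; the sketch below uses the common ROI-based definitions (noise taken as the background standard deviation), which may differ from the study's exact measurement protocol:

```python
import numpy as np

def snr_cnr(signal_roi, background_roi):
    """SNR and CNR from two regions of interest.

    SNR = mean(signal) / std(background)
    CNR = (mean(signal) - mean(background)) / std(background)
    Inputs are arrays of voxel values (e.g., HU) sampled from a reconstruction.
    """
    signal = np.asarray(signal_roi, dtype=float)
    background = np.asarray(background_roi, dtype=float)
    noise = background.std()
    snr = signal.mean() / noise
    cnr = (signal.mean() - background.mean()) / noise
    return snr, cnr
```

Computed per reconstruction (FBP, IR, AiCE mild/strong), such values would support the kind of image-quality comparison described above.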

Yamauchi H, Aoyama G, Tsukihara H, Ino K, Tomii N, Takagi S, Fujimoto K, Sakaguchi T, Sakuma I, Ono M

PubMed · May 28, 2025
The aim of this study was to retrain our existing deep learning-based, fully automated aortic valve leaflet/root measurement algorithm using computed tomography (CT) data for root dilatation (RD), and to assess its clinical feasibility. Sixty-seven ECG-gated cardiac CT scans were retrospectively collected from 40 patients with RD to retrain the algorithm. CT data from an additional 100 patients with aortic stenosis (AS, n=50) and aortic regurgitation (AR) with/without RD (n=50) were collected to evaluate the algorithm; 45 of the AR patients had RD. The algorithm provided patient-specific 3-dimensional aortic valve/root visualization. The measurements of the 100 cases automatically obtained by the algorithm were compared with an expert's manual measurements. Overall, there was a moderate-to-high correlation, with differences of 6.1-13.4 mm<sup>2</sup> for the virtual basal ring area, 1.1-2.6 mm for sinus diameter, 0.1-0.6 mm for coronary artery height, 0.2-0.5 mm for geometric height, and 0.9 mm for effective height, except for the sinotubular junction in the AR cases (10.3 mm), whose border was indistinct over the dilated sinuses, compared with 2.1 mm in AS cases. The measurement time per case by the algorithm (122 s) was significantly shorter than the experts' (618-1,126 s). This fully automated algorithm can assist in evaluating aortic valve/root anatomy for planning surgical and transcatheter treatments while saving time and minimizing workload.
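The comparison of automated and expert measurements above reports per-structure differences and correlations; a minimal sketch of two such agreement statistics (the helper name and the exact statistics are assumptions, not taken from the paper):

```python
import numpy as np

def agreement_stats(auto_vals, manual_vals):
    """Mean absolute difference and Pearson correlation between two raters.

    auto_vals / manual_vals: paired measurements of one structure
    (e.g., sinus diameter in mm) across cases.
    """
    a = np.asarray(auto_vals, dtype=float)
    m = np.asarray(manual_vals, dtype=float)
    mean_abs_diff = np.abs(a - m).mean()
    pearson_r = np.corrcoef(a, m)[0, 1]
    return mean_abs_diff, pearson_r
```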

Currie GM, Hawk KE

PubMed · May 28, 2025
Artificial intelligence (AI), particularly machine learning (ML) and deep learning (DL), has significant potential to advance the capabilities of nuclear neuroimaging. The current and emerging applications of ML and DL in the processing, analysis, enhancement and interpretation of SPECT and PET imaging are explored for brain imaging. Key developments include automated image segmentation, disease classification, and radiomic feature extraction, including lower dimensionality first and second order radiomics, higher dimensionality third order radiomics and more abstract fourth order deep radiomics. DL-based reconstruction, attenuation correction using pseudo-CT generation, and denoising of low-count studies have a role in enhancing image quality. AI has a role in sustainability through applications in radioligand design and preclinical imaging while federated learning addresses data security challenges to improve research and development in nuclear cerebral imaging. There is also potential for generative AI to transform the nuclear cerebral imaging space through solutions to data limitations, image enhancement, patient-centered care, workflow efficiencies and trainee education. Innovations in ML and DL are re-engineering the nuclear neuroimaging ecosystem and reimagining tomorrow's precision medicine landscape.

To S, Mavroidis P, Chen RC, Wang A, Royce T, Tan X, Zhu T, Lian J

PubMed · May 28, 2025
Permanent prostate brachytherapy involves inherent intraoperative organ deformation due to the inflatable trans-rectal ultrasound probe cover. Since the majority of the dose is delivered postoperatively with no deformation, the dosimetry approved at the time of implant may not accurately represent the dose delivered to the target and organs at risk. We aimed to evaluate the biological effect of the prostate deformation and its correlation with patient-reported outcomes. We prospectively acquired ultrasound images of the prostate before and after probe cover inflation for 27 patients undergoing I-125 seed implant. The coordinates of the implanted seeds from the approved clinical plan were transferred to the deformation-corrected prostate using a machine learning-based deformable image registration to simulate the actual dosimetry. The DVHs of both sets of plans were reduced to biologically effective dose (BED) distributions and subsequently to Tumor Control Probability (TCP) and Normal Tissue Complication Probability (NTCP) metrics. The changes in fourteen patient-reported rectal and urinary symptoms between the pretreatment and 6-month post-op time points were correlated with the TCP and NTCP metrics using the area under the curve (AUC) and odds ratio (OR). Between the clinical and the deformation-corrected research plans, the mean TCP decreased by 9.4% (p < 0.01), whereas the mean NTCP of the rectum decreased by 10.3% and that of the urethra increased by 16.3% (p < 0.01). For the diarrhea symptom, the deformation-corrected research plans showed AUC = 0.75 and OR = 8.9 (1.3 to 58.8) for the threshold NTCP > 20%, while the clinical plans showed AUC = 0.56 and OR = 1.4 (0.2 to 9.0). For the symptom of urinary control, the deformation-corrected research plans showed AUC = 0.70 and OR = 6.9 (0.6 to 78.0) for the threshold of NTCP > 15%, while the clinical plans showed AUC = 0.51 and no positive OR.
Taking organ deformation into consideration, clinical brachytherapy plans showed worse tumor coverage and worse urethral sparing but better rectal sparing. The deformation-corrected research plans showed a stronger correlation with patient-reported outcomes than the clinical plans for the symptoms of diarrhea and urinary control.
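The correlation analysis above combines a threshold on NTCP with AUC and OR statistics; a sketch of how such a threshold could be scored against a binary symptom outcome (the study's exact statistical procedure is not given, and the Haldane 0.5 correction is one common convention, not necessarily theirs):

```python
import numpy as np

def auc_score(scores, labels):
    """AUC via the Mann-Whitney statistic: the fraction of
    (positive, negative) pairs ranked correctly, counting ties as half."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

def odds_ratio(ntcp, symptom, threshold):
    """Odds ratio for symptom occurrence when NTCP exceeds a threshold.

    Applies the Haldane 0.5 correction when a cell of the 2x2 table is empty.
    """
    high = np.asarray(ntcp, dtype=float) > threshold
    sym = np.asarray(symptom, dtype=bool)
    a = (high & sym).sum()    # high NTCP, symptom present
    b = (high & ~sym).sum()   # high NTCP, symptom absent
    c = (~high & sym).sum()   # low NTCP, symptom present
    d = (~high & ~sym).sum()  # low NTCP, symptom absent
    if 0 in (a, b, c, d):
        a, b, c, d = a + 0.5, b + 0.5, c + 0.5, d + 0.5
    return (a * d) / (b * c)
```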

Belton N, Lawlor A, Curran KM

PubMed · May 28, 2025
The diagnostic accuracy and subjectivity of existing Knee Osteoarthritis (OA) ordinal grading systems have been a subject of ongoing debate and concern. Existing automated solutions are trained to emulate these imperfect systems, whilst also relying on large annotated databases for fully-supervised training. This work proposes a three-stage approach for automated continuous grading of knee OA built upon the principles of Anomaly Detection (AD): learning a robust representation of healthy knee X-rays and grading disease severity based on distance to the centre of normality. In the first stage, SS-FewSOME is proposed, a self-supervised AD technique that learns the 'normal' representation, requiring only examples of healthy subjects and <3% of the labels that existing methods require. In the second stage, this model is used to pseudo-label a subset of unlabelled data as 'normal' or 'anomalous', followed by denoising of the pseudo-labels with CLIP. The final stage involves retraining on labelled and pseudo-labelled data using the proposed Dual Centre Representation Learning (DCRL), which learns the centres of two representation spaces: normal and anomalous. Disease severity is then graded based on the distance to the learned centres. The proposed methodology outperforms existing techniques by margins of up to 24% in terms of OA detection, and the disease severity scores correlate with the Kellgren-Lawrence grading system at the same level as human expert performance. Code is available at https://github.com/niamhbelton/SS-FewSOME_Disease_Severity_Knee_Osteoarthritis.
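The abstract describes grading by distance to the two learned centres but not the exact scoring rule; one plausible normalisation of those distances into a continuous grade is sketched below (an assumption for illustration, not the authors' formula):

```python
import numpy as np

def severity_score(embedding, normal_centre, anomalous_centre):
    """Continuous severity in [0, 1] from distances to the two learned
    centres: 0 at the normal centre, 1 at the anomalous centre."""
    d_normal = np.linalg.norm(embedding - normal_centre)
    d_anomalous = np.linalg.norm(embedding - anomalous_centre)
    return d_normal / (d_normal + d_anomalous)
```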

Shah R, Bozic KJ, Jayakumar P

PubMed · May 28, 2025
Artificial intelligence (AI) presents new opportunities to advance value-based healthcare in orthopedic surgery through 3 potential mechanisms: agency, automation, and augmentation. AI may enhance patient agency through improved health literacy and remote monitoring while reducing costs through triage and reduction in specialist visits. In automation, AI optimizes operating room scheduling and streamlines administrative tasks, with documented cost savings and improved efficiency. For augmentation, AI has been shown to be accurate in diagnostic imaging interpretation and surgical planning, while enabling more precise outcome predictions and personalized treatment approaches. However, implementation faces substantial challenges, including resistance from healthcare professionals, technical barriers to data quality and privacy, and significant financial investments required for infrastructure. Success in healthcare AI integration requires careful attention to regulatory frameworks, data privacy, and clinical validation.

Zobia Batool, Huseyin Ozkan, Erchan Aptoula

arXiv preprint · May 28, 2025
Although Alzheimer's disease detection via MRIs has advanced significantly thanks to contemporary deep learning models, challenges such as class imbalance, protocol variations, and limited dataset diversity often hinder their generalization capacity. To address these issues, this article focuses on the single-domain generalization setting, where, given the data of one domain, a model is designed and developed for maximal performance with respect to an unseen domain of distinct distribution. Since brain morphology is known to play a crucial role in Alzheimer's diagnosis, we propose learnable pseudo-morphological modules aimed at producing shape-aware, anatomically meaningful, class-specific augmentations, in combination with a supervised contrastive learning module to extract robust class-specific representations. Experiments conducted across three datasets show improved performance and generalization capacity, especially under class imbalance and imaging protocol variations. The source code will be made available upon acceptance at https://github.com/zobia111/SDG-Alzheimer.
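The supervised contrastive module presumably optimises a SupCon-style objective (Khosla et al.); a minimal NumPy sketch of that loss is shown for illustration, not as this paper's implementation:

```python
import numpy as np

def supcon_loss(embeddings, labels, tau=0.1):
    """Supervised contrastive (SupCon) loss on L2-normalised embeddings.

    For each anchor i, positives are the other samples sharing its label;
    every anchor is assumed to have at least one positive.
    """
    z = np.asarray(embeddings, dtype=float)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    labels = np.asarray(labels)
    n = len(z)
    sim = z @ z.T / tau  # cosine similarities scaled by temperature
    losses = []
    for i in range(n):
        others = np.arange(n) != i
        positives = others & (labels == labels[i])
        denom = np.exp(sim[i][others]).sum()
        losses.append(-np.mean(np.log(np.exp(sim[i][positives]) / denom)))
    return float(np.mean(losses))
```

Minimising this pulls same-class embeddings together and pushes different classes apart, which is the "robust class-specific representations" goal the abstract describes.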

Siraz, S., Kamanda, H., Gholami, S., Nabil, A. S., Ong, S. S. Y., Alam, M. N.

medRxiv preprint · May 28, 2025
Purpose: To develop and validate deep learning (DL)-based models for classifying geographic atrophy (GA) subtypes using Optical Coherence Tomography (OCT) scans across four clinical classification tasks. Design: Retrospective comparative study evaluating three DL architectures on OCT data with two experimental approaches. Subjects: 455 OCT volumes (258 Central GA [CGA], 74 Non-Central GA [NCGA], 123 no GA [NGA]) from 104 patients at Atrium Health Wake Forest Baptist. For GA versus age-related macular degeneration (AMD) classification, we supplemented our dataset with AMD cases from four public repositories. Methods: We implemented ResNet50, MobileNetV2, and Vision Transformer (ViT-B/16) architectures using two approaches: (1) utilizing all B-scans within each OCT volume and (2) selectively using B-scans containing foveal regions. Models were trained using transfer learning, standardized data augmentation, and patient-level data splitting (70:15:15 ratio) for training, validation, and testing. Main Outcome Measures: Area under the receiver operating characteristic curve (AUC-ROC), F1 score, and accuracy for each classification task (CGA vs. NCGA, CGA vs. NCGA vs. NGA, GA vs. NGA, and GA vs. other forms of AMD). Results: ViT-B/16 consistently outperformed the other architectures across all classification tasks. For CGA versus NCGA classification, ViT-B/16 achieved an AUC-ROC of 0.728 ± 0.083 and accuracy of 0.831 ± 0.006 using selective B-scans. In GA versus NGA classification, ViT-B/16 attained an AUC-ROC of 0.950 ± 0.002 and accuracy of 0.873 ± 0.012 with selective B-scans. All models demonstrated exceptional performance in distinguishing GA from other AMD forms (AUC-ROC > 0.998). For multi-class classification, ViT-B/16 achieved an AUC-ROC of 0.873 ± 0.003 and accuracy of 0.751 ± 0.002 using selective B-scans. Conclusions: Our DL approach successfully classifies GA subtypes with clinically relevant accuracy.
ViT-B/16 demonstrates superior performance due to its ability to capture spatial relationships between atrophic regions and the foveal center. Focusing on B-scans containing foveal regions improved diagnostic accuracy while reducing computational requirements, better aligning with clinical practice workflows.
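Selecting foveal-region B-scans is central to the second approach; absent fovea annotations, a simple centre-slice proxy might look like this (a hypothetical helper for illustration, not the authors' selection method, which uses B-scans that actually contain the fovea):

```python
import numpy as np

def select_foveal_bscans(volume, k=5):
    """Return the k B-scans nearest the middle of the volume.

    volume: array of shape (n_bscans, height, width). Centring on the
    middle slice is only a crude stand-in for true fovea localisation.
    """
    n = volume.shape[0]
    start = max(0, n // 2 - k // 2)
    return volume[start:start + k]
```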

Chau NK, Kim WJ, Lee CH, Chae KJ, Jin GY, Choi S

PubMed · May 27, 2025
Occupational health assessment is critical for detecting respiratory issues caused by harmful exposures, such as cement dust. Quantitative computed tomography (QCT) imaging provides detailed insights into lung structure and function, enhancing the diagnosis of lung diseases. However, its high dimensionality poses challenges for traditional machine learning methods. In this study, Kolmogorov-Arnold networks (KANs) were used for the binary classification of QCT imaging data to assess respiratory conditions associated with cement dust exposure. The dataset comprised QCT images from 609 individuals, including 311 subjects exposed to cement dust and 298 healthy controls. We derived 141 QCT-based variables and employed KANs with two hidden layers of 15 and 8 neurons. The network parameters, including grid intervals, polynomial order, learning rate, and penalty strengths, were carefully fine-tuned. The performance of the model was assessed through various metrics, including accuracy, precision, recall, F1 score, specificity, and the Matthews Correlation Coefficient (MCC). Five-fold cross-validation was employed to enhance the robustness of the evaluation, and SHAP analysis was applied to identify the most influential QCT features. The KAN model demonstrated consistently high performance across all metrics, with an average accuracy of 98.03%, precision of 97.35%, recall of 98.70%, F1 score of 98.01%, and specificity of 97.40%. The MCC value further confirmed the robustness of the model in managing imbalanced datasets. A comparative analysis demonstrated that the KAN model outperformed traditional methods and other deep learning approaches, such as TabPFN, ANN, FT-Transformer, VGG19, MobileNets, ResNet101, XGBoost, SVM, random forest, and decision tree. SHAP analysis highlighted structural and functional lung features, such as airway geometry, wall thickness, and lung volume, as key predictors.
KANs significantly improved the classification of QCT imaging data, enabling earlier detection of cement dust-induced respiratory conditions. SHAP analysis supported model interpretability, strengthening the approach's potential for clinical translation in occupational health assessments.
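Among the metrics reported above, MCC is the least standard to compute by hand; a small self-contained implementation from a binary confusion matrix:

```python
import numpy as np

def mcc(y_true, y_pred):
    """Matthews correlation coefficient for binary labels
    (+1 perfect, 0 chance-level, -1 total disagreement)."""
    t = np.asarray(y_true, dtype=bool)
    p = np.asarray(y_pred, dtype=bool)
    tp = (t & p).sum()
    tn = (~t & ~p).sum()
    fp = (~t & p).sum()
    fn = (t & ~p).sum()
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return 0.0 if denom == 0 else float(tp * tn - fp * fn) / denom
```

Because MCC uses all four confusion-matrix cells, it stays informative on imbalanced datasets, which is why the abstract highlights it alongside accuracy and F1.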
