
Recent Advances in Applying Machine Learning to Proton Radiotherapy.

Wildman VL, Wynne J, Momin S, Kesarwala AH, Yang X

pubmed papers · Jul 3 2025
In radiation oncology, precision and timeliness in both planning and treatment are paramount to patient care. Machine learning has increasingly been applied to various aspects of photon radiotherapy to reduce manual error and improve the efficiency of clinical decision making; applications to proton therapy, however, remain a comparatively emerging field. This systematic review comprehensively covers current and potential applications of machine learning across the proton therapy clinical workflow, an area that has not been extensively explored in the literature. PubMed and Embase were used to identify studies pertinent to machine learning in proton therapy published between 2019 and 2024. The initial PubMed search used the strategy "'proton therapy', 'machine learning', 'deep learning'"; a subsequent Embase search used "("proton therapy") AND ("machine learning" OR "deep learning")". In total, 38 relevant studies were summarized and incorporated. U-Net architectures are prevalent in the patient pre-screening process, while convolutional neural networks play an important role in dose and range prediction. Both image quality improvement and transformation between modalities to decrease extraneous radiation are popular targets of various models. To adaptively improve treatments, advanced architectures such as deep inception and deep cascaded convolutional neural networks improve online dose verification and range monitoring. With the rising clinical usage of proton therapy, machine learning models have increasingly been proposed to facilitate both treatment and discovery. By significantly improving patient screening, planning, image quality, and dose and range calculation, machine learning is advancing the precision and personalization of proton therapy.

A comparative three-dimensional analysis of skeletal and dental changes induced by Herbst and PowerScope appliances in Class II malocclusion treatment: a retrospective cohort study.

Caleme E, Moro A, Mattos C, Miguel J, Batista K, Claret J, Leroux G, Cevidanes L

pubmed papers · Jul 3 2025
Skeletal Class II malocclusion is commonly treated using mandibular advancement appliances during growth. Evaluating the comparative effectiveness of different appliances can help optimize treatment outcomes. This study aimed to compare dental and skeletal outcomes of Class II malocclusion treatment using Herbst and PowerScope appliances in conjunction with fixed orthodontic therapy. This retrospective comparative study included 46 consecutively treated patients in two university clinics: 26 with PowerScope and 20 with Herbst MiniScope. CBCT scans were obtained before and after treatment. Skeletal and dental changes were analyzed using maxillary and mandibular voxel-based regional superimpositions and cranial base registrations, aided by AI-based landmark detection. Measurement bias was minimized through the use of a calibrated, blinded examiner. No patients were excluded from the analysis. Due to the study's retrospective nature, no prospective registration was performed; the institutional review board granted ethical approval. The Herbst group showed greater anterior displacement at B-point and Pogonion than the PowerScope group (2.4 mm and 2.6 mm, respectively). Both groups exhibited improved maxillomandibular relationships, with the SNA angle decreasing in the PowerScope group and the SNB angle increasing in the Herbst group. Vertical skeletal changes were observed at points A, B, and Pog in both groups. Herbst also resulted in less lower-incisor proclination and more pronounced distal movement of the upper incisors. Both appliances effectively corrected Class II malocclusion: Herbst promoted more pronounced skeletal advancement, while PowerScope induced greater dental compensation. These findings may be generalizable to similarly aged Class II patients in CVM stages 3-4.

Quantification of Optical Coherence Tomography Features in >3500 Patients with Inherited Retinal Disease Reveals Novel Genotype-Phenotype Associations

Woof, W. A., de Guimaraes, T. A. C., Al-Khuzaei, S., Daich Varela, M., Shah, M., Naik, G., Sen, S., Bagga, P., Chan, Y. W., Mendes, B. S., Lin, S., Ghoshal, B., Liefers, B., Fu, D. J., Georgiou, M., da Silva, A. S., Nguyen, Q., Liu, Y., Fujinami-Yokokawa, Y., Sumodhee, D., Furman, J., Patel, P. J., Moghul, I., Moosajee, M., Sallum, J., De Silva, S. R., Lorenz, B., Herrmann, P., Holz, F. G., Fujinami, K., Webster, A. R., Mahroo, O. A., Downes, S. M., Madhusudhan, S., Balaskas, K., Michaelides, M., Pontikos, N.

medrxiv preprint · Jul 3 2025
Purpose: To quantify spectral-domain optical coherence tomography (SD-OCT) images cross-sectionally and longitudinally in a large cohort of molecularly characterized patients with inherited retinal disease (IRD) from the UK.
Design: Retrospective study of imaging data.
Participants: Patients with a clinically and molecularly confirmed diagnosis of IRD who underwent macular SD-OCT imaging at Moorfields Eye Hospital (MEH) between 2011 and 2019. We retrospectively identified 4,240 IRD patients from the MEH database (198 distinct IRD genes), including 69,664 SD-OCT macular volumes.
Methods: Eight features of interest were defined: retina, fovea, intraretinal cystic spaces (ICS), subretinal fluid (SRF), subretinal hyper-reflective material (SHRM), pigment epithelium detachment (PED), ellipsoid zone loss (EZ-loss) and retinal pigment epithelium loss (RPE-loss). Manual annotations of five b-scans per SD-OCT volume were performed for the retinal features by four graders following a defined grading protocol. A total of 1,749 b-scans from 360 SD-OCT volumes across 275 patients were annotated for the eight retinal features to train and test a neural-network-based segmentation model, AIRDetect-OCT, which was then applied to the entire imaging dataset.
Main Outcome Measures: Performance of AIRDetect-OCT, compared against inter-grader agreement, was evaluated using the Dice score on a held-out dataset. Feature prevalence, volume and area were analysed cross-sectionally and longitudinally.
Results: The inter-grader Dice score for manual segmentation was ≥90% for retina, ICS, SRF, SHRM and PED, and >77% for both EZ-loss and RPE-loss. Model-grader agreement was >80% for segmentation of retina, ICS, SRF, SHRM, and PED, and >68% for both EZ-loss and RPE-loss. Automatic segmentation was applied to 272,168 b-scans across 7,405 SD-OCT volumes from 3,534 patients encompassing 176 unique genes. Accounting for age, male patients exhibited significantly more EZ-loss (19.6 mm² vs 17.9 mm², p<2.8×10⁻⁴) and RPE-loss (7.79 mm² vs 6.15 mm², p<3.2×10⁻⁶) than females. RPE-loss was significantly higher in Asian patients than in other ethnicities (9.37 mm² vs 7.29 mm², p<0.03). ICS average total volume was largest in RS1 (0.47 mm³) and NR2E3 (0.25 mm³), SRF in BEST1 (0.21 mm³), and PED in EFEMP1 (0.34 mm³). BEST1 and PROM1 showed significantly different patterns of EZ-loss (p<10⁻⁴) and RPE-loss (p<0.02) when comparing the dominant and recessive forms. Sectoral analysis revealed significantly increased EZ-loss in the inferior quadrant compared with the superior quadrant for RHO (Δ = -0.414 mm², p=0.036) and EYS (Δ = -0.908 mm², p=1.5×10⁻⁴). In ABCA4 retinopathy, more severe genotypes (group A) were associated with faster progression of EZ-loss (2.80 ± 0.62 mm²/yr), whilst the p.(Gly1961Glu) variant (group D) was associated with slower progression (0.56 ± 0.18 mm²/yr). There were also sex differences within groups, with males in group A experiencing significantly faster rates of RPE-loss progression (2.48 ± 1.40 mm²/yr vs 0.87 ± 0.62 mm²/yr, p=0.047), but lower rates in groups B, C, and D.
Conclusions: AIRDetect-OCT, a novel deep learning algorithm, enables large-scale OCT feature quantification in IRD patients, uncovering cross-sectional and longitudinal phenotype correlations with demographic and genotypic parameters.
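For readers less familiar with the agreement metric above, here is a minimal sketch of the Dice score computed between two binary segmentation masks, as when comparing a model mask against a grader mask; the array shapes and masks are illustrative, not taken from the AIRDetect-OCT pipeline.

```python
import numpy as np

def dice_score(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks of identical
    shape (e.g., one feature mask such as EZ-loss on a single b-scan)."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / denom

# Illustrative comparison of a model mask and a grader mask
model_mask = np.zeros((496, 512), dtype=bool)
model_mask[200:260, 100:400] = True
grader_mask = np.zeros((496, 512), dtype=bool)
grader_mask[205:265, 110:405] = True
print(f"Dice = {dice_score(model_mask, grader_mask):.3f}")
```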

De-speckling of medical ultrasound image using metric-optimized knowledge distillation.

Khalifa M, Hamza HM, Hosny KM

pubmed papers · Jul 3 2025
Ultrasound imaging provides real-time views of internal organs, which are essential for accurate diagnosis and treatment. However, speckle noise, caused by wave interactions with tissues, creates a grainy texture that hides crucial details. This noise varies with image intensity, which limits the effectiveness of traditional denoising methods. We introduce the Metric-Optimized Knowledge Distillation (MK) model, a deep-learning approach that uses Knowledge Distillation (KD) to denoise ultrasound images. Our method transfers knowledge from a high-performing teacher network to a smaller student network designed for this task. By leveraging KD, the model removes speckle noise while preserving the key anatomical details needed for accurate diagnosis. A key innovation of our paper is the metric-guided training strategy: the evaluation metrics used to assess the model are repeatedly computed during training and incorporated into the loss function, enabling the model to optimally reduce noise and enhance image quality. We evaluate the proposed method against state-of-the-art despeckling techniques, including DnCNN and other recent models. The results demonstrate that our approach achieves superior noise reduction and image-quality preservation, making it a valuable tool for enhancing the diagnostic utility of ultrasound images.
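As a concrete illustration of metric-guided distillation, the sketch below mixes a fidelity term, a distillation term toward the teacher's output, and a differentiable image-quality metric (PSNR here) into one student loss. The abstract does not specify the paper's exact metrics or weights, so the terms and coefficients are assumptions.

```python
import torch
import torch.nn.functional as F

def metric_guided_kd_loss(student_out, teacher_out, clean,
                          alpha=0.5, beta=0.1):
    """Illustrative composite loss: fidelity to the clean target,
    distillation toward the teacher's denoised output, and a PSNR term
    computed during training and folded into the objective."""
    mse = F.mse_loss(student_out, clean)                    # fidelity
    distill = F.mse_loss(student_out, teacher_out.detach()) # follow teacher
    psnr = 10.0 * torch.log10(1.0 / (mse + 1e-8))           # images in [0, 1]
    return mse + alpha * distill - beta * psnr              # higher PSNR -> lower loss

# One training step (teacher frozen, student learning):
# with torch.no_grad():
#     teacher_out = teacher(noisy)
# loss = metric_guided_kd_loss(student(noisy), teacher_out, clean)
# loss.backward(); optimizer.step()
```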

CT-Mamba: A hybrid convolutional State Space Model for low-dose CT denoising.

Li L, Wei W, Yang L, Zhang W, Dong J, Liu Y, Huang H, Zhao W

pubmed papers · Jul 3 2025
Low-dose CT (LDCT) significantly reduces the radiation dose received by patients; however, dose reduction introduces additional noise and artifacts. Currently, denoising methods based on convolutional neural networks (CNNs) face limitations in long-range modeling capability, while Transformer-based denoising methods, although capable of powerful long-range modeling, suffer from high computational complexity. Furthermore, the denoised images predicted by deep learning-based techniques inevitably differ in noise distribution from normal-dose CT (NDCT) images, which can also affect final image quality and diagnostic outcomes. This paper proposes CT-Mamba, a hybrid convolutional State Space Model for LDCT image denoising. The model combines the local feature extraction advantages of CNNs with Mamba's strength in capturing long-range dependencies, enabling it to capture both local details and global context. Additionally, we introduce an innovative spatially coherent Z-shaped scanning scheme to ensure spatial continuity between adjacent pixels in the image. We design a Mamba-driven deep noise power spectrum (NPS) loss function to guide model training, ensuring that the noise texture of denoised LDCT images closely resembles that of NDCT images, thereby enhancing overall image quality and diagnostic value. Experimental results demonstrate that CT-Mamba excels at reducing noise in LDCT images, preserving detail, and optimizing noise texture distribution, and that its outputs exhibit higher statistical similarity to the radiomics features of NDCT images. The proposed CT-Mamba demonstrates outstanding performance in LDCT denoising and holds promise as a representative approach for applying the Mamba framework to this task.
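One plausible reading of a "spatially coherent Z-shaped scanning scheme" is a boustrophedon ordering in which every other row is reversed, so consecutive tokens in the flattened 1D sequence are always spatially adjacent pixels; the sketch below illustrates that idea, though the paper's exact scheme may differ.

```python
import numpy as np

def zigzag_scan_order(h: int, w: int) -> np.ndarray:
    """Return a flattening order for an h x w grid in which even rows run
    left-to-right and odd rows right-to-left, so neighbouring positions
    in the 1D sequence are neighbouring pixels in 2D."""
    idx = np.arange(h * w).reshape(h, w)
    idx[1::2] = idx[1::2, ::-1]  # reverse every other row
    return idx.ravel()

order = zigzag_scan_order(4, 4)
print(order)  # [ 0  1  2  3  7  6  5  4  8  9 10 11 15 14 13 12]
# tokens_1d = feature_map.reshape(channels, -1)[:, order]  # feed to the SSM
```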

Joint Shape Reconstruction and Registration via a Shared Hybrid Diffeomorphic Flow.

Shi H, Wang P, Zhang S, Zhao X, Yang B, Zhang C

pubmed papers · Jul 3 2025
Deep implicit functions (DIFs) effectively represent shapes by using a neural network to map 3D spatial coordinates to scalar values that encode the shape's geometry, but it is difficult to establish correspondences between shapes directly, limiting their use in medical image registration. Recently presented deformation-field-based methods learn implicit templates by combining template-field learning with DIFs and deformation-field learning, establishing shape correspondence through deformation fields. Although these approaches enable joint learning of shape representation and shape correspondence, the decoupled optimization of the template field and the deformation field, caused by the absence of deformation annotations, leads to a relatively accurate template field but an under-optimized deformation field. In this paper, we propose a novel implicit template learning framework via a shared hybrid diffeomorphic flow (SHDF), which enables shared optimization of deformation and template, contributing to better deformations and shape representation. Specifically, we formulate the signed distance function (SDF, a type of DIF) as a one-dimensional (1D) integral, unifying dimensions to match the form used in solving the ordinary differential equation (ODE) for deformation-field learning. The SDF in 1D integral form is then integrated seamlessly into deformation-field learning. Using a recurrent learning strategy, we frame shape representation and deformation as solving different initial value problems of the same ODE. We also introduce a global smoothness regularization to handle local optima due to limited outside-of-shape data. Experiments on medical datasets show that SHDF outperforms state-of-the-art methods in shape representation and registration.
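To make the ODE framing concrete, the following sketch integrates a learned velocity field with fixed-step Euler updates, carrying source points through the deformation; `velocity_fn` is a hypothetical stand-in for the network, and real implementations typically use higher-order solvers such as RK4.

```python
import torch

def integrate_flow(velocity_fn, points, t0=0.0, t1=1.0, steps=16):
    """Solve dx/dt = v(x, t) with fixed-step Euler updates, mapping the
    source points through the flow from time t0 to t1."""
    x = points.clone()
    dt = (t1 - t0) / steps
    t = t0
    for _ in range(steps):
        x = x + dt * velocity_fn(x, t)
        t += dt
    return x

# velocity_fn could be a small MLP over (x, t); integrating from t1 back
# to t0 (negative dt) approximates the inverse map, which is what makes
# the resulting deformation (approximately) diffeomorphic.
```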

MedFormer: Hierarchical Medical Vision Transformer with Content-Aware Dual Sparse Selection Attention

Zunhui Xia, Hongxing Li, Libin Lan

arxiv preprint · Jul 3 2025
Medical image recognition serves as a key means of aiding clinical diagnosis, enabling more accurate and timely identification of diseases and abnormalities. Vision transformer-based approaches have proven effective in handling various medical recognition tasks. However, these methods encounter two primary challenges. First, they are often task-specific and architecture-tailored, limiting their general applicability. Second, they usually either adopt full attention to model long-range dependencies, resulting in high computational costs, or rely on handcrafted sparse attention, potentially leading to suboptimal performance. To tackle these issues, we present MedFormer, an efficient medical vision transformer with two key ideas. First, it employs a pyramid scaling structure as a versatile backbone for various medical image recognition tasks, including image classification and dense prediction tasks such as semantic segmentation and lesion detection. This structure facilitates hierarchical feature representation while reducing the computational load of feature maps, which is highly beneficial for boosting performance. Second, it introduces a novel Dual Sparse Selection Attention (DSSA) with content awareness to improve computational efficiency and robustness against noise while maintaining high performance. As the core building technique of MedFormer, DSSA is explicitly designed to attend to the most relevant content. In addition, a detailed theoretical analysis demonstrates that MedFormer has superior generality and efficiency compared with existing medical vision transformers. Extensive experiments on a variety of imaging-modality datasets consistently show that MedFormer is highly effective in enhancing performance across all three of the above medical image recognition tasks. The code is available at https://github.com/XiaZunhui/MedFormer.
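The DSSA design itself is not spelled out in the abstract, but the core idea of content-aware sparse attention can be illustrated by letting each query attend only to its top-k highest-scoring keys, as in this simplified single-stage sketch (DSSA's dual, content-aware selection is more involved).

```python
import torch
import torch.nn.functional as F

def topk_sparse_attention(q, k, v, top_k=16):
    """Each query attends only to its top-k keys by content score.
    q, k, v: (batch, heads, seq, dim); requires top_k <= seq."""
    scale = q.shape[-1] ** -0.5
    scores = torch.matmul(q, k.transpose(-2, -1)) * scale   # (B, H, N, N)
    top_vals, top_idx = scores.topk(top_k, dim=-1)          # keep k per query
    attn = F.softmax(top_vals, dim=-1)                      # (B, H, N, k)
    b, h, n, _ = q.shape
    # gather the selected values and take the attention-weighted sum
    idx = top_idx.unsqueeze(-1).expand(-1, -1, -1, -1, v.shape[-1])
    v_exp = v.unsqueeze(2).expand(-1, -1, n, -1, -1)        # (B, H, N, N, D)
    v_sel = torch.gather(v_exp, 3, idx)                     # (B, H, N, k, D)
    return (attn.unsqueeze(-1) * v_sel).sum(dim=3)          # (B, H, N, D)

q = k = v = torch.randn(1, 4, 64, 32)
out = topk_sparse_attention(q, k, v, top_k=8)  # (1, 4, 64, 32)
```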

Topological Signatures vs. Gradient Histograms: A Comparative Study for Medical Image Classification

Faisal Ahmed, Mohammad Alfrad Nobel Bhuiyan

arxiv preprint · Jul 2 2025
We present the first comparative study of two fundamentally distinct feature extraction techniques, Histogram of Oriented Gradients (HOG) and Topological Data Analysis (TDA), for medical image classification using retinal fundus images. HOG captures local texture and edge patterns through gradient orientation histograms, while TDA, using cubical persistent homology, extracts high-level topological signatures that reflect the global structure of pixel intensities. We evaluate both methods on the large APTOS dataset for two classification tasks: binary detection (normal versus diabetic retinopathy) and five-class diabetic retinopathy severity grading. From each image, we extract 26,244 HOG features and 800 TDA features, using them independently to train seven classical machine learning models with 10-fold cross-validation. XGBoost achieved the best performance in both cases: 94.29 percent accuracy (HOG) and 94.18 percent (TDA) on the binary task, and 74.41 percent (HOG) and 74.69 percent (TDA) on the multi-class task. Our results show that both methods offer competitive performance but encode different structural aspects of the images. This is the first work to benchmark gradient-based and topological features on retinal imagery. The techniques are interpretable, applicable to other medical imaging domains, and suitable for integration into deep learning pipelines.
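As a sketch of the HOG-plus-classifier pipeline described above, the snippet below extracts HOG descriptors with scikit-image and cross-validates an XGBoost classifier. The resize and cell/block settings are illustrative assumptions, not the configuration that yields the paper's 26,244 features.

```python
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

def hog_features(image, size=(256, 256)):
    """HOG descriptor for one grayscale fundus image (illustrative settings)."""
    img = resize(image, size, anti_aliasing=True)
    return hog(img, orientations=9, pixels_per_cell=(16, 16),
               cells_per_block=(2, 2), feature_vector=True)

# X: (n_images, n_features) stacked HOG vectors; y: integer class labels
# X = np.stack([hog_features(im) for im in images]); y = labels
# clf = XGBClassifier(n_estimators=300, max_depth=6)
# scores = cross_val_score(clf, X, y, cv=10)  # 10-fold CV as in the study
# print(scores.mean())
```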

CareAssist-GPT improves patient user experience with a patient-centered approach to computer-aided diagnosis.

Algarni A

pubmed papers · Jul 2 2025
The rapid integration of artificial intelligence (AI) into healthcare has enhanced diagnostic accuracy; however, patient engagement and satisfaction remain significant challenges that hinder the widespread acceptance and effectiveness of AI-driven clinical tools. This study introduces CareAssist-GPT, a novel AI-assisted diagnostic model designed to improve both diagnostic accuracy and the patient experience through real-time, understandable, and empathetic communication. CareAssist-GPT combines high-resolution X-ray images, real-time physiological vital signs, and clinical notes within a unified deep-learning predictive framework. Feature extraction is performed using convolutional neural networks (CNNs), gated recurrent units (GRUs), and transformer-based NLP modules. Model performance was evaluated in terms of accuracy, precision, recall, specificity, and response time, alongside patient satisfaction measured through a structured user feedback survey. CareAssist-GPT achieved a diagnostic accuracy of 95.8%, a 2.4% improvement over conventional models. It reported high precision (94.3%), recall (93.8%), and specificity (92.7%), with an AUC-ROC of 0.97. The system responded within 500 ms (23.1% faster than existing tools) and achieved a patient satisfaction score of 9.3 out of 10, demonstrating its real-time usability and communicative effectiveness. CareAssist-GPT significantly enhances the diagnostic process by improving accuracy and fostering patient trust through transparent, real-time explanations. These findings position it as a promising patient-centered AI solution capable of transforming healthcare delivery by bridging the gap between advanced diagnostics and human-centered communication.
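The abstract names the three encoders but not how their outputs are combined; the toy sketch below shows one straightforward late-fusion design in PyTorch. All layer sizes, input dimensions, and the concatenation strategy are assumptions rather than the published architecture.

```python
import torch
import torch.nn as nn

class MultimodalFusion(nn.Module):
    """Toy late-fusion head: CNN features from an X-ray, GRU features from
    a vitals time series, and transformer features from tokenized notes
    are concatenated and classified. All sizes are illustrative."""
    def __init__(self, txt_vocab=10000, n_classes=5, d=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, d))
        self.gru = nn.GRU(input_size=6, hidden_size=d, batch_first=True)
        self.emb = nn.Embedding(txt_vocab, d)
        enc = nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True)
        self.txt = nn.TransformerEncoder(enc, num_layers=2)
        self.head = nn.Linear(3 * d, n_classes)

    def forward(self, xray, vitals, notes):
        img = self.cnn(xray)                     # (B, d) image features
        _, h = self.gru(vitals)                  # h: (1, B, d) vitals summary
        txt = self.txt(self.emb(notes)).mean(1)  # (B, d) mean-pooled tokens
        return self.head(torch.cat([img, h[0], txt], dim=-1))

model = MultimodalFusion()
logits = model(torch.randn(2, 1, 64, 64),        # X-ray batch
               torch.randn(2, 30, 6),            # 30 steps of 6 vitals
               torch.randint(0, 10000, (2, 40))) # 40 note tokens
```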

SealPrint: The Anatomically Replicated Seal-and-Support Socket Abutment Technique. A Proof-of-Concept with 12-Month Follow-up.

Lahoud P, Castro A, Walter E, Jacobs W, De Greef A, Jacobs R

pubmed papers · Jul 2 2025
This study investigated a novel technique for designing and manufacturing a sealing socket abutment (SSA) using artificial intelligence (AI)-driven tooth segmentation and 3D-printing technologies. A validated AI-powered module was used to segment the tooth to be replaced on the presurgical cone-beam computed tomography (CBCT) scan. Following virtual surgical planning, the CBCT and intraoral scan (IOS) were imported into Mimics software. The AI-segmented tooth was aligned with the IOS, sliced horizontally at the temporary abutment's neck, and further trimmed 2 mm above the gingival margin to capture the emergence profile. A conical cut, 2 mm wider than the temporary abutment with a 5° taper, was applied to ensure a passive fit. This process produced a custom sealing socket abutment, which was then 3D-printed. After atraumatic tooth extraction and immediate implant placement, the temporary abutment was positioned, followed by the SealPrint on top. A flowable composite was used to fill the gap between the temporary abutment and the SealPrint; the complete structure seals the extraction socket, supports the interdental papilla by design, and protects the implant and (bio)materials used. True to planning, the SealPrint fits passively on the temporary abutment. It provides an optimal seal over the entire surface of the extraction socket, preserving the emergence profile of the extracted tooth, protecting the dental implant, and stabilizing the graft material and blood clot. The SealPrint technique provides a reliable and fast solution for protecting and preserving the soft tissues, hard tissues, and emergence profile following immediate implant placement.