Page 112 of 221 · 2205 results

AI-powered segmentation of bifid mandibular canals using CBCT.

Gumussoy I, Demirezer K, Duman SB, Haylaz E, Bayrakdar IS, Celik O, Syed AZ

PubMed · Jun 4 2025
Accurate segmentation of the mandibular and bifid canals is crucial in dental implant planning to ensure safe implant placement, third molar extractions, and other surgical interventions. The objective of this study was to develop and validate an innovative artificial intelligence tool for the efficient and accurate segmentation of the mandibular and bifid canals on CBCT. CBCT data were screened to identify patients with clearly visible bifid canal variations, and their DICOM files were extracted. These DICOM files were then imported into the 3D Slicer<sup>®</sup> open-source software, where the bifid and mandibular canals were annotated. Sixty-nine anonymized CBCT volumes in DICOM format were converted to the NIfTI file format, and the annotated data, along with the raw images, were processed using the nnU-Net v2 training pipeline by the CranioCatch AI software team. The method accurately predicted the voxels associated with the mandibular canal, achieving an intersection of over 50% in nearly all samples. The accuracy, Dice score, precision, and recall for the mandibular canal/bifid canal were 0.99/0.99, 0.82/0.46, 0.85/0.70, and 0.80/0.42, respectively. Although bifid canal segmentation did not reach the expected level of success, the findings indicate that the proposed method shows promise and has the potential to serve as a supplementary tool for mandibular canal segmentation. Given the importance of accurately evaluating the mandibular canal before surgery, artificial intelligence could reduce the burden on practitioners by automating the complicated and time-consuming process of tracing and segmenting this structure. The ability to distinguish bifid canals automatically may also help prevent neurovascular complications before or after surgery.
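The overlap statistics reported above (Dice, precision, recall, and the >50% intersection check) can all be derived from voxel-wise true/false positive counts. A minimal pure-Python sketch on flat binary masks (illustrative only; the study used nnU-Net v2's own evaluation tooling):

```python
def overlap_metrics(pred, truth):
    """Compute Dice, IoU, precision, and recall for two flat 0/1 masks."""
    tp = sum(p and t for p, t in zip(pred, truth))        # voxels in both masks
    fp = sum(p and not t for p, t in zip(pred, truth))    # predicted but not true
    fn = sum(t and not p for p, t in zip(pred, truth))    # true but missed
    dice = 2 * tp / (2 * tp + fp + fn) if tp + fp + fn else 1.0
    iou = tp / (tp + fp + fn) if tp + fp + fn else 1.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return dice, iou, precision, recall

# toy masks: 4 overlapping voxels, prediction covers 5, ground truth covers 6
pred  = [1, 1, 1, 1, 1, 0, 0, 0]
truth = [1, 1, 1, 1, 0, 1, 1, 0]
dice, iou, prec, rec = overlap_metrics(pred, truth)
```

On real CBCT volumes the same counts would be taken over flattened 3-D arrays rather than Python lists.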

Latent space reconstruction for missing data problems in CT.

Kabelac A, Eulig E, Maier J, Hammermann M, Knaup M, Kachelrieß M

PubMed · Jun 4 2025
The reconstruction of a computed tomography (CT) image can be compromised by artifacts, which, in many cases, reduce the diagnostic value of the image. These artifacts often result from missing or corrupt regions in the projection data, caused, for example, by truncation, metal, or limited-angle acquisitions. In this work, we introduce a novel deep learning-based framework, latent space reconstruction (LSR), which enables correction of various types of artifacts arising from missing or corrupted data. First, we train a generative neural network on uncorrupted CT images. After training, we iteratively search for the point in the latent space of this network that best matches the compromised projection data we measured. Once an optimal point is found, forward projection of the generated CT image can be used to inpaint the corrupted or incomplete regions of the measured raw data. We used LSR to correct for truncation and metal artifacts. For truncation artifact correction, images corrected by LSR show effective artifact suppression within the field of measurement (FOM), alongside a substantial high-quality extension of the FOM compared to other methods. For metal artifact correction, images corrected by LSR demonstrate effective artifact reduction, providing a clearer view of the surrounding tissues and anatomical details. The results indicate that LSR is effective in correcting metal and truncation artifacts. Furthermore, the versatility of LSR allows its application to various other types of artifacts resulting from missing or corrupt data.
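The LSR loop itself — optimize a latent code until the generated image's projection matches the measured data, then read the inpainted values out of the generated image — can be illustrated on a toy linear "generator" with one missing measurement. Everything here (the 2-parameter decoder `W`, the mask, the learning rate) is an illustrative assumption, not the authors' network:

```python
def lsr_inpaint(y_obs, mask, steps=500, lr=0.05):
    """Toy latent-space reconstruction: fit a 2-parameter linear 'generator'
    to the observed pixels only, then return the full generated image."""
    W = [[1, 0], [0, 1], [1, 1], [1, -1]]          # fixed toy decoder
    gen = lambda z: [row[0] * z[0] + row[1] * z[1] for row in W]
    z = [0.0, 0.0]
    for _ in range(steps):
        x = gen(z)
        g = [0.0, 0.0]                              # gradient of 0.5 * masked MSE
        for row, xi, yi, m in zip(W, x, y_obs, mask):
            if m:                                   # only measured data drives the fit
                r = xi - yi
                g[0] += r * row[0]
                g[1] += r * row[1]
        z = [z[0] - lr * g[0], z[1] - lr * g[1]]
    return gen(z)                                   # masked-out pixels come inpainted

# pixel 3 was never measured; the latent fit still recovers it
image = lsr_inpaint(y_obs=[2.0, 3.0, 5.0, 0.0], mask=[1, 1, 1, 0])
```

In the paper's setting the decoder is a trained generative network and the masked loss is computed in projection (raw-data) space, but the search-then-inpaint structure is the same.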

Advancing prenatal healthcare by explainable AI enhanced fetal ultrasound image segmentation using U-Net++ with attention mechanisms.

Singh R, Gupta S, Mohamed HG, Bharany S, Rehman AU, Ghadi YY, Hussen S

PubMed · Jun 4 2025
Prenatal healthcare development requires accurate automated techniques for fetal ultrasound image segmentation. Automated segmentation enables standardized evaluation of fetal development while minimizing time-consuming, operator-dependent manual annotation. This research develops a segmentation framework based on U-Net++ with a ResNet backbone, incorporating attention components to enhance feature extraction in low-contrast, noisy ultrasound data. The model leverages the nested skip connections of U-Net++ and the residual learning of ResNet-34 to achieve state-of-the-art segmentation accuracy. Evaluated on a large fetal ultrasound image collection, the model achieved a 97.52% Dice coefficient, 95.15% Intersection over Union (IoU), and a 3.91 mm Hausdorff distance. The pipeline integrates Grad-CAM++ to explain the model's decisions, enhancing clinical utility and trust. This explainability component enables medical professionals to study how the model functions, yielding clear, verifiable segmentation outputs and better overall reliability. The framework bridges the gap between AI automation and clinical interpretability by highlighting the image regions that drive predictions. The research shows that deep learning combined with Explainable AI (XAI) can produce medical imaging solutions that are both accurate and interpretable. The proposed system demonstrates readiness for clinical workflows by delivering a sophisticated prenatal diagnostic instrument that can enhance healthcare outcomes.
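Of the three metrics quoted, the Hausdorff distance is the least commonly implemented by hand: it is the larger of the two directed maxima of nearest-neighbour distances between boundary point sets. A brute-force sketch (production pipelines use distance transforms or KD-trees):

```python
from math import hypot

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two 2-D point sets (brute force)."""
    def directed(src, dst):
        # for each source point, distance to its nearest destination point;
        # the directed Hausdorff distance is the worst such case
        return max(min(hypot(px - qx, py - qy) for qx, qy in dst)
                   for px, py in src)
    return max(directed(a, b), directed(b, a))

# toy boundary contours: a stray point at (5, 1) dominates the distance
contour_pred = [(0, 0), (1, 0), (2, 0)]
contour_true = [(0, 1), (1, 1), (2, 1), (5, 1)]
d = hausdorff(contour_pred, contour_true)
```

For segmentation masks, the point sets would be the boundary pixels of the predicted and ground-truth regions, with distances in millimetres after applying the pixel spacing.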

Digital removal of dermal denticle layer using geometric AI from 3D CT scans of shark craniofacial structures enhances anatomical precision.

Kim SW, Yuen AHL, Kim HW, Lee S, Lee SB, Lee YM, Jung WJ, Poon CTC, Park D, Kim S, Kim SG, Kang JW, Kwon J, Jo SJ, Giri SS, Park H, Seo JP, Kim DS, Kim BY, Park SC

PubMed · Jun 4 2025
Craniofacial morphometrics in sharks provide crucial insights into evolutionary history, geographical variation, sexual dimorphism, and developmental patterns. However, the fragile cartilaginous nature of the shark craniofacial skeleton poses significant challenges for traditional specimen preparation, often resulting in damaged cranial landmarks and compromised measurement accuracy. While computed tomography (CT) offers a non-invasive alternative for anatomical observation, the high electron density of dermal denticles in sharks creates a unique challenge, obstructing clear visualization of internal structures in three-dimensional volume-rendered images (3DVRI). This study presents an artificial intelligence (AI)-based solution using machine-learning algorithms for digitally removing the dermal denticle layer from CT scans of the shark craniofacial skeleton. We developed geometric AI-driven software (SKINPEELER) that selectively removes high-intensity voxels corresponding to the dermal denticle layer while preserving the underlying anatomical structures. We evaluated this approach using CT scans from 20 sharks (16 Carcharhinus brachyurus, 2 Alopias vulpinus, 1 Sphyrna lewini, and 1 Prionace glauca), applying our AI-driven software to process the Digital Imaging and Communications in Medicine (DICOM) images. The processed scans were reconstructed using bone reconstruction algorithms to enable precise craniofacial measurements. We assessed the accuracy of our method by comparing measurements from the processed 3DVRIs with traditional manual measurements. The AI-assisted approach demonstrated high accuracy (86.16-98.52%) relative to manual measurements. Additionally, we evaluated reproducibility and repeatability using intraclass correlation coefficients (ICC), finding high reproducibility (ICC: 0.456-0.998) and repeatability (ICC: 0.985-1.000 for operator 1 and 0.882-0.999 for operator 2).
Our results indicate that this AI-enhanced digital denticle removal technique, combined with 3D CT reconstruction, provides a reliable and non-destructive alternative to traditional specimen preparation methods for investigating shark craniofacial morphology. This novel approach enhances measurement precision while preserving specimen integrity, potentially advancing various aspects of shark research including evolutionary studies, conservation efforts, and anatomical investigations.
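The denticle-removal step — deleting high-electron-density voxels while leaving cartilage intact — reduces, in its simplest form, to intensity thresholding of the reconstructed volume. The sketch below shows only that core idea; the threshold and fill values are illustrative guesses, and the actual SKINPEELER software adds geometric constraints so that equally dense internal structures are preserved:

```python
def strip_dense_voxels(volume, threshold=700, fill=-1000):
    """Replace voxels above an intensity threshold with an air-like value.
    `threshold` and `fill` are illustrative, not the published settings."""
    return [[[fill if v > threshold else v for v in row]
             for row in slc]
            for slc in volume]

# one-slice toy volume: a dense 'denticle' shell (900) around cartilage (300)
vol = [[[900, 900, 900],
        [900, 300, 900],
        [900, 900, 900]]]
peeled = strip_dense_voxels(vol)
```

A pure threshold cannot distinguish a surface denticle from an equally dense internal voxel, which is precisely why the paper needs a geometric (layer-aware) model rather than this global rule.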

Gender and Ethnicity Bias of Text-to-Image Generative Artificial Intelligence in Medical Imaging, Part 2: Analysis of DALL-E 3.

Currie G, Hewis J, Hawk E, Rohren E

PubMed · Jun 4 2025
Disparities in gender and ethnicity remain an issue across medicine and the health sciences. Only 26%-35% of trainee radiologists are female, despite more than 50% of medical students being female. Similar gender disparities are evident across the medical imaging professions. Generative artificial intelligence text-to-image production could reinforce or amplify these biases. <b>Methods:</b> In March 2024, DALL-E 3 was used via GPT-4 to generate a series of individual and group images of medical imaging professionals: radiologist, nuclear medicine physician, radiographer, nuclear medicine technologist, medical physicist, radiopharmacist, and medical imaging nurse. Multiple iterations of images were generated using a variety of prompts. Collectively, 120 images were produced for evaluation of 524 characters. All images were independently analyzed by 3 expert reviewers from the medical imaging professions for apparent gender and skin tone. <b>Results:</b> Collectively (individual and group images), 57.4% (<i>n</i> = 301) of medical imaging professionals were depicted as male, 42.4% (<i>n</i> = 222) as female, and 91.2% (<i>n</i> = 478) as having a light skin tone. The male gender representation was 65% for radiologists, 62% for nuclear medicine physicians, 52% for radiographers, 56% for nuclear medicine technologists, 62% for medical physicists, 53% for radiopharmacists, and 26% for medical imaging nurses. For all professions, this overrepresents men relative to the actual gender distribution of the workforce. There was no representation of persons with a disability. <b>Conclusion:</b> This evaluation reveals a significant overrepresentation of the male gender in generative artificial intelligence text-to-image production using DALL-E 3 across the medical imaging professions. Generated images have a disproportionately high representation of white men, which is not representative of the diversity of the medical imaging professions.

Enhanced risk stratification for stage II colorectal cancer using deep learning-based CT classifier and pathological markers to optimize adjuvant therapy decision.

Huang YQ, Chen XB, Cui YF, Yang F, Huang SX, Li ZH, Ying YJ, Li SY, Li MH, Gao P, Wu ZQ, Wen G, Wang ZS, Wang HX, Hong MP, Diao WJ, Chen XY, Hou KQ, Zhang R, Hou J, Fang Z, Wang ZN, Mao Y, Wee L, Liu ZY

PubMed · Jun 4 2025
Current risk stratification for stage II colorectal cancer (CRC) has limited accuracy in identifying patients who would benefit from adjuvant chemotherapy, leading to potential over- or under-treatment. We aimed to develop a more precise risk stratification system by integrating artificial intelligence-based imaging analysis with pathological markers. We analyzed 2,992 stage II CRC patients from 12 centers. A deep learning classifier (Swin Transformer Assisted Risk-stratification for CRC, STAR-CRC) was developed using multi-planar CT images from 1,587 patients (training:internal validation = 7:3) and validated in 1,405 patients from 8 independent centers; it stratified patients into low-, uncertain-, and high-risk groups. To further refine the uncertain-risk group, a composite score based on pathological markers (pT4 stage, number of lymph nodes sampled, perineural invasion, and lymphovascular invasion) was applied, forming the intelligent risk integration system for stage II CRC (IRIS-CRC). IRIS-CRC was compared against the guideline-based risk stratification system (GRSS-CRC) for prediction performance in the external validation dataset. IRIS-CRC stratified patients into four prognostic groups with distinct 3-year disease-free survival rates (≥95%, 95-75%, 75-55%, ≤55%). Upon external validation, compared to GRSS-CRC, IRIS-CRC downstaged 27.1% of high-risk patients into the Favorable group, while upstaging 6.5% of low-risk patients into the Very Poor prognosis group, who might require more aggressive treatment. In the GRSS-CRC intermediate-risk group of the external validation dataset, IRIS-CRC reclassified 40.1% of patients as Favorable prognosis and 7.0% as Very Poor prognosis. IRIS-CRC's performance generalized across both chemotherapy and non-chemotherapy cohorts.
IRIS-CRC offers a more precise and personalized risk assessment than current guideline-based risk factors, potentially sparing low-risk patients from unnecessary adjuvant chemotherapy while identifying high-risk individuals for more aggressive treatment. This novel approach holds promise for improving clinical decision-making and outcomes in stage II CRC.
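The abstract names the four pathological markers in IRIS-CRC's second stage but not their weights, so the point values and cutoffs below are hypothetical placeholders; the sketch only illustrates the shape of such a composite-score rule:

```python
def pathology_score(pT4, nodes_sampled, perineural, lymphovascular):
    """Hypothetical composite score from the four markers named in the abstract.
    Point values and the node-count cutoff (12) are illustrative assumptions."""
    score = 0
    score += 2 if pT4 else 0
    score += 1 if nodes_sampled < 12 else 0   # inadequate lymph-node sampling
    score += 1 if perineural else 0
    score += 1 if lymphovascular else 0
    return score

def refine_uncertain(score):
    """Map the composite score to a prognosis label (illustrative cutoffs)."""
    if score == 0:
        return "Favorable"
    if score >= 3:
        return "Very Poor"
    return "Intermediate"

label = refine_uncertain(pathology_score(pT4=True, nodes_sampled=8,
                                         perineural=False, lymphovascular=True))
```

In the published system this refinement is applied only to the deep learning classifier's uncertain-risk group, not to all patients.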

Validation study comparing Artificial intelligence for fully automatic aortic aneurysms Segmentation and diameter Measurements On contrast and non-contrast enhanced computed Tomography (ASMOT).

Gatinot A, Caradu C, Stephan L, Foret T, Rinckenbach S

PubMed · Jun 4 2025
Accurate aortic diameter measurements are essential for diagnosis, surveillance, and procedural planning in aortic disease. Semi-automatic methods remain widely used but require manual corrections, which can be time-consuming and operator-dependent. Artificial intelligence (AI)-driven fully automatic methods may offer improved efficiency and measurement accuracy. This study aimed to validate a fully automatic method against a semi-automatic approach using computed tomography angiography (CTA) and non-contrast CT scans. A monocentric retrospective comparative study was conducted on patients who underwent endovascular aortic repair (EVAR) for infrarenal, juxta-renal, or thoracic aneurysms and on a control group. Maximum aortic wall-to-wall diameters were measured before and after repair using fully automatic software (PRAEVAorta2<sup>®</sup>, Nurea, Bordeaux, France) and compared to measurements performed by two vascular surgeons using a semi-automatic approach on CTA and non-contrast CT scans. Correlation coefficients (Pearson's R) and absolute differences were calculated to assess agreement. A total of 120 CT scans (60 CTA and 60 non-contrast CT) were included, comprising 23 EVAR, 4 thoracic EVAR, 1 fenestrated EVAR, and 4 control cases. Strong correlations were observed between the fully automatic and semi-automatic measurements on both CTA and non-contrast CT. For CTA, correlation coefficients ranged from 0.94 to 0.96 (R<sup>2</sup> = 0.88-0.92), while for non-contrast CT they ranged from 0.87 to 0.89 (R<sup>2</sup> = 0.76-0.79). Median absolute differences in aortic diameter measurements varied between 1.1 mm and 4.2 mm across the different anatomical locations. The fully automatic method demonstrated a significantly faster processing time, with a median execution time of 73 seconds (IQR: 57-91) compared to 700 seconds (IQR: 613-800) for the semi-automatic method (p < 0.001).
The fully automatic method demonstrated strong agreement with semi-automatic measurements for both CTA and non-contrast CT, before and after endovascular repair in different aortic locations, with significantly reduced analysis time. This method could improve workflow efficiency in clinical practice and research applications.
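The two agreement statistics used here — Pearson's R (squared to give R²) and the median absolute difference between paired diameters — are simple to compute from paired readings:

```python
from math import sqrt
from statistics import median, mean

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def median_abs_diff(x, y):
    """Median absolute difference between paired measurements."""
    return median(abs(a - b) for a, b in zip(x, y))

# toy paired diameters in mm (automatic vs. semi-automatic), not study data
auto = [52.1, 48.3, 61.0, 55.4, 47.9]
semi = [53.0, 47.8, 62.5, 54.9, 49.1]
r = pearson_r(auto, semi)
mad = median_abs_diff(auto, semi)
```

Note that a high Pearson's R only shows the two methods co-vary; the median absolute difference (and, more formally, Bland-Altman limits of agreement) is what bounds the actual measurement discrepancy.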

Deep learning model for differentiating thyroid eye disease and orbital myositis on computed tomography (CT) imaging.

Ha SK, Lin LY, Shi M, Wang M, Han JY, Lee NG

PubMed · Jun 3 2025
To develop a deep learning model using orbital computed tomography (CT) imaging to accurately distinguish thyroid eye disease (TED) and orbital myositis, two conditions with overlapping clinical presentations. This was a retrospective, single-center cohort study spanning 12 years, including normal controls and patients with TED or orbital myositis who had orbital imaging and examination by an oculoplastic surgeon. A deep learning model employing a Visual Geometry Group-16 (VGG-16) network was trained on various binary combinations of TED, orbital myositis, and controls using single slices of coronal orbital CT images. A total of 1628 images from 192 patients (110 TED, 51 orbital myositis, 31 controls) were included. The primary model comparing orbital myositis and TED achieved an accuracy of 98.4% and an area under the receiver operating characteristic curve (AUC) of 0.999. In detecting orbital myositis, it had a sensitivity, specificity, and F1 score of 0.964, 0.994, and 0.984, respectively. Deep learning models can differentiate TED and orbital myositis from a single coronal orbital CT image with high accuracy. Their ability to distinguish these conditions based not only on extraocular muscle enlargement but also on other salient features suggests potential applications in diagnostics and treatment beyond these conditions.

High-Throughput Phenotyping of the Symptoms of Alzheimer Disease and Related Dementias Using Large Language Models: Cross-Sectional Study.

Cheng Y, Malekar M, He Y, Bommareddy A, Magdamo C, Singh A, Westover B, Mukerji SS, Dickson J, Das S

PubMed · Jun 3 2025
Alzheimer disease and related dementias (ADRD) are complex disorders with overlapping symptoms and pathologies. Comprehensive records of symptoms in electronic health records (EHRs) are critical not only for reaching an accurate diagnosis but also for supporting ongoing research studies and clinical trials. However, these symptoms are frequently obscured within unstructured clinical notes in EHRs, making manual extraction both time-consuming and labor-intensive. We aimed to automate symptom extraction from the clinical notes of patients with ADRD using fine-tuned large language models (LLMs), compare their performance to regular expression-based symptom recognition, and validate the results using brain magnetic resonance imaging (MRI) data. We fine-tuned LLMs to extract ADRD symptoms across the following 7 domains: memory, executive function, motor, language, visuospatial, neuropsychiatric, and sleep. We assessed the algorithm's performance by calculating the area under the receiver operating characteristic curve (AUROC) for each domain. The extracted symptoms were then validated in two analyses: (1) predicting ADRD diagnosis using the counts of extracted symptoms and (2) examining the association between ADRD symptoms and MRI-derived brain volumes. Symptom extraction across the 7 domains achieved high accuracy with AUROCs ranging from 0.97 to 0.99. Using the counts of extracted symptoms to predict ADRD diagnosis yielded an AUROC of 0.83 (95% CI 0.77-0.89). Symptom associations with brain volumes revealed that a smaller hippocampal volume was linked to memory impairments (odds ratio 0.62, 95% CI 0.46-0.84; P=.006), and reduced pallidum size was associated with motor impairments (odds ratio 0.73, 95% CI 0.58-0.90; P=.04). These results highlight the accuracy and reliability of our high-throughput ADRD phenotyping algorithm.
By enabling automated symptom extraction, our approach has the potential to assist with differential diagnosis, as well as facilitate clinical trials and research studies of dementia.
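AUROC, the metric used for each symptom domain above, equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative one (the Mann-Whitney formulation), which yields a compact implementation:

```python
def auroc(scores, labels):
    """AUROC via the Mann-Whitney U statistic: P(score_pos > score_neg),
    counting ties as one half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# toy domain scores: positives mostly, but not always, rank above negatives
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   0,   0]
a = auroc(scores, labels)
```

This rank-based form is exactly what library implementations (e.g. scikit-learn's `roc_auc_score`) compute by integrating the ROC curve.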

Upper Airway Volume Predicts Brain Structure and Cognition in Adolescents.

Kanhere A, Navarathna N, Yi PH, Parekh VS, Pickle J, Cloak CC, Ernst T, Chang L, Li D, Redline S, Isaiah A

PubMed · Jun 3 2025
One in ten children experiences sleep-disordered breathing (SDB). Untreated SDB is associated with poor cognition, but the underlying mechanisms are incompletely understood. We assessed the relationship between magnetic resonance imaging (MRI)-derived upper airway volume and children's cognition and regional cortical gray matter volumes. We used five-year data from the Adolescent Brain Cognitive Development study (n=11,875 children, 9-10 years at baseline). Upper airway volumes were derived using a deep learning model applied to 5,552,640 brain MRI slices. The primary outcome was the Total Cognition Composite score from the National Institutes of Health Toolbox (NIH-TB). Secondary outcomes included other NIH-TB measures and cortical gray matter volumes. The habitual snoring group had significantly smaller airway volumes than non-snorers (mean difference=1.2 cm<sup>3</sup>; 95% CI, 1.0-1.4 cm<sup>3</sup>; P<0.001). Deep learning-derived airway volume predicted the Total Cognition Composite score (estimated mean difference=3.68 points; 95% CI, 2.41-4.96; P<0.001) per one-unit increase in the natural log of airway volume (~2.7-fold raw volume increase). The same airway volume increase was also associated with an average 0.02 cm<sup>3</sup> increase in right temporal pole volume (95% CI, 0.01-0.02 cm<sup>3</sup>; P<0.001). The same airway volume increase also predicted most NIH-TB domain scores and multiple frontal and temporal gray matter volumes. These brain volumes mediated the relationship between airway volume and cognition. We demonstrate a novel application of deep learning-based airway segmentation in a large pediatric cohort. Upper airway volume is a potential biomarker for cognitive outcomes in pediatric SDB, offers insights into neurobiological mechanisms, and informs future studies on risk stratification.
This article is open access and distributed under the terms of the Creative Commons Attribution Non-Commercial No Derivatives License 4.0 (http://creativecommons.org/licenses/by-nc-nd/4.0/).
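The reported effect size is per one-unit increase in ln(airway volume), i.e. per ~2.7-fold raw increase; rescaling it to a more intuitive fold change (say, a doubling of airway volume) is a one-line conversion. The coefficient below is taken from the abstract; the function itself is just the log-scale identity:

```python
from math import log, e

BETA = 3.68  # points on the Total Cognition Composite per unit ln(volume)

def score_change(fold_increase, beta=BETA):
    """Expected score change for a given fold-increase in airway volume,
    given a coefficient estimated on the natural-log scale."""
    return beta * log(fold_increase)

doubling = score_change(2.0)   # effect of a 2x larger airway (~2.55 points)
e_fold = score_change(e)       # recovers the reported per-ln-unit effect
```

This is why log-scale coefficients are awkward to read directly: the "one-unit" increase corresponds to an e-fold (~2.72x) change in the raw predictor.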