Age-dependent changes in CT vertebral attenuation values in opportunistic screening for osteoporosis: a nationwide multi-center study.

Kim Y, Kim HY, Lee S, Hong S, Lee JW

PubMed · Jun 1 2025
To examine how vertebral attenuation changes with aging, and to establish age-adjusted CT attenuation value cutoffs for diagnosing osteoporosis. This multi-center retrospective study included 11,246 patients (mean age ± standard deviation, 50 ± 13 years; 7139 men) who underwent CT and dual-energy X-ray absorptiometry (DXA) in six health-screening centers between 2022 and 2023. Using deep-learning-based software, attenuation values of L1 vertebral bodies were measured. Segmented linear regression in women and simple linear regression in men were used to assess how attenuation values change with aging. A multivariable linear regression analysis was performed to determine whether age is associated with CT attenuation values independently of the DXA T-score. Age-adjusted cutoffs targeting either 90% sensitivity or 90% specificity were derived using quantile regression. Performance of both age-adjusted and age-unadjusted cutoffs was measured, where the target sensitivity or specificity was considered achieved if a 95% confidence interval encompassed 90%. While attenuation values declined consistently with age in men, they declined abruptly in women aged > 42 years. Such decline occurred independently of the DXA T-score (p < 0.001). Age adjustment seemed critical for age ≥ 65 years, where the age-adjusted cutoffs achieved the target (sensitivity of 91.5% (86.3-95.2%) when targeting 90% sensitivity and specificity of 90.0% (88.3-91.6%) when targeting 90% specificity), but age-unadjusted cutoffs did not (95.5% (91.2-98.0%) and 73.8% (71.4-76.1%), respectively). Age-adjusted cutoffs provided a more reliable diagnosis of osteoporosis than age-unadjusted cutoffs since vertebral attenuation values decrease with age, regardless of DXA T-scores.
Question: How does vertebral CT attenuation change with age?
Findings: Independent of dual-energy X-ray absorptiometry T-score, vertebral attenuation values on CT declined at a constant rate in men and abruptly in women over 42 years of age.
Clinical relevance: Age adjustments are needed in opportunistic osteoporosis screening, especially among the elderly.
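
To make the cutoff-derivation step concrete, the following is a minimal sketch (synthetic data, not the authors' code) of how age-adjusted attenuation cutoffs can be derived with quantile regression: the 90th conditional percentile among DXA-confirmed osteoporotic patients yields an age-dependent cutoff targeting 90% sensitivity, and the 10th percentile among non-osteoporotic patients targets 90% specificity. Column names and the data-generating model are illustrative assumptions.

```python
# Hedged sketch: age-adjusted CT attenuation cutoffs via quantile regression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
age = rng.uniform(30, 85, n)
osteo = rng.random(n) < np.clip((age - 40) / 60, 0.05, 0.6)   # synthetic DXA-based label
l1_hu = 220 - 1.5 * age - 40 * osteo + rng.normal(0, 25, n)    # synthetic L1 attenuation
df = pd.DataFrame({"age": age, "l1_hu": l1_hu, "osteoporosis": osteo})

# 90% sensitivity target: attenuation below which 90% of osteoporotic patients fall, per age.
sens_model = smf.quantreg("l1_hu ~ age", df[df.osteoporosis]).fit(q=0.90)
# 90% specificity target: attenuation above which 90% of non-osteoporotic patients lie, per age.
spec_model = smf.quantreg("l1_hu ~ age", df[~df.osteoporosis]).fit(q=0.10)

ages = pd.DataFrame({"age": [50, 65, 80]})
print("90%-sensitivity cutoffs (HU):", sens_model.predict(ages).round(1).tolist())
print("90%-specificity cutoffs (HU):", spec_model.predict(ages).round(1).tolist())
```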

Parapharyngeal Space: Diagnostic Imaging and Intervention.

Vogl TJ, Burck I, Stöver T, Helal R

PubMed · Jun 1 2025
Diagnosis of lesions of the parapharyngeal space (PPS) often poses a diagnostic and therapeutic challenge due to its deep location. As a result of the topographical relationship to nearby neck spaces, a very precise differential diagnosis is possible based on imaging criteria. When in doubt, imaging-guided - usually CT-guided - biopsy and even drainage remain options. Through a precise analysis of the literature, including the most recent publications, this review describes the fundamental and most recent imaging applications for various PPS pathologies and the differential diagnostic scheme for assigning the respective lesions, in addition to the possibilities of interventional radiology. The different pathologies of the PPS, from congenital malformations and inflammation to tumors, are discussed according to frequency. Characteristic criteria and, more recently, the use of advanced imaging procedures and the introduction of artificial intelligence (AI) allow a very precise differential diagnosis and support further diagnosis and therapy. After precise access planning, almost all pathologies of the PPS can be biopsied or, if necessary, drained using CT-assisted procedures. Radiological procedures play an important role in the diagnosis and treatment planning of PPS pathologies.
· Lesions of the PPS account for about 1-2% of all pathologies of the head and neck region. The majority are benign lesions and inflammatory processes.
· If differential diagnostic questions remain unanswered, material can - if necessary - be obtained via a CT-guided biopsy. Exclusion criteria are hypervascularized processes, especially paragangliomas and angiomas.
· The use of artificial intelligence (AI) in head and neck imaging for various tasks, such as tumor segmentation, pathological TNM classification, detection of lymph node metastases, and extranodal extension, has significantly increased in recent years.
· Vogl TJ, Burck I, Stöver T et al. Parapharyngeal Space: Diagnostic Imaging and Intervention. Rofo 2025; 197: 638-646.

Comparing fully automated AI body composition biomarkers at differing virtual monoenergetic levels using dual-energy CT.

Toia GV, Garret JW, Rose SD, Szczykutowicz TP, Pickhardt PJ

PubMed · Jun 1 2025
To investigate the behavior of artificial intelligence (AI) CT-based body composition biomarkers at different virtual monoenergetic imaging (VMI) levels using dual-energy CT (DECT). This retrospective study included 88 contrast-enhanced abdominopelvic CTs acquired with rapid-kVp switching DECT. Images were reconstructed into five VMI levels (40, 55, 70, 85, 100 keV). Fully automated algorithms for quantifying CT number (HU) in abdominal fat (subcutaneous and visceral), skeletal muscle, bone, calcium (abdominal Agatston score), and organ size (area or volume) were applied. Biomarker median difference relative to 70 keV and interquartile range were reported by energy level to characterize variation. Linear regression was performed to calibrate non-70 keV data and to estimate their equivalent 70 keV biomarker attenuation values. Relative to 70 keV, absolute median differences in attenuation-based biomarkers (excluding Agatston score) ranged 39-358, 12-102, 5-48, 9-75 HU for 40, 55, 85, 100 keV, respectively. For area-based biomarkers, differences ranged 6-15, 3-4, 2-7, 0-5 cm² for 40, 55, 85, 100 keV. For volume-based biomarkers, differences ranged 12-34, 8-68, 12-52, 1-57 cm³ for 40, 55, 85, 100 keV. Agatston score behavior was more spurious with median differences ranging 70-204 HU. In general, VMI < 70 keV showed more variation in median biomarker measurement than VMI > 70 keV. This study characterized the behavior of a fully automated AI CT biomarker toolkit across varying VMI levels obtained with DECT. The data showed relatively little biomarker value change when measured at or greater than 70 keV. Lower VMI datasets should be avoided due to larger deviations in measured value as compared to 70 keV, a level considered equivalent to conventional 120 kVp exams.
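
The calibration step can be illustrated with a brief sketch (synthetic data; variable names and the simulated 40 keV/70 keV relationship are assumptions, not the study's pipeline): a linear model maps a biomarker measured at a non-70 keV VMI level to its estimated 70 keV equivalent.

```python
# Hedged sketch: linear calibration of a 40 keV biomarker to its 70 keV equivalent.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
hu_70kev = rng.normal(-95, 15, 88)                        # e.g., visceral fat attenuation at 70 keV
hu_40kev = 1.8 * hu_70kev + 25 + rng.normal(0, 5, 88)     # same patients measured at 40 keV (simulated)

model = LinearRegression().fit(hu_40kev.reshape(-1, 1), hu_70kev)
estimated_70kev = model.predict(hu_40kev.reshape(-1, 1))

print(f"slope={model.coef_[0]:.3f}, intercept={model.intercept_:.1f} HU")
print("median absolute calibration error:",
      np.median(np.abs(estimated_70kev - hu_70kev)).round(2), "HU")
```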

CNS-CLIP: Transforming a Neurosurgical Journal Into a Multimodal Medical Model.

Alyakin A, Kurland D, Alber DA, Sangwon KL, Li D, Tsirigos A, Leuthardt E, Kondziolka D, Oermann EK

PubMed · Jun 1 2025
Classical biomedical data science models are trained on a single modality and aimed at one specific task. However, the exponential increase in the size and capabilities of the foundation models inside and outside medicine shows a shift toward task-agnostic models using large-scale, often internet-based, data. Recent research into smaller foundation models trained on specific literature, such as programming textbooks, demonstrated that they can display capabilities similar to or superior to large generalist models, suggesting a potential middle ground between small task-specific and large foundation models. This study attempts to introduce a domain-specific multimodal model, Congress of Neurological Surgeons (CNS)-Contrastive Language-Image Pretraining (CLIP), developed for neurosurgical applications, leveraging data exclusively from Neurosurgery Publications. We constructed a multimodal data set of articles from Neurosurgery Publications through PDF data collection and figure-caption extraction using an artificial intelligence pipeline for quality control. Our final data set included 24 021 figure-caption pairs. We then developed a fine-tuning protocol for the OpenAI CLIP model. The model was evaluated on tasks including neurosurgical information retrieval, computed tomography imaging classification, and zero-shot ImageNet classification. CNS-CLIP demonstrated superior performance in neurosurgical information retrieval with a Top-1 accuracy of 24.56%, compared with 8.61% for the baseline. The average area under receiver operating characteristic across 6 neuroradiology tasks achieved by CNS-CLIP was 0.95, slightly superior to OpenAI's Contrastive Language-Image Pretraining at 0.94 and significantly outperforming a vanilla vision transformer at 0.62. In generalist classification, CNS-CLIP reached a Top-1 accuracy of 47.55%, a decrease from the baseline of 52.37%, demonstrating a catastrophic forgetting phenomenon. This study presents a pioneering effort in building a domain-specific multimodal model using data from a medical society publication. The results indicate that domain-specific models, while less globally versatile, can offer advantages in specialized contexts. This emphasizes the importance of using tailored data and domain-focused development in training foundation models in neurosurgery and general medicine.
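
For readers unfamiliar with contrastive fine-tuning, here is a minimal single-step sketch using the Hugging Face CLIP implementation; it is not CNS-CLIP itself, and the captions, placeholder images, and learning rate are illustrative assumptions standing in for the mined figure-caption pairs.

```python
# Hedged sketch: one contrastive fine-tuning step on figure-caption pairs with CLIP.
import numpy as np
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-6)

# Placeholder batch; in the study this would be figure-caption pairs mined from
# Neurosurgery Publications.
captions = ["Axial CT showing an epidural hematoma.",
            "Sagittal MRI of a vestibular schwannoma."]
images = [Image.fromarray(np.uint8(np.random.rand(224, 224, 3) * 255)) for _ in captions]

inputs = processor(text=captions, images=images, return_tensors="pt",
                   padding=True, truncation=True)
outputs = model(**inputs, return_loss=True)   # symmetric image-text contrastive loss
outputs.loss.backward()
optimizer.step()
print("contrastive loss:", outputs.loss.item())
```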

Regions of interest in opportunistic computed tomography-based screening for osteoporosis: impact on short-term in vivo precision.

Park J, Kim Y, Hong S, Chee CG, Lee E, Lee JW

PubMed · Jun 1 2025
To determine an optimal region of interest (ROI) for opportunistic screening of osteoporosis in terms of short-term in vivo diagnostic precision. We included patients who underwent two CT scans and one dual-energy X-ray absorptiometry scan within a month in 2022. Deep-learning software automatically measured the attenuation in L1 using 54 ROIs (three slice thicknesses × six shapes × three intravertebral levels). To identify factors associated with a lower attenuation difference between the two CT scans, mixed-effect model analysis was performed with ROI-level (slice thickness, shape, intravertebral levels) and patient-level (age, sex, patient diameter, change in CT machine) factors. The root-mean-square standard deviation (RMSSD) and area under the receiver-operating-characteristic curve (AUROC) were calculated. In total, 73 consecutive patients (mean age ± standard deviation, 69 ± 9 years, 38 women) were included. A lower attenuation difference was observed in ROIs on images with slice thicknesses of 1 and 3 mm than in images with a slice thickness of 5 mm (p < .001), in large elliptical ROIs (p = .007 or < .001, respectively), and in mid- or cranial-level ROIs than in caudal-level ROIs (p < .001). No patient-level factors were significantly associated with the attenuation difference. Large, elliptical ROIs placed at the mid-level of L1 on images with 1- or 3-mm slice thicknesses yielded RMSSDs of 12.4-12.5 HU and AUROCs of 0.90. The largest possible regions of interest drawn in the mid-level trabecular portion of the L1 vertebra on thin-slice images may yield improvements in the precision of opportunistic screening for osteoporosis via CT.
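
A rough sketch of the mixed-effects analysis described above is given below, using synthetic data: the absolute attenuation difference between the two scans is modeled on the ROI-level factors (slice thickness, shape, intravertebral level) with a random intercept per patient. The factor levels, effect sizes, and column names are illustrative assumptions.

```python
# Hedged sketch: mixed-effects model of scan-rescan attenuation difference.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
patients = np.repeat(np.arange(73), 54)                       # 73 patients x 54 ROIs each
thickness = np.tile(np.repeat([1, 3, 5], 18), 73)             # slice thickness (mm)
shape = np.tile(np.tile(np.repeat(["small_circle", "large_ellipse"], 9), 3), 73)
level = np.tile(np.tile(["cranial", "mid", "caudal"], 18), 73)
patient_effect = rng.normal(0, 3, 73)[patients]
diff = (6 + 2.0 * (thickness == 5) - 1.5 * (shape == "large_ellipse")
        + 2.5 * (level == "caudal") + patient_effect + rng.normal(0, 4, len(patients)))
df = pd.DataFrame({"patient": patients, "thickness": thickness.astype(str),
                   "shape": shape, "level": level, "abs_diff": np.abs(diff)})

model = smf.mixedlm("abs_diff ~ C(thickness) + C(shape) + C(level)",
                    df, groups=df["patient"]).fit()
print(model.summary())
```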

Impact of deep learning reconstruction on radiation dose reduction and cancer risk in CT examinations: a real-world clinical analysis.

Kobayashi N, Nakaura T, Yoshida N, Nagayama Y, Kidoh M, Uetani H, Sakabe D, Kawamata Y, Funama Y, Tsutsumi T, Hirai T

PubMed · Jun 1 2025
The purpose of this study is to estimate the extent to which the implementation of deep learning reconstruction (DLR) may reduce the risk of radiation-induced cancer from CT examinations, utilizing real-world clinical data. We retrospectively analyzed scan data of adult patients who underwent body CT during two periods relative to DLR implementation at our facility: a 12-month pre-DLR phase (n = 5553) using hybrid iterative reconstruction and a 12-month post-DLR phase (n = 5494) with routine CT reconstruction transitioning to DLR. To ensure comparability between the two groups, we employed 1:1 propensity score matching based on age, sex, and body mass index. Dose data were collected to estimate organ-specific equivalent doses and total effective doses. We assessed the average dose reduction post-DLR implementation and estimated the Lifetime Attributable Risk (LAR) for cancer per CT exam pre- and post-DLR implementation. The number of radiation-induced cancers before and after the implementation of DLR was also estimated. After propensity score matching, 5247 cases from each group were included in the final analysis. Post-DLR, the total effective body CT dose significantly decreased to 15.5 ± 10.3 mSv from 28.1 ± 14.0 mSv pre-DLR (p < 0.001), a 45% reduction. This dose reduction significantly lowered the radiation-induced cancer risk, especially among younger women, with the estimated annual cancer incidence falling from 0.247% pre-DLR to 0.130% post-DLR. The implementation of DLR has the potential to reduce radiation dose by 45% and the risk of radiation-induced cancer from 0.247% to 0.130% compared with hybrid iterative reconstruction.
Question: Can implementing deep learning reconstruction (DLR) in routine CT scans significantly reduce radiation dose and the risk of radiation-induced cancer compared to hybrid iterative reconstruction?
Findings: DLR reduced the total effective body CT dose by 45% (from 28.1 ± 14.0 mSv to 15.5 ± 10.3 mSv) and decreased estimated cancer incidence from 0.247% to 0.130%.
Clinical relevance: Adopting DLR in clinical practice substantially lowers radiation exposure and cancer risk from CT exams, enhancing patient safety, especially for younger women, and underscores the importance of advanced imaging techniques.
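
The matching step can be sketched as follows with synthetic data (not the study's pipeline): a logistic regression estimates each scan's propensity of belonging to the post-DLR period given age, sex, and BMI, and nearest-neighbor matching on that score pairs post-DLR cases with pre-DLR controls before comparing doses. The matching here is with replacement, a simplification of strict 1:1 matching, and the dose distributions are simulated.

```python
# Hedged sketch: propensity score matching and a pre/post-DLR dose comparison.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors
from scipy import stats

rng = np.random.default_rng(3)
n = 4000
df = pd.DataFrame({
    "age": rng.normal(62, 15, n),
    "sex": rng.integers(0, 2, n),
    "bmi": rng.normal(23, 4, n),
    "post_dlr": rng.integers(0, 2, n),
})
df["effective_dose_msv"] = np.where(df.post_dlr == 1,
                                    rng.normal(15.5, 10.3, n),
                                    rng.normal(28.1, 14.0, n)).clip(min=1)

# Propensity of being scanned in the post-DLR period, given covariates.
ps = LogisticRegression(max_iter=1000).fit(df[["age", "sex", "bmi"]], df.post_dlr)
df["pscore"] = ps.predict_proba(df[["age", "sex", "bmi"]])[:, 1]

treated, control = df[df.post_dlr == 1], df[df.post_dlr == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched_control = control.iloc[idx.ravel()]                 # nearest pre-DLR match per post-DLR case

t, p = stats.ttest_ind(treated.effective_dose_msv, matched_control.effective_dose_msv)
print(f"mean dose post-DLR {treated.effective_dose_msv.mean():.1f} mSv vs "
      f"pre-DLR {matched_control.effective_dose_msv.mean():.1f} mSv (p={p:.2g})")
```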

An Adaptive SCG-ECG Multimodal Gating Framework for Cardiac CTA.

Ganesh S, Abozeed M, Aziz U, Tridandapani S, Bhatti PT

PubMed · Jun 1 2025
Cardiovascular disease (CVD) is the leading cause of death worldwide. Coronary artery disease (CAD), a prevalent form of CVD, is typically assessed using catheter coronary angiography (CCA), an invasive, costly procedure with associated risks. While cardiac computed tomography angiography (CTA) presents a less invasive alternative, it suffers from limited temporal resolution, often resulting in motion artifacts that degrade diagnostic quality. Traditional ECG-based gating methods for CTA inadequately capture cardiac mechanical motion. To address this, we propose a novel multimodal approach that enhances CTA imaging by predicting cardiac quiescent periods using seismocardiogram (SCG) and ECG data, integrated through a weighted fusion (WF) approach and artificial neural networks (ANNs). We developed a regression-based ANN framework (r-ANN WF) designed to improve prediction accuracy and reduce computational complexity, which was compared with a classification-based framework (c-ANN WF), ECG gating, and US data. Our results demonstrate that the r-ANN WF approach improved overall diastolic and systolic cardiac quiescence prediction accuracy by 52.6% compared to ECG-based predictions, using ultrasound (US) as the ground truth, with an average prediction time of 4.83 ms. Comparative evaluations based on reconstructed CTA images show that both r-ANN WF and c-ANN WF offer diagnostic quality comparable to US-based gating, underscoring their clinical potential. Additionally, the lower computational complexity of r-ANN WF makes it suitable for real-time applications. This approach could enhance CTA's diagnostic quality, offering a more accurate and efficient method for CVD diagnosis and management.
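
The weighted-fusion idea can be sketched in a small, speculative model in the spirit of r-ANN WF (not the authors' architecture): per-beat SCG and ECG feature vectors are combined with learnable modality weights before a small regressor predicts quiescent-phase timing. Feature dimensions, the target encoding, and all tensors below are placeholders.

```python
# Speculative sketch: regression-style weighted fusion of SCG and ECG features.
import torch
import torch.nn as nn

class WeightedFusionRegressor(nn.Module):
    def __init__(self, scg_dim=32, ecg_dim=16, hidden=64):
        super().__init__()
        self.scg_proj = nn.Linear(scg_dim, hidden)
        self.ecg_proj = nn.Linear(ecg_dim, hidden)
        # Learnable fusion weights, normalized with softmax so they sum to 1.
        self.fusion_logits = nn.Parameter(torch.zeros(2))
        self.head = nn.Sequential(nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, scg_feat, ecg_feat):
        w = torch.softmax(self.fusion_logits, dim=0)
        fused = w[0] * self.scg_proj(scg_feat) + w[1] * self.ecg_proj(ecg_feat)
        return self.head(fused).squeeze(-1)   # predicted quiescence timing (placeholder units)

model = WeightedFusionRegressor()
scg = torch.randn(8, 32)    # per-beat SCG features (placeholder)
ecg = torch.randn(8, 16)    # per-beat ECG features (placeholder)
target = torch.rand(8)      # quiescence timing derived from ultrasound (placeholder)
loss = nn.functional.mse_loss(model(scg, ecg), target)
loss.backward()
print("MSE loss:", loss.item())
```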

Foundational Segmentation Models and Clinical Data Mining Enable Accurate Computer Vision for Lung Cancer.

Swinburne NC, Jackson CB, Pagano AM, Stember JN, Schefflein J, Marinelli B, Panyam PK, Autz A, Chopra MS, Holodny AI, Ginsberg MS

PubMed · Jun 1 2025
This study aims to assess the effectiveness of integrating Segment Anything Model (SAM) and its variant MedSAM into the automated mining, object detection, and segmentation (MODS) methodology for developing robust lung cancer detection and segmentation models without post hoc labeling of training images. In a retrospective analysis, 10,000 chest computed tomography scans from patients with lung cancer were mined. Line measurement annotations were converted to bounding boxes, excluding boxes < 1 cm or > 7 cm. The You Only Look Once object detection architecture was used for teacher-student learning to label unannotated lesions on the training images. Subsequently, a final tumor detection model was trained and employed with SAM and MedSAM for tumor segmentation. Model performance was assessed on a manually annotated test dataset, with additional evaluations conducted on an external lung cancer dataset before and after detection model fine-tuning. Bootstrap resampling was used to calculate 95% confidence intervals. Data mining yielded 10,789 line annotations, resulting in 5403 training boxes. The baseline detection model achieved an internal F1 score of 0.847, improving to 0.860 after self-labeling. Tumor segmentation using the final detection model attained internal Dice similarity coefficients (DSCs) of 0.842 (SAM) and 0.822 (MedSAM). After fine-tuning, external validation showed an F1 of 0.832 and DSCs of 0.802 (SAM) and 0.804 (MedSAM). Integrating foundational segmentation models into the MODS framework results in high-performing lung cancer detection and segmentation models using only mined clinical data. Both SAM and MedSAM hold promise as foundational segmentation models for radiology images.
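
The final segmentation step can be illustrated with a simplified sketch: a detector-produced bounding box is passed to SAM as a box prompt. It assumes the `segment-anything` package is installed and a ViT-B checkpoint has been downloaded locally (the path below is a placeholder), and `ct_slice` stands in for an 8-bit RGB rendering of the axial slice containing the nodule; this is not the authors' MODS code.

```python
# Hedged sketch: SAM segmentation prompted by a detector bounding box.
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

ct_slice = np.zeros((512, 512, 3), dtype=np.uint8)        # placeholder image
detected_box = np.array([190, 210, 260, 275])              # x0, y0, x1, y1 from the detector (placeholder)

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")   # local checkpoint (assumed path)
predictor = SamPredictor(sam)
predictor.set_image(ct_slice)

masks, scores, _ = predictor.predict(box=detected_box, multimask_output=False)
tumor_mask = masks[0]                                      # boolean mask of the segmented lesion
print("segmented area (pixels):", int(tumor_mask.sum()), "confidence:", float(scores[0]))
```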

Prediction of Malignancy and Pathological Types of Solid Lung Nodules on CT Scans Using a Volumetric SWIN Transformer.

Chen H, Wen Y, Wu W, Zhang Y, Pan X, Guan Y, Qin D

PubMed · Jun 1 2025
Lung adenocarcinoma and squamous cell carcinoma are the two most common pathological lung cancer subtypes. Accurate diagnosis and pathological subtyping are crucial for lung cancer treatment. Solitary solid lung nodules with lobulation and spiculation signs are often indicative of lung cancer; however, in some cases, postoperative pathology finds benign solid lung nodules. It is critical to accurately identify solid lung nodules with lobulation and spiculation signs before surgery; however, traditional diagnostic imaging is prone to misdiagnosis, and studies on artificial intelligence-assisted diagnosis are few. Therefore, we introduce a volumetric SWIN Transformer-based method. It is a multi-scale, multi-task, and highly interpretable model for distinguishing between benign solid lung nodules with lobulation and spiculation signs, lung adenocarcinomas, and lung squamous cell carcinoma. The technique's effectiveness was improved by using 3-dimensional (3D) computed tomography (CT) images instead of conventional 2-dimensional (2D) images to combine as much information as possible. The model was trained using 352 of the 441 CT image sequences and validated using the rest. The experimental results showed that our model could accurately differentiate between benign lung nodules with lobulation and spiculation signs, lung adenocarcinoma, and squamous cell carcinoma. On the test set, our model achieves an accuracy of 0.9888, precision of 0.9892, recall of 0.9888, and an F1-score of 0.9888, along with a class activation mapping (CAM) visualization of the 3D model. Consequently, our method could be used as a preoperative tool to assist in diagnosing solitary solid lung nodules with lobulation and spiculation signs accurately and provide a theoretical basis for developing appropriate clinical diagnosis and treatment plans for the patients.
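
As a rough illustration of volumetric Swin-based classification over the three classes described above (benign nodule with lobulation/spiculation, adenocarcinoma, squamous cell carcinoma), the sketch below builds on torchvision's video Swin backbone as a stand-in rather than the authors' own multi-scale, multi-task model; input shape and channel replication are assumptions, and a recent torchvision release is assumed.

```python
# Hedged sketch: a 3D Swin Transformer classifier for three nodule classes.
import torch
from torchvision.models.video import swin3d_t

model = swin3d_t(weights=None, num_classes=3)

# Placeholder CT volume in the backbone's default (N, C, D, H, W) layout; the single
# HU channel of a real CT patch would be replicated to the 3 channels expected here.
volume = torch.randn(1, 1, 16, 224, 224).repeat(1, 3, 1, 1, 1)
logits = model(volume)
probs = torch.softmax(logits, dim=1)
print(probs.shape)   # (1, 3): per-class probabilities for the nodule volume
```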

Children Are Not Small Adults: Addressing Limited Generalizability of an Adult Deep Learning CT Organ Segmentation Model to the Pediatric Population.

Chatterjee D, Kanhere A, Doo FX, Zhao J, Chan A, Welsh A, Kulkarni P, Trang A, Parekh VS, Yi PH

PubMed · Jun 1 2025
Deep learning (DL) tools developed on adult data sets may not generalize well to pediatric patients, posing potential safety risks. We evaluated the performance of TotalSegmentator, a state-of-the-art adult-trained CT organ segmentation model, on a subset of organs in a pediatric CT dataset and explored optimization strategies to improve pediatric segmentation performance. TotalSegmentator was retrospectively evaluated on abdominal CT scans from an external adult dataset (n = 300) and an external pediatric data set (n = 359). Generalizability was quantified by comparing Dice scores between adult and pediatric external data sets using Mann-Whitney U tests. Two DL optimization approaches were then evaluated: (1) a 3D nnU-Net model trained only on pediatric data, and (2) an adult nnU-Net model fine-tuned on the pediatric cases. Our results show TotalSegmentator had significantly lower overall mean Dice scores on pediatric vs. adult CT scans (0.73 vs. 0.81, P < .001), demonstrating limited generalizability to pediatric CT scans. Stratified by organ, there were lower mean pediatric Dice scores for four organs (P < .001, all): right and left adrenal glands (right adrenal, 0.41 [0.39-0.43] vs. 0.69 [0.66-0.71]; left adrenal, 0.35 [0.32-0.37] vs. 0.68 [0.65-0.71]); duodenum (0.47 [0.45-0.49] vs. 0.67 [0.64-0.69]); and pancreas (0.73 [0.72-0.74] vs. 0.79 [0.77-0.81]). Performance on pediatric CT scans improved with both optimization approaches: the pediatric-specific model and the fine-tuned adult model each significantly improved segmentation accuracy over TotalSegmentator for all organs, especially for smaller anatomical structures (e.g., > 0.2 higher mean Dice for adrenal glands; P < .001).
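
The generalizability check described above can be sketched briefly: per-scan Dice scores from the adult and pediatric test sets are compared with a Mann-Whitney U test. The scores below are simulated to roughly match the reported means; in the study they came from model predictions versus manual labels.

```python
# Hedged sketch: Dice scoring and an adult-vs-pediatric Mann-Whitney U comparison.
import numpy as np
from scipy.stats import mannwhitneyu

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient for two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)

# Toy example of the metric on two overlapping square masks.
pred = np.zeros((64, 64), bool); pred[20:40, 20:40] = True
gt = np.zeros((64, 64), bool); gt[22:42, 22:42] = True
print("toy Dice:", round(dice(pred, gt), 3))

rng = np.random.default_rng(4)
adult_dice = np.clip(rng.normal(0.81, 0.06, 300), 0, 1)       # simulated adult per-scan scores
pediatric_dice = np.clip(rng.normal(0.73, 0.10, 359), 0, 1)   # simulated pediatric per-scan scores

stat, p = mannwhitneyu(adult_dice, pediatric_dice, alternative="two-sided")
print(f"adult mean Dice {adult_dice.mean():.2f} vs pediatric {pediatric_dice.mean():.2f}, p={p:.2g}")
```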