
Chest CT in the Evaluation of COPD: Recommendations of the Asian Society of Thoracic Radiology.

Fan L, Seo JB, Ohno Y, Lee SM, Ashizawa K, Lee KY, Yang Q, Tanomkiat W, Văn CC, Hieu HT, Liu SY, Goo JM

pubmed logopapers · Jun 6 2025
Chronic Obstructive Pulmonary Disease (COPD) is a significant public health challenge globally, with Asia facing unique burdens due to varying demographics, healthcare access, and socioeconomic conditions. Recognizing the limitations of pulmonary function tests (PFTs) in early detection and comprehensive evaluation, the Asian Society of Thoracic Radiology (ASTR) presents these recommendations to guide the use of chest computed tomography (CT) in COPD diagnosis and management. This document consolidates evidence from an extensive literature review and surveys across Asia, highlighting the need for standardized CT protocols and practices. Key recommendations include adopting low-dose paired respiratory phase CT scans, utilizing qualitative and quantitative assessments for airway, vascular, and parenchymal evaluation, and emphasizing structured reporting to enhance clinical decision-making. Advanced technologies, including dual-energy CT and artificial intelligence, are proposed to refine diagnosis, monitor disease progression, and guide personalized interventions. These recommendations aim to improve the early detection of COPD, address its heterogeneity, and reduce its socioeconomic impact by establishing consistent and effective imaging practices across the region. These recommendations underscore the pivotal role of chest CT in advancing COPD care in Asia, providing a foundation for future research and practice refinement.

Photon-counting detector CT in musculoskeletal imaging: benefits and outlook.

El Sadaney AO, Ferrero A, Rajendran K, Booij R, Marcus R, Sutter R, Oei EHG, Baffour F

pubmed logopapers · Jun 6 2025
Photon-counting detector CT (PCD-CT) represents a significant advancement in medical imaging, particularly for musculoskeletal (MSK) applications. Its primary innovation lies in enhanced spatial resolution, which facilitates improved detection of small anatomical structures such as trabecular bone, osteophytes, and subchondral cysts. PCD-CT enables high-quality imaging with reduced radiation doses, making it especially beneficial for populations requiring frequent imaging, such as pediatric patients and individuals with multiple myeloma. Additionally, PCD-CT supports advanced applications like bone quality assessment, which correlates well with gold-standard tests, and can aid in diagnosing osteoporosis and assessing fracture risk. Techniques such as spectral shaping and virtual monoenergetic imaging further optimize the technology, minimizing artifacts and enhancing material decomposition. These capabilities extend to conditions like gout and hematologic malignancies, offering improved detection and assessment. The integration of artificial intelligence could enhance PCD-CT's performance by reducing image noise and improving quantitative assessments. Ultimately, PCD-CT's superior resolution, reduced dose protocols, and multi-energy imaging capabilities will likely have a transformative impact on MSK imaging, improving diagnostic accuracy, patient care, and clinical outcomes.

Hypothalamus and intracranial volume segmentation at the group level by use of a Gradio-CNN framework.

Vernikouskaya I, Rasche V, Kassubek J, Müller HP

pubmed logopapers · Jun 6 2025
This study aimed to develop and evaluate a graphical user interface (GUI) for the automated segmentation of the hypothalamus and intracranial volume (ICV) in brain MRI scans. The interface was designed to facilitate efficient and accurate segmentation for research applications, with a focus on accessibility and ease of use for end-users. We developed a web-based GUI using the Gradio library integrating deep learning-based segmentation models trained on annotated brain MRI scans. The model utilizes a U-Net architecture to delineate the hypothalamus and ICV. The GUI allows users to upload high-resolution MRI scans, visualize the segmentation results, calculate hypothalamic volume and ICV, and manually correct individual segmentation results. To ensure widespread accessibility, we deployed the interface using ngrok, allowing users to access the tool via a shared link. As an example of the universality of the approach, the tool was applied to a group of 90 patients with Parkinson's disease (PD) and 39 controls. The GUI demonstrated high usability and efficiency in segmenting the hypothalamus and the ICV, with no significant difference in normalized hypothalamic volume observed between PD patients and controls, consistent with previously published findings. The average processing time per patient volume was 18 s for the hypothalamus and 44 s for the ICV segmentation on a 6 GB NVIDIA GeForce GTX 1060 GPU. The ngrok-based deployment allowed for seamless access across different devices and operating systems, with an average connection time of less than 5 s. The developed GUI provides a powerful and accessible tool for applications in neuroimaging. The combination of the intuitive interface, accurate deep learning-based segmentation, and easy deployment via ngrok addresses the need for user-friendly tools in brain MRI analysis. This approach has the potential to streamline workflows in neuroimaging research.
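The group comparison above relies on normalizing each subject's hypothalamic volume by their ICV. A minimal sketch of that computation from binary segmentation masks (the mask shapes and voxel size here are illustrative assumptions, not values from the paper):

```python
import numpy as np

def normalized_hypothalamic_volume(hypo_mask, icv_mask, voxel_volume_mm3):
    """Volumes of two binary masks in mm^3, and the hypothalamus/ICV ratio.

    hypo_mask, icv_mask: boolean 3D arrays from a segmentation model.
    voxel_volume_mm3: product of the three voxel dimensions.
    """
    hypo_vol = hypo_mask.sum() * voxel_volume_mm3
    icv = icv_mask.sum() * voxel_volume_mm3
    return hypo_vol, icv, hypo_vol / icv

# Toy example: 10x10x10 volume with 1 mm isotropic voxels
icv = np.zeros((10, 10, 10), dtype=bool)
icv[1:9, 1:9, 1:9] = True      # 512 voxels of "intracranial space"
hypo = np.zeros_like(icv)
hypo[4:6, 4:6, 4:6] = True     # 8 voxels of "hypothalamus"
h, i, ratio = normalized_hypothalamic_volume(hypo, icv, 1.0)
```

Normalizing by ICV removes head-size differences between subjects, which is what makes the PD-versus-control comparison meaningful.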

Post-processing steps improve generalisability and robustness of an MRI-based radiogenomic model for human papillomavirus status prediction in oropharyngeal cancer.

Ahmadian M, Bodalal Z, Bos P, Martens RM, Agrotis G, van der Hulst HJ, Vens C, Karssemakers L, Al-Mamgani A, de Graaf P, Jasperse B, Brakenhoff RH, Leemans CR, Beets-Tan RGH, Castelijns JA, van den Brekel MWM

pubmed logopapers · Jun 6 2025
To assess the impact of image post-processing steps on the generalisability of MRI-based radiogenomic models. Using a human papillomavirus (HPV) status in oropharyngeal squamous cell carcinoma (OPSCC) prediction model, this study examines the potential of different post-processing strategies to increase its generalisability across data from different centres and image acquisition protocols. Contrast-enhanced T1-weighted MR images of OPSCC patients of two cohorts from different centres, with confirmed HPV status, were manually segmented. After radiomic feature extraction, the HPV prediction model trained on a training set with 91 patients was subsequently tested on two independent cohorts: a test set with 62 patients and an externally derived cohort of 157 patients. The post-processing options included: data harmonisation, a process to ensure consistency in data from different centres; exclusion of unstable features across different segmentations and scan protocols; and removal of highly correlated features to reduce redundancy. The predictive model, trained without post-processing, showed high performance on the test set, with an AUC of 0.79 (95% CI: 0.66-0.90, p < 0.001). However, when tested on the external data, the model performed less well, resulting in an AUC of 0.52 (95% CI: 0.45-0.58, p = 0.334). The model's generalisability substantially improved after performing post-processing steps. The AUC for the test set reached 0.76 (95% CI: 0.63-0.87, p < 0.001), while for the external cohort, the predictive model achieved an AUC of 0.73 (95% CI: 0.64-0.81, p < 0.001). When applied before model development, post-processing steps can enhance the robustness and generalisability of predictive radiogenomics models.
Question: How do post-processing steps impact the generalisability of MRI-based radiogenomic prediction models?
Findings: Applying post-processing steps, i.e., data harmonisation, identification of stable radiomic features, and removal of correlated features, before model development can improve model robustness and generalisability.
Clinical relevance: Post-processing steps in MRI radiogenomic model generation lead to reliable non-invasive diagnostic tools for personalised cancer treatment strategies.
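One of the three post-processing steps, removal of highly correlated radiomic features, can be sketched as a greedy pass over the feature correlation matrix. The 0.95 threshold and feature names below are illustrative assumptions, not values reported by the study:

```python
import numpy as np

def drop_correlated_features(X, names, threshold=0.95):
    """Greedily keep only features whose absolute Pearson correlation
    with every already-kept feature is at most `threshold`.

    X: (n_samples, n_features) radiomic feature matrix.
    names: feature names, parallel to the columns of X.
    """
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, k] <= threshold for k in keep):
            keep.append(j)
    return X[:, keep], [names[j] for j in keep]

rng = np.random.default_rng(0)
a = rng.normal(size=100)
b = a + rng.normal(scale=1e-3, size=100)  # near-duplicate of a
c = rng.normal(size=100)
X = np.column_stack([a, b, c])
X_reduced, kept = drop_correlated_features(X, ["a", "b", "c"])
```

Pruning redundant features before training reduces the chance that the model latches onto centre-specific noise, which is one plausible reason the external-cohort AUC improved after post-processing.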

A Decade of Advancements in Musculoskeletal Imaging.

Wojack P, Fritz J, Khodarahmi I

pubmed logopapers · Jun 6 2025
The past decade has witnessed remarkable advancements in musculoskeletal radiology, driven by increasing demand for medical imaging and rapid technological innovations. Contrary to early concerns about artificial intelligence (AI) replacing radiologists, AI has instead enhanced imaging capabilities, aiding in automated abnormality detection and workflow efficiency. MRI has benefited from acceleration techniques that significantly reduce scan times while maintaining high-quality imaging. In addition, novel MRI methodologies now support precise anatomic and quantitative imaging across a broad spectrum of field strengths. In CT, dual-energy and photon-counting technologies have expanded diagnostic possibilities for musculoskeletal applications. This review explores these key developments, examining their impact on clinical practice and the future trajectory of musculoskeletal radiology.

Can Transfer Learning Improve Supervised Segmentation of White Matter Bundles in Glioma Patients?

Riccardi, C., Ghezzi, S., Amorosino, G., Zigiotto, L., Sarubbo, S., Jovicich, J., Avesani, P.

biorxiv logopreprint · Jun 6 2025
In clinical neuroscience, the segmentation of the main white matter bundles is a prerequisite for many tasks such as pre-operative neurosurgical planning and monitoring of neuro-related diseases. Automating bundle segmentation with data-driven approaches and deep learning models has shown promising accuracy for healthy individuals, but the lack of large clinical datasets is preventing the translation of these results to patients. Inference on patient data with models trained on a healthy population is not effective because of domain shift. This study carries out an empirical analysis to investigate how transfer learning might help overcome these limitations. For our analysis, we consider a public dataset with hundreds of individuals and a clinical dataset of glioma patients, focusing our preliminary investigation on the corticospinal tract. The results show that transfer learning might be effective in partially overcoming the domain shift.

Application of Mask R-CNN for automatic recognition of teeth and caries in cone-beam computerized tomography.

Ma Y, Al-Aroomi MA, Zheng Y, Ren W, Liu P, Wu Q, Liang Y, Jiang C

pubmed logopapers · Jun 6 2025
Deep convolutional neural networks (CNNs) are advancing rapidly in medical research, demonstrating promising results in diagnosis and prediction within radiology and pathology. This study evaluates the efficacy of deep learning algorithms for detecting and diagnosing dental caries using cone-beam computed tomography (CBCT) with the Mask R-CNN architecture while comparing various hyperparameters to enhance detection. A total of 2,128 CBCT images were divided into training, validation, and test datasets in a 7:1:1 ratio. For the verification of tooth recognition, the data from the validation set were randomly selected for analysis. Three groups of Mask R-CNN networks were compared: a scratch-trained baseline using randomly initialized weights (R group); a transfer learning approach with models pre-trained on COCO for object detection (C group); and a variant pre-trained on ImageNet for object detection (I group). All configurations maintained identical hyperparameter settings to ensure fair comparison. Each model used ResNet-50 as the backbone network and was trained for up to 300 epochs. We assessed training loss, detection and training times, diagnostic accuracy, specificity, positive and negative predictive values, and coverage precision to compare performance across the groups. Transfer learning significantly reduced training times compared to the non-transfer learning approach (p < 0.05). The average detection time for group R was 0.269 ± 0.176 s, whereas groups I (0.323 ± 0.196 s) and C (0.346 ± 0.195 s) exhibited significantly longer detection times (p < 0.05). The C group, trained for 200 epochs, achieved a mean average precision (mAP) of 81.095, outperforming all other groups. The mAP for caries recognition in group R, trained for 300 epochs, was 53.328, with detection times under 0.5 s. Overall, the C group demonstrated significantly higher average precision across all epochs (100, 200, and 300) (p < 0.05).
Neural networks pre-trained with COCO transfer learning exhibit superior annotation accuracy compared to those pre-trained with ImageNet. This suggests that COCO's diverse and richly annotated images offer more relevant features for detecting dental structures and carious lesions. Furthermore, employing ResNet-50 as the backbone architecture enhances the detection of teeth and carious regions, achieving significant improvements with just 200 training epochs, potentially increasing the efficiency of clinical image interpretation.
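The 7:1:1 partition of the 2,128 CBCT images described above can be sketched as a shuffled split. The seed and the use of integer indices as stand-ins for images are assumptions for illustration, not details from the paper:

```python
import random

def split_711(items, seed=42):
    """Shuffle and partition a dataset into train/val/test at a 7:1:1 ratio."""
    items = list(items)
    random.Random(seed).shuffle(items)  # deterministic shuffle for reproducibility
    n = len(items)
    n_train = round(n * 7 / 9)
    n_val = round(n / 9)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

train, val, test = split_711(range(2128))
```

Rounding leaves the leftover images in the test split, so the three parts are disjoint and together cover the full dataset.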

A Fully Automatic Pipeline of Identification, Segmentation, and Subtyping of Aortic Dissection from CT Angiography.

Zhuang C, Wu Y, Qi Q, Zhao S, Sun Y, Hou J, Qian W, Yang B, Qi S

pubmed logopapers · Jun 6 2025
Aortic dissection (AD) is a rare condition with a high mortality rate, necessitating accurate and rapid diagnosis. This study develops an automated deep learning pipeline for identifying, segmenting, and Stanford-subtyping AD using computed tomography angiography (CTA) images. The pipeline consists of four interconnected modules: aorta segmentation, AD identification, true lumen (TL) and false lumen (FL) segmentation, and Stanford subtyping. In the aorta segmentation module, a 3D full-resolution nnU-Net is trained. The segmented aorta's boundary is extracted using morphological operations and projected from multiple views in the AD identification module. AD identification is then performed using the multi-view projection data. For AD cases, a 3D nnU-Net is further trained for TL/FL segmentation based on the segmented aorta. Finally, a network is trained for Stanford subtyping using multi-view maximum intensity projections of the segmented TL/FL. A total of 386 CTA scans were collected for training, validation, and testing of the pipeline. For AD identification, the method achieved an accuracy of 0.979. The TL/FL segmentation for Type-A AD and Type-B AD achieved average Dice coefficients of 0.968 for TL and 0.971 for FL. For Stanford subtyping, the multi-view method achieved an accuracy of 0.990. The automated pipeline enables rapid and accurate identification, segmentation, and Stanford subtyping of AD using CTA images, potentially accelerating diagnosis and treatment. The segmented aorta and TL/FL can also serve as references for physicians. The code, models, and pipeline are publicly available at https://github.com/zhuangCJ/A-pipeline-of-AD.git .
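The multi-view maximum intensity projections used in the subtyping module reduce a 3D volume to 2D images by taking the voxel-wise maximum along each axis. A minimal sketch, where the axis-to-view naming is an assumption about the scan orientation:

```python
import numpy as np

def multi_view_mips(volume):
    """Return maximum intensity projections of a 3D volume along each axis,
    assuming (axial, coronal, sagittal) correspond to axes (0, 1, 2)."""
    return {
        "axial": volume.max(axis=0),
        "coronal": volume.max(axis=1),
        "sagittal": volume.max(axis=2),
    }

vol = np.zeros((4, 5, 6))
vol[2, 3, 1] = 7.0  # a single bright voxel
mips = multi_view_mips(vol)
```

Each projection preserves the brightest structures (such as a contrast-filled lumen) while collapsing one spatial dimension, giving the subtyping network lightweight 2D inputs.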

Automatic Segmentation of Ultrasound-Guided Transverse Thoracic Plane Block Using Convolutional Neural Networks.

Liu W, Ma X, Han X, Yu J, Zhang B, Liu L, Liu Y, Chu F, Liu Y, Wei S, Li B, Tang Z, Jiang J, Wang Q

pubmed logopapers · Jun 6 2025
Ultrasound-guided transverse thoracic plane (TTP) block has been shown to be highly effective in relieving postoperative pain in a variety of surgeries involving the anterior chest wall. Accurate identification of the target structure on ultrasound images is key to the successful implementation of TTP block. Nevertheless, the complexity of anatomical structures in the targeted blockade area coupled with the potential for adverse clinical incidents presents considerable challenges, particularly for anesthesiologists who are less experienced. This study applied deep learning methods to TTP block and developed a deep learning model to achieve real-time region segmentation in ultrasound to assist doctors in the accurate identification of the target nerve. Using 2329 images from 155 patients, we successfully segmented key structures associated with TTP areas and nerve blocks, including the transversus thoracis muscle, lungs, and bones. The achieved IoU (Intersection over Union) scores are 0.7272, 0.9736, and 0.8244 in that order. Recall metrics were 0.8305, 0.9896, and 0.9336 respectively, whilst Dice coefficients reached 0.8421, 0.9866, and 0.9037, particularly with an accuracy surpassing 97% in the identification of perilous lung regions. The real-time segmentation frame rate of the model for ultrasound video was as high as 42.7 fps, thus meeting the exigencies of performing nerve blocks under real-time ultrasound guidance in clinical practice. This study introduces TTP-Unet, a deep learning model specifically designed for TTP block, capable of automatically identifying crucial anatomical structures within ultrasound images of TTP block, thereby offering a practicable solution to attenuate the clinical difficulty associated with TTP block technique.
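The IoU, Dice, and recall scores reported above are all computed from the same confusion counts between predicted and reference masks. A minimal sketch with toy masks (the example arrays are illustrative, not data from the study):

```python
import numpy as np

def segmentation_metrics(pred, target):
    """IoU, Dice coefficient, and recall for binary segmentation masks."""
    tp = np.logical_and(pred, target).sum()   # true positives
    fp = np.logical_and(pred, ~target).sum()  # false positives
    fn = np.logical_and(~pred, target).sum()  # false negatives
    iou = tp / (tp + fp + fn)
    dice = 2 * tp / (2 * tp + fp + fn)
    recall = tp / (tp + fn)
    return iou, dice, recall

pred = np.array([[1, 1, 0, 0]], dtype=bool)
target = np.array([[1, 0, 1, 0]], dtype=bool)
iou, dice, recall = segmentation_metrics(pred, target)
```

Dice weights true positives twice, so it is always at least as large as IoU for the same masks; the study's high recall on lung regions matters clinically because missing lung (a false negative) is the dangerous error during needle guidance.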

Foundation versus domain-specific models for left ventricular segmentation on cardiac ultrasound.

Chao CJ, Gu YR, Kumar W, Xiang T, Appari L, Wu J, Farina JM, Wraith R, Jeong J, Arsanjani R, Kane GC, Oh JK, Langlotz CP, Banerjee I, Fei-Fei L, Adeli E

pubmed logopapers · Jun 6 2025
The Segment Anything Model (SAM) was fine-tuned on the EchoNet-Dynamic dataset and evaluated on external transthoracic echocardiography (TTE) and Point-of-Care Ultrasound (POCUS) datasets from CAMUS (University Hospital of St Etienne) and Mayo Clinic (99 patients: 58 TTE, 41 POCUS). Fine-tuned SAM was superior or comparable to MedSAM. The fine-tuned SAM also outperformed EchoNet and U-Net models, demonstrating strong generalization, especially on apical 2-chamber (A2C) images (fine-tuned SAM vs. EchoNet: CAMUS-A2C: DSC 0.891 ± 0.040 vs. 0.752 ± 0.196, p < 0.0001) and POCUS (DSC 0.857 ± 0.047 vs. 0.667 ± 0.279, p < 0.0001). Additionally, SAM-enhanced workflow reduced annotation time by 50% (11.6 ± 4.5 sec vs. 5.7 ± 1.7 sec, p < 0.0001) while maintaining segmentation quality. We demonstrated an effective strategy for fine-tuning a vision foundation model for enhancing clinical workflow efficiency and supporting human-AI collaboration.