Automated Midline Shift Detection in Head CT Using Localization and Symmetry Techniques Based on User-Selected Slice.

Banayan NE, Shalu H, Hatzoglou V, Swinburne N, Holodny A, Zhang Z, Stember J

pubmed · Aug 20 2025
Midline shift (MLS) is an intracranial pathology characterized by the displacement of brain parenchyma across the skull's midsagittal axis, typically caused by mass effect from space-occupying lesions or traumatic brain injuries. Prompt detection of MLS is crucial because delays in identification and intervention can negatively impact patient outcomes. The gap we address in this work is the development of a deep learning algorithm that encompasses the full severity range of MLS, from mild to severe. Notably, in more severe cases, the mass effect often effaces the septum pellucidum, rendering it unusable as a fiducial point of reference. We sought to enable rapid and accurate detection of MLS by leveraging advances in artificial intelligence (AI). Using a cohort of 981 patient CT scans spanning a breadth of cerebral pathologies from our institution, we manually chose an individual slice from each CT scan, primarily based on the presence of the lateral ventricles, and annotated 400 of these scans for the lateral ventricles and skull-axis midline using Roboflow. Finally, we trained an AI model based on the You Only Look Once (YOLO) object detection system to identify MLS in the individual slices of the remaining 581 CT scans. When distinguishing moderate and severe cases of MLS from normal and mild cases, our model yielded an area under the curve of 0.79 with a sensitivity of 0.73 and a specificity of 0.72, indicating that it is sensitive enough to capture moderate and severe MLS and specific enough to differentiate them from mild and normal cases. We developed an AI model that reliably identifies the lateral ventricles and the cerebral midline across various pathologies in patient CT scans. Most importantly, our model accurately identifies and stratifies clinically significant, emergent MLS from nonemergent cases. This could serve as a foundational element for a future clinically integrated approach that flags urgent studies for expedited review, potentially facilitating more timely treatment when necessary.
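
As a minimal sketch of how such a detector could be queried downstream, assuming an Ultralytics-style YOLO model with hypothetical class indices (0 = lateral ventricle, 1 = skull-axis midline) and a hypothetical weights file, the shift estimate reduces to the horizontal offset between the two detections; this illustrates the general technique, not the authors' released code.

```python
# Sketch: estimating midline shift from YOLO detections on a user-selected
# axial slice. Weights file and class indices are assumptions.
from ultralytics import YOLO

model = YOLO("mls_yolo.pt")  # hypothetical trained weights

def estimate_midline_shift_mm(image_path, pixel_spacing_mm):
    """Horizontal offset between the ventricle centroid and the skull-axis
    midline, converted to millimetres."""
    result = model(image_path)[0]
    boxes = result.boxes.xywh.cpu().numpy()   # (x_center, y_center, w, h)
    classes = result.boxes.cls.cpu().numpy()  # assumed: 0=ventricle, 1=midline
    ventricle_x = boxes[classes == 0][:, 0].mean()
    midline_x = boxes[classes == 1][:, 0].mean()
    return abs(ventricle_x - midline_x) * pixel_spacing_mm
```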

Temporal footprint reduction via neural network denoising in 177Lu radioligand therapy.

Nzatsi MC, Varmenot N, Sarrut D, Delpon G, Cherel M, Rousseau C, Ferrer L

pubmed · Aug 20 2025
Internal vectorised therapies, particularly with [177Lu]-labelled agents, are increasingly used for metastatic prostate cancer and neuroendocrine tumours. However, routine dosimetry for organs-at-risk and tumours remains limited due to the complexity and time requirements of current protocols. We developed a Generative Adversarial Network (GAN) to transform rapid 6 s SPECT projections into synthetic 30 s-equivalent projections. SPECT data from twenty patients and phantom acquisitions were collected at multiple time-points. The GAN accurately predicted 30 s projections, enabling estimation of time-integrated activities in kidneys and liver with maximum errors below 6 % and 1 %, respectively, compared to standard acquisitions. For tumours and phantom spheres, results were more variable. On phantom data, GAN-inferred reconstructions showed lower biases for spheres of 20, 8, and 1 mL (8.2 %, 6.9 %, and 21.7 %) compared to direct 6 s acquisitions (12.4 %, 20.4 %, and 24.0 %). However, in patient lesions, 37 segmented tumours showed higher median discrepancies in cumulated activity for the GAN (15.4 %) than for the 6 s approach (4.1 %). Our preliminary results indicate that the GAN can provide reliable dosimetry for organs-at-risk, but further optimisation is needed for small lesion quantification. This approach could reduce SPECT acquisition time from 45 to 9 min for standard three-bed studies, potentially facilitating wider adoption of dosimetry in nuclear medicine and addressing challenges related to toxicity and cumulative absorbed doses in personalised radiopharmaceutical therapy.
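
As a minimal sketch of the underlying idea, a conditional (pix2pix-style) GAN step mapping a 6 s projection to a synthetic 30 s-equivalent projection might look like the following; the generator G, discriminator D, optimizers, and the L1 weight are all assumptions, since the paper's exact architecture is not reproduced here.

```python
# Sketch: one conditional-GAN training step for 6 s -> 30 s SPECT projections.
import torch
import torch.nn.functional as F

def gan_step(G, D, opt_g, opt_d, proj_6s, proj_30s, l1_weight=100.0):
    # --- discriminator: real 30 s pairs vs. generated pairs ---
    fake_30s = G(proj_6s).detach()
    d_real = D(torch.cat([proj_6s, proj_30s], dim=1))
    d_fake = D(torch.cat([proj_6s, fake_30s], dim=1))
    loss_d = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # --- generator: fool D while staying close to the true projection ---
    fake_30s = G(proj_6s)
    d_fake = D(torch.cat([proj_6s, fake_30s], dim=1))
    loss_g = (F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
              + l1_weight * F.l1_loss(fake_30s, proj_30s))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```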

Detection of neonatal pneumoperitoneum on radiographs using deep multi-task learning.

Park C, Choi J, Hwang J, Jeong H, Kim PH, Cho YA, Lee BS, Jung E, Kwon SH, Kim M, Jun H, Nam Y, Kim N, Yoon HM

pubmed · Aug 20 2025
Neonatal pneumoperitoneum is a life-threatening condition requiring prompt diagnosis, yet its subtle radiographic signs pose diagnostic challenges, especially in emergency settings. We aimed to develop and validate a deep multi-task learning model for diagnosing neonatal pneumoperitoneum on radiographs and to assess its clinical utility across clinicians of varying experience levels. This was a retrospective diagnostic study using internal and external datasets in tertiary-hospital and multicenter validation settings. Internal data were collected between January 1995 and August 2018, while external data were sourced from 11 neonatal intensive care units. The internal dataset comprised 204 neonates (546 radiographs) and the external dataset 378 radiographs (125 pneumoperitoneum cases, 214 non-pneumoperitoneum cases). Radiographs were reviewed by two pediatric radiologists, and a reader study involved four physicians with varying experience levels. The model is a deep multi-task learning network combining classification and segmentation tasks for pneumoperitoneum detection. The primary outcomes included diagnostic accuracy, area under the receiver operating characteristic curve (AUC), and inter-reader agreement; AI-assisted and unassisted reader performance metrics were compared. The AI model achieved an AUC of 0.98 (95% CI, 0.94-1.00) and accuracy of 94% (95% CI, 85.1-99.6) in internal validation, and an AUC of 0.89 (95% CI, 0.85-0.92) with accuracy of 84.1% (95% CI, 80.4-87.8) in external validation. AI assistance improved reader accuracy from 82.5% to 86.6% (p < .001) and inter-reader agreement (kappa increased from 0.33-0.71 to 0.54-0.86). The multi-task learning model demonstrated excellent diagnostic performance and improved clinicians' diagnostic accuracy and agreement, suggesting its potential to enhance care in neonatal intensive care settings. All code is available at https://github.com/brody9512/NEC_MTL.
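
The multi-task idea, classification and segmentation trained jointly, can be sketched as a combined loss; the head shapes, label formats, and the BCE/Dice pairing below are assumptions for illustration, not the authors' implementation.

```python
# Sketch: joint loss for a classification head (pneumoperitoneum yes/no)
# and a segmentation head (free-air mask). Shapes assumed: seg tensors
# (B, 1, H, W); cls_label a float tensor shaped like cls_logits.
import torch
import torch.nn.functional as F

def dice_loss(pred_logits, target, eps=1e-6):
    pred = torch.sigmoid(pred_logits)
    inter = (pred * target).sum(dim=(1, 2, 3))
    denom = pred.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return 1.0 - ((2 * inter + eps) / (denom + eps)).mean()

def multitask_loss(cls_logits, cls_label, seg_logits, seg_mask, seg_weight=1.0):
    loss_cls = F.binary_cross_entropy_with_logits(cls_logits, cls_label)
    loss_seg = dice_loss(seg_logits, seg_mask)
    return loss_cls + seg_weight * loss_seg
```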

Automated mitral valve segmentation in PLAX-view transthoracic echocardiography for anatomical assessment and risk stratification.

Jansen GE, Molenaar MA, Schuuring MJ, Bouma BJ, Išgum I

pubmed · Aug 20 2025
Accurate segmentation of the mitral valve in transthoracic echocardiography (TTE) enables the extraction of various anatomical parameters that are important for guiding clinical management. However, manual mitral valve segmentation is time-consuming and prone to interobserver variability. To support robust automatic analysis of mitral valve anatomy, we propose a novel AI-based method for mitral valve segmentation and anatomical measurement extraction. We retrospectively collected a set of echocardiographic exams from 1,756 consecutive patients with suspected coronary artery disease. For these patients, we retrieved expert-defined scores for mitral regurgitation (MR) severity and follow-up characteristics. PLAX-view videos were automatically identified, and the inside borders of the mitral valve leaflets were manually segmented in 182 patients. To automatically segment the mitral valve leaflets, we designed a deep neural network that takes a video frame and outputs a distance map and a classification map for each leaflet, supervised by the manual segmentations. From the resulting automatic segmentations, we extracted leaflet length, annulus diameter, tenting area, and coaptation length. To demonstrate the clinical relevance of these automatically extracted measurements, we performed univariable and multivariable Cox regression survival analyses, with the clinical endpoint defined as heart-failure hospitalization or all-cause mortality. We trained the segmentation model on annotated frames of 111 patients and tested segmentation performance on a set of 71 patients. For the survival analysis, we included 1,117 patients (mean age 64.1 ± 12.4 years, 58% male, median follow-up 3.3 years). The trained model achieved an average surface distance of 0.89 mm, a Hausdorff distance of 3.34 mm, and a temporal consistency score of 97%. Additionally, leaflet coaptation was accurately detected in 93% of annotated frames. In univariable Cox regression, automated annulus diameter (>35 mm, hazard ratio (HR) = 2.38, p<0.001), tenting area (>2.4 cm², HR = 2.48, p<0.001), tenting height (>10 mm, HR = 1.91, p<0.001), and coaptation length (>3 mm, HR = 1.53, p = 0.007) were significantly associated with the defined clinical endpoint. For reference, significant MR by expert assessment yielded an HR of 2.31 (p<0.001). In multivariable Cox regression analysis, automated annulus diameter and coaptation length predicted the defined endpoint as independent parameters (p = 0.03 and p = 0.05, respectively). Our method allows accurate segmentation of the mitral valve in TTE and enables fully automated quantification of key measurements describing mitral valve anatomy. This has the potential to improve risk stratification for cardiac patients.
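
The survival analysis the abstract reports can be illustrated with lifelines; the CSV path, column names, and the dichotomization step below are hypothetical, with the 35 mm annulus cutoff taken from the abstract.

```python
# Sketch: univariable Cox regression on an automatically extracted
# measurement, dichotomized at the abstract's reported cutoff.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("mv_measurements.csv")  # assumed: one row per patient

# Dichotomize annulus diameter at 35 mm, as in the abstract.
df["annulus_gt_35mm"] = (df["annulus_diameter_mm"] > 35).astype(int)

cph = CoxPHFitter()
cph.fit(df[["annulus_gt_35mm", "followup_years", "event"]],
        duration_col="followup_years", event_col="event")
cph.print_summary()  # hazard ratio and p-value for the dichotomized predictor
```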

CUTE-MRI: Conformalized Uncertainty-based framework for Time-adaptivE MRI

Paul Fischer, Jan Nikolas Morshuis, Thomas Küstner, Christian Baumgartner

arxiv preprint · Aug 20 2025
Magnetic Resonance Imaging (MRI) offers unparalleled soft-tissue contrast but is fundamentally limited by long acquisition times. While deep learning-based accelerated MRI can dramatically shorten scan times, reconstruction from undersampled data introduces ambiguity: the problem is ill-posed, with infinitely many possible solutions, and the resulting uncertainty propagates to downstream clinical tasks. This uncertainty is usually ignored during the acquisition process, as acceleration factors are often fixed a priori, resulting in scans that are either unnecessarily long or of insufficient quality for a given clinical endpoint. This work introduces a dynamic, uncertainty-aware acquisition framework that adjusts scan time on a per-subject basis. Our method leverages a probabilistic reconstruction model to estimate image uncertainty, which is then propagated through a full analysis pipeline to a quantitative metric of interest (e.g., patellar cartilage volume or cardiac ejection fraction). We use conformal prediction to transform this uncertainty into a rigorous, calibrated confidence interval for the metric. During acquisition, the system iteratively samples k-space, updates the reconstruction, and evaluates the confidence interval. The scan terminates automatically once the uncertainty meets a user-predefined precision target. We validate our framework on both knee and cardiac MRI datasets. Our results demonstrate that this adaptive approach reduces scan times compared to fixed protocols while providing formal statistical guarantees on the precision of the final image. This framework moves beyond fixed acceleration factors, enabling patient-specific acquisitions that balance scan efficiency with diagnostic confidence, a critical step towards personalized and resource-efficient MRI.
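
A minimal sketch of the acquisition loop, assuming split-conformal calibration with a normalized score and placeholder `sampler` and `reconstruct_and_measure` interfaces (neither is specified in the abstract):

```python
# Sketch: conformal stopping rule for iterative k-space acquisition.
import numpy as np

def conformal_quantile(abs_errors, sigmas, alpha=0.1):
    """Split-conformal quantile of the normalized score |error| / sigma,
    computed once on a held-out calibration set."""
    scores = np.asarray(abs_errors) / np.asarray(sigmas)
    n = len(scores)
    q = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, q)

def adaptive_scan(sampler, reconstruct_and_measure, q_hat, target_halfwidth):
    """Acquire k-space until the calibrated interval meets the target."""
    kspace = []
    while True:
        kspace.append(sampler.next_lines())              # acquire more k-space
        metric, sigma = reconstruct_and_measure(kspace)  # point estimate + spread
        halfwidth = q_hat * sigma                        # calibrated half-width
        if halfwidth <= target_halfwidth or sampler.exhausted():
            return metric, (metric - halfwidth, metric + halfwidth)
```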

Attention-based deep learning network for predicting World Health Organization meningioma grade and Ki-67 expression based on magnetic resonance imaging.

Cheng X, Li H, Li C, Li J, Liu Z, Fan X, Lu C, Song K, Shen Z, Wang Z, Yang Q, Zhang J, Yin J, Qian C, You Y, Wang X

pubmed · Aug 20 2025
Preoperative assessment of World Health Organization (WHO) meningioma grading and Ki-67 expression is crucial for treatment strategies. We aimed to develop a fully automated attention-based deep learning network to predict WHO meningioma grading and Ki-67 expression. This retrospective study included 952 meningioma patients, divided into training (n = 542), internal validation (n = 96), and external test sets (n = 314). For each task, clinical, radiomics, and deep learning models were compared. We used no-new-U-Net (nnU-Net) models to construct the segmentation network, followed by four classification models using ResNet50 or Swin Transformer architectures with 2D or 2.5D input strategies. All deep learning models incorporated attention mechanisms. Both the segmentation and 2.5D classification models demonstrated robust performance on the external test set. The segmentation network achieved Dice coefficients of 0.98 (0.97-0.99) and 0.87 (0.83-0.91) for brain parenchyma and tumour segmentation. For predicting meningioma grade, the 2.5D ResNet50 achieved the highest area under the curve (AUC) of 0.90 (0.85-0.93), significantly outperforming the clinical (AUC = 0.77 [0.70-0.83], p < 0.001) and radiomics models (AUC = 0.80 [0.75-0.85], p < 0.001). For Ki-67 expression prediction, the 2.5D Swin Transformer achieved the highest AUC of 0.89 (0.85-0.93), outperforming both the clinical (AUC = 0.76 [0.71-0.81], p < 0.001) and radiomics models (AUC = 0.82 [0.77-0.86], p = 0.002). Our automated deep learning network demonstrated superior performance and could support more precise treatment planning for meningioma patients.
Question: Can artificial intelligence accurately assess meningioma WHO grade and Ki-67 expression from preoperative MRI to guide personalised treatment and follow-up strategies?
Findings: The attention-enhanced nnU-Net segmentation achieved high accuracy, while 2.5D deep learning models with attention mechanisms accurately predicted grades and Ki-67 expression.
Clinical relevance: Our fully automated 2.5D deep learning model, enhanced with attention mechanisms, accurately predicts WHO grades and Ki-67 expression levels in meningiomas, offering a robust, objective, and non-invasive solution to support clinical diagnosis and optimise treatment planning.
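
The 2.5D input strategy, adjacent slices stacked as channels into a 2D backbone, can be sketched as follows; the slice count, the squeeze-and-excitation-style attention placement, and the classification head are illustrative assumptions rather than the authors' architecture.

```python
# Sketch: a 2.5D ResNet50 classifier with channel attention on pooled features.
import torch
import torch.nn as nn
from torchvision.models import resnet50

class MeningiomaNet25D(nn.Module):
    def __init__(self, n_slices=5, n_classes=2):
        super().__init__()
        self.backbone = resnet50(weights=None)
        # Accept n_slices grayscale slices as channels instead of RGB.
        self.backbone.conv1 = nn.Conv2d(n_slices, 64, kernel_size=7,
                                        stride=2, padding=3, bias=False)
        feat = self.backbone.fc.in_features
        self.backbone.fc = nn.Identity()
        # Squeeze-and-excitation-style gate over the pooled feature vector.
        self.attn = nn.Sequential(nn.Linear(feat, feat // 16), nn.ReLU(),
                                  nn.Linear(feat // 16, feat), nn.Sigmoid())
        self.head = nn.Linear(feat, n_classes)

    def forward(self, x):            # x: (B, n_slices, H, W)
        f = self.backbone(x)
        return self.head(f * self.attn(f))
```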

Deep learning approach for screening neonatal cerebral lesions on ultrasound in China.

Lin Z, Zhang H, Duan X, Bai Y, Wang J, Liang Q, Zhou J, Xie F, Shentu Z, Huang R, Chen Y, Yu H, Weng Z, Ni D, Liu L, Zhou L

pubmed · Aug 20 2025
Timely and accurate diagnosis of severe neonatal cerebral lesions is critical for preventing long-term neurological damage and addressing life-threatening conditions. Cranial ultrasound is the primary screening tool, but the process is time-consuming and reliant on the operator's proficiency. In this study, a deep-learning-powered neonatal cerebral lesion screening system, capable of automatically extracting standard views from cranial ultrasound videos and identifying cases with severe cerebral lesions, was developed based on 8,757 neonatal cranial ultrasound images. The system demonstrates areas under the curve of 0.982 and 0.944, with sensitivities of 0.875 and 0.962, on internal and external video datasets, respectively. Furthermore, the system outperforms junior radiologists and performs on par with mid-level radiologists while completing examinations 55.11% faster. In conclusion, the developed system can automatically extract standard views from cranial ultrasound videos and make accurate diagnoses efficiently, and it may be useful across multiple deployment scenarios.
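
The two-stage pattern described, standard-view extraction followed by diagnosis, might be wired together as below; both models, the top-k frame selection, and the mean aggregation are assumptions for illustration.

```python
# Sketch: score frames for "standard view", then classify the best frames
# and aggregate to a case-level call.
import torch

@torch.no_grad()
def screen_video(frames, view_model, lesion_model, top_k=5, threshold=0.5):
    """frames: list of (C, H, W) tensors from one cranial ultrasound video."""
    batch = torch.stack(frames)                        # (N, C, H, W)
    view_scores = view_model(batch).softmax(1)[:, 1]   # P(standard view)
    keep = view_scores.topk(min(top_k, len(frames))).indices
    lesion_probs = lesion_model(batch[keep]).softmax(1)[:, 1]
    case_prob = lesion_probs.mean().item()             # simple mean aggregation
    return case_prob, case_prob >= threshold
```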

[The application effect of Generative Pre-Treatment Tool of Skeletal Pathology in functional lumbar spine radiographic analysis].

Yilihamu Y, Zhao K, Zhong H, Feng SQ

pubmed · Aug 20 2025
Objective: To investigate the application effectiveness of the artificial intelligence (AI)-based Generative Pre-Treatment Tool of Skeletal Pathology (GPTSP) in functional lumbar spine radiographic measurement. Methods: This is a retrospective case series study reviewing the clinical and imaging data of 34 patients who underwent lumbar dynamic X-ray radiography at the Department of Orthopedics, the Second Hospital of Shandong University, from September 2021 to June 2023. Among the patients, 13 were male and 21 were female, with an age of (68.0±8.0) years (range: 55 to 88 years). The AI model of the GPTSP system was built upon a multi-dimensional constrained loss function constructed on the YOLOv8 model, incorporating Kullback-Leibler divergence to quantify the anatomical distribution deviation of lumbar intervertebral space detection boxes, along with a global dynamic attention mechanism. It can identify lumbar vertebral body edge points and measure the lumbar intervertebral spaces. Furthermore, the spondylolisthesis index, lumbar index, and lumbar intervertebral angles were measured using three methods: manual measurement by doctors, predefined annotated measurement, and AI-assisted measurement. The consistency between the doctors and the AI model was analyzed using the intra-class correlation coefficient (ICC) and Kappa coefficient. Results: AI-assisted physician measurement time was (1.5±0.1) seconds (range: 1.3 to 1.7 seconds), shorter than both the manual measurement time ((2,064.4±108.2) seconds, range: 1,768.3 to 2,217.6 seconds) and the predefined annotated measurement time ((602.0±48.9) seconds, range: 503.9 to 694.4 seconds). Kappa values between the physicians' and the AI model's diagnoses (based on the GPTSP platform) for the spondylolisthesis index, lumbar index, and intervertebral angles measured by the three methods were 0.95, 0.92, and 0.82 (all P<0.01), with ICC values consistently exceeding 0.90, indicating high consistency. Taking the doctors' manual measurements as the reference, AI assistance reduced the average annotation error of the predefined annotated measurements from 2.52 mm (range: 0.01 to 6.78 mm) to 1.47 mm (range: 0 to 5.03 mm). Conclusions: The GPTSP system enhanced efficiency in functional lumbar analysis. The AI model demonstrated high consistency in annotation and measurement results, showing strong potential to serve as a reliable clinical auxiliary tool.
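
A KL-divergence term of the kind described, penalizing deviation of predicted intervertebral-space box positions from an anatomical prior, could be sketched as follows; the soft-histogram binning and the prior itself are illustrative assumptions, not the GPTSP implementation.

```python
# Sketch: differentiable KL penalty between the vertical distribution of
# predicted box centers and an anatomical prior over vertical position.
import torch
import torch.nn.functional as F

def anatomical_kl_penalty(pred_centers_y, prior_probs, n_bins=16, tau=0.05):
    """pred_centers_y: normalized y-centers of predicted intervertebral-space
    boxes in [0, 1]; prior_probs: (n_bins,) anatomical prior, e.g. the
    empirical distribution of training annotations (an assumption here)."""
    bin_centers = (torch.arange(n_bins, device=pred_centers_y.device,
                                dtype=pred_centers_y.dtype) + 0.5) / n_bins
    # Soft assignment to bins keeps the term differentiable for training.
    w = torch.exp(-(pred_centers_y[:, None] - bin_centers[None, :]) ** 2
                  / (2 * tau ** 2))
    pred_probs = (w.sum(0) / w.sum()).clamp_min(1e-8)
    # F.kl_div expects log-probabilities as input and probabilities as target.
    return F.kl_div(pred_probs.log(), prior_probs, reduction="sum")
```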

[Preoperative discrimination of colorectal mucinous adenocarcinoma using enhanced CT-based radiomics and deep learning fusion model].

Wang BZ, Zhang X, Wang YL, Wang XY, Wang QG, Luo Z, Xu SL, Huang C

pubmed · Aug 20 2025
Objective: To develop a preoperative differentiation model for colorectal mucinous adenocarcinoma and non-mucinous adenocarcinoma using a combination of contrast-enhanced CT radiomics and deep learning methods. Methods: This is a retrospective case series study. Clinical data of colorectal cancer patients confirmed by postoperative pathological examination were collected from January 2016 to December 2023 at Shanghai General Hospital Affiliated to Shanghai Jiao Tong University School of Medicine (Center 1, n=220) and the First Affiliated Hospital of Bengbu Medical University (Center 2, n=51). Among them, 108 patients were diagnosed with mucinous adenocarcinoma, including 55 males and 53 females, with an age of (68.4±12.2) years (range: 38 to 96 years); and 163 patients were diagnosed with non-mucinous adenocarcinoma, including 96 males and 67 females, with an age of (67.9±11.0) years (range: 43 to 94 years). The cases from Center 1 were divided into a training set (n=156) and an internal validation set (n=64) using stratified random sampling in a 7:3 ratio, and the cases from Center 2 were used as an independent external validation set (n=51). The three-dimensional tumor volume of interest was manually segmented on venous-phase contrast-enhanced CT images. Radiomics features were extracted using PyRadiomics, and deep learning features were extracted using the ResNet-18 network; the two sets of features were then combined to form a joint feature set. The consistency of manual segmentation was assessed using the intraclass correlation coefficient. Feature dimensionality reduction was performed using the Mann-Whitney U test and least absolute shrinkage and selection operator (LASSO) regression. Six machine learning algorithms were used to construct models based on radiomics features, deep learning features, and combined features: support vector machine, logistic regression, random forest, extreme gradient boosting, k-nearest neighbors, and decision tree. The discriminative performance of each model was evaluated using receiver operating characteristic curves, the area under the curve (AUC), the DeLong test, and decision curve analysis. Results: After feature selection, 22 features with the most discriminative value were retained, of which 12 were traditional radiomics features and 10 were deep learning features. In the internal validation set, the random forest algorithm based on the combined features achieved the best performance (AUC=0.938, 95% CI: 0.875 to 0.984), superior to the single-modality radiomics feature model (AUC=0.817, 95% CI: 0.702 to 0.913, P=0.048) and the deep learning feature model (AUC=0.832, 95% CI: 0.727 to 0.926, P=0.087); in the independent external validation set, the random forest algorithm with the combined features maintained the highest discriminative performance (AUC=0.891, 95% CI: 0.791 to 0.969), superior to the single-modality radiomics feature model (AUC=0.770, 95% CI: 0.636 to 0.890, P=0.045) and the deep learning feature model (AUC=0.799, 95% CI: 0.652 to 0.911, P=0.169). Conclusion: The combined model based on radiomics and deep learning features from venous-phase enhanced CT demonstrates good performance in the preoperative differentiation of colorectal mucinous from non-mucinous adenocarcinoma.
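
The fusion pipeline maps naturally onto PyRadiomics plus scikit-learn; the sketch below assumes precomputed ResNet-18 embeddings and uses LASSO-based selection followed by a random forest, with file paths and hyperparameters as placeholders.

```python
# Sketch: radiomics + deep feature fusion with LASSO selection and a
# random-forest classifier, in the spirit of the described pipeline.
import numpy as np
from radiomics import featureextractor
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV

extractor = featureextractor.RadiomicsFeatureExtractor()

def radiomics_vector(image_path, mask_path):
    """PyRadiomics features for one venous-phase CT volume and tumor mask."""
    result = extractor.execute(image_path, mask_path)
    return np.array([v for k, v in result.items()
                     if k.startswith("original_")], dtype=float)

def fit_fusion_model(X_rad, X_deep, y):
    """X_rad: (N, d1) radiomics; X_deep: (N, d2) ResNet-18 embeddings."""
    X = np.hstack([X_rad, X_deep])                    # joint feature set
    selector = SelectFromModel(LassoCV(cv=5)).fit(X, y)
    clf = RandomForestClassifier(n_estimators=500, random_state=0)
    clf.fit(selector.transform(X), y)
    return selector, clf
# Validation: clf.predict_proba(selector.transform(X_val))[:, 1] feeds an AUC.
```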

From Slices to Structures: Unsupervised 3D Reconstruction of Female Pelvic Anatomy from Freehand Transvaginal Ultrasound

Max Krähenmann, Sergio Tascon-Morales, Fabian Laumer, Julia E. Vogt, Ece Ozkan

arxiv preprint · Aug 20 2025
Volumetric ultrasound has the potential to significantly improve diagnostic accuracy and clinical decision-making, yet its widespread adoption remains limited by dependence on specialized hardware and restrictive acquisition protocols. In this work, we present a novel unsupervised framework for reconstructing 3D anatomical structures from freehand 2D transvaginal ultrasound (TVS) sweeps, without requiring external tracking or learned pose estimators. Our method adapts the principles of Gaussian Splatting to the domain of ultrasound, introducing a slice-aware, differentiable rasterizer tailored to the unique physics and geometry of ultrasound imaging. We model anatomy as a collection of anisotropic 3D Gaussians and optimize their parameters directly from image-level supervision, leveraging sensorless probe motion estimation and domain-specific geometric priors. The result is a compact, flexible, and memory-efficient volumetric representation that captures anatomical detail with high spatial fidelity. This work demonstrates that accurate 3D reconstruction from 2D ultrasound images can be achieved through purely computational means, offering a scalable alternative to conventional 3D systems and enabling new opportunities for AI-assisted analysis and diagnosis.
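
The core computation, evaluating anisotropic 3D Gaussians on an arbitrary slice plane, can be sketched directly; the parameterization and the MSE objective hinted at in the closing comment are assumptions, since the paper's rasterizer details are not reproduced here.

```python
# Sketch: render one ultrasound slice from a set of anisotropic 3D Gaussians.
import torch

def render_slice(means, inv_covs, amps, origin, u, v, h, w, spacing):
    """means: (G, 3), inv_covs: (G, 3, 3), amps: (G,). origin, u, v define the
    slice plane in 3D (u, v orthonormal in-plane axes); returns (h, w) image."""
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    pts = (origin[None, None]
           + xs[..., None] * spacing * u[None, None]
           + ys[..., None] * spacing * v[None, None])       # (h, w, 3)
    d = pts.reshape(-1, 1, 3) - means[None]                  # (P, G, 3)
    maha = torch.einsum("pgi,gij,pgj->pg", d, inv_covs, d)   # squared Mahalanobis
    return (amps[None] * torch.exp(-0.5 * maha)).sum(-1).reshape(h, w)

# Fitting would minimize, e.g., torch.nn.functional.mse_loss(
#     render_slice(...), observed_frame) over Gaussian parameters and poses.
```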