Page 282 of 3463455 results

Clinically Interpretable Deep Learning via Sparse BagNets for Epiretinal Membrane and Related Pathology Detection

Ofosu Mensah, S., Neubauer, J., Ayhan, M. S., Djoumessi Donteu, K. R., Koch, L. M., Uzel, M. M., Gelisken, F., Berens, P.

medRxiv preprint · Jun 6 2025
Epiretinal membrane (ERM) is a vitreoretinal interface disease that, if not properly addressed, can lead to vision impairment and negatively affect quality of life. For ERM detection and treatment planning, Optical Coherence Tomography (OCT) has become the primary imaging modality, offering non-invasive, high-resolution cross-sectional imaging of the retina. Deep learning models have achieved good ERM detection performance on OCT images. Nevertheless, most deep learning models cannot be easily understood by clinicians, which limits their acceptance in clinical practice. Post-hoc explanation methods have been utilised to support the uptake of such models, albeit with only partial success. In this study, we trained a sparse BagNet model, an inherently interpretable deep learning model, to detect ERM in OCT images. It performed on par with a comparable black-box model and generalised well to external data. In a multitask setting, it also accurately predicted other changes related to ERM pathophysiology. Through a user study with ophthalmologists, we showed that the visual explanations the sparse BagNet model readily provides for its decisions are well aligned with clinical expertise. We propose potential directions for clinical implementation of the sparse BagNet model to guide clinical decisions in practice.
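The interpretability of BagNet-style models comes from restricting each logit to a small receptive field and averaging the patch-level evidence into an image-level score, so the evidence map itself is the explanation. A minimal sketch of that aggregation step, with hypothetical values rather than the authors' implementation:

```python
import numpy as np

def bagnet_style_score(patch_logits):
    """Aggregate patch-level class evidence into an image-level score.

    patch_logits: (H, W) array of local logits, one per small receptive field.
    Returns the image-level logit (mean pooling) and the evidence heatmap,
    which is what makes BagNet-style models directly interpretable.
    """
    heatmap = patch_logits                    # each entry covers one patch
    image_logit = float(np.mean(heatmap))     # average pooling over locations
    return image_logit, heatmap

# Toy example: a 4x4 grid of patch logits with one strongly positive region,
# standing in for a local ERM-like finding.
logits = np.zeros((4, 4))
logits[1, 2] = 8.0
score, heatmap = bagnet_style_score(logits)
print(round(score, 2))  # 0.5
```

Sparsity in the actual model additionally pushes most patch logits to zero, so only a few image regions carry the decision.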

Quantitative and automatic plan-of-the-day assessment to facilitate adaptive radiotherapy in cervical cancer.

Mason SA, Wang L, Alexander SE, Lalondrelle S, McNair HA, Harris EJ

PubMed · Jun 5 2025
To facilitate implementation of plan-of-the-day (POTD) selection for treating locally advanced cervical cancer (LACC), we developed a POTD assessment tool for CBCT-guided radiotherapy (RT). A female pelvis segmentation model (U-Seg3) is combined with a quantitative standard operating procedure (qSOP) to identify optimal and acceptable plans. 

Approach: The planning CT[i], corresponding structure set[ii], and manually contoured CBCTs[iii] (n=226) from 39 LACC patients treated with POTD (n=11) or non-adaptive RT (n=28) were used to develop U-Seg3, an algorithm incorporating deep-learning and deformable image registration techniques to segment the low-risk clinical target volume (LR-CTV), high-risk CTV (HR-CTV), bladder, rectum, and bowel bag. A single-channel input model (iii only, U-Seg1) was also developed. Contoured CBCTs from the POTD patients were (a) reserved for U-Seg3 validation/testing, (b) audited to determine optimal and acceptable plans, and (c) used to empirically derive a qSOP that maximised classification accuracy. 

Main Results: The median [interquartile range] DSC between manual and U-Seg3 contours was 0.83 [0.80], 0.78 [0.13], 0.94 [0.05], 0.86 [0.09], and 0.90 [0.05] for the LR-CTV, HR-CTV, bladder, rectum, and bowel bag, respectively. These were significantly higher than the U-Seg1 values for all structures but the bladder. The qSOP classified plans as acceptable if they met target coverage thresholds (LR-CTV ≥ 99%, HR-CTV ≥ 99.8%), with lower LR-CTV coverage (≥ 95%) sometimes allowed. The acceptable plan minimising bowel irradiation was considered optimal unless substantial bladder sparing could be achieved. With U-Seg3 embedded in the qSOP, optimal and acceptable plans were identified in 46/60 and 57/60 cases, respectively.

Significance: U-Seg3 outperforms U-Seg1 and all known CBCT-based female pelvis segmentation models. The tool combining U-Seg3 and the qSOP identifies optimal plans with accuracy equivalent to that of two observers. In an implementation strategy whereby this tool serves as the second observer, plan selection confidence and decision-making time could be improved whilst simultaneously reducing the required number of POTD-trained radiographers by 50%.
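The DSC values quoted in the Main Results compare two binary masks. As a reference, a minimal sketch of the metric in Python with NumPy (the masks below are hypothetical, not study data):

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity coefficient between binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:        # both masks empty: define DSC as perfect agreement
        return 1.0
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy manual vs. automatic contours on a 2x3 grid.
manual = np.array([[1, 1, 0], [0, 1, 0]])
auto   = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_coefficient(manual, auto), 3))  # 0.667
```

In practice the study's DSCs are computed per structure on 3D contours; the formula is unchanged.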


Automatic cervical tumors segmentation in PET/MRI by parallel encoder U-net.

Liu S, Tan Z, Gong T, Tang X, Sun H, Shang F

PubMed · Jun 5 2025
Automatic segmentation of cervical tumors is important for quantitative analysis and radiotherapy planning. A parallel encoder U-Net (PEU-Net) integrating the multi-modality information of PET/MRI was proposed to segment cervical tumors; it consists of two parallel encoders with the same structure for PET and MR images. The features of the two modalities were extracted separately and fused at each layer of the decoder. A Res2Net module on the skip connections aggregated features at various scales and refined the segmentation performance. PET/MRI images of 165 patients with cervical cancer were included in this study. U-Net, TransUNet, and nnU-Net with single- or multi-modality (PET and/or T2WI) input were used for comparison. The Dice similarity coefficient (DSC) on volumes, and the DSC and 95th percentile of the Hausdorff distance (HD95) on tumor slices, were calculated to evaluate performance. The proposed PEU-Net exhibited the best performance (DSC<sub>3d</sub>: 0.726 ± 0.204, HD<sub>95</sub>: 4.603 ± 4.579 mm), and its DSC<sub>2d</sub> (0.871 ± 0.113) was comparable to the best result of TransUNet with PET/MRI (0.873 ± 0.125). The networks with multi-modality input outperformed those with single-modality input. The results showed that the proposed PEU-Net could use multi-modality information more effectively through the redesigned structure and achieved competitive performance.
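The HD95 reported alongside the DSC is a boundary-distance metric: the 95th percentile of nearest-neighbour distances between the two contours, which is less outlier-sensitive than the maximum Hausdorff distance. A simplified sketch on point sets (hypothetical coordinates, not study data):

```python
import numpy as np

def hd95(points_a, points_b):
    """95th-percentile Hausdorff distance between two 2D point sets (N,2), (M,2).

    Directed distances are each point's distance to the nearest point of the
    other set; HD95 pools both directions and takes the 95th percentile.
    """
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=-1)
    directed = np.concatenate([d.min(axis=1), d.min(axis=0)])
    return float(np.percentile(directed, 95))

# Two parallel contour fragments exactly 1 mm apart.
a = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
b = np.array([[0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
print(hd95(a, b))  # 1.0
```

Library implementations typically operate on mask surfaces and scale by voxel spacing; the percentile logic is the same.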

Multitask deep learning model based on multimodal data for predicting prognosis of rectal cancer: a multicenter retrospective study.

Ma Q, Meng R, Li R, Dai L, Shen F, Yuan J, Sun D, Li M, Fu C, Li R, Feng F, Li Y, Tong T, Gu Y, Sun Y, Shen D

PubMed · Jun 5 2025
Prognostic prediction is crucial to guide individual treatment for patients with rectal cancer. We aimed to develop and validate a multitask deep learning model for predicting prognosis in rectal cancer patients. This retrospective study enrolled 321 rectal cancer patients (training set: 212; internal testing set: 53; external testing set: 56) who received upfront total mesorectal excision at five hospitals between March 2014 and April 2021. A multitask deep learning model was developed to simultaneously predict recurrence/metastasis and disease-free survival (DFS). The model integrated clinicopathologic data and multiparametric magnetic resonance imaging (MRI), including diffusion kurtosis imaging (DKI), without performing tumor segmentation. The receiver operating characteristic (ROC) curve and Harrell's concordance index (C-index) were used to evaluate the predictive performance of the proposed model. The deep learning model achieved good discrimination of recurrence/metastasis, with area under the curve (AUC) values of 0.885, 0.846, and 0.797 in the training, internal testing, and external testing sets, respectively. Furthermore, the model successfully predicted DFS in the training set (C-index: 0.812), internal testing set (C-index: 0.794), and external testing set (C-index: 0.733), and classified patients into significantly distinct high- and low-risk groups (p < 0.05). The multitask deep learning model, incorporating clinicopathologic data and multiparametric MRI, effectively predicted both recurrence/metastasis and survival for patients with rectal cancer. It has the potential to be an essential tool for risk stratification and to assist in making individualized treatment decisions.
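Harrell's C-index used for the DFS evaluation measures how often, among comparable patient pairs, the patient with the shorter survival time was assigned the higher risk score. A minimal sketch without tie handling on times (toy values, not study data):

```python
import itertools

def c_index(times, events, risk_scores):
    """Harrell's concordance index for survival predictions.

    A pair (i, j) is comparable if the subject with the shorter time had an
    event (event flag 1). It is concordant when that subject also has the
    higher risk score; tied scores count as 0.5.
    """
    concordant, comparable = 0.0, 0
    for i, j in itertools.combinations(range(len(times)), 2):
        if times[i] == times[j]:
            continue
        short, long_ = (i, j) if times[i] < times[j] else (j, i)
        if not events[short]:      # censored before the other's time: skip
            continue
        comparable += 1
        if risk_scores[short] > risk_scores[long_]:
            concordant += 1.0
        elif risk_scores[short] == risk_scores[long_]:
            concordant += 0.5
    return concordant / comparable

# Perfectly anti-ranked toy cohort: higher risk score, shorter survival.
print(c_index([2, 5, 9], [1, 1, 1], [0.9, 0.5, 0.1]))  # 1.0
```

A C-index of 0.5 corresponds to random ranking, so the study's 0.733 on external testing indicates useful discrimination.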

Epistasis regulates genetic control of cardiac hypertrophy.

Wang Q, Tang TM, Youlton M, Weldy CS, Kenney AM, Ronen O, Hughes JW, Chin ET, Sutton SC, Agarwal A, Li X, Behr M, Kumbier K, Moravec CS, Tang WHW, Margulies KB, Cappola TP, Butte AJ, Arnaout R, Brown JB, Priest JR, Parikh VN, Yu B, Ashley EA

PubMed · Jun 5 2025
Although genetic variant effects often interact nonadditively, strategies to uncover epistasis remain in their infancy. Here we develop low-signal signed iterative random forests to elucidate the complex genetic architecture of cardiac hypertrophy, using deep learning-derived left ventricular mass estimates from 29,661 UK Biobank cardiac magnetic resonance images. We report epistatic variants near CCDC141, IGF1R, TTN and TNKS, identifying loci deemed insignificant in genome-wide association studies. Functional genomic and integrative enrichment analyses reveal that genes mapped from these loci share biological process gene ontologies and myogenic regulatory factors. Transcriptomic network analyses using 313 human hearts demonstrate strong co-expression correlations among these genes in healthy hearts, with significantly reduced connectivity in failing hearts. To assess causality, RNA silencing in human induced pluripotent stem cell-derived cardiomyocytes, combined with novel microfluidic single-cell morphology analysis, confirms that cardiomyocyte hypertrophy is nonadditively modifiable by interactions between CCDC141, TTN and IGF1R. Our results expand the scope of cardiac genetic regulation to epistasis.

Noise-induced self-supervised hybrid UNet transformer for ischemic stroke segmentation with limited data annotations.

Soh WK, Rajapakse JC

PubMed · Jun 5 2025
We extend the Hybrid Unet Transformer (HUT) foundation model, which combines the advantages of CNN and Transformer architectures, with a noisy self-supervised approach and demonstrate it on an ischemic stroke lesion segmentation task. We introduce a self-supervised approach using a noise anchor and show that it can outperform a supervised approach when only a limited amount of annotated data is available. We supplement our pre-training process with an additional unannotated CT perfusion dataset to validate our approach. Compared to the supervised version, the noisy self-supervised HUT (HUT-NSS) outperforms its counterpart by a margin of 2.4% in Dice score. On average, HUT-NSS gained a further 7.2% in Dice score and 28.1% in Hausdorff distance over the state-of-the-art network USSLNet on the CT perfusion scans of the Ischemic Stroke Lesion Segmentation (ISLES2018) dataset. With limited annotations, HUT-NSS gained 7.87% in Dice score over USSLNet when trained on 50% of the annotated data, 7.47% when trained on 10%, and 5.34% when trained on 1%. The code is available at https://github.com/vicsohntu/HUTNSS_CT .

Matrix completion-informed deep unfolded equilibrium models for self-supervised k-space interpolation in MRI.

Luo C, Wang H, Liu Y, Xie T, Chen G, Jin Q, Liang D, Cui ZX

PubMed · Jun 5 2025
Self-supervised methods for magnetic resonance imaging (MRI) reconstruction have garnered significant interest due to their ability to address the challenges of slow data acquisition and the scarcity of fully sampled labels. Current regularization-based self-supervised techniques merge the theoretical foundations of regularization with the representational strengths of deep learning and enable effective reconstruction at higher acceleration rates, yet they often fall short in interpretability, leaving their theoretical underpinnings lacking. In this paper, we introduce a novel self-supervised approach that provides stringent theoretical guarantees and interpretable networks while circumventing the need for fully sampled labels. Our method exploits the intrinsic relationship between convolutional neural networks and the null space within structural low-rank models, effectively integrating network parameters into an iterative reconstruction process. Our network learns the gradient descent steps of the projected gradient descent algorithm without changing its convergence property, which yields a fully interpretable unfolded model. We design a non-expansive mapping for the network architecture, ensuring convergence to a fixed point. This well-defined framework enables complete reconstruction of missing k-space data grounded in matrix completion theory, independent of fully sampled labels. Qualitative and quantitative experiments on multi-coil MRI reconstruction demonstrate the efficacy of our self-supervised approach, showing marked improvements over existing self-supervised and traditional regularization methods and achieving results comparable to supervised learning in selected scenarios.
This work not only advances the state-of-the-art in MRI reconstruction but also enhances interpretability in deep learning applications for medical imaging.
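The convergence guarantee in the abstract rests on iterating a non-expansive mapping to a fixed point. A toy illustration of that iteration scheme, using a strictly contractive affine map rather than the paper's learned network:

```python
import numpy as np

def fixed_point_iterate(T, x0, tol=1e-10, max_iter=1000):
    """Iterate x_{k+1} = T(x_k) until successive iterates stop moving.

    If T is a contraction (Lipschitz constant < 1), Banach's fixed-point
    theorem guarantees a unique fixed point. Non-expansive maps (constant
    <= 1), as used in unrolled equilibrium models, need averaged iterations
    in general; the toy map below is strictly contractive, so plain
    iteration suffices.
    """
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if np.linalg.norm(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Toy contractive map T(x) = 0.5 x + b with fixed point x* = 2b.
b = np.array([1.0, -2.0])
T = lambda x: 0.5 * x + b
x_star = fixed_point_iterate(T, np.zeros(2))
print(np.round(x_star, 6))
```

In the paper's setting the fixed point is the completed k-space, with the data-consistency projection folded into each step.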

Prenatal detection of congenital heart defects using the deep learning-based image and video analysis: protocol for Clinical Artificial Intelligence in Fetal Echocardiography (CAIFE), an international multicentre multidisciplinary study.

Patey O, Hernandez-Cruz N, D'Alberti E, Salovic B, Noble JA, Papageorghiou AT

PubMed · Jun 5 2025
Congenital heart defect (CHD) is a significant, rapidly emerging global problem in child health and a leading cause of neonatal and childhood death. Prenatal detection of CHDs with the help of ultrasound allows better perinatal management of such pregnancies, leading to reduced neonatal mortality, morbidity and developmental complications. However, there is a wide variation in reported fetal heart problem detection rates, from 34% to 85%, with some low- and middle-income countries detecting as few as 9.3% of cases before birth. Research has shown that deep learning-based or more general artificial intelligence (AI) models can support the detection of fetal CHDs more rapidly than humans performing ultrasound scans. Progress in this AI-based research depends on the availability of large, well-curated and diverse data comprising ultrasound images and videos of normal and abnormal fetal hearts. Currently, CHD detection based on AI models is not accurate enough for practical clinical use, in part due to the lack of ultrasound data available for machine learning as CHDs are rare and heterogeneous, the retrospective nature of published studies, the lack of multicentre and multidisciplinary collaboration, and the use of mostly standard-plane still images of the fetal heart for AI models. Our aim is to develop AI models that could support clinicians in detecting fetal CHDs in real time, particularly in nonspecialist or low-resource settings where fetal echocardiography expertise is not readily available. We have designed the Clinical Artificial Intelligence Fetal Echocardiography (CAIFE) study as an international multicentre multidisciplinary collaboration led by a clinical and an engineering team at the University of Oxford. This study involves five multicountry hospital sites for data collection (Oxford, UK (n=1), London, UK (n=3) and Southport, Australia (n=1)).
We plan to curate 14 000 retrospective ultrasound scans of fetuses with normal hearts (n=13 000) and fetuses with CHDs (n=1000), as well as 2400 prospective ultrasound cardiac scans, including the proposed research-specific CAIFE 10 s video sweeps, from fetuses with normal hearts (n=2000) and fetuses diagnosed with major CHDs (n=400). This gives a total of 16 400 retrospective and prospective ultrasound scans from the participating hospital sites. We will build, train and validate computational models capable of differentiating between normal fetal hearts and those diagnosed with CHDs and recognise specific types of CHDs. Data will be analysed using statistical metrics, namely, sensitivity, specificity and accuracy, which include calculating positive and negative predictive values for each outcome, compared with manual assessment. We will disseminate the findings through regional, national and international conferences and through peer-reviewed journals. The study was approved by the Health Research Authority, Care Research Wales and the Research Ethics Committee (Ref: 23/EM/0023; IRAS Project ID: 317510) on 8 March 2023. All collaborating hospitals have obtained the local trust research and development approvals.
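The planned analysis metrics (sensitivity, specificity, accuracy, positive and negative predictive values) all derive from a 2x2 confusion matrix against the manual reference. A minimal sketch with hypothetical counts, not CAIFE results:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic-accuracy metrics from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),            # true-positive rate
        "specificity": tn / (tn + fp),            # true-negative rate
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "ppv": tp / (tp + fp),                    # positive predictive value
        "npv": tn / (tn + fn),                    # negative predictive value
    }

# Hypothetical screening outcome: 90 CHD cases detected, 10 missed,
# 50 false alarms among 1900 scans of normal hearts.
m = diagnostic_metrics(tp=90, fp=50, fn=10, tn=1850)
print(round(m["sensitivity"], 3), round(m["ppv"], 3))  # 0.9 0.643
```

Because major CHDs are rare, PPV will be far below sensitivity at realistic prevalence, which is why the protocol reports both predictive values.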

Enhancing image quality in fast neutron-based range verification of proton therapy using a deep learning-based prior in LM-MAP-EM reconstruction.

Setterdahl LM, Skjerdal K, Ratliff HN, Ytre-Hauge KS, Lionheart WRB, Holman S, Pettersen HES, Blangiardi F, Lathouwers D, Meric I

PubMed · Jun 5 2025
This study investigates the use of list-mode (LM) maximum a posteriori (MAP) expectation maximization (EM) incorporating prior information predicted by a convolutional neural network for image reconstruction in fast neutron (FN)-based proton therapy range verification.

Approach: A conditional generative adversarial network (pix2pix) was trained on progressively noisier data, where detector resolution effects were introduced gradually to simulate realistic conditions. FN data were generated using Monte Carlo simulations of an 85 MeV proton pencil beam in a computed tomography (CT)-based lung cancer patient model, with range shifts emulating weight gain and loss. The network was trained to estimate the expected two-dimensional (2D) ground-truth FN production distribution from simple back-projection images. Performance was evaluated using mean squared error (MSE), structural similarity index (SSIM), and the correlation between shifts in predicted distributions and true range shifts.

Main results: Our results show that pix2pix performs well on noise-free data but suffers significant degradation when detector resolution effects are introduced. Among the LM-MAP-EM approaches tested, incorporating a mean prior estimate into the reconstruction process improved performance: LM-MAP-EM with a mean prior estimate outperformed naïve LM maximum likelihood EM (LM-MLEM) and conventional LM-MAP-EM with a smoothing quadratic energy function in terms of SSIM.

Significance: The findings suggest that deep learning techniques can enhance iterative reconstruction for range verification in proton therapy. However, the effectiveness of the model is highly dependent on data quality, limiting its robustness in high-noise scenarios.
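The MSE and SSIM used to score the predicted FN distributions can be sketched directly. A minimal version computing MSE and a single-window (global) SSIM; library implementations use sliding windows, and the image here is random test data, not study output:

```python
import numpy as np

def mse(x, y):
    """Mean squared error between two images."""
    return float(np.mean((x - y) ** 2))

def global_ssim(x, y, data_range=1.0):
    """Single-window SSIM over the whole image (standard constants c1, c2)."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2))
                 / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

rng = np.random.default_rng(0)
img = rng.random((8, 8))
print(mse(img, img), round(global_ssim(img, img), 6))
```

An image compared with itself gives MSE 0 and SSIM 1; degradation from detector resolution effects shows up as rising MSE and falling SSIM.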

Preliminary analysis of AI-based thyroid nodule evaluation in a non-subspecialist endocrinology setting.

Fernández Velasco P, Estévez Asensio L, Torres B, Ortolá A, Gómez Hoyos E, Delgado E, de Luís D, Díaz Soto G

PubMed · Jun 5 2025
Thyroid nodules are commonly evaluated using ultrasound-based risk stratification systems, which rely on subjective descriptors. Artificial intelligence (AI) may improve assessment, but its effectiveness in non-subspecialist settings is unclear. This study evaluated the impact of an AI-based decision support system (AI-DSS) on thyroid nodule ultrasound assessments by general endocrinologists (GE) without subspecialty thyroid imaging training. A prospective cohort study was conducted on 80 patients undergoing thyroid ultrasound in GE outpatient clinics. Thyroid ultrasound was performed based on clinical judgment as part of routine care by GE. Images were retrospectively analyzed using an AI-DSS (Koios DS), independently of clinician assessments. AI-DSS results were compared with initial GE evaluations and, when referred, with expert evaluations at a subspecialized thyroid nodule clinic (TNC). Agreement in ultrasound features, risk classification by the American College of Radiology Thyroid Imaging Reporting and Data System (ACR TI-RADS) and American Thyroid Association guidelines, and referral recommendations was assessed. AI-DSS differed notably from GE, particularly in assessing nodule composition (solid: 80% vs. 36%, p < 0.01), echogenicity (hypoechoic: 52% vs. 16%, p < 0.01), and echogenic foci (microcalcifications: 10.7% vs. 1.3%, p < 0.05). AI-DSS classification led to a higher referral rate than GE (37.3% vs. 30.7%, not statistically significant). Agreement between AI-DSS and GE in ACR TI-RADS scoring was moderate (r = 0.337; p < 0.001), but improved when comparing GE to AI-DSS and to the TNC subspecialist (r = 0.465; p < 0.05 and r = 0.607; p < 0.05, respectively). In a non-subspecialist setting, non-adjunct use of the AI-DSS did not significantly improve risk stratification or reduce hypothetical referrals. The system tended to overestimate risk, potentially leading to unnecessary procedures.
Further optimization is required for AI to function effectively in low-prevalence environments.
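ACR TI-RADS scoring, the system on which the GE/AI-DSS agreement above is measured, sums per-feature points and maps the total to a TR risk level. A hedged sketch of that mapping (feature point values follow the ACR lexicon; mapping a 1-point total to TR1 is a simplification of the ACR chart):

```python
def ti_rads_level(composition, echogenicity, shape, margin, echogenic_foci):
    """Sum ACR TI-RADS feature points and map the total to a TR level.

    Inputs are point values already assigned per the ACR lexicon
    (e.g. solid composition = 2, very hypoechoic = 3, taller-than-wide = 3,
    punctate echogenic foci = 3).
    """
    total = composition + echogenicity + shape + margin + echogenic_foci
    if total <= 1:
        return "TR1"   # benign
    if total == 2:
        return "TR2"   # not suspicious
    if total == 3:
        return "TR3"   # mildly suspicious
    if total <= 6:
        return "TR4"   # moderately suspicious
    return "TR5"       # highly suspicious

# Hypothetical nodule: solid (2) + hypoechoic (2) + wider-than-tall (0)
# + smooth margin (0) + punctate echogenic foci (3) = 7 points.
print(ti_rads_level(2, 2, 0, 0, 3))  # TR5
```

Disagreement on a single descriptor (e.g. solid vs. mixed composition) can shift the total across a TR boundary, which is how the feature-level differences above translate into the moderate scoring agreement reported.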
