
Development of a Deep Learning Model for the Volumetric Assessment of Osteonecrosis of the Femoral Head on Three-Dimensional Magnetic Resonance Imaging.

Uemura K, Takashima K, Otake Y, Li G, Mae H, Okada S, Hamada H, Sugano N

pubmed papers · Jun 6 2025
Although volumetric assessment of necrotic lesions using the Steinberg classification predicts future collapse in osteonecrosis of the femoral head (ONFH), quantifying these lesions on magnetic resonance imaging (MRI) generally requires considerable time and effort, preventing the Steinberg classification from being routinely used in clinical investigations. Thus, this study aimed to use deep learning to develop a method for automatically segmenting necrotic lesions on MRI and automatically classifying them according to the Steinberg classification. A total of 63 hips from patients who had ONFH without collapse were included. An orthopaedic surgeon manually segmented the femoral head and necrotic lesions on MRI acquired using a spoiled gradient-echo sequence. Based on manual segmentation, 22 hips were classified as Steinberg grade A, 23 as grade B, and 18 as grade C. The manually segmented labels were used to train a deep learning model based on a 5-layer Dynamic U-Net. Four-fold cross-validation was performed to assess segmentation accuracy using the Dice coefficient (DC) and average symmetric distance (ASD). Furthermore, hip classification accuracy according to the Steinberg classification was evaluated, along with the weighted kappa coefficient. The median DC and ASD for the femoral head region were 0.95 (interquartile range [IQR], 0.95 to 0.96) and 0.65 mm (IQR, 0.59 to 0.75), respectively. For necrotic lesions, the median DC and ASD were 0.89 (IQR, 0.85 to 0.92) and 0.76 mm (IQR, 0.58 to 0.96), respectively. The Steinberg grading matched in 59 hips (accuracy: 93.7%), with a weighted kappa coefficient of 0.98. The proposed deep learning model exhibited high accuracy in segmenting necrotic lesions and grading them according to the Steinberg classification on MRI, and can assist clinicians in the volumetric assessment of ONFH.
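
As a rough illustration of the evaluation metrics reported above (not the authors' code), the following sketch computes a Dice coefficient for binary lesion masks and a quadratically weighted kappa for Steinberg grades, assuming NumPy arrays and scikit-learn; all data are synthetic.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice overlap between two binary masks (1 = lesion, 0 = background)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Toy 3D masks standing in for predicted vs. manual lesion segmentations.
rng = np.random.default_rng(0)
truth = rng.random((32, 32, 32)) > 0.7
pred = truth.copy()
pred[0:4] = False  # simulate under-segmentation in a few slices
print(f"Dice: {dice_coefficient(pred, truth):.3f}")

# Agreement on Steinberg grades (A/B/C) across hips, quadratically weighted kappa.
manual_grades    = ["A", "A", "B", "B", "C", "C", "B", "A"]
predicted_grades = ["A", "A", "B", "C", "C", "C", "B", "A"]
print("Weighted kappa:", cohen_kappa_score(manual_grades, predicted_grades,
                                           weights="quadratic"))
```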

TissUnet: Improved Extracranial Tissue and Cranium Segmentation for Children through Adulthood

Markiian Mandzak, Elvira Yang, Anna Zapaishchykova, Yu-Hui Chen, Lucas Heilbroner, John Zielke, Divyanshu Tak, Reza Mojahed-Yazdi, Francesca Romana Mussa, Zezhong Ye, Sridhar Vajapeyam, Viviana Benitez, Ralph Salloum, Susan N. Chi, Houman Sotoudeh, Jakob Seidlitz, Sabine Mueller, Hugo J. W. L. Aerts, Tina Y. Poussaint, Benjamin H. Kann

arxiv preprint · Jun 6 2025
Extracranial tissues visible on brain magnetic resonance imaging (MRI) may hold significant value for characterizing health conditions and informing clinical decision-making, yet they are rarely quantified. Current tools have not been widely validated, particularly in settings involving developing brains or underlying pathology. We present TissUnet, a deep learning model that segments skull bone, subcutaneous fat, and muscle from routine three-dimensional T1-weighted MRI, with or without contrast enhancement. The model was trained on 155 paired MRI-computed tomography (CT) scans and validated across nine datasets covering a wide age range and including individuals with brain tumors. In comparison to AI-CT-derived labels from 37 MRI-CT pairs, TissUnet achieved a median Dice coefficient of 0.79 [IQR: 0.77-0.81] in a healthy adult cohort. In a second validation using expert manual annotations, the median Dice was 0.83 [IQR: 0.83-0.84] in healthy individuals and 0.81 [IQR: 0.78-0.83] in tumor cases, outperforming the previous state-of-the-art method. Acceptability testing resulted in an 89% acceptance rate after adjudication by a tie-breaker (N=108 MRIs), and TissUnet demonstrated excellent performance in the blinded comparative review (N=45 MRIs), which included both healthy and tumor cases in pediatric populations. TissUnet enables fast, accurate, and reproducible segmentation of extracranial tissues, supporting large-scale studies of craniofacial morphology, treatment effects, and cardiometabolic risk using standard brain T1w MRI.
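
TissUnet itself is not reproduced here, but the following hedged sketch shows the general shape of a 3D U-Net segmenting multiple tissue classes from a T1w volume, using MONAI; the architecture, channel sizes, and class count are illustrative assumptions, not TissUnet's actual configuration.

```python
import torch
from monai.networks.nets import UNet
from monai.inferers import sliding_window_inference

# Illustrative 3D U-Net; TissUnet's real architecture/weights are not shown here.
model = UNet(
    spatial_dims=3,
    in_channels=1,          # single T1-weighted volume
    out_channels=4,         # background, skull, subcutaneous fat, muscle (assumed)
    channels=(16, 32, 64, 128),
    strides=(2, 2, 2),
    num_res_units=2,
).eval()

t1w = torch.randn(1, 1, 128, 128, 128)  # stand-in for a preprocessed T1w MRI
with torch.no_grad():
    logits = sliding_window_inference(t1w, roi_size=(96, 96, 96),
                                      sw_batch_size=1, predictor=model)
labels = logits.argmax(dim=1)  # voxel-wise tissue label map
print(labels.shape)  # torch.Size([1, 128, 128, 128])
```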

Magnetic resonance imaging and the evaluation of vestibular schwannomas: a systematic review

Lee, K. S., Wijetilake, N., Connor, S., Vercauteren, T., Shapey, J.

medrxiv preprint · Jun 6 2025
Introduction: The assessment of vestibular schwannoma (VS) requires a standardized measurement approach, as growth is a key element in defining the treatment strategy for VS. Volumetric measurements offer higher sensitivity and precision, but existing segmentation methods are labour-intensive, lack standardisation, and are prone to variability and subjectivity. A core set of measurement indicators, reported consistently, would support clinical decision-making and facilitate evidence synthesis. This systematic review aimed to identify indicators used in (1) magnetic resonance imaging (MRI) acquisition, (2) measurement, and (3) growth assessment of VS. This work is expected to inform a Delphi consensus. Methods: Systematic searches of Medline, Embase, and Cochrane Central were undertaken on 4th October 2024. Studies that assessed the evaluation of VS with MRI between 2014 and 2024 were included. Results: The final dataset consisted of 102 studies and 19,001 patients. Eighty-six (84.3%) studies employed post-contrast T1 as the MRI acquisition of choice for evaluating VS. Nine (8.8%) studies additionally employed heavily T2-weighted sequences such as constructive interference in steady state (CISS) and FIESTA-C. Only 45 (44.1%) studies reported slice thickness, with the majority (38; 84.4%) choosing <3 mm. Fifty-eight (56.8%) studies measured volume, whilst 49 (48.0%) measured the largest linear dimension; 14 (13.7%) studies used both measurements. Four studies employed semi-automated or automated segmentation to measure VS volumes. Of 68 studies investigating growth, 54 (79.4%) provided a threshold. Significant variation in volumetric growth thresholds was observed, but the threshold for significant percentage change reported by most studies was 20% (n = 18). Conclusion: Substantial variation in MRI acquisition, and in methods for evaluating measurement and growth of VS, exists across the literature. This lack of standardization is likely attributable to resource constraints and to the fact that currently available volumetric segmentation methods are highly labour-intensive. Having identified the indicators employed in the literature, this study will inform a Delphi consensus on the standardized measurement of VS and encourage uptake of data-driven, artificial intelligence-based measurement tools.
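
Most of the reviewed studies that defined significant growth used a 20% volumetric change threshold. A minimal sketch of that criterion (threshold from the review; volumes are illustrative):

```python
def percent_volume_change(v_baseline_mm3: float, v_followup_mm3: float) -> float:
    """Percentage change in tumor volume relative to baseline."""
    return 100.0 * (v_followup_mm3 - v_baseline_mm3) / v_baseline_mm3

GROWTH_THRESHOLD_PCT = 20.0  # most common threshold among the reviewed studies

baseline, followup = 1500.0, 1890.0  # mm^3, illustrative values
change = percent_volume_change(baseline, followup)
verdict = "significant growth" if change >= GROWTH_THRESHOLD_PCT else "stable"
print(f"{change:+.1f}% -> {verdict}")
```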

Deep learning-enabled MRI phenotyping uncovers regional body composition heterogeneity and disease associations in two European population cohorts

Mertens, C. J., Haentze, H., Ziegelmayer, S., Kather, J. N., Truhn, D., Kim, S. H., Busch, F., Weller, D., Wiestler, B., Graf, M., Bamberg, F., Schlett, C. L., Weiss, J. B., Ringhof, S., Can, E., Schulz-Menger, J., Niendorf, T., Lammert, J., Molwitz, I., Kader, A., Hering, A., Meddeb, A., Nawabi, J., Schulze, M. B., Keil, T., Willich, S. N., Krist, L., Hadamitzky, M., Hannemann, A., Bassermann, F., Rueckert, D., Pischon, T., Hapfelmeier, A., Makowski, M. R., Bressem, K. K., Adams, L. C.

medrxiv preprint · Jun 6 2025
Body mass index (BMI) does not account for substantial inter-individual differences in regional fat and muscle compartments, which are relevant for the prevalence of cardiometabolic and cancer conditions. We applied a validated deep learning pipeline for automated segmentation of whole-body MRI scans in 45,851 adults from the UK Biobank and German National Cohort, enabling harmonized quantification of visceral (VAT), gluteofemoral (GFAT), and abdominal subcutaneous adipose tissue (ASAT), liver fat fraction (LFF), and trunk muscle volume. Associations with clinical conditions were evaluated using compartment measures adjusted for age, sex, height, and BMI. Our analysis demonstrates that regional adiposity and muscle volume show distinct associations with cardiometabolic and cancer prevalence, and that substantial disease heterogeneity exists within BMI strata. The analytic framework and reference data presented here will support future risk stratification efforts and facilitate the integration of automated MRI phenotyping into large-scale population and clinical research.
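
The association analysis described above can be illustrated with a hedged sketch: a logistic regression of disease status on one compartment measure, adjusted for age, sex, height, and BMI, using statsmodels on synthetic data. Variable names, effect sizes, and the disease outcome are assumptions for illustration, not cohort values.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in for cohort data; the real UKB/NAKO data are not public.
rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "age":    rng.normal(55, 10, n),
    "sex":    rng.integers(0, 2, n),   # 0 = female, 1 = male
    "height": rng.normal(170, 9, n),
    "bmi":    rng.normal(27, 4, n),
    "vat_l":  rng.gamma(4, 1.2, n),    # visceral adipose tissue volume (litres)
})
# Disease prevalence loosely driven by VAT, independent of BMI, for illustration.
logit = -6 + 0.5 * df["vat_l"] + 0.03 * df["age"]
df["t2d"] = rng.random(n) < 1 / (1 + np.exp(-logit))

# Logistic regression: VAT vs. disease, adjusted for age, sex, height, and BMI.
X = sm.add_constant(df[["vat_l", "age", "sex", "height", "bmi"]])
fit = sm.Logit(df["t2d"].astype(float), X).fit(disp=0)
print(np.exp(fit.params["vat_l"]))  # odds ratio per litre of VAT
```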

Detecting neurodegenerative changes in glaucoma using deep mean kurtosis-curve-corrected tractometry

Kasa, L. W., Schierding, W., Kwon, E., Holdsworth, S., Danesh-Meyer, H. V.

medrxiv preprint · Jun 6 2025
Glaucoma is increasingly recognized as a neurodegenerative condition involving both retinal and central nervous system structures. Here, we present an integrated framework that combines MK-Curve-corrected diffusion kurtosis imaging (DKI), tractometry, and deep autoencoder-based normative modeling to detect localized white matter abnormalities associated with glaucoma. Using UK Biobank diffusion MRI data, we show that the MK-Curve approach corrects anatomically implausible values and improves the reliability of DKI metrics, particularly mean (MK), radial (RK), and axial (AK) kurtosis, in regions of complex fiber architecture. Tractometry revealed reduced MK in glaucoma patients along the optic radiation, inferior longitudinal fasciculus, and inferior fronto-occipital fasciculus, but not in a non-visual control tract, supporting disease specificity. These abnormalities were spatially localized, with significant changes observed at multiple points along the tracts. MK demonstrated greater sensitivity than mean diffusivity (MD) and exhibited altered distributional features, reflecting microstructural heterogeneity not captured by standard metrics. Node-wise MK values in the right optic radiation showed weak but significant correlations with retinal optical coherence tomography (OCT) measures (ganglion cell layer and retinal nerve fiber layer thickness), reinforcing the biological relevance of these findings. Deep autoencoder-based modeling further enabled subject-level anomaly detection that aligned spatially with group-level changes and outperformed traditional approaches. Together, our results highlight the potential of advanced diffusion modeling and deep learning for sensitive, individualized detection of glaucomatous neurodegeneration and support their integration into future multimodal imaging pipelines in neuro-ophthalmology.
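
The normative-modeling idea above, train an autoencoder on healthy subjects only and score patients by reconstruction error, can be sketched as follows. This is a toy PyTorch illustration, not the authors' architecture; the node count, layer sizes, and synthetic MK profiles are assumptions.

```python
import torch
import torch.nn as nn

# Minimal autoencoder over tract profiles: 50 node-wise MK values per subject.
class TractAE(nn.Module):
    def __init__(self, n_nodes: int = 50, latent: int = 8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_nodes, 32), nn.ReLU(),
                                 nn.Linear(32, latent))
        self.dec = nn.Sequential(nn.Linear(latent, 32), nn.ReLU(),
                                 nn.Linear(32, n_nodes))

    def forward(self, x):
        return self.dec(self.enc(x))

torch.manual_seed(0)
model = TractAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
controls = torch.randn(500, 50) * 0.1 + 1.0   # synthetic healthy MK profiles

for _ in range(200):  # train on controls only (normative modeling)
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(controls), controls)
    loss.backward()
    opt.step()

# Subject-level anomaly score: reconstruction error z-scored against controls.
with torch.no_grad():
    ctrl_err = ((model(controls) - controls) ** 2).mean(dim=1)
    subject = controls[0] * 0.85                  # simulate globally reduced MK
    subj_err = ((model(subject) - subject) ** 2).mean()
z = (subj_err - ctrl_err.mean()) / ctrl_err.std()
print(f"anomaly z-score: {z.item():.2f}")
```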

High-definition motion-resolved MRI using 3D radial kooshball acquisition and deep learning spatial-temporal 4D reconstruction.

Murray V, Wu C, Otazo R

pubmed papers · Jun 5 2025
Objective: To develop motion-resolved volumetric MRI with 1.1 mm isotropic resolution and scan times under 5 minutes, using a combination of 3D radial kooshball acquisition and spatial-temporal deep learning 4D reconstruction for free-breathing high-definition lung MRI. Approach: Free-breathing lung MRI was conducted on eight healthy volunteers and ten patients with lung tumors on a 3T MRI scanner, using a 3D radial kooshball sequence with half-spoke (ultrashort echo time, UTE, TE=0.12 ms) and full-spoke (T1-weighted, TE=1.55 ms) acquisitions. Data were motion-sorted using amplitude binning on a respiratory motion signal. Two high-definition Movienet (HD-Movienet) deep learning models were proposed to reconstruct 3D radial kooshball data: slice-by-slice reconstruction in the coronal orientation using 2D convolutional kernels (2D-based HD-Movienet) and reconstruction on blocks of eight coronal slices using 3D convolutional kernels (3D-based HD-Movienet). Two applications were considered: (a) anatomical imaging at expiration and inspiration with four motion states and a scan time of 2 minutes, and (b) dynamic motion imaging with 10 motion states and a scan time of 4 minutes. Training was performed using XD-GRASP 4D images reconstructed from 4.5-minute and 6.5-minute acquisitions as references. Main results: 2D-based HD-Movienet achieved a reconstruction time of under 6 seconds, significantly faster than iterative XD-GRASP reconstruction (over 10 minutes with GPU optimization), while maintaining image quality comparable to XD-GRASP with two extra minutes of scan time. 3D-based HD-Movienet improved reconstruction quality at the expense of longer reconstruction times (under 11 seconds). Significance: HD-Movienet demonstrates the feasibility of motion-resolved 4D MRI with 1.1 mm isotropic resolution and scan times of only 2 minutes for four motion states and 4 minutes for 10 motion states, marking a significant advancement in clinical free-breathing lung MRI.
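
The amplitude-binning step in the Approach can be sketched as follows: each radial spoke is assigned to a motion state according to the amplitude of a respiratory signal. This is a simplified NumPy illustration, not the authors' implementation; the quantile-based bin edges and the synthetic respiratory trace are assumptions.

```python
import numpy as np

def amplitude_bin(resp_signal: np.ndarray, n_states: int) -> np.ndarray:
    """Assign each readout (spoke) to a motion state by respiratory amplitude.

    Bin edges are amplitude quantiles, so each state receives roughly the
    same number of spokes, a common choice for amplitude-binned 4D MRI.
    """
    edges = np.quantile(resp_signal, np.linspace(0, 1, n_states + 1))
    states = np.searchsorted(edges, resp_signal, side="right") - 1
    return np.clip(states, 0, n_states - 1)

# Synthetic respiratory trace covering ~2 min of acquisition (illustrative).
t = np.linspace(0, 120, 6000)  # one entry per radial spoke
resp = (np.sin(2 * np.pi * t / 4.0)
        + 0.05 * np.random.default_rng(2).normal(size=t.size))

states = amplitude_bin(resp, n_states=4)  # expiration ... inspiration
print(np.bincount(states))                # spokes per motion state
```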

Multitask deep learning model based on multimodal data for predicting prognosis of rectal cancer: a multicenter retrospective study.

Ma Q, Meng R, Li R, Dai L, Shen F, Yuan J, Sun D, Li M, Fu C, Li R, Feng F, Li Y, Tong T, Gu Y, Sun Y, Shen D

pubmed papers · Jun 5 2025
Prognostic prediction is crucial for guiding individualized treatment of patients with rectal cancer. We aimed to develop and validate a multitask deep learning model for predicting prognosis in rectal cancer patients. This retrospective study enrolled 321 rectal cancer patients (training set: 212; internal testing set: 53; external testing set: 56) who received upfront total mesorectal excision at five hospitals between March 2014 and April 2021. A multitask deep learning model was developed to simultaneously predict recurrence/metastasis and disease-free survival (DFS). The model integrated clinicopathologic data and multiparametric magnetic resonance imaging (MRI) images, including diffusion kurtosis imaging (DKI), without requiring tumor segmentation. The receiver operating characteristic (ROC) curve and Harrell's concordance index (C-index) were used to evaluate the predictive performance of the proposed model. The deep learning model achieved good discrimination of recurrence/metastasis, with area under the curve (AUC) values of 0.885, 0.846, and 0.797 in the training, internal testing, and external testing sets, respectively. Furthermore, the model successfully predicted DFS in the training set (C-index: 0.812), internal testing set (C-index: 0.794), and external testing set (C-index: 0.733), and classified patients into significantly distinct high- and low-risk groups (p < 0.05). The multitask deep learning model, incorporating clinicopathologic data and multiparametric MRI, effectively predicted both recurrence/metastasis and survival for patients with rectal cancer. It has the potential to be an essential tool for risk stratification and to assist in individualized treatment decisions. Trial registration: not applicable.
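
Harrell's C-index used above measures how often the model ranks patient risks consistently with observed disease-free survival. A simplified sketch (this version ignores pairs with tied event times; all data are illustrative):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def concordance_index(time, event, risk):
    """Harrell's C-index: fraction of comparable pairs in which the higher-risk
    patient has the shorter observed survival. Ties in risk count as 0.5."""
    time, event, risk = map(np.asarray, (time, event, risk))
    num, den = 0.0, 0.0
    for i in range(len(time)):
        if not event[i]:
            continue  # patient i must have an observed event to anchor a pair
        comparable = time > time[i]
        den += comparable.sum()
        num += (risk[i] > risk[comparable]).sum()
        num += 0.5 * (risk[i] == risk[comparable]).sum()
    return num / den

# Illustrative predictions from a prognostic model.
time  = [12, 30, 24, 60, 48, 36, 18, 54]   # months of follow-up
event = [1,  0,  1,  0,  1,  0,  1,  0]    # 1 = recurrence/metastasis observed
risk  = [0.9, 0.2, 0.7, 0.1, 0.4, 0.3, 0.8, 0.2]

print(f"C-index: {concordance_index(time, event, risk):.3f}")
print(f"AUC (event vs. no event): {roc_auc_score(event, risk):.3f}")
```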

Epistasis regulates genetic control of cardiac hypertrophy.

Wang Q, Tang TM, Youlton M, Weldy CS, Kenney AM, Ronen O, Hughes JW, Chin ET, Sutton SC, Agarwal A, Li X, Behr M, Kumbier K, Moravec CS, Tang WHW, Margulies KB, Cappola TP, Butte AJ, Arnaout R, Brown JB, Priest JR, Parikh VN, Yu B, Ashley EA

pubmed papers · Jun 5 2025
Although genetic variant effects often interact nonadditively, strategies to uncover epistasis remain in their infancy. Here we develop low-signal signed iterative random forests to elucidate the complex genetic architecture of cardiac hypertrophy, using deep learning-derived left ventricular mass estimates from 29,661 UK Biobank cardiac magnetic resonance images. We report epistatic variants near CCDC141, IGF1R, TTN and TNKS, identifying loci deemed insignificant in genome-wide association studies. Functional genomic and integrative enrichment analyses reveal that genes mapped from these loci share biological process gene ontologies and myogenic regulatory factors. Transcriptomic network analyses using 313 human hearts demonstrate strong co-expression correlations among these genes in healthy hearts, with significantly reduced connectivity in failing hearts. To assess causality, RNA silencing in human induced pluripotent stem cell-derived cardiomyocytes, combined with novel microfluidic single-cell morphology analysis, confirms that cardiomyocyte hypertrophy is nonadditively modifiable by interactions between CCDC141, TTN and IGF1R. Our results expand the scope of cardiac genetic regulation to epistasis.
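­
The study's low-signal signed iterative random forests are not reproduced here, but the core notion of epistasis, a nonadditive interaction between variants, can be illustrated with an ordinary least-squares interaction test on synthetic genotypes. The gene names in the comments are placeholders for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic genotypes (0/1/2 minor-allele counts) with a purely nonadditive
# effect on a quantitative trait, standing in for LV mass (illustrative only;
# the study's actual method, lo-siRF, is considerably more involved).
rng = np.random.default_rng(3)
n = 5000
g1 = rng.integers(0, 3, n)   # e.g., a variant near CCDC141 (placeholder)
g2 = rng.integers(0, 3, n)   # e.g., a variant near IGF1R (placeholder)
trait = 0.4 * g1 * g2 + rng.normal(0, 1, n)  # interaction-only signal

df = pd.DataFrame({"g1": g1, "g2": g2, "lv_mass": trait})
fit = smf.ols("lv_mass ~ g1 + g2 + g1:g2", data=df).fit()
print(fit.pvalues[["g1", "g2", "g1:g2"]])  # the interaction term carries the signal
```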

Matrix completion-informed deep unfolded equilibrium models for self-supervised k-space interpolation in MRI.

Luo C, Wang H, Liu Y, Xie T, Chen G, Jin Q, Liang D, Cui ZX

pubmed papers · Jun 5 2025
Self-supervised methods for magnetic resonance imaging (MRI) reconstruction have garnered significant interest due to their ability to address the challenges of slow data acquisition and the scarcity of fully sampled labels. Current regularization-based self-supervised techniques merge the theoretical foundations of regularization with the representational strengths of deep learning and enable effective reconstruction at higher acceleration rates, yet they often fall short in interpretability and lack firm theoretical underpinnings. In this paper, we introduce a novel self-supervised approach that provides stringent theoretical guarantees and interpretable networks while circumventing the need for fully sampled labels. Our method exploits the intrinsic relationship between convolutional neural networks and the null space within structured low-rank models, effectively integrating network parameters into an iterative reconstruction process. The network learns the gradient descent steps of a projected gradient descent algorithm without changing its convergence properties, implementing a fully interpretable unfolded model. We design a non-expansive mapping for the network architecture, ensuring convergence to a fixed point. This well-defined framework enables complete reconstruction of missing k-space data grounded in matrix completion theory, independent of fully sampled labels. Qualitative and quantitative experiments on multi-coil MRI reconstruction demonstrate the efficacy of our self-supervised approach, showing marked improvements over existing self-supervised and traditional regularization methods and achieving results comparable to supervised learning in selected scenarios. Our method surpasses existing self-supervised approaches in reconstruction quality and also delivers competitive performance under supervised settings. This work not only advances the state of the art in MRI reconstruction but also enhances interpretability in deep learning applications for medical imaging.
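
The paper's unfolded equilibrium model is not reproduced here; the following NumPy sketch shows only the underlying matrix-completion idea it builds on: projected gradient descent that alternates a data-consistency step with projection onto a fixed-rank set. The synthetic matrix and the hard-rank projection are assumptions; the actual method operates on structured low-rank k-space models.

```python
import numpy as np

def complete_low_rank(M, mask, rank, n_iter=200):
    """Projected gradient descent for matrix completion: alternate a
    data-consistency step with projection onto rank-`rank` matrices via SVD."""
    X = np.where(mask, M, 0.0)
    for _ in range(n_iter):
        X = np.where(mask, M, X)              # enforce the sampled entries
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s[rank:] = 0.0                        # project onto the low-rank set
        X = (U * s) @ Vt
    return X

# Synthetic rank-3 "k-space-like" matrix, with 40% of entries sampled.
rng = np.random.default_rng(4)
A, B = rng.normal(size=(64, 3)), rng.normal(size=(3, 64))
M = A @ B
mask = rng.random(M.shape) < 0.4

X = complete_low_rank(M, mask, rank=3)
err = np.linalg.norm((X - M)[~mask]) / np.linalg.norm(M[~mask])
print(f"relative error on unsampled entries: {err:.3e}")
```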

Automated Brain Tumor Classification and Grading Using Multi-scale Graph Neural Network with Spatio-Temporal Transformer Attention Through MRI Scans.

Srivastava S, Jain P, Pandey SK, Dubey G, Das NN

pubmed papers · Jun 5 2025
Magnetic resonance imaging (MRI) is an essential diagnostic tool that provides clinicians with non-invasive images of brain structures and pathological conditions. Brain tumor detection is a vital application that requires specific and effective approaches for both diagnosis and treatment planning. Manual examination of MRI scans is challenged by inconsistent tumor features, including heterogeneity and irregular dimensions, which can lead to inaccurate assessments of tumor size. To address these challenges, this paper proposes an Automated Classification and Grading Diagnosis Model (ACGDM) using MRI images. Unlike conventional methods, ACGDM introduces a Multi-Scale Graph Neural Network (MSGNN), which dynamically captures hierarchical and multi-scale dependencies in MRI data, enabling more accurate feature representation and contextual analysis. Additionally, the Spatio-Temporal Transformer Attention Mechanism (STTAM) models both spatial MRI patterns and temporal evolution by incorporating cross-frame dependencies, enhancing the model's sensitivity to subtle disease progression. By analyzing multi-modal MRI sequences, ACGDM dynamically adjusts its focus across spatial and temporal dimensions, enabling precise identification of salient features. Simulations are conducted using Python and standard libraries to evaluate the model on the BraTS 2018, 2019, and 2020 datasets and the Br35H dataset, encompassing diverse MRI scans with expert annotations. Extensive experimentation demonstrates 99.8% accuracy in detecting various tumor types, showcasing the model's potential to improve diagnostic practice and patient outcomes.
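
Neither MSGNN nor STTAM is fully specified in this abstract. As a hedged sketch of the general building block behind graph-attention models like this one, here is a minimal single-head graph attention layer in PyTorch; the toy graph, feature dimensions, and ring adjacency are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class GraphAttentionLayer(nn.Module):
    """Single-head graph attention over node features (toy sketch; the paper's
    multi-scale MSGNN and spatio-temporal STTAM modules are not shown)."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)
        self.attn = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        h = self.proj(x)                               # (N, out_dim)
        n = h.size(0)
        pairs = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                           h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        scores = self.attn(pairs).squeeze(-1)          # (N, N) attention logits
        scores = scores.masked_fill(adj == 0, float("-inf"))
        weights = torch.softmax(scores, dim=-1)        # attend only to neighbors
        return torch.relu(weights @ h)

# Toy graph: 6 image-patch nodes with 16-d features, ring adjacency + self-loops.
x = torch.randn(6, 16)
adj = torch.eye(6) + torch.diag(torch.ones(5), 1) + torch.diag(torch.ones(5), -1)
adj[0, 5] = adj[5, 0] = 1
out = GraphAttentionLayer(16, 8)(x, adj)
print(out.shape)  # torch.Size([6, 8])
```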