Page 43 of 3053046 results

Optimization strategy for fat-suppressed T2-weighted images in liver imaging: The combined application of AI-assisted compressed sensing and respiratory triggering.

Feng M, Li S, Song X, Mao W, Liu Y, Yuan Z

pubmed logopapers | Aug 1 2025
This study aimed to optimize the imaging time and image quality of fat-suppressed T2-weighted imaging (T2WI-FS) through the integration of Artificial Intelligence-Assisted Compressed Sensing (ACS) and respiratory triggering (RT). A prospective cohort study was conducted on 134 patients (99 males, 35 females; average age: 57.93 ± 9.40 years) undergoing liver MRI between March and July 2024. All patients were scanned with both breath-hold ACS-assisted T2WI (BH-ACS-T2WI) and respiratory-triggered ACS-assisted T2WI (RT-ACS-T2WI) sequences. Two experienced radiologists retrospectively analyzed regions of interest (ROIs), recorded primary lesions, and assessed key metrics including signal intensity (SI), standard deviation (SD), signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), motion artifacts, hepatic vessel clarity, liver edge sharpness, lesion conspicuity, and overall image quality. Statistical comparisons were conducted using the Mann-Whitney U test, the Wilcoxon signed-rank test, and the intraclass correlation coefficient (ICC). Compared to BH-ACS-T2WI, RT-ACS-T2WI significantly reduced average imaging time from 38 s to 22.91 ± 3.36 s, a 40% reduction in scan duration. RT-ACS-T2WI also demonstrated superior performance across multiple parameters, including SI, SD, SNR, CNR, motion artifact reduction, hepatic vessel clarity, liver edge sharpness, lesion conspicuity (≤5 mm), and overall image quality (P < 0.05). Notably, the lesion detection rate was slightly higher with RT-ACS-T2WI (94%) than with BH-ACS-T2WI (90%). The RT-ACS-T2WI sequence not only enhanced image quality but also reduced imaging time to approximately 23 s, making it particularly beneficial for patients unable to perform prolonged breath-holding maneuvers. This approach represents a promising advancement in optimizing liver MRI protocols.
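The abstract's exact formulas for SNR and CNR are not stated; a minimal sketch of the conventional ROI-based definitions, with illustrative numbers that are not from the study:

```python
# Hedged sketch: conventional ROI-based definitions of the SNR and CNR metrics
# reported above (the study's exact formulas are not given in the abstract).

def snr(si_tissue: float, sd_noise: float) -> float:
    """Signal-to-noise ratio: mean ROI signal over noise standard deviation."""
    return si_tissue / sd_noise

def cnr(si_lesion: float, si_background: float, sd_noise: float) -> float:
    """Contrast-to-noise ratio: absolute signal difference over noise SD."""
    return abs(si_lesion - si_background) / sd_noise

# Illustrative values only, not from the study.
print(snr(500.0, 25.0))         # 20.0
print(cnr(500.0, 350.0, 25.0))  # 6.0
```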

Evaluation of calcaneal inclination angle in the diagnosis of pes planus with pretrained deep learning networks: An observational study.

Aktas E, Ceylan N, Yaltirik Bilgin E, Bilgin E, Ince L

pubmed logopapers | Aug 1 2025
Pes planus is a common postural deformity involving the medial longitudinal arch of the foot. Radiographic examinations are important for reproducibility and objectivity; the most commonly used measurements are the calcaneal inclination angle and the Meary angle. However, radiographic measurements can vary due to human error and inexperience. In this study, a deep learning (DL)-based solution is proposed to address this problem. Lateral radiographs of the right and left feet of 289 patients were acquired and saved. The study population is homogeneous in age and gender and does not provide sufficient heterogeneity to represent the general population. These radiography (X-ray) images were measured by 2 different experts and the measurements were recorded. Based on these measurements, each X-ray image was labeled as pes planus or non-pes planus. The images were then filtered and resized using Gaussian blurring and median filtering, yielding 2 separate datasets. Widely used DL models (AlexNet, GoogLeNet, SqueezeNet) were reconstructed to classify these images. The 2-category (pes planus/non-pes planus) data in the 2 preprocessed and resized datasets were classified by fine-tuning these reconstructed transfer learning networks. The GoogLeNet and SqueezeNet models achieved 100% accuracy, while AlexNet achieved 92.98% accuracy. These results show that the models' predictions overlap to a large extent with the measurements of expert radiologists. DL-based diagnostic methods can be used as a decision support system in the diagnosis of pes planus. DL algorithms enhance the consistency of the diagnostic process by reducing measurement variation between observers. DL systems accelerate diagnosis by automatically performing angle measurements from X-ray images, which is particularly beneficial in busy clinical settings.
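As a geometric sketch of the labeling step described above: the inclination angle can be derived from two calcaneal landmarks on a lateral radiograph and thresholded. The landmark choice and the 18-degree cutoff below are illustrative assumptions, not the study's exact protocol.

```python
import math

# Hedged sketch: deriving a pes planus label from two calcaneal landmarks.
# Landmark placement and the 18-degree threshold are illustrative assumptions.

def calcaneal_inclination_angle(p_posterior, p_anterior) -> float:
    """Angle (degrees) between the calcaneal baseline and the horizontal.

    Points are (x, y) in image coordinates, where y increases downward.
    """
    dx = p_anterior[0] - p_posterior[0]
    dy = p_posterior[1] - p_anterior[1]  # flip sign: image y grows downward
    return math.degrees(math.atan2(dy, dx))

def label_pes_planus(angle_deg: float, threshold: float = 18.0) -> str:
    return "pes planus" if angle_deg < threshold else "non-pes planus"

angle = calcaneal_inclination_angle((100, 400), (300, 350))
print(round(angle, 2), label_pes_planus(angle))  # atan2(50, 200) ≈ 14.04 deg
```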
DL models integrated with smartphone cameras can facilitate the diagnosis of pes planus and serve as a screening tool, especially in regions with limited access to healthcare.

Light Convolutional Neural Network to Detect Chronic Obstructive Pulmonary Disease (COPDxNet): A Multicenter Model Development and External Validation Study.

Rabby ASA, Chaudhary MFA, Saha P, Sthanam V, Nakhmani A, Zhang C, Barr RG, Bon J, Cooper CB, Curtis JL, Hoffman EA, Paine R, Puliyakote AK, Schroeder JD, Sieren JC, Smith BM, Woodruff PG, Reinhardt JM, Bhatt SP, Bodduluri S

pubmed logopapers | Aug 1 2025
Approximately 70% of adults with chronic obstructive pulmonary disease (COPD) remain undiagnosed. Opportunistic screening using chest computed tomography (CT) scans, commonly acquired in clinical practice, may improve COPD detection through simple, clinically applicable deep-learning models. We developed a lightweight convolutional neural network (COPDxNet) that utilizes minimally processed chest CT scans to detect COPD. We analyzed 13,043 inspiratory chest CT scans from COPDGene participants (9,675 standard-dose and 3,368 low-dose scans), which we randomly split into training (70%) and test (30%) sets at the participant level so that no individual contributed to both sets. COPD was defined by postbronchodilator FEV1/FVC < 0.70. We constructed a simple, four-block convolutional model that was trained on pooled data and validated on the held-out standard- and low-dose test sets. External validation was performed using standard-dose CT scans from 2,890 SPIROMICS participants and low-dose CT scans from 7,893 participants in the National Lung Screening Trial (NLST). We evaluated performance using the area under the receiver operating characteristic curve (AUC), sensitivity, specificity, Brier scores, and calibration curves. On COPDGene standard-dose CT scans, COPDxNet achieved an AUC of 0.92 (95% CI: 0.91 to 0.93), sensitivity of 80.2%, and specificity of 89.4%. On low-dose scans, AUC was 0.88 (95% CI: 0.86 to 0.90). When the COPDxNet model was applied to external validation datasets, it showed an AUC of 0.92 (95% CI: 0.91 to 0.93) in SPIROMICS and 0.82 (95% CI: 0.81 to 0.83) in NLST. The model was well calibrated, with Brier scores of 0.11 for standard-dose and 0.13 for low-dose CT scans in COPDGene, 0.12 in SPIROMICS, and 0.17 in NLST.
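The evaluation metrics named above can be sketched in plain Python. The input labels and probabilities below are illustrative, and thresholding predictions at 0.5 is an assumption, not the study's operating point.

```python
# Hedged sketch of the reported metrics: Brier score, sensitivity, specificity.

def brier_score(y_true, y_prob):
    """Mean squared difference between predicted probability and outcome."""
    return sum((p - y) ** 2 for y, p in zip(y_true, y_prob)) / len(y_true)

def sensitivity_specificity(y_true, y_prob, threshold=0.5):
    tp = sum(1 for y, p in zip(y_true, y_prob) if y == 1 and p >= threshold)
    fn = sum(1 for y, p in zip(y_true, y_prob) if y == 1 and p < threshold)
    tn = sum(1 for y, p in zip(y_true, y_prob) if y == 0 and p < threshold)
    fp = sum(1 for y, p in zip(y_true, y_prob) if y == 0 and p >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative toy data, not from the study.
y_true = [1, 1, 0, 0, 1, 0]
y_prob = [0.9, 0.4, 0.2, 0.6, 0.8, 0.1]
print(brier_score(y_true, y_prob))
print(sensitivity_specificity(y_true, y_prob))
```

Lower Brier scores indicate better-calibrated probabilities; a perfectly calibrated, perfectly discriminating model would score 0.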
COPDxNet demonstrates high discriminative accuracy and generalizability for detecting COPD on standard- and low-dose chest CT scans, supporting its potential for clinical and screening applications across diverse populations.

MR-AIV reveals <i>in vivo</i> brain-wide fluid flow with physics-informed AI.

Toscano JD, Guo Y, Wang Z, Vaezi M, Mori Y, Karniadakis GE, Boster KAS, Kelley DH

pubmed logopapers | Aug 1 2025
The circulation of cerebrospinal and interstitial fluid plays a vital role in clearing metabolic waste from the brain, and its disruption has been linked to neurological disorders. However, directly measuring brain-wide fluid transport-especially in the deep brain-has remained elusive. Here, we introduce magnetic resonance artificial intelligence velocimetry (MR-AIV), a framework featuring a specialized physics-informed architecture and optimization method that reconstructs three-dimensional fluid velocity fields from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). MR-AIV unveils brain-wide velocity maps while providing estimates of tissue permeability and pressure fields-quantities inaccessible to other methods. Applied to the brain, MR-AIV reveals a functional landscape of interstitial and perivascular flow, quantitatively distinguishing slow diffusion-driven transport (∼ 0.1 µm/s) from rapid advective flow (∼ 3 µm/s). This approach enables new investigations into brain clearance mechanisms and fluid dynamics in health and disease, with broad potential applications to other porous media systems, from geophysics to tissue mechanics.
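A back-of-envelope comparison makes the reported speed difference concrete: the ~0.1 µm/s and ~3 µm/s figures come from the abstract, while the 1 mm transport distance is an illustrative assumption.

```python
# Hedged back-of-envelope sketch: time to carry a solute a distance L at the
# effective speeds reported above (t = L / v). The 1 mm distance is assumed.

MICRON = 1e-6  # meters

def transport_time_s(distance_m: float, speed_m_per_s: float) -> float:
    return distance_m / speed_m_per_s

L = 1e-3  # 1 mm of tissue (illustrative)
t_diffusive = transport_time_s(L, 0.1 * MICRON)  # ~0.1 um/s diffusion-driven
t_advective = transport_time_s(L, 3.0 * MICRON)  # ~3 um/s advective flow

print(t_diffusive / 3600, "hours")   # roughly 2.8 hours
print(t_advective / 60, "minutes")   # roughly 5.6 minutes
```

The 30-fold speed gap translates directly into a 30-fold difference in clearance timescales over the same distance.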

Enhanced Detection, Using Deep Learning Technology, of Medial Meniscal Posterior Horn Ramp Lesions in Patients with ACL Injury.

Park HJ, Ham S, Shim E, Suh DH, Kim JG

pubmed logopapers | Jul 31 2025
Meniscal ramp lesions can impact knee stability, particularly when associated with anterior cruciate ligament (ACL) injuries. Although magnetic resonance imaging (MRI) is the primary diagnostic tool, its diagnostic accuracy remains suboptimal. We aimed to determine whether deep learning technology could enhance MRI-based ramp lesion detection. We reviewed the records of 236 patients who underwent arthroscopic procedures documenting ACL injuries and the status of the medial meniscal posterior horn. A deep learning model was developed using MRI data for ramp lesion detection. Ramp lesion risk factors among patients who underwent ACL reconstruction were analyzed using logistic regression, extreme gradient boosting (XGBoost), and random forest models and were integrated into a final prediction model using Swin Transformer Large architecture. The deep learning model using MRI data demonstrated superior overall diagnostic performance to the clinicians' assessment (accuracy of 73.3% compared with 68.1%, specificity of 78.0% compared with 62.9%, and sensitivity of 64.7% compared with 76.4%). Incorporating risk factors (age, posteromedial tibial bone marrow edema, and lateral meniscal tears) improved the model's accuracy to 80.7%, with a sensitivity of 81.8% and a specificity of 80.9%. Integrating deep learning with MRI data and risk factors significantly enhanced diagnostic accuracy for ramp lesions, surpassing that of the model using MRI alone and that of clinicians. This study highlights the potential of artificial intelligence to provide clinicians with more accurate diagnostic tools for detecting ramp lesions, potentially enhancing treatment and patient outcomes. Diagnostic Level III. See Instructions for Authors for a complete description of levels of evidence.
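The fusion of the imaging model's output with clinical risk factors can be sketched as a logistic combination. Every coefficient below is made up for illustration; the study's fitted weights are not reported in the abstract.

```python
import math

# Hedged sketch: combining a deep-learning image probability with the clinical
# risk factors named above (age, posteromedial tibial bone marrow edema,
# lateral meniscal tear) via a logistic model. All weights are hypothetical.

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def ramp_lesion_probability(img_prob: float, age: float,
                            pm_tibial_edema: bool,
                            lateral_meniscal_tear: bool) -> float:
    z = (-3.0                          # intercept (assumed)
         + 4.0 * img_prob              # image-model output (assumed weight)
         - 0.02 * age                  # assumed sign and magnitude
         + 1.2 * pm_tibial_edema       # assumed weight
         + 0.8 * lateral_meniscal_tear)  # assumed weight
    return sigmoid(z)

p = ramp_lesion_probability(0.7, 25, True, False)
print(round(p, 3))
```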

SAM-Med3D: A Vision Foundation Model for General-Purpose Segmentation on Volumetric Medical Images.

Wang H, Guo S, Ye J, Deng Z, Cheng J, Li T, Chen J, Su Y, Huang Z, Shen Y, Fu B, Zhang S, He J

pubmed logopapers | Jul 31 2025
Existing volumetric medical image segmentation models are typically task-specific, excelling at specific targets but struggling to generalize across anatomical structures or modalities. This limitation restricts their broader clinical use. In this article, we introduce segment anything model (SAM)-Med3D, a vision foundation model (VFM) for general-purpose segmentation on volumetric medical images. Given only a few 3-D prompt points, SAM-Med3D can accurately segment diverse anatomical structures and lesions across various modalities. To achieve this, we gather and preprocess a large-scale 3-D medical image segmentation dataset, SA-Med3D-140K, from 70 public datasets and 8K licensed private cases from hospitals. This dataset includes 22K 3-D images and 143K corresponding masks. SAM-Med3D, a promptable segmentation model characterized by its fully learnable 3-D structure, is trained on this dataset using a two-stage procedure and exhibits impressive performance on both seen and unseen segmentation targets. We comprehensively evaluate SAM-Med3D on 16 datasets covering diverse medical scenarios, including different anatomical structures, modalities, targets, and zero-shot transferability to new/unseen tasks. The evaluation demonstrates the efficiency and efficacy of SAM-Med3D, as well as its promising application to diverse downstream tasks as a pretrained model. Our approach illustrates that substantial medical resources can be harnessed to develop a general-purpose medical AI for various potential applications. Our dataset, code, and models are available at: https://github.com/uni-medical/SAM-Med3D.

A Trust-Guided Approach to MR Image Reconstruction with Side Information.

Atalik A, Chopra S, Sodickson DK

pubmed logopapers | Jul 31 2025
Reducing MRI scan times can improve patient care and lower healthcare costs. Many acceleration methods are designed to reconstruct diagnostic-quality images from sparse k-space data, via an ill-posed or ill-conditioned linear inverse problem (LIP). To address the resulting ambiguities, it is crucial to incorporate prior knowledge into the optimization problem, e.g., in the form of regularization. Another form of prior knowledge less commonly used in medical imaging is the readily available auxiliary data (a.k.a. side information) obtained from sources other than the current acquisition. In this paper, we present the Trust-Guided Variational Network (TGVN), an end-to-end deep learning framework that effectively and reliably integrates side information into LIPs. We demonstrate its effectiveness in multi-coil, multi-contrast MRI reconstruction, where incomplete or low-SNR measurements from one contrast are used as side information to reconstruct high-quality images of another contrast from heavily under-sampled data. TGVN is robust across different contrasts, anatomies, and field strengths. Compared to baselines utilizing side information, TGVN achieves superior image quality while preserving subtle pathological features even at challenging acceleration levels, drastically speeding up acquisition while minimizing hallucinations. Source code and dataset splits are available on github.com/sodicksonlab/TGVN.
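The ill-posed linear inverse problem (LIP) framing above can be illustrated with a toy Tikhonov-regularized least squares, min ||Ax - y||^2 + lam ||x||^2, solved by gradient descent. TGVN itself is a deep network; this sketch only shows why regularization resolves the ambiguity of an under-determined system.

```python
# Hedged toy sketch: a regularized LIP on a tiny made-up system. With one
# equation and two unknowns, the data alone cannot pin down x; the ridge
# penalty selects a unique (minimum-norm-biased) solution.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def solve_tikhonov(A, y, lam=0.1, lr=0.1, steps=2000):
    n = len(A[0])
    x = [0.0] * n
    for _ in range(steps):
        r = [yh - yi for yh, yi in zip(matvec(A, x), y)]  # residual Ax - y
        # gradient of the objective: 2 A^T r + 2 lam x
        grad = [2 * sum(A[i][j] * r[i] for i in range(len(A))) + 2 * lam * x[j]
                for j in range(n)]
        x = [xj - lr * g for xj, g in zip(x, grad)]
    return x

A = [[1.0, 1.0]]  # under-determined: infinitely many x satisfy Ax = y
y = [2.0]
x = solve_tikhonov(A, y)
print([round(v, 3) for v in x])  # ≈ [0.952, 0.952]
```

Side information, in this framing, plays a role analogous to the penalty term: it constrains which of the many data-consistent solutions is returned.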

Navigating the AI revolution: will radiology sink or soar?

Schlemmer HP

pubmed logopapers | Jul 31 2025
The rapid acceleration of digital transformation and artificial intelligence (AI) is fundamentally reshaping medicine. Much like previous technological revolutions, AI, driven by advances in computer technology and software including machine learning, computer vision, and generative models, is redefining cognitive work in healthcare. Radiology, as one of the first fully digitized medical specialties, is at the forefront of this transformation. AI is automating workflows, enhancing image acquisition and interpretation, and improving diagnostic precision, which collectively boost efficiency, reduce costs, and elevate patient care. Global data networks and AI-powered platforms are enabling borderless collaboration, empowering radiologists to focus on complex decision-making and patient interaction. Despite these profound opportunities, widespread AI adoption in radiology remains limited, often confined to specific use cases, such as chest, neuro, and musculoskeletal imaging. Concerns persist regarding transparency, explainability, and the ethical use of AI systems, while unresolved questions about workload, liability, and reimbursement present additional hurdles. Psychological and cultural barriers, including fears of job displacement and diminished professional autonomy, also slow acceptance. However, history shows that disruptive innovations often encounter initial resistance. Just as the discovery of X-rays over a century ago ushered in a new era, digitalization and artificial intelligence will drive another paradigm shift, this time through cognitive automation. To realize AI's full potential, radiologists must maintain clinical oversight and safeguard their professional identity, viewing AI as a supportive tool rather than a threat. Embracing AI will allow radiologists to elevate their profession, enhance interdisciplinary collaboration, and help shape the future of medicine.
Achieving this vision requires not only technological readiness but also early integration of AI education into medical training. Ultimately, radiology will not be replaced by AI, but by radiologists who effectively harness its capabilities.

Generative artificial intelligence for counseling of fetal malformations following ultrasound diagnosis.

Grünebaum A, Chervenak FA

pubmed logopapers | Jul 31 2025
To explore the potential role of generative artificial intelligence (GenAI) in enhancing patient counseling following prenatal ultrasound diagnosis of fetal malformations, with an emphasis on clinical utility, patient comprehension, and ethical implementation. The detection of fetal anomalies during the mid-trimester ultrasound is emotionally distressing for patients and presents significant challenges in communication and decision-making. Generative AI tools, such as GPT-4 and similar models, offer novel opportunities to support clinicians in delivering accurate, empathetic, and accessible counseling while preserving the physician's central role. We present a narrative review and applied framework illustrating how GenAI can assist obstetricians before, during, and after the fetal anomaly scan. Use cases include lay summaries, visual aids, anticipatory guidance, multilingual translation, and emotional support. Tables and sample prompts demonstrate practical applications across a range of anomalies.

Technological advancements in sports injury: diagnosis and treatment.

Zhong Z, DI W

pubmed logopapers | Jul 31 2025
Sports injuries are a significant concern for athletes at all levels of competition, ranging from acute traumas to chronic conditions. Prompt diagnosis and effective treatment are crucial for an athlete's recovery and quality of life. Traditionally, sports injury diagnosis has relied on clinical assessments, patient history, and basic imaging techniques such as X-rays, ultrasound, and magnetic resonance imaging (MRI). However, recent technological advancements have revolutionized the field of sports medicine, offering more accurate diagnoses and targeted treatment strategies. High-resolution MRI and CT scans provide detailed images of deep tissue injuries, while advanced ultrasound technology enables on-field diagnostics. Wearable sensor devices and machine learning algorithms allow real-time monitoring of an athlete's movements and physical loads, facilitating early intervention and injury risk prediction. Regenerative medicine, including stem cell therapy and tissue engineering, has emerged as a transformative approach to healing damaged tissues and reducing treatment time. Despite the challenges of high costs, lack of skilled personnel, and ethical considerations, the integration of artificial intelligence and machine learning into sports medicine holds immense potential for revolutionizing injury prevention and management. As these advancements continue to evolve, they are expected to extend athletes' careers and enhance their overall quality of life. This review summarizes conventional methods to diagnose and manage sports injuries and provides insights into recent advancements in sports science and medicine. It also offers a future outlook on the diagnosis and treatment of sports injuries.
