Page 12 of 66652 results

Pulmonary Biomechanics in COPD: Imaging Techniques and Clinical Applications.

Aguilera SM, Chaudhary MFA, Gerard SE, Reinhardt JM, Bodduluri S

PubMed · Sep 1 2025
The respiratory system depends on complex biomechanical processes to enable gas exchange. The mechanical properties of the lung parenchyma, airways, vasculature, and surrounding structures play an essential role in overall ventilation efficacy. These biomechanical processes, however, are significantly altered in chronic obstructive pulmonary disease (COPD) due to emphysematous destruction of lung parenchyma, chronic airway inflammation, and small airway obstruction. Recent advancements in computed tomography (CT) and magnetic resonance imaging (MRI) acquisition techniques, combined with sophisticated image post-processing algorithms and deep neural network integration, have enabled comprehensive quantitative assessment of lung structure, tissue deformation, and lung function at the tissue level. These methods have led to better phenotyping, improved therapeutic strategies, and a refined understanding of the pathological processes that compromise pulmonary function in COPD. In this review, we discuss recent developments in imaging and image processing methods for studying pulmonary biomechanics, with specific focus on clinical applications for COPD, including the assessment of regional ventilation, planning of endobronchial valve treatment, prediction of disease onset and progression, sizing of lungs for transplantation, and guiding of mechanical ventilation. These advanced image-based biomechanical measurements, when combined with clinical expertise, play a critical role in disease management and personalized therapeutic interventions for patients with COPD.

RibPull: Implicit Occupancy Fields and Medial Axis Extraction for CT Ribcage Scans

Emmanouil Nikolakakis, Amine Ouasfi, Julie Digne, Razvan Marinescu

arXiv preprint · Sep 1 2025
We present RibPull, a methodology that utilizes implicit occupancy fields to bridge computational geometry and medical imaging. Implicit 3D representations use continuous functions that handle sparse and noisy data more effectively than discrete methods. While voxel grids are standard for medical imaging, they suffer from resolution limitations, topological information loss, and inefficient handling of sparsity. Coordinate functions preserve complex geometrical information and offer a better solution for representing sparse data, while allowing for further morphological operations. Implicit scene representations enable neural networks to encode entire 3D scenes within their weights. The result is a continuous function that can implicitly compensate for sparse signals and infer further information about the 3D scene by passing any combination of 3D coordinates as input to the model. In this work, we use neural occupancy fields that predict whether a 3D point lies inside or outside an object to represent CT-scanned ribcages. We also apply a Laplacian-based contraction to extract the medial axis of the ribcage, thus demonstrating a geometrical operation that benefits greatly from continuous coordinate-based 3D scene representations versus voxel-based representations. We evaluate our methodology on 20 medical scans from the RibSeg dataset, which is itself an extension of the RibFrac dataset. We will release our code upon publication.
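The core idea of an implicit occupancy field can be illustrated with a toy stand-in. This is not the authors' code: an analytic sphere replaces the trained network, but the interface is the same — a continuous function over R^3 that can be queried at any coordinate, on or off a voxel grid.

```python
import numpy as np

# Minimal sketch: an implicit occupancy field is a function f: R^3 -> {0, 1}.
# An analytic sphere stands in for a trained neural occupancy network.
def occupancy(points, center=(0.0, 0.0, 0.0), radius=1.0):
    """Return 1.0 for points inside the sphere, 0.0 outside."""
    d = np.linalg.norm(np.asarray(points, float) - np.asarray(center), axis=-1)
    return (d <= radius).astype(float)

# Query a coarse voxel grid *and* an arbitrary off-grid point with the same
# function -- something a discrete voxel volume cannot do without interpolation.
grid = np.stack(np.meshgrid(*[np.linspace(-1, 1, 8)] * 3, indexing="ij"), axis=-1)
occ_grid = occupancy(grid.reshape(-1, 3))          # 8x8x8 = 512 queries
off_grid = occupancy([[0.123, -0.456, 0.2]])       # continuous query
print(occ_grid.shape, off_grid[0])
```

Because the representation is resolution-free, downstream geometric operations such as the medial-axis contraction described above can sample the shape wherever they need to.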

Predicting radiation pneumonitis in lung cancer patients using robust 4DCT-ventilation and perfusion imaging.

Neupane T, Castillo E, Chen Y, Pahlavian SH, Castillo R, Vinogradskiy Y, Choi W

PubMed · Sep 1 2025
Methods have been developed that apply image processing to four-dimensional computed tomography (4DCT) to generate lung ventilation maps (4DCT-ventilation). Traditional 4DCT-ventilation methods rely on density change, lack reproducibility, and do not provide 4DCT-perfusion data. Novel 4DCT-ventilation/perfusion methods have been developed that are robust and provide 4DCT-perfusion information. The purpose of this study was to use prospective clinical trial data to evaluate the ability of novel 4DCT-based lung function imaging methods to predict radiation pneumonitis (RP). Sixty-three advanced-stage lung cancer patients enrolled in a multi-institutional, phase 2 clinical trial on 4DCT-based functional avoidance radiation therapy were included. 4DCTs were used to generate four lung function images: 1) 4DCT-ventilation using the traditional HU approach ('4DCT-vent-HU'), and three images using the novel, statistically robust methods: 2) 4DCT-ventilation based on the Mass Conserving Volume Change ('4DCT-vent-MCVC'), 3) 4DCT-ventilation using the Integrated Jacobian Formulation ('4DCT-vent-IJF'), and 4) 4DCT-perfusion. Dose-function metrics, including mean functional lung dose (fMLD) and percentage of functional lung receiving ≥ 5 Gy (fV5) and ≥ 20 Gy (fV20), were calculated using various structure-based thresholds. The ability of dose-function metrics to predict grade ≥ 2 RP was assessed using logistic regression and machine learning. Model performance was evaluated using the area under the curve (AUC) and validated through 10-fold cross-validation. 10/63 (15.9%) patients developed grade ≥ 2 RP. Logistic regression yielded mean AUCs of 0.70 ± 0.02 (p = 0.04), 0.64 ± 0.04 (p = 0.13), 0.60 ± 0.03 (p = 0.27), and 0.63 ± 0.03 (p = 0.20) for 4DCT-vent-MCVC, 4DCT-perfusion, 4DCT-vent-IJF, and 4DCT-vent-HU, respectively, compared to 0.65 ± 0.10 (p > 0.05) for standard lung metrics.
Machine learning modeling resulted in AUCs of 0.83 ± 0.04, 0.82 ± 0.05, 0.76 ± 0.05, 0.74 ± 0.06, and 0.75 ± 0.02 for 4DCT-vent-MCVC, 4DCT-perfusion, 4DCT-vent-IJF, 4DCT-vent-HU, and standard lung metrics, respectively, with an accuracy of 75-85%. This is the first study to comprehensively evaluate 4DCT-perfusion and robust 4DCT-ventilation for predicting clinical outcomes. On the presented 63-patient cohort, using classic logistic regression and machine learning methods, 4DCT-vent-MCVC was the best predictor of RP.
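The AUC reported throughout the abstract has a simple rank-based (Mann-Whitney) form: the probability that a randomly chosen RP case scores higher than a randomly chosen non-case. A small sketch on synthetic data (the fMLD values below are invented, not the trial's):

```python
import numpy as np

# Rank-based ROC AUC: P(score of a positive > score of a negative); ties count 0.5.
def auc(scores, labels):
    scores, labels = np.asarray(scores, float), np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical mean functional lung dose (Gy); RP cases tend to sit higher.
fmld   = [8.1, 13.0, 12.3, 15.0, 7.2, 10.8, 14.1, 6.5]
had_rp = [0,   0,    1,    1,    0,   0,    1,    0]
print(round(auc(fmld, had_rp), 2))   # 14 of 15 case/non-case pairs correctly ordered
```

An AUC of 0.70, as reported for 4DCT-vent-MCVC under logistic regression, means 70% of such case/non-case pairs are ordered correctly by the dose-function metric.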

Improving lung cancer detection with enhanced convolutional sequential networks.

Haziq U, Uddin J, Rahman S, Yaseen M, Khan I, Khan J, Jung Y

PubMed · Sep 1 2025
Lung cancer is the most common cause of cancer-related deaths worldwide, and early detection is extremely important for improving survival. According to the National Institute of Health Sciences, lung cancer has the highest rate of cancer mortality. Medical professionals usually rely on clinical imaging methods such as MRI, X-ray, biopsy, ultrasound, and CT scans. However, these imaging techniques often face challenges, including false positives, false negatives, and limited sensitivity. Deep learning approaches, particularly convolutional neural networks (CNNs), have emerged to tackle these issues. However, traditional CNN models often suffer from high computational complexity, slow inference times, and overfitting on real-world clinical data. To overcome these limitations, we propose an optimized sequential convolutional neural network (SCNN) that maintains high classification accuracy while reducing processing time and computational load. The SCNN model consists of three convolutional layers, three max-pooling layers, a flatten layer, and dense layers, allowing for efficient and accurate classification. The histological imaging dataset comprises three categories of lung tissue: adenocarcinoma, benign, and squamous cell carcinoma. Our SCNN achieves an average accuracy of 95.34%, a precision of 95.66%, a recall of 95.33%, and a comparable F1 score over 60 epochs within 1000 seconds. These results surpass traditional CNN, R-CNN, and custom Inception classifiers, indicating superior speed and robustness in histological image classification. SCNN therefore offers a practical and scalable solution for improving lung cancer detection in clinical practice.
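The stack described above — three convolution plus max-pooling stages, then flatten and dense layers — can be sketched shape-only in NumPy. Filter counts and the input size here are illustrative assumptions, not the paper's configuration, and the loop-based convolution merely traces how the feature map shrinks at each stage.

```python
import numpy as np

# Shape-only sketch of a sequential CNN: 'valid' 3x3 convolution + ReLU,
# followed by 2x2 max pooling, repeated three times, then flatten.
def conv2d(x, n_filters, k=3):
    h, w, c = x.shape
    rng = np.random.default_rng(0)
    w_k = rng.standard_normal((k, k, c, n_filters)) * 0.01
    out = np.zeros((h - k + 1, w - k + 1, n_filters))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.tensordot(x[i:i + k, j:j + k], w_k, axes=3)
    return np.maximum(out, 0.0)                       # ReLU

def maxpool(x, s=2):
    h, w, c = x.shape
    return x[:h // s * s, :w // s * s].reshape(h // s, s, w // s, s, c).max(axis=(1, 3))

x = np.zeros((64, 64, 3))                             # a 64x64 RGB histology patch
for f in (16, 32, 64):                                # three conv + pool stages
    x = maxpool(conv2d(x, f))
features = x.reshape(-1)                              # flatten before dense layers
print(x.shape, features.size)                         # (6, 6, 64) 2304
```

The 2304-dimensional flattened vector would feed the dense layers that produce the three-way adenocarcinoma/benign/squamous-cell classification.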

YOLOv8-BCD: a real-time deep learning framework for pulmonary nodule detection in computed tomography imaging.

Zhu W, Wang X, Xing J, Xu XS, Yuan M

PubMed · Sep 1 2025
Lung cancer remains one of the malignant tumors with the highest global morbidity and mortality rates. Detecting pulmonary nodules in computed tomography (CT) images is essential for early lung cancer screening. However, traditional detection methods often suffer from low accuracy and efficiency, limiting their clinical effectiveness. This study aims to devise an advanced deep-learning framework capable of achieving high-precision, rapid identification of pulmonary nodules in CT imaging, thereby facilitating earlier and more accurate diagnosis of lung cancer. To address these issues, this paper proposes an improved deep-learning framework named YOLOv8-BCD, based on YOLOv8 and integrating the BiFormer attention mechanism, Content-Aware ReAssembly of Features (CARAFE) up-sampling method, and Depth-wise Over-Parameterized Depth-wise Convolution (DO-DConv) enhanced convolution. To overcome common challenges such as low resolution, noise, and artifacts in lung CT images, the model employs Super-Resolution Generative Adversarial Network (SRGAN)-based image enhancement during preprocessing. The BiFormer attention mechanism is introduced into the backbone to enhance feature extraction capabilities, particularly for small nodules, while CARAFE and DO-DConv modules are incorporated into the head to optimize feature fusion efficiency and reduce computational complexity. Experimental comparisons using 550 CT images from the LUng Nodule Analysis 2016 dataset (LUNA16 dataset) demonstrated that the proposed YOLOv8-BCD achieved detection accuracy and mean average precision (mAP) at an intersection over union (IoU) threshold of 0.5 (mAP<sub>0.5</sub>) of 86.4% and 88.3%, respectively, surpassing YOLOv8 by 2.2% in accuracy and 4.5% in mAP<sub>0.5</sub>.
Additional evaluation on the external TianChi lung nodule dataset further confirmed the model's generalization capability, achieving an mAP<sub>0.5</sub> of 83.8% and mAP<sub>0.5-0.95</sub> of 43.9% with an inference speed of 98 frames per second (FPS). The YOLOv8-BCD model effectively assists clinicians by significantly reducing interpretation time, improving diagnostic accuracy, and minimizing the risk of missed diagnoses, thereby enhancing patient outcomes.
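The mAP<sub>0.5</sub> figures above rest on the intersection-over-union criterion: a predicted nodule box counts as a true positive when its IoU with a ground-truth box is at least 0.5. A minimal sketch with made-up boxes (coordinates are illustrative, not LUNA16 annotations):

```python
# IoU between two axis-aligned boxes given as (x1, y1, x2, y2).
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

pred, truth = (10, 10, 30, 30), (15, 15, 35, 35)
score = iou(pred, truth)
print(round(score, 3), score >= 0.5)   # this overlap fails the 0.5 threshold
```

mAP<sub>0.5</sub> then averages precision over recall levels using this matching rule per class; mAP<sub>0.5-0.95</sub>, also reported above, averages the same quantity over IoU thresholds from 0.5 to 0.95.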

Pulmonary T2* quantification of fetuses with congenital diaphragmatic hernia: a retrospective, case-controlled, MRI pilot study.

Avena-Zampieri CL, Uus A, Egloff A, Davidson J, Hutter J, Knight CL, Hall M, Deprez M, Payette K, Rutherford M, Greenough A, Story L

PubMed · Sep 1 2025
Advanced MRI techniques, motion correction and T2* relaxometry, may provide information regarding functional properties of pulmonary tissue. We assessed whether lung volumes and pulmonary T2* values in fetuses with congenital diaphragmatic hernia (CDH) were lower than in controls and differed between survivors and non-survivors. Women with uncomplicated pregnancies (controls) and those carrying a fetus with CDH had a fetal MRI on a 1.5 T imaging system, encompassing T2 single-shot fast spin echo sequences and gradient echo single-shot echo planar sequences providing T2* data. Motion correction was performed using slice-to-volume reconstruction, and T2* maps were generated using in-house pipelines. Lungs were segmented separately using a pre-trained 3D deep-learning pipeline. Datasets from 33 controls and 12 CDH fetuses were analysed. The mean ± SD gestation at scan was 28.3 ± 4.3 weeks for controls and 27.6 ± 4.9 weeks for CDH cases. CDH lung volumes were lower than controls in both non-survivors and survivors for both lungs combined (5.76 ± 3.59 cc, mean difference = 15.97, 95% CI: -24.51 to -12.9, p < 0.001 and 5.73 ± 2.96 cc, mean difference = 16, 95% CI: 1.91-11.53, p = 0.008) and for the ipsilateral lung (1.93 ± 2.09 cc, mean difference = 19.8, 95% CI: -28.48 to -16.45, p < 0.001 and 1.58 ± 1.18 cc, mean difference = 20.15, 95% CI: 5.96-15.97, p < 0.001). Mean pulmonary T2* values were lower in non-survivors in both lungs combined, the ipsilateral lung, and the contralateral lung compared with the control group (81.83 ± 26.21 ms, mean difference = 31.13, 95% CI: -58.14 to -10.32, p = 0.006; 81.05 ± 26.84 ms, mean difference = 31.91, 95% CI: -59.02 to -10.82, p = 0.006; 82.62 ± 36.31 ms, mean difference = 30.34, 95% CI: -58.84 to -8.25, p = 0.011), but no difference was observed between controls and CDH cases that survived. Mean pulmonary T2* values were lower in CDH fetuses compared to controls and in CDH cases who died compared to survivors. Mean pulmonary T2* values may have a prognostic function in CDH fetuses.
This study provides original motion-corrected assessment of the morphologic and functional properties of the ipsilateral and contralateral fetal lungs in the context of CDH. Mean pulmonary T2* values were lower in CDH fetuses compared to controls and in cases who died compared to survivors. Mean pulmonary T2* values may have a role in prognostication. Reduction in pulmonary T2* values in CDH fetuses suggests altered pulmonary development, contributing new insights into antenatal assessment.
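T2* relaxometry, the technique underlying these measurements, fits a mono-exponential decay S(TE) = S0 · exp(-TE/T2*) to the signal across echo times. A minimal sketch with synthetic echo times and noiseless signals (not the study's acquisition parameters): fitting log-signal against TE with a straight line recovers T2* from the slope.

```python
import numpy as np

# Mono-exponential T2* fit: log S = log S0 - TE / T2*, so slope = -1 / T2*.
def fit_t2star(te_ms, signal):
    slope, intercept = np.polyfit(np.asarray(te_ms, float), np.log(signal), 1)
    return -1.0 / slope                                # T2* in ms

te = [5.0, 10.0, 20.0, 40.0]                           # echo times (ms), synthetic
true_t2star = 80.0                                     # close to the reported means
s = 1000.0 * np.exp(-np.asarray(te) / true_t2star)     # noiseless synthetic decay
print(round(fit_t2star(te, s), 1))
```

In practice this fit is performed per voxel on the motion-corrected gradient-echo data to produce the T2* maps described above.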

Challenges in diagnosis of sarcoidosis.

Bączek K, Piotrowski WJ, Bonella F

PubMed · Sep 1 2025
Diagnosing sarcoidosis remains challenging. Histology findings and a variable clinical presentation can mimic other infectious, malignant, and autoimmune diseases. This review synthesizes current evidence on histopathology, sampling techniques, imaging modalities, and biomarkers, and explores how emerging 'omics' and artificial intelligence tools may sharpen diagnostic accuracy. Within the typical granulomatous lesions, limited or 'burned-out' necrosis is an ancillary finding that can be present in up to one-third of sarcoid biopsies and demands a careful differential diagnostic work-up. Endobronchial ultrasound-guided transbronchial needle aspiration of lymph nodes has replaced mediastinoscopy as the first-line sampling tool, while cryobiopsy is still under validation. Volumetric PET metrics such as total lung glycolysis and somatostatin-receptor tracers refine activity assessment; combined FDG PET/MRI improves detection of occult cardiac disease. Advanced bronchoalveolar lavage (BAL) immunophenotyping via flow cytometry, as well as serum, BAL, and genetic biomarkers, have been shown to correlate with inflammatory burden but have low diagnostic value. Multi-omics signatures and positron emission tomography/computed tomography (PET/CT) radiomics, supported by deep-learning algorithms, show promising results for noninvasive diagnostic confirmation, phenotyping, and disease monitoring. No single test is conclusive for diagnosing sarcoidosis; an integrated, multidisciplinary strategy is needed. Large, multicenter, multiethnic studies are essential to translate and validate data from emerging AI tools and omics research for clinical routine.

Pulmonary Embolism Survival Prediction Using Multimodal Learning Based on Computed Tomography Angiography and Clinical Data.

Zhong Z, Zhang H, Fayad FH, Lancaster AC, Sollee J, Kulkarni S, Lin CT, Li J, Gao X, Collins S, Greineder CF, Ahn SH, Bai HX, Jiao Z, Atalay MK

PubMed · Sep 1 2025
Pulmonary embolism (PE) is a significant cause of mortality in the United States. The objective of this study is to implement deep learning (DL) models using computed tomography pulmonary angiography (CTPA), clinical data, and PE Severity Index (PESI) scores to predict PE survival. In total, 918 patients (median age 64 y, range 13 to 99 y, 48% male) with 3978 CTPAs were identified via retrospective review across 3 institutions. To predict survival, an AI model was used to extract disease-related imaging features from CTPAs. Imaging features and clinical variables were then incorporated into independent DL models to predict survival outcomes. Cross-modal fusion CoxPH models were used to develop multimodal models from combinations of DL models and calculated PESI scores. Five multimodal models were developed as follows: (1) using CTPA imaging features only, (2) using clinical variables only, (3) using both CTPA and clinical variables, (4) using CTPA and PESI score, and (5) using CTPA, clinical variables, and PESI score. Performance was evaluated using the concordance index (c-index). Kaplan-Meier analysis was performed to stratify patients into high-risk and low-risk groups. Additional factor-risk analysis was conducted to account for right ventricular (RV) dysfunction. For both data sets, the multimodal models incorporating CTPA features, clinical variables, and PESI score achieved higher c-indices than PESI alone. Following the stratification of patients into high-risk and low-risk groups by models, survival outcomes differed significantly (both P < 0.001). A strong correlation was found between high-risk grouping and RV dysfunction. Multimodal DL models incorporating CTPA features, clinical data, and PESI achieved higher c-indices than PESI alone for PE survival prediction.
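The concordance index used to compare these survival models is the fraction of comparable patient pairs in which the model's risk ordering agrees with the observed survival ordering. A sketch on synthetic data (times, events, and risks below are invented, not the study's):

```python
# c-index: a pair (i, j) is comparable when i's time is earlier and i's event
# was observed; the pair is concordant when i also received the higher risk.
def c_index(times, events, risks):
    agree = total = 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i]:
                total += 1
                if risks[i] > risks[j]:
                    agree += 1
                elif risks[i] == risks[j]:
                    agree += 0.5       # tied risks count half
    return agree / total

times  = [5, 8, 12, 20]                # follow-up in months (synthetic)
events = [1, 1, 0, 1]                  # 1 = death observed, 0 = censored
risks  = [0.9, 0.6, 0.7, 0.2]          # model-predicted risk scores
print(c_index(times, events, risks))   # 4 of 5 comparable pairs concordant
```

A c-index of 0.5 corresponds to random ordering and 1.0 to perfect ordering, so "higher c-indices than PESI alone" means the multimodal models rank patient risk more accurately.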

CXR-MultiTaskNet: a unified deep learning framework for joint disease localization and classification in chest radiographs.

Reddy KD, Patil A

PubMed · Aug 31 2025
Chest X-ray (CXR) interpretation is a challenging problem in automated medical diagnosis: complex visual patterns of thoracic diseases must be identified precisely through multi-label classification and lesion localization. Current approaches typically treat classification and localization in isolation, resulting in piecemeal systems that do not exploit shared representations, offer limited clinical interpretability, and handle multi-label disease poorly. Although multi-task learning frameworks such as DeepChest and CLN pursue this goal, they suffer from task interference and poor explainability, which limits their practical application in real-world clinical workflows. To address these limitations, we present a unified multi-task deep learning framework, CXR-MultiTaskNet, for simultaneously classifying thoracic diseases and localizing lesions in chest X-rays. The framework comprises a ResNet50 feature extractor, two task-specific heads, and a Grad-CAM-based explainability module that accompanies accurate predictions with visual explanations. We formulate a joint loss that balances the classification and localization objectives, up-weighting the localization term to cope with extreme class imbalance and the variable detectability of different disease manifestations. A dual-attention-based hierarchical feature extraction approach further addresses inherent weaknesses of convolutional neural networks (CNNs) and the limited interpretability of performing image-level classification before disease localization.
Visual attention maps make the detection steps traceable, so the pipeline is more interpretable than a traditional CNN-embedding model. The framework produces both disease-level and pixel-level predictions, enabling explainable, comprehensive analysis of each image and localization of every detected abnormal region. The training objective was further tuned for X-ray images so that smaller lesions carry greater weight. Experimental evaluations on a benchmark chest X-ray dataset demonstrate the potential of the approach, achieving a macro F1-score of 0.965 (micro F1-score 0.968) for disease classification and a mean IoU of 0.851 (IoU@0.5) for disease localization, consistently outperforming state-of-the-art single-task and multi-task baselines. Keywords: model interpretability; chest X-ray disease detection; detection region localization; weakly supervised transfer learning; lesion localization. The framework provides an integrated approach to chest X-ray analysis that is clinically useful, interpretable, and scalable, supporting efficient diagnostic pathways and enhanced clinical decision-making, and can serve as a basis for next-generation explainable AI in radiology.
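A joint multi-task objective of the kind described above is typically a weighted sum of a multi-label classification loss and a localization loss, with the localization term up-weighted. The sketch below uses binary cross-entropy and smooth-L1 as stand-ins; the weights and tiny inputs are illustrative assumptions, not the paper's values.

```python
import numpy as np

# Binary cross-entropy over per-disease probabilities (multi-label head).
def bce(p, y, eps=1e-7):
    p = np.clip(p, eps, 1 - eps)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

# Smooth-L1 over normalized box coordinates (localization head).
def smooth_l1(pred, target):
    d = np.abs(pred - target)
    return np.mean(np.where(d < 1, 0.5 * d ** 2, d - 0.5))

# Weighted joint loss; w_loc > w_cls up-weights localization, as motivated above.
def joint_loss(cls_p, cls_y, box_p, box_y, w_cls=1.0, w_loc=2.0):
    return w_cls * bce(cls_p, cls_y) + w_loc * smooth_l1(box_p, box_y)

cls_p = np.array([0.9, 0.2, 0.7])                    # predicted disease probabilities
cls_y = np.array([1.0, 0.0, 1.0])                    # multi-label ground truth
box_p = np.array([0.42, 0.51, 0.30, 0.28])           # predicted box (x, y, w, h)
box_y = np.array([0.40, 0.50, 0.25, 0.25])           # ground-truth box
print(round(joint_loss(cls_p, cls_y, box_p, box_y), 3))
```

In training, gradients from both terms flow back through the shared ResNet50 trunk, which is what lets the two heads benefit from a common representation.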

Automated quantification of lung pathology on micro-CT in diverse disease models using deep learning.

Belmans F, Seldeslachts L, Vanhoffelen E, Tielemans B, Vos W, Maes F, Vande Velde G

PubMed · Aug 30 2025
Micro-CT significantly enhances the efficiency, predictive power and translatability of animal studies to human clinical trials for respiratory diseases. However, the analysis of large micro-CT datasets remains a bottleneck. We developed a generic deep learning (DL)-based lung segmentation model using longitudinal micro-CT images from studies of Down syndrome, viral and fungal infections, and exacerbation, with variable lung pathology and degree of disease burden. 2D models were trained with cross-validation on axial, coronal and sagittal slices. Predictions from these single-orientation models were combined into a 2.5D model using majority voting or probability averaging. The generalisability of these models to other studies (COVID-19, lung inflammation and fibrosis), scanner configurations and rodent species (rats, hamsters, degus) was tested, including on a publicly available database. On the internal validation data, the highest mean Dice Similarity Coefficient (DSC) was found for the 2.5D probability-averaging model (0.953 ± 0.023), which further improved the output of the 2D models by removing erroneous voxels outside the lung region. The models demonstrated good generalisability, with average DSC values ranging from 0.89 to 0.94 across different lung pathologies and scanner configurations. The biomarkers extracted from manual and automated segmentations agree well, demonstrating that our proposed solution effectively monitors longitudinal lung pathology development and response to treatment in real-world preclinical studies. Our DL-based pipeline for lung pathology quantification offers efficient analysis of large micro-CT datasets, is widely applicable across rodent disease models and acquisition protocols, and enables real-time insights into therapy efficacy.
This research was supported by the Service Public de Wallonie (AEROVID grant to FB, WV) and The Flemish Research Foundation (FWO, doctoral mandate 1SF2224N to EV and 1186121N/1186123N to LS, infrastructure grant I006524N to GVV).
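The 2.5D fusion idea above — combining per-voxel lung probabilities from the axial, coronal and sagittal 2D models by probability averaging or majority voting — and the Dice coefficient used to score it can be sketched on toy arrays (tiny invented volumes, not real micro-CT predictions):

```python
import numpy as np

# Fuse three single-orientation probability maps into one binary mask.
def fuse(p_ax, p_cor, p_sag, mode="average", thr=0.5):
    stack = np.stack([p_ax, p_cor, p_sag])
    if mode == "average":
        return stack.mean(axis=0) > thr                # probability averaging
    return (stack > thr).sum(axis=0) >= 2              # majority voting

# Dice Similarity Coefficient between two binary masks.
def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * (a & b).sum() / (a.sum() + b.sum())

truth = np.array([[1, 1, 0, 0]], bool)
p_ax  = np.array([[0.9, 0.8, 0.2, 0.1]])               # axial model
p_cor = np.array([[0.7, 0.4, 0.6, 0.0]])               # coronal model (noisier)
p_sag = np.array([[0.8, 0.9, 0.1, 0.2]])               # sagittal model
fused = fuse(p_ax, p_cor, p_sag, mode="average")
print(fused.astype(int), round(dice(fused, truth), 2))
```

Averaging lets two confident orientations outvote one noisy one per voxel, which is consistent with the fused 2.5D model scoring a higher DSC than the individual 2D models.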