Publication:
Application of YOLOv8 and a model based on vision transformers and UNet for LVNC diagnosis: advantages and limitations

Date

2025-05-25

Publisher

Springer

Abstract

Hypertrabeculation, or left ventricular non-compaction (LVNC), is a recently recognized cardiac condition. Although several methods exist for accurately measuring the trabeculae in the ventricle, there is still no consensus within the medical community on the optimal approach. In previous work, we introduced DL-LVTQ, a tool based on a UNet convolutional neural network designed to quantify the trabeculae in the left ventricle. In this paper, we present an expanded dataset that includes new patients affected by a cardiomyopathy known as Titin, which required retraining the models in our study on the updated dataset so that they can make accurate inferences for future patients with this condition. We also introduce ViTUNet, a hybrid architecture that combines the strengths of UNet and Vision Transformers for precise segmentation of the left ventricle. Furthermore, we train a YOLOv8 model to detect the left ventricle and integrate it with the hybrid model so that segmentation focuses on a region of interest around the ventricle. In terms of precision, ViTUNet combined with YOLOv8 achieves results very similar to those of the DL-LVTQ tool, suggesting that the dataset itself is the limiting factor for further improvement. To substantiate this, we conduct a detailed analysis of the MRI slices in the current dataset; identifying and removing problematic slices improves the results significantly. Introducing a YOLOv8 detection model alongside a deep learning segmentation model is therefore a promising approach.
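
The abstract describes a two-stage pipeline: a YOLOv8 detector localizes the left ventricle, and the ViTUNet segmentation model is then applied only to a region of interest around it. The record contains no code, so the sketch below is a minimal illustration of that kind of detect-then-segment pipeline, assuming the ultralytics YOLO package and a generic TorchScript segmentation model; the weight files (lv_detector.pt, vitunet.pt), the 256x256 input size, and the segment_ventricle helper are placeholders, not details taken from the paper.

    # Illustrative detect-then-segment sketch (not the authors' code).
    # Assumes hypothetical YOLOv8 weights for left-ventricle detection and a
    # hypothetical scripted ViTUNet model; file names and sizes are placeholders.
    import numpy as np
    import torch
    import torch.nn.functional as F
    from ultralytics import YOLO

    detector = YOLO("lv_detector.pt")          # hypothetical YOLOv8 LV detector
    segmenter = torch.jit.load("vitunet.pt")   # hypothetical scripted ViTUNet
    segmenter.eval()

    def segment_ventricle(mri_slice: np.ndarray) -> np.ndarray:
        """Return a ventricle/trabeculae mask for one grayscale MRI slice (H, W)."""
        rgb = np.repeat(mri_slice[..., None], 3, axis=2)     # YOLOv8 expects 3 channels
        result = detector(rgb, verbose=False)[0]
        mask = np.zeros(mri_slice.shape, dtype=np.uint8)
        if len(result.boxes) == 0:
            return mask                                       # no ventricle detected
        # Keep the highest-confidence detection and crop the region of interest.
        x1, y1, x2, y2 = result.boxes.xyxy[result.boxes.conf.argmax()].int().tolist()
        crop = np.ascontiguousarray(rgb[y1:y2, x1:x2])
        tensor = torch.from_numpy(crop).permute(2, 0, 1).float().unsqueeze(0) / 255.0
        with torch.no_grad():
            logits = segmenter(F.interpolate(tensor, size=(256, 256),
                                             mode="bilinear", align_corners=False))
        pred = logits.argmax(dim=1, keepdim=True).float()
        # Resize the predicted mask back to the crop size and paste it into the slice.
        pred = F.interpolate(pred, size=(y2 - y1, x2 - x1), mode="nearest")
        mask[y1:y2, x1:x2] = pred[0, 0].byte().numpy()
        return mask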

Keywords

Left ventricular non-compaction diagnosis, Vision Transformers, UNet, MRI image segmentation, Data analysis, YOLOv8

Citation

Practical Applications of Computational Biology and Bioinformatics, 18th International Conference (PACBB 2024), pp. 132–142