Abstract:
Depth cues are an essential part of navigation and device positioning tasks
during clinical interventions. Yet, many minimally-invasive procedures, such
as catheterizations, are usually performed under X-ray guidance alone, which
depicts only a 2D projection of the anatomy and therefore lacks depth
information.
Previous attempts to integrate pre-operative 3D patient data by registering
them to the intra-operative data have led either to virtual 3D renderings
that are independent of the original X-ray appearance or to planar 2D color
overlays (e.g. roadmaps). A major drawback of these solutions is the
resulting trade-off: the X-ray attenuation values are completely neglected
in the 3D renderings, while depth perception is not incorporated into the 2D
roadmaps. This paper presents a novel technique for enhancing depth
perception of interventional X-ray images while preserving the original
attenuation appearance. Starting from patient-specific pre-operative 3D data,
our method relies on GPU ray casting to compute a colored depth map, which
assigns a predefined color to the first sample along each ray whose gradient
magnitude exceeds a predefined threshold. The colored depth map values are
carefully integrated into the X-ray image while maintaining its original
grayscale intensities. The presented method was tested and analyzed for three
relevant clinical scenarios covering different anatomical aspects and
targeting different levels of interventional expertise. Results demonstrate
that improving depth perception of X-ray images has the potential to lead to
safer and more efficient clinical interventions.
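The following minimal Python/NumPy sketch illustrates the two steps summarized above: computing a first-hit colored depth map along each ray and blending it into the X-ray while keeping the grayscale attenuation pattern. It is not the paper's implementation: it assumes a parallel-beam geometry with rays cast along one axis of an already registered CT volume, a simple near-red/far-blue color ramp, and an assumed alpha-blending weight; the function names and the `alpha` parameter are illustrative only, whereas the actual method uses GPU ray casting with the interventional projection geometry.

```python
import numpy as np

def colored_depth_map(volume, grad_threshold):
    """Sketch: first-hit colored depth map (parallel beam, rays along axis 0).

    Assumes 'volume' is a CT volume already registered to the X-ray view;
    the paper's GPU ray casting with the true projection geometry is not
    reproduced here.
    """
    vol = volume.astype(np.float32)
    gz, gy, gx = np.gradient(vol)                       # central differences
    grad_mag = np.sqrt(gx**2 + gy**2 + gz**2)

    hit = grad_mag > grad_threshold                     # boolean (Z, Y, X)
    has_hit = hit.any(axis=0)                           # rays with a valid hit
    first_hit = np.argmax(hit, axis=0)                  # index of first hit sample

    depth = first_hit / max(vol.shape[0] - 1, 1)        # normalized depth in [0, 1]

    # Simple near-red / far-blue ramp standing in for the predefined colors.
    colors = np.zeros(depth.shape + (3,), dtype=np.float32)
    colors[..., 0] = 1.0 - depth                        # red: near structures
    colors[..., 2] = depth                              # blue: far structures
    colors[~has_hit] = 0.0                              # no hit -> no color
    return colors, has_hit

def fuse_with_xray(xray, colors, has_hit, alpha=0.4):
    """Blend depth colors into the X-ray; 'alpha' is an assumed weight.

    The grayscale attenuation pattern is kept by modulating the color with
    the original intensity rather than replacing it.
    """
    gray_rgb = np.repeat(xray[..., None], 3, axis=-1)   # grayscale -> RGB
    tinted = (1.0 - alpha) * gray_rgb + alpha * colors * xray[..., None]
    return np.where(has_hit[..., None], tinted, gray_rgb)
```

In this sketch the original grayscale values are retained by modulating the depth color with the X-ray intensity instead of overwriting the pixel values; the exact integration scheme used in the paper may differ.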