Introduction: We developed a fully integrated medical imaging system that performs 3D reconstruction of pre- and intra-operative images, translates the reconstructions into holograms, projects those holograms onto the surgical field via a wearable head-up display, and provides real-time feedback to the surgeon through the same display.
Methods: We built a fully integrated system that performs 3D reconstruction of CT and MRI images from 2D z-stacks and renders these reconstructions as holograms that the surgeon views through a head-up display. We leveraged deep learning to extract surgical regions of interest (e.g., brain tumors, anatomical landmarks) from pre-operative MRI scans and reconstructed them in three dimensions as holograms projected onto the surgical field (Figure 1). Specifically, a deep convolutional neural network was used for segmentation. Its architecture comprised an embedding layer, four convolutional layers with dropout between the first two, and a softmax layer that classifies each individual pixel in the image. After segmentation and image post-processing, we trained our software to recognize specific signals within the images (i.e., the contrast-enhancing portions of brain tumors) to automate the reconstruction process. Training was performed on a large set of past MRI images from the Mayo Clinic. Finally, we deployed the software on a wearable device that projects holographic images and built interaction software that lets the surgeon manipulate the 3D holograms.
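The per-pixel segmentation network described above can be sketched roughly as follows. This is a minimal illustration only, assuming PyTorch, a 1x1 convolution as the "embedding" layer, and illustrative channel widths; the abstract does not specify the authors' exact implementation.

```python
import torch
import torch.nn as nn

class PixelSegmenter(nn.Module):
    """Sketch of the described architecture: an embedding layer, four
    convolutional layers with dropout between the first two, and a
    softmax that classifies each pixel (e.g., tumor vs. background)."""

    def __init__(self, in_channels=1, n_classes=2, width=16, p_drop=0.25):
        super().__init__()
        self.features = nn.Sequential(
            # "embedding" of raw voxel intensities into a feature space
            nn.Conv2d(in_channels, width, kernel_size=1),
            # four convolutional layers, dropout between the first two
            nn.Conv2d(width, width, kernel_size=3, padding=1), nn.ReLU(),
            nn.Dropout2d(p_drop),
            nn.Conv2d(width, width, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(width, n_classes, kernel_size=3, padding=1),
        )

    def forward(self, x):
        # softmax over the class dimension yields a per-pixel
        # probability map for each class
        return torch.softmax(self.features(x), dim=1)

model = PixelSegmenter()
slice_2d = torch.randn(1, 1, 64, 64)  # one 64x64 slice of an MRI z-stack
probs = model(slice_2d)               # shape: (1, n_classes, 64, 64)
```

Running such a model slice-by-slice over the z-stack and stacking the resulting label maps is one straightforward way to obtain the 3D mask that drives the holographic reconstruction.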
Results: The feedback loop connects the head-up display, the navigation probe, the navigation registration receiver, and the navigation module. All communication is wireless, over Bluetooth and Wi-Fi, and was tested in real time on a phantom head (Figure 2).
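The registration step at the heart of this loop, mapping a tracked probe position into the hologram (image) frame, can be illustrated with a hypothetical 4x4 homogeneous transform. The matrix values and function names below are illustrative; the navigation module's actual interface is not specified in the abstract.

```python
import numpy as np

def apply_registration(T, probe_xyz):
    """Map a tracked probe position into the hologram (image) frame
    using a 4x4 homogeneous registration transform T."""
    p = np.append(probe_xyz, 1.0)  # homogeneous coordinates
    return (T @ p)[:3]

# Hypothetical registration: 90-degree rotation about z plus a translation
T = np.array([[0., -1., 0., 10.],
              [1.,  0., 0., 20.],
              [0.,  0., 1.,  5.],
              [0.,  0., 0.,  1.]])
probe_tip = np.array([1.0, 0.0, 0.0])
image_xyz = apply_registration(T, probe_tip)  # probe tip in image space
```

Each wireless update from the probe would be pushed through such a transform before the head-up display repositions the hologram relative to the surgical site.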
Conclusions: This study lays the groundwork for holographic surgical navigation; the next phase will expand the system to surgeries involving the spine, extremities, and skull base. The system can also improve pre-operative surgical simulation and resident education.
Patient Care: Our research focuses on building a system that integrates with current surgical navigation systems (and their associated surgical tools), registers the locations of surgical tools relative to the surgical site, and provides visual feedback to the surgeon accordingly. If successful, our device will increase the safety and efficacy of image-guided surgeries, enhance surgical workflow, and decrease surgeon anxiety, while offering significant ergonomic improvements over current image guidance systems. The software module that enables the holographic navigation system can also be seamlessly integrated with augmented reality (AR) and virtual reality (VR) headsets for neurosurgical training, because VR and holographic navigation systems share similar development environments (e.g., Unity and Microsoft Visual Studio). A VR headset for neurosurgical training would be useful because it would let surgeons immerse themselves in the anatomical landmarks of interest and visualize them thoroughly before the actual operation.
Learning Objectives: By the conclusion of this session, participants should be able to: 1) Describe the current unmet needs in pre- and intra-operative imaging modalities, 2) Discuss, in small groups, the importance of more intuitive and seamless visualization of key anatomic landmarks, and 3) Identify areas of improvement needed to transform image-guided surgeries.