At first glance, a gamer playing Pokémon Go has little in common with a surgeon saving lives in an operating theatre. But dig a little deeper and you’ll discover that might not be the case for much longer, writes Douglas Bruey, electrical engineering program lead at Synapse
Augmented reality and virtual reality technologies are poised to open up a whole new world of opportunities. We’re already seeing the effects of VR when it comes to gaming. But in future, could AR add a new dimension to surgery? AR and VR both have the ability to alter our perception of the world. AR takes our current reality and adds something to it – virtual objects or information. VR, on the other hand, immerses us in a different – virtual – world.
For many people, AR started in 2013 with Google Glass. The heads-up display delivered two-dimensional content to one eye via a prism projector. However, aside from detecting head movement, it lacked context awareness – limiting its use. The Sony SmartEyeglass came next and provided an increased field of view and improved performance, which allowed the development of applications that recognised objects and provided context-relevant content.
Fast forward to 2016 and Microsoft’s HoloLens appeared – with Kinect-style sensing powering rock-solid anchoring of virtual content. We’ve harnessed this technology to demonstrate what it could mean for the operating theatre of the future, more of which later. It was also in 2016 that VR achieved its first commercial successes with the introduction of the Samsung Gear VR, the Sony PlayStation VR, Oculus Rift and the Valve-powered HTC Vive.
So why has VR come of age now?
In a word – speed. The challenge of VR development is to fool the brain into accepting what is being seen as real. Any delay between the user’s movement and the corresponding movement of the image confuses our senses and can result in a loss of balance or even nausea. The accuracy and latency of visual input relative to a change in head position is critical. Hardware and software technology needed to reach a point where they could work that magic.
Today, displays are small enough to sit on a user’s nose and serve up three times more frames per second than a film – and all at very low latencies. VR systems couple these technologies with ‘pose-tracking’ solutions that quickly and accurately detect the position of a user’s head to close the loop with the visual system. It is the speed and accuracy of the pose-tracking technology that truly differentiates the VR experience.
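To put rough numbers on that claim, here is a quick back-of-the-envelope sketch in Python. The 20ms motion-to-photon budget used below is a widely quoted comfort threshold rather than a figure from any one vendor:

    # Back-of-the-envelope frame budget behind 'three times more frames than film'.
    FILM_FPS = 24.0            # traditional cinema frame rate
    VR_FPS = 3 * FILM_FPS      # ~72Hz, in line with early mobile VR displays

    frame_time_ms = 1000.0 / VR_FPS
    print(f"VR frame time: {frame_time_ms:.1f}ms")           # ~13.9ms

    # Widely quoted comfort threshold for total motion-to-photon delay.
    MOTION_TO_PHOTON_MS = 20.0
    print(f"Budget left for tracking and rendering: "
          f"{MOTION_TO_PHOTON_MS - frame_time_ms:.1f}ms")    # ~6.1ms

With barely 6ms to spare once the display has taken its share, it is easy to see why pose tracking has to be both fast and accurate.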
Solutions targeting the mobile phone market use the inertial measurement unit in the phone to detect the motion of the user’s head and scroll visuals accordingly – but cannot track a user’s absolute position in space. External camera-based tracking is another popular technology, which identifies points on an object and computes the pose using computer vision algorithms – but camera resolution and depth of field limit its range.
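A minimal sketch of that rotation-only (‘3-DoF’) approach is shown below: gyroscope readings are integrated into an orientation quaternion, which tells the system which way the head is facing but not where it is. The function is illustrative, not taken from any shipping SDK:

    import numpy as np

    def integrate_gyro(q, omega, dt):
        # One 3-DoF tracking step: advance orientation quaternion q (w, x, y, z)
        # by body-frame angular rate omega (rad/s) over dt seconds, using the
        # quaternion derivative q_dot = 0.5 * q * (0, omega).
        w, x, y, z = q
        wx, wy, wz = omega
        q_dot = 0.5 * np.array([-x*wx - y*wy - z*wz,
                                 w*wx + y*wz - z*wy,
                                 w*wy - x*wz + z*wx,
                                 w*wz + x*wy - y*wx])
        q = q + q_dot * dt
        return q / np.linalg.norm(q)  # renormalise to keep a unit quaternion

    # Orientation is recoverable this way; position is not. Double-integrating
    # the accelerometer drifts within seconds, which is why an IMU alone
    # cannot track where in the room the head actually is.
    q = np.array([1.0, 0.0, 0.0, 0.0])
    for _ in range(500):                  # one second of 500Hz gyro samples
        q = integrate_gyro(q, (0.0, 1.0, 0.0), 0.002)
    print(q)                              # ~1 radian of rotation about y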
The state of the art today is Valve’s SteamVR Tracking technology, which uses scanning lasers to triangulate the position of tracked objects with sub-millimetre accuracy in a 25 square metre room. Outside of gaming, SteamVR Tracking is poised to make an impact on training simulations, physical and psychological therapy, industrial control, architecture and design. But VR is not the only technology that stands to benefit from SteamVR Tracking. The user experience in AR systems is also tied to the speed and accuracy of pose tracking. Internal cameras and other systems can solve many of AR’s challenges – including depth perception, object recognition and pose tracking.
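The principle behind the laser approach is easy to sketch. Each base station emits a sync flash and then sweeps a laser plane across the room at a fixed rotation rate; the time between flash and laser hit encodes an angle, and two perpendicular sweeps define a ray from the station to each photodiode on the tracked object. The timings and axis conventions below are simplified assumptions for illustration:

    import numpy as np

    SWEEP_PERIOD_S = 1.0 / 60.0  # assumed rotor period; v1 rotors spin at ~60Hz

    def sweep_angle(t_sync, t_hit):
        # Angle swept by the laser plane between the sync flash and the
        # moment it crossed the photodiode.
        return 2.0 * np.pi * (t_hit - t_sync) / SWEEP_PERIOD_S

    # A horizontal and a vertical sweep give azimuth and elevation, which
    # together define a ray from the base station through the sensor.
    az = sweep_angle(0.0, 2.1e-3)   # example timings, purely illustrative
    el = sweep_angle(0.0, 3.4e-3)
    ray = np.array([np.sin(az) * np.cos(el),
                    np.sin(el),
                    np.cos(az) * np.cos(el)])
    print(ray / np.linalg.norm(ray))

Rays from multiple sensors, or from a second base station, are then intersected to recover the full pose of the tracked object.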
However, relying on the ‘inside-out’ tracking of a headset’s camera may not be sufficient in all environments. In a typical operating theatre today, for example, there are numerous charts, consoles and displays to support a surgical procedure – with new sensing and imaging technologies adding to this all the time. Add in lab results, patient history and a surgical plan, and the presentation of information becomes a challenge. How can we make all of this available to the surgeon without it becoming a distraction?
AR could replace physical screens with floating monitors, improving visibility, reducing clutter and enabling ‘weightless’ reconfiguration – positioning with a simple gesture. Virtual screens could be summoned to show information as and when it is needed, with CT scan slices visualised as a three-dimensional model. Now imagine if the 3D scan was overlaid on to a patient. Such a step would, in effect, give a surgeon X-ray vision, allowing a CT scan to be reviewed relative to actual anatomy.
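Anchoring that overlay means solving a registration problem: finding the rigid transform that maps points in the scan’s coordinate system on to the same landmarks tracked on the patient. A minimal sketch using the standard SVD-based (Kabsch) solution, with illustrative function and variable names:

    import numpy as np

    def register_rigid(ct_pts, patient_pts):
        # Best-fit rotation R and translation t mapping CT-space fiducials
        # (Nx3 array) on to the same fiducials tracked on the patient.
        ct_c, pa_c = ct_pts.mean(axis=0), patient_pts.mean(axis=0)
        H = (ct_pts - ct_c).T @ (patient_pts - pa_c)
        U, _, Vt = np.linalg.svd(H)
        # Guard against a reflection creeping in through the SVD.
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = pa_c - R @ ct_c
        return R, t  # every scan point p then renders at R @ p + t

Three or more non-collinear fiducials are enough to pin the scan to the patient; the better the tracking, the more convincingly the virtual anatomy stays put.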
Laparoscopic tool handles could be ‘pose tracked’ to give the surgeon ‘virtual sight’ of the tool tips hidden from normal view inside the patient. We’ve used today’s technology to bring this augmented operating theatre to life in a demonstration of what is becoming possible. Of course, the clinical reality is still some way off – but it’s an interesting glimpse into the future. Despite great technological strides, AR is still an infant technology. And visualisation is only part of the problem – it is of limited use without a convincing, natural user interface.
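Returning to those pose-tracked tool handles for a moment: the ‘virtual sight’ calculation itself is a single rigid-body transform, assuming a one-off calibration has measured the fixed offset from handle to tip. A sketch, with illustrative names and numbers:

    import numpy as np

    def tool_tip_position(handle_R, handle_p, tip_offset_local):
        # Tip position in theatre coordinates, from the tracked handle pose
        # (rotation matrix and position) and the calibrated handle-to-tip offset.
        return handle_R @ tip_offset_local + handle_p

    tip_offset = np.array([0.0, 0.0, 0.330])  # e.g. a 330mm shaft along local z
    R = np.eye(3)                             # tracked handle orientation
    p = np.array([0.10, 1.20, 0.45])          # tracked handle position, metres
    print(tool_tip_position(R, p, tip_offset))  # -> [0.10, 1.20, 0.78]

The hard part is not the arithmetic but doing it fast and accurately enough, which is exactly where tracking technologies such as SteamVR Tracking earn their keep.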
With improvements in augmented hardware guaranteed, the next battleground will surely be the user interface. We see the technologies that drive AR and VR as very complementary and potentially convergent. There are many uses outside of gaming that these will enable and we’re already working on diverse applications with clients. Moore’s law, as it applies here, says headsets will get smaller, lighter and faster – so ultimately we will only be limited by our imagination.