Gesture-controlled interactive three dimensional anatomy: a novel teaching tool in head and neck surgery
Journal of Otolaryngology - Head & Neck Surgery volume 43, Article number: 38 (2014)
There is a need for innovative anatomic teaching tools. This paper describes a three-dimensional (3D) tool employing the Microsoft Kinect™. Using this instrument, 3D temporal bone anatomy can be manipulated with hand gestures, in the absence of a mouse or keyboard.
CT temporal bone data are imported into an image processing program and segmented. This information is then exported in polygonal mesh format to an in-house designed 3D graphics engine with an integrated Microsoft Kinect™. Motion in the virtual environment is controlled by tracking hand position relative to the user's left shoulder.
The tool successfully tracked scene depth and user joint locations. This permitted gesture-based control over the entire 3D environment. Stereoscopy was deemed appropriate with significant object projection, while still maintaining the operator’s ability to resolve image details. Specific anatomical structures can be selected from within the larger virtual environment. These structures can be extracted and rotated at the discretion of the user. Voice command employing the Kinect’s™ intrinsic speech library was also implemented, but is easily confounded by environmental noise.
There is a need for the development of virtual anatomy models to complement traditional education. Initial development is time intensive. Nonetheless, our novel gesture-controlled interactive 3D model of the temporal bone represents a promising interactive teaching tool utilizing a novel interface.
Three-dimensional (3D) virtual imagery can be an important tool for understanding the spatial relationships between distinct anatomical structures. This is particularly relevant in regions for which classical dissection techniques have limitations. For example, the complexity and microscopic nature of head and neck anatomy has proven an ongoing challenge for learners. Within the temporal bone, considerable soft tissue structures are densely situated in bone, placing severe demands on visuo-spatial capabilities. New learners and senior residents must grapple with complex normative and pathologic conditions, some of which occur only infrequently. Here, novel tools are needed to facilitate spatial anatomic learning and to adequately prepare the professional trainee for the practical demands of surgery. Previous research has indicated that the learning experience of students is positively affected when 3D teaching tools are used in parallel with traditional teaching methods. 3D computer simulations have been introduced in the teaching of the middle and inner ear, orbital anatomy, and dental anatomy, with encouraging results.
Medical students still learn the anatomy of this region primarily through illustrated texts, many of which have been in print for decades, but dissection of the temporal bone itself is usually limited to senior trainees, largely due to the relative scarcity of samples available for practicing operative approaches.
With the advent of high-speed computing, 3D graphical models of complex anatomy have become possible. Actual interaction with 3D anatomical models can occur at several levels. In the simplest form, they may allow the user to examine an object in 3D or from different viewpoints. In more complex cases, a user may be able to select components for closer study, move them about and examine supplementary data such as labels, radiographs and animations. At the highest levels, users may interact in a natural way with the model, moving it by grasping it with a hand or altering it by cutting or drilling with a tool. The addition of gesture-based interaction to stereoscopic models combines intuitive interaction with immersive visualization. It is postulated that such a system could alleviate cognitive overload by providing a learner with an environment in which their natural actions act on objects, without the need for complex input devices.
While the technology and accompanying literature surrounding 3D imagery develops, education needs to continue to advance in the setting of both time and fiscal constraints. In this paper we describe a novel gesture-controlled 3D teaching tool in which the three dimensional temporal bone anatomy is manipulated with the use of hand gestures through a Microsoft Kinect™, in the absence of mouse and keyboard. Key structures are easily maneuvered and can be removed and better examined in reference to the whole. This novel tool provides a learning environment in which the physical involvement of the user may enhance the learning experience and increase motivation.
In order to take advantage of recent advances in technology we have developed a 3D stereoscopic display which uses the Microsoft Kinect™ (Microsoft Corporation, Redmond, Washington, USA) to allow gesture control of anatomical images. Images can be selected, translated, magnified and rotated with simple body motions. The system uses 3D models extracted from CT data by segmentation of anatomical structures of interest. The models are then displayed stereoscopically by a 3D graphics engine which incorporates gesture control from the Microsoft Kinect™. What follows is a description of the system and the process by which anatomical information is converted from tomographic data to a gesture-based anatomy teaching tool.
Our aim is to provide a teaching tool for patient-specific anatomy, and we therefore use actual CT images as its basis. In our prototype, cadaveric temporal bone images are acquired at 0.15 mm slice thickness (General Electric MicroCT - eXplore speCZT) and imported into a 3D image processing program (Mimics v. 11.02, Materialise NV, Leuven, Belgium). The dataset is resampled to a slice interval of 0.1 mm to aid volume interpolation. Anatomical regions of interest, such as the temporal bone, internal carotid artery and facial nerve, are identified by segmentation. Initial segmentation is carried out by thresholding the CT data by density; for example, the temporal bone is identified by retaining all voxels with densities between 382 and 3071 Hounsfield units (HU). Soft tissue regions and ossicles are manually segmented by visual inspection of the data while varying the density threshold; an expert then inspects the margins of the rough segmentation and adds or removes voxels as needed, based on knowledge of the anatomy. For example, with the threshold set below -50 HU, the tympanic membrane can be partly resolved and its margins extrapolated by estimation. To ensure that the membrane appears intact in the final model, it is thickened to 2-3 voxels.
The segmented anatomical models are converted to 3D polygonal mesh format and exported in stereolithography file format (STL) (Figure 1). The resulting models can be displayed in 3D, using a commercially available 3D graphics card (Nvidia GeForce GTX560 - Santa Clara, California, USA), active shutter glasses and either a 3D capable monitor or projector. We have developed our own 3D anatomical graphics engine which loads and renders multiple large polygonal mesh models in 3D and allows users to manipulate camera positions as well as select and manipulate individual models.
Our graphics engine is developed in Microsoft Visual Studio 2008 using the Microsoft Foundation Class software library and the C++ programming language. The Microsoft Kinect™ Software Development Kit (MKSDK) and the NVIDIA Application Programming Interface (API) were integrated. To render in stereoscopic 3D (NVIDIA 3D Vision), the DirectX 11.0 API is employed; 3D Vision is automatically engaged when an application is set to full screen. The hardware and software requirements needed to run our engine are widely available and accessible to the general user.
The MKSDK uses input from a colour camera and an infrared depth sensor to detect human motion. It provides information on scene depth and colour (Figure 2), as well as the locations of the user's joints (Figure 3). It also contains an intrinsic speech library that facilitates speech recognition using a built-in microphone. Using the MKSDK, our software integrates user body motions detected by the Kinect™ into our anatomical graphics engine.
Our software uses the Kinect™ to allow an operator to navigate in 3D space and to select specific anatomical structures of interest from within the larger virtual environment (Figure 4). These structures can then be extracted and rotated in all planes at the discretion of the user.
To move in 3D space, both the left and right hand are tracked relative to the position of the left shoulder. The left hand controls translational movement, and the right hand controls rotation and orientation. Two cubes, shown at the bottom of both Figures 2 and 4, are used to visualize hand locations. A preset distance from the hand to the shoulder is defined as the center of each cube. When the hand, represented by a small sphere, is centered in a cube, no movement or rotation occurs. As the hand moves away from the center, camera movement or rotation is proportional to the hand’s distance from the center. When the user’s hand lies outside of the cube for several seconds, motion control of the scene is disabled. Motion control can be re-enabled by again placing one’s hand in the center reference position.
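The dead-zone mapping described above can be sketched for a single axis. This is a minimal illustration of the described behaviour, not the authors' engine code; the cube half-width, the gain constant and all names are assumptions.

```cpp
#include <cassert>
#include <cmath>

// Result of mapping one axis of hand offset to camera motion.
struct AxisResult {
    double velocity;  // camera translation/rotation rate for this axis
    bool active;      // false once the hand leaves the control cube
};

// offset: signed distance (m) of the hand from the cube's center along
// one axis; halfWidth: half the cube's edge length; gain: assumed
// proportional constant. Velocity grows with distance from center and
// control is dropped when the hand exits the cube.
AxisResult axisVelocity(double offset, double halfWidth, double gain) {
    if (std::fabs(offset) > halfWidth)
        return {0.0, false};       // hand outside the cube: control disabled
    return {gain * offset, true};  // proportional to distance from center
}
```

With the hand centered the camera is still; pushing the hand toward a cube face ramps the motion up smoothly, which matches the proportional control the text describes.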
The NVIDIA API allows the software to control the depth and convergence of 3D Vision in our system. Depth settings control the illusion of depth in the 3D image; convergence settings control the distance from the camera at which objects appear to "pop" out of the screen. If these settings are too low, the 3D stereoscopy may not be noticeable; if too large, the images can diverge and fail to be resolved as a single image, resulting in eye strain.
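The interplay of depth and convergence can be illustrated with a toy shifted-frustum stereo model. This is our own simplified assumption, not NVIDIA's documented internals: on-screen parallax is zero at the convergence distance, negative (the object appears in front of the screen) for nearer objects, and approaches the eye separation for distant ones.

```cpp
#include <cassert>

// Toy model (an assumption for illustration): signed on-screen parallax
// for a shifted-frustum stereo pair. eyeSep scales with the "depth"
// setting; convergence is the distance rendered at the screen plane.
double screenParallax(double eyeSep, double convergence, double objectDist) {
    // Zero at the convergence distance, negative nearer (pop-out),
    // tending toward eyeSep for very distant objects.
    return eyeSep * (1.0 - convergence / objectDist);
}
```

Under this model, raising convergence pushes more of the scene into negative parallax (pop-out), while raising eyeSep stretches the parallax range, which is why overly large settings cause divergence and eye strain.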
When the camera is at a desired location, the user can switch modes to select objects of interest for closer inspection. The operator switches modes either by tapping the left shoulder with the right hand or by a voice command. When selection mode is activated, the left cube controls a sphere that can move within the 3D scene to highlight any desired structure. Once an object is highlighted, it can be selected by another shoulder tap or voice command. Once an object is selected (Figure 4), the left hand controls the location of the structure while the right hand controls its orientation. The 3D vision effect is set to bring the selected object towards the user, enabling a "pop out" so the anatomy can be observed more closely and manipulated separately from the larger model.
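The shoulder-tap mode switch can be sketched as a simple proximity test on tracked joints. In practice the coordinates would come from the Kinect™ skeleton stream; the joint struct, the toggle helper and the 0.15 m tap radius below are our illustrative assumptions, not values from the paper.

```cpp
#include <cassert>
#include <cmath>

// A tracked skeleton joint position in metres (illustrative).
struct Joint { double x, y, z; };

enum class Mode { Navigate, Select };

// True when the right hand is close enough to the left shoulder to
// count as a "tap" (0.15 m radius is an assumed threshold).
bool isShoulderTap(const Joint& rightHand, const Joint& leftShoulder) {
    double dx = rightHand.x - leftShoulder.x;
    double dy = rightHand.y - leftShoulder.y;
    double dz = rightHand.z - leftShoulder.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz) < 0.15;
}

// Each tap flips between camera navigation and object selection.
Mode toggleMode(Mode m) {
    return m == Mode::Navigate ? Mode::Select : Mode::Navigate;
}
```

A real implementation would also debounce the tap (firing once per touch rather than every frame the hand is near the shoulder).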
New technologies are advocated not to replace, but to complement, classic learning. These modalities are best perceived as fueling a renaissance in anatomy learning rather than supplanting cadaveric education. They represent a promising opportunity in medical education. Successful integration into standard training and patient care requires significant interplay between anatomists, clinicians and engineers. Collaborative development of educational and manipulative tools needs to advance before global acceptance is assured.
Requisite to any teaching model is the recognition that anatomy is fundamental to responsible and effective medical education and patient management, and that the deconstruction of anatomic education, with its associated undermining of crucial knowledge and skills, may produce under-qualified doctors. Medical education needs to be enduring, not solely geared to examinations. Patient-oriented, safe care rests on a sound anatomical foundation laid during the formative years and maintained through regular, lifelong learning.
Initial costs in the setup and design of 3D digital medical education tools may seem prohibitive. A cost comparison between physical and digital dissection was undertaken by Hisley et al. in 2007. Physical dissection appeared more economical when a single cadaver was compared to the initial setup of a virtual dissected specimen. However, even accounting for multiple workstations and the accrual of a broad anatomic library, digital dissection quickly becomes the less expensive option when considered longitudinally.
Unfortunately, the development of three dimensional models is time intensive. The constructed images are highly accurate and drawn from real anatomy, but ultimately remain a stylized abstraction. Additionally, it is difficult to determine the appropriate level of detail to include, as a teaching module may be used by disparate learners. Dissimilar file formats are employed by different institutions, and the sharing of information and crafted modules is complicated for proprietary programs. If the data are obtained from histologic samples, difficulties inherent in embalming, freezing and slicing may cause irregularities within the data sets and ultimately inaccuracies in the anatomy.
Case-specific three dimensional visualization is now possible. The process is limited by the requisite time for segmentation; however, complex, variant and unusual cases may justify such an investment. The near future holds the promise of automated segmentation, further encouraging these newer technologies. The current iteration of the Kinect™ can also be employed in the operative theatre, allowing the user to maintain sterility while providing valuable spatial information on the relationship between normal and pathologic anatomical structures, with an aim of preserving the former.
There is a great need for the development of advanced virtual anatomy models to complement traditional education. Our novel gesture-controlled interactive 3D model of temporal bone anatomy comprises a promising teaching tool, not only for the early learner, but in particular for the advanced learner with an aim to better prepare professionals for advanced spatial comprehension in surgical practice.
JH provided the literature review and was responsible for the study design and was the major contributor to the written manuscript. BU supplied engineering expertise on the test equipment and contributed to the study design and data analysis. JK offered engineering expertise on testing equipment and the study protocol. JP carried out data analysis and contributed to writing the manuscript. SHK contributed to the literature review, study design and editing of the manuscript. All authors read and approved of the final manuscript.
Yeung JC, Fung K, Wilson TD: Development of a computer-assisted cranial nerve simulation from the visible human dataset. Anat Sci Educ. 2011, 4 (2): 92-97. 10.1002/ase.190.
Venail F, Deveze A, Lallemant B, Guevara N, Mondain M: Enhancement of temporal bone anatomy learning with computer 3D rendered imaging software. Med Teach. 2010, 32 (7): e282-e288. 10.3109/0142159X.2010.490280.
Nicholson DT, Chalk C, Funnell WR, Daniel SJ: Can virtual reality improve anatomy education? A randomised controlled study of a computer-generated three-dimensional anatomical ear model. Med Educ. 2006, 40 (11): 1081-1087. 10.1111/j.1365-2929.2006.02611.x.
Glittenberg C, Binder S: Using 3D computer simulations to enhance ophthalmic training. Ophthalmic Physiol Opt. 2006, 26 (1): 40-49. 10.1111/j.1475-1313.2005.00358.x.
Nance ET, Lanning SK, Gunsolley JC: Dental anatomy carving computer-assisted instruction program: an assessment of student performance and perceptions. J Dent Educ. 2009, 73 (8): 972-979.
Agur AMR, Lee MJ, Anderson JE: Grant's Atlas of Anatomy. 1991, Williams & Wilkins, Baltimore
Netter FH, Colacino S: Atlas of Human Anatomy. 1997, Novartis, East Hanover
Gray H, Williams PL, Bannister LH: Gray's Anatomy: The Anatomical Basis of Medicine and Surgery. 1995, Churchill Livingstone, New York
Garg AX, Norman G, Sperotable L: How medical students learn spatial anatomy. Lancet. 2001, 357 (9253): 363-364. 10.1016/S0140-6736(00)03649-7.
Temkin B, Acosta E, Malvankar A, Vaidyanath S: An interactive three-dimensional virtual body structures system for anatomical training over the internet. Clin Anat. 2006, 19 (3): 267-274. 10.1002/ca.20230.
George AP, De R: Review of temporal bone dissection teaching: how it was, is and will be. J Laryngol Otol. 2010, 124 (2): 119-125. 10.1017/S0022215109991617.
Fried MP, Uribe JI, Sadoughi B: The role of virtual reality in surgical training in otorhinolaryngology. Curr Opin Otolaryngol Head Neck Surg. 2007, 15 (3): 163-169. 10.1097/MOO.0b013e32814b0802.
Schubert O, Sartor K, Forsting M, Reisser C: Three-dimensional computed display of otosurgical operation sites by spiral CT. Neuroradiology. 1996, 38 (7): 663-668. 10.1007/s002340050330.
Rodt T, Sartor K, Forsting M, Reisser C: 3D visualisation of the middle ear and adjacent structures using reconstructed multi-slice CT datasets, correlating 3D images and virtual endoscopy to the 2D cross-sectional images. Neuroradiology. 2002, 44 (9): 783-790. 10.1007/s00234-002-0784-0.
Turmezei TD, Tam MD, Loughna S: A survey of medical students on the impact of a new digital imaging library in the dissection room. Clin Anat. 2009, 22 (6): 761-769. 10.1002/ca.20833.
Lufler RS, Zumwalt AC, Romney CA, Hoagland TM: Incorporating radiology into medical gross anatomy: does the use of cadaver CT scans improve students' academic performance in anatomy?. Anat Sci Educ. 2010, 3 (2): 56-63.
Luursema J-M, Zumwalt AC, Romney CA, Hoagland TM: The role of stereopsis in virtual anatomic learning. Interacting with Comput. 2008, 20: 455-460. 10.1016/j.intcom.2008.04.003.
Jacobson S, Epstein SK, Albright S, Ochieng J, Griffiths J, Coppersmith V, Polak JF: Creation of virtual patients from CT images of cadavers to enhance integration of clinical and basic science student learning in anatomy. Med Teach. 2009, 31 (8): 749-751. 10.1080/01421590903124757.
Hisley KC, Anderson LD, Smith SE, Kavic SM, Tracy JK: Coupled physical and digital cadaver dissection followed by a visual test protocol provides insights into the nature of anatomical knowledge and its evaluation. Anat Sci Educ. 2008, 1 (1): 27-40. 10.1002/ase.4.
Petersson H, Sinkvist D, Wang C, Smedby O: Web-based interactive 3D visualization as a tool for improved anatomy learning. Anat Sci Educ. 2009, 2 (2): 61-68. 10.1002/ase.76.
Crossingham JL, Jenkinson J, Woolridge N, Gallinger S, Tait GA, Moulton CA: Interpreting three-dimensional structures from two-dimensional images: a web-based interactive 3D teaching model of surgical liver anatomy. HPB (Oxford). 2009, 11 (6): 523-528. 10.1111/j.1477-2574.2009.00097.x.
Rodt T, Burmeister HP, Bartling S, Kaminsky J, Schwab B, Kikinis R, Backer H: 3D-Visualisation of the middle ear by computer-assisted post-processing of helical multi-slice CT data. Laryngorhinootologie. 2004, 83 (7): 438-444. 10.1055/s-2004-814370.
Gould DJ, Terrell MA, Fleming J: A usability study of users' perceptions toward a multimedia computer-assisted learning tool for neuroanatomy. Anat Sci Educ. 2008, 1 (4): 175-183. 10.1002/ase.36.
Yip GW, Rajendran K: SnapAnatomy, a computer-based interactive tool for independent learning of human anatomy. J Vis Commun Med. 2008, 31 (2): 46-50. 10.1080/17453050802241548.
Trelease RB, Rosset A: Transforming clinical imaging data for virtual reality learning objects. Anat Sci Educ. 2008, 1 (2): 50-55. 10.1002/ase.13.
Nguyen N, Wilson TD: A head in virtual reality: development of a dynamic head and neck model. Anat Sci Educ. 2009, 2 (6): 294-301. 10.1002/ase.115.
Vazquez PP: An interactive 3D framework for anatomical education. Int J Comput-Assist Radiol Surg. 2008, 3: 511-524. 10.1007/s11548-008-0251-4.
Hariri S, Rawn C, Srivastava S, Youngblood P, Ladd A: Evaluation of a surgical simulator for learning clinical anatomy. Med Educ. 2004, 38 (8): 896-902. 10.1111/j.1365-2929.2004.01897.x.
Brenton H: Using multimedia and Web3D to enhance anatomy teaching. Comput Educ. 2007, 49 (1): 32-53. 10.1016/j.compedu.2005.06.005.
McRackan TR, Reda FA, Rivas A, Noble JH, Dietrich MS, Dawant BM, Labadie RF: Comparison of cochlear implant relevant anatomy in children versus adults. Otol Neurotol. 2012, 33 (3): 328-334. 10.1097/MAO.0b013e318245cc9f.
Reda FA, Noble JH, Rivas A, McRackan TR, Labadie RF, Dawant BM: Automatic segmentation of the facial nerve and chorda tympani in pediatric CT scans. Med Phys. 2011, 38 (10): 5590-5600. 10.1118/1.3634048.
The authors thank Ms. Sharmin Farzana-Khan for her excellent assistance with the segmentation process.
We are grateful to have received financial support from 1) the Health Sciences Center Foundation, 2) the Virtual Reality Application Fund, Government of Manitoba, and 3) the Dean's Strategic Research Fund of the Faculty of Medicine, University of Manitoba.
The authors declare that they have no competing interests.