Dental operations such as drilling, polishing and restraining tongue movement with a mirror involve material removal or shape modification. To develop a realistic training system, the virtual mouth model must be both mathematically interactive and visually dynamic.
 uses a surface deformation methodology to simulate drilling both haptically and graphically. The virtual mouth model is a triangle-mesh surface model generated from laser scans. Three interaction statuses are associated with the model: (1) Separation status: the tool is not in contact with any object in the scene. (2) Contact status: the tool contacts the surface model without exceeding the force threshold. (3) Cutting/Drilling status: the force applied to the surface model exceeds the force threshold, material removal begins and the surface model starts to deform.
The triangle mesh is generated with information such as density and tissue type. The mesh is deformed by subdividing the affected triangles into a larger number of triangles. Graphically, the update rate is around 15-20 Hz, whereas the haptic update rate is around 1 kHz. To determine the amount of deformation, a local surface model is generated in each time loop and remapped against the global model. Using a pure surface model can cause haptic discontinuity, and complex shapes lead to long computation times for collision detection against large polygon counts. Most importantly, the resolution of the surface mesh must be high relative to the drill size.
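The local refinement step above can be illustrated with a simple 1-to-4 midpoint subdivision; this is a sketch only, since the cited work does not specify the exact subdivision scheme:

```python
def midpoint(a, b):
    """Midpoint of two 3-D vertices."""
    return tuple((a[i] + b[i]) / 2.0 for i in range(3))

def subdivide(tri):
    """Split one triangle (a, b, c) into four smaller triangles by
    inserting edge midpoints, increasing the local mesh resolution."""
    a, b, c = tri
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
```

Applying this recursively around the drill contact point raises the mesh density only where deformation occurs.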
Hybrid Data Structure using Volumetric and Surface Modelling
[5-12] uses both surface and volume mouth models. Similar to , when the tool is in contact status, the haptic force feedback is generated from the surface model. The key difference is that the haptic simulation of material removal is rendered via the volumetric model. In every simulation loop, the volumetric data must be converted to a surface mesh for use in contact status. This gives more realistic force feedback when the shapes of the objects are complex. The graphical rendering is updated at 15-30 Hz, whereas the haptics is updated at a minimum of 1 kHz.
If only the volume model is used for haptic contact rendering, the force feedback becomes very discontinuous, with an extremely rough texture (Figure 7). The key challenge in using a hybrid data structure is the speed of the volume-to-surface remapping, in other words, the transition between drilling status and contact status. Converting the volume data back to a surface model allows smooth surface haptics when the tool is in contact status.
Figure 7. Haptic Rendering on Surface Mesh and Volume Model
 introduces a novel method for solving the problem of slow conversion from volume to surface model. Their method uses an intermediate surface-smoothing transition model while drilling. During drilling, the drill head is represented as a sphere. The amount of material to be removed is the target model, which is constructed using constructive solid geometry (CSG). While drilling, the user can only feel the shape of the drill head (the target model) in the drilling direction. Once the drill head is no longer touching particular parts of the surface, those surfaces are remapped with the CSG data to produce a high-quality graphical surface, ready for the next contact status. CSG is a standard 3D graphics operation; it allows fast and accurate surface reconstruction without heavy computation. Figure 8 shows the drilling direction and the target-model update process.
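The target-model construction can be sketched with signed distance functions, a common way to implement CSG; here a sphere drill head is subtracted from a tooth volume (function names and the implicit-surface representation are illustrative assumptions, not the cited implementation):

```python
import math

def sphere_sdf(center, radius):
    """Signed distance to a sphere: negative inside, positive outside."""
    def f(p):
        return math.dist(p, center) - radius
    return f

def csg_subtract(solid, tool):
    """CSG difference (solid minus tool) on signed distance functions:
    a point stays solid only if it is inside `solid` and outside `tool`."""
    def f(p):
        return max(solid(p), -tool(p))
    return f
```

Querying the combined function tells the renderer whether any point has been removed by the drill, without rebuilding the whole surface.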
Figure 8. Relationship between the Target Model and Haptic Rendering
Other surface-remapping methods include marching cubes [10,12]. The marching cubes algorithm extracts a polygon mesh from a volume model voxel by voxel. The eight corner samples of each cube are classified against the scalar field as inside or outside, giving 256 possible corner configurations, which reduce to 15 unique cases under rotation; the surface is then interpolated within each cube accordingly. The resulting surface mesh is polygon based. The downside of this method is the possibility of a rough surface finish and haptic discontinuity.
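The case selection at the heart of marching cubes can be sketched as follows: the inside/outside state of a cube's eight corners packs into an 8-bit index, which is where the 256 configurations come from:

```python
def cube_index(corner_values, iso=0.0):
    """Pack the inside/outside state of a cube's eight corner samples
    into one byte (0-255); this index selects the triangulation case
    from the marching cubes lookup table."""
    index = 0
    for bit, value in enumerate(corner_values):
        if value < iso:          # this corner lies inside the solid
            index |= 1 << bit
    return index
```

Indices 0 and 255 (fully outside, fully inside) produce no triangles; the remaining cases map to the 15 unique triangulations.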
Our system uses a voxel volume model for haptic drilling and material removal. To solve the volume-to-surface conversion problem, the majority of the data is generated before the simulation starts. These data are stored directly in memory for fast and efficient processing. The conversion algorithm uses them to remap the surface in every haptic simulation time loop.
Data Structure – Voxel Cube Array
When the system is initiated, a voxel cube array is generated. Each voxel is an object created from the class Voxel. The voxel cube array stores the volume model as an array of voxel objects. Table 1 shows the information stored in each voxel object.
Table 1. Information stored in each voxel object
- Neighbour voxel memory addresses
- Reaction force for the robot
- Friction of that voxel
N1 to N6 record the locations of the six voxels neighbouring the current voxel object, as shown in Figure 9. The volume model is set up in a specific way so that each neighbour, e.g. neighbour 1, is always in the same direction for every voxel.
Figure 9. Neighbour (N) voxel directions
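The voxel object described above might be sketched as follows; the field names and the pairing of opposite directions are illustrative assumptions, since the paper only lists the stored information:

```python
class Voxel:
    """One cell of the Voxel Cube Array. The paper lists neighbour
    addresses (N1-N6), a reaction force for the robot and a friction
    value; the attribute names here are illustrative."""
    def __init__(self, reaction_force=0.0, friction=0.0):
        self.neighbours = [None] * 6          # N1..N6 neighbour references
        self.reaction_force = reaction_force  # force fed back to the robot
        self.friction = friction              # friction of this voxel

def link(a, b, direction):
    """Connect voxel a to voxel b along one axis; opposite directions
    are paired (0<->1, 2<->3, 4<->5, an assumed convention)."""
    a.neighbours[direction] = b
    b.neighbours[direction ^ 1] = a
```

Because every voxel stores direct references to its six neighbours, adjacency queries never require a search over the array.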
Data Structure – Surface Array
By knowing exactly where each voxel is and which voxels surround it, the system can very quickly extract the voxels lying on the surface of the volume model based on each voxel's condition. For example, if the 12th voxel has an empty field in N4, the system knows that the 12th voxel is a surface voxel, with side 4 forming part of the model surface. The Surface Array is an array of pointers to the memory addresses of the surface voxels inside the Voxel Cube Array.
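This surface-extraction rule can be sketched directly: any voxel with an empty neighbour slot has an exposed face and belongs in the Surface Array (the minimal Voxel stand-in below is illustrative):

```python
class Voxel:
    """Minimal stand-in for a voxel: only the neighbour slots matter here."""
    def __init__(self, neighbours=None):
        self.neighbours = neighbours or [None] * 6  # N1..N6

def extract_surface(voxel_cube):
    """Build the Surface Array: references to every voxel with at least
    one empty neighbour slot, i.e. at least one exposed face."""
    return [v for v in voxel_cube if any(n is None for n in v.neighbours)]
```

The test is a constant-time check per voxel, so the full pass scales linearly with the volume size.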
Both the Voxel Cube Array and the Surface Array are generated while the simulation system is initialising. During drilling haptic rendering, the conversion algorithm constantly alters the data inside the Surface Array based on the information provided by the Voxel Cube Array.
The important issues with volume-to-surface conversion are: (1) speed: the conversion must be done without searching, and all algorithmic operations must use direct indexing; (2) triangle vertex winding: all triangles must be built in a consistent (clockwise or anti-clockwise) order so that the normal vectors point the right way; (3) the Surface Array may be traversed only once.
The algorithm starts at the top of the Surface Array and builds triangles in anti-clockwise order. The Voxel Cube Array provides all the neighbouring information, so no searching is needed. When all possible triangles associated with a voxel have been built, that voxel is marked as built in the Surface Array, and the algorithm will not attempt to build any further triangles with it.
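A minimal sketch of this single pass, assuming dictionary-based voxels that store their integer position and six neighbour references; the direction indexing and corner tables are illustrative assumptions, not the paper's exact layout:

```python
# Corner offsets for each of the six face directions, listed
# anti-clockwise when viewed from outside the voxel, so that every
# triangle normal points outwards. The direction indexing
# (0:+x, 1:-x, 2:+y, 3:-y, 4:+z, 5:-z) is an assumed convention.
FACE_CORNERS = {
    0: [(1, 0, 0), (1, 1, 0), (1, 1, 1), (1, 0, 1)],
    1: [(0, 0, 0), (0, 0, 1), (0, 1, 1), (0, 1, 0)],
    2: [(0, 1, 0), (0, 1, 1), (1, 1, 1), (1, 1, 0)],
    3: [(0, 0, 0), (1, 0, 0), (1, 0, 1), (0, 0, 1)],
    4: [(0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)],
    5: [(0, 0, 0), (0, 1, 0), (1, 1, 0), (1, 0, 0)],
}

def build_triangles(surface_array):
    """One pass over the Surface Array: two triangles per exposed face,
    no searching, and each voxel is marked as built once processed."""
    triangles = []
    for voxel in surface_array:
        if voxel.get("built"):
            continue
        x, y, z = voxel["pos"]
        for d, neighbour in enumerate(voxel["neighbours"]):
            if neighbour is None:                       # exposed face
                a, b, c, e = [(x + dx, y + dy, z + dz)
                              for dx, dy, dz in FACE_CORNERS[d]]
                triangles += [(a, b, c), (a, c, e)]     # quad -> 2 triangles
        voxel["built"] = True
    return triangles
```

Every lookup is a direct index into the voxel's own neighbour list, which is what keeps the pass search-free.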
Adding or removing an item in an array takes more processing power than editing existing array data. The algorithm therefore keeps dummy array spaces. If a surface voxel is to be removed from the scene, a dummy space is ready to be edited into a new surface node. The voxel removed from the scene is marked as having no neighbours in the Surface Array; the algorithm does not remove that voxel's pointer from the Surface Array until the user stops drilling. The whole process involves no searching because the algorithm already has full knowledge of every voxel.
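The dummy-space scheme might be sketched as a fixed-capacity array with a free-slot stack; this is an illustrative layout under the assumption that each voxel records its own slot index, since the paper does not give the exact structure:

```python
class SurfaceArray:
    """Fixed-capacity Surface Array with dummy slots: removal only marks
    a slot, new surface voxels reuse marked slots via a free-slot stack,
    and each voxel records its own index, so no searching is needed."""
    def __init__(self, capacity):
        self.slots = [None] * capacity                 # None = dummy slot
        self.free = list(range(capacity - 1, -1, -1))  # stack of dummy slots

    def add(self, voxel):
        i = self.free.pop()       # O(1): take a ready-made dummy slot
        self.slots[i] = voxel
        voxel["slot"] = i         # voxel remembers its slot: direct indexing
        return i

    def remove(self, voxel):
        i = voxel["slot"]         # direct index, no searching
        self.slots[i] = None      # slot becomes a dummy space again
        self.free.append(i)
```

Because slots are only edited in place, the array never resizes during drilling; compaction can wait until the user stops drilling.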
Conclusion and Future work
The conversion algorithm is currently under further research and development. The majority of current work is on hardware research, focusing on solving vision and hand collocation problems. Both the Planar and Zalman monitors were tested by dentists from KCL on 9th July 2008. The questionnaire results clearly show that the Planar is much better than the Zalman, mainly because of its viewing angle and image quality.
Thank you to Alistair Barrow and William Harwin.
Novint Falcon, http://home.novint.com/.
Qiansuo Yang, “Numerical analysis of a dual polarization mode-locked laser with a quarter wave plate”. http://www.sciencedirect.com/ (2004).
Daniel Wang, “Cutting on Triangle Mesh: Local Model-Based Haptic Display for Dental Preparation Surgery Simulation”. IEEE Nov/Dec 2005.
Andreas Petersik, Bernhard Pflesser, Ulf Tiede, Karl-Heinz Hohne, “Realistic Haptic Interaction in Volume Sculpting for Surgery Simulation”. http://citeseer.ist.psu.edu/587419.html (2003).
Dan Morris, Christopher Sewell, Nikolas Blevins, Federico Barbagli, Kenneth Salisbury, “A Collaborative Virtual Environment for the Simulation of Temporal Bone Surgery”. http://ai.stanford.edu/~csewell/research (2004).
K C Hui, H C Leung, “Virtual Sculpting and Deformable Volume Modelling”. http://www.cuhk.edu.hk (2002).
Wu Xiaojun, Liu Weijun, Wang Tianran, “A New Method on Converting Polygonal Meshes to Volumetric Datasets”. IEEE (2003).
Marco Agus, Andrea Giachetti, Enrico Gobbetti, Gianluigi Zanetti, Antonio Zorcolo, “Adaptive techniques for Real-Time haptic and visual simulation of bone dissection”. IEEE (2003).
I. Marras, L. Papaleontiou, N. Nikolaidis, K. Lyroudia and I. Pitas, “Virtual Dental Patient: A System for Virtual Teeth Drilling”. IEEE (2006).
Laehyun Kim, Yoha Hwang, Se Hyung Park and Sungdo Ha, “Dental Training System using Multi-modal interface”. http://www.cadanda.com (2005).
XJ He, YH Chen, “Bone drilling simulation based on six degree-of-freedom haptic rendering”. www.eurohaptics.vision.ee.ethz.ch/2001/ (2001).