
A Novel Rotational Matrix and Translation Vector (RMaTV) Algorithm: Geometric Accuracy for Augmented Reality (AR) in Oral and Maxillofacial Surgeries

Yahini Prabha Murugesan1*, Abeer Alsadoon1*, Paul Manoranjan2*, P.W.C. Prasad1*
1School of Computing and Mathematics, Charles Sturt University, Sydney, Australia.
2School of Computing and Mathematics, Charles Sturt University, Bathurst, Australia.
*Correspondence to: Associate Professor Dr. P.W.C. Prasad, Charles Sturt University, Sydney Campus, Sydney, Australia. Email: [email protected]

Abstract

Background: Augmented reality based surgeries have not been successfully implemented in oral and maxillofacial areas due to limitations in geometric accuracy and image registration. This paper aims to improve the accuracy and depth perception of the augmented video.

Methodology: The proposed system consists of a Rotational Matrix and Translation Vector (RMaTV) algorithm to reduce the geometric error and improve the depth perception by including two stereo cameras and a translucent mirror in the operating room.

Results: Results on the mandible/maxilla area show that the new algorithm improves video accuracy to an overlay error of 0.30~0.40 mm and raises the processing speed to 10-13 frames/second, compared with 7-10 frames/second in the existing system. Depth perception increases by 90~100 mm.

Conclusion: The proposed system concentrates on reducing the geometric error. This study thus provides an acceptable range of accuracy with a shorter operating time, giving surgeons a smooth surgical flow.

Keywords: Augmented reality; 3D-2D image registration; oral and maxillofacial surgery; image matching; three-dimensional image.

Introduction

Oral and maxillofacial surgeries are among the most commonly performed surgeries in the world, treating defects and diseases in the teeth, face, jaws, neck and head areas. In the past, surgeons used traditional methods to perform surgery in the maxilla and mandible areas: the patient's Computed Tomography (CT) scan report is obtained before the surgery, and surgeons pre-plan the surgical scenario manually by analyzing the root and nerve channels from the CT report (1). However, it is often challenging to identify the nerve channels, drilling positions and infected areas with traditional methods during surgery. To overcome these limitations, traditional procedures have been enhanced by video-guided surgery displays, in which Two-Dimensional (2D) virtual video guides are shown on a monitor to the surgeons during the surgery. The latest such technology is Augmented Reality (AR) navigation, in which a virtual scene is superimposed on the real-time environment to generate a Three-Dimensional (3D) view on a wearable device (2). The different types of surgery (traditional, video-guided and AR-guided) are illustrated in Figure 1.

AR is the process of superimposing or overlaying the pre-processed, segmented CT scan images onto the real-time video, creating a three-dimensional view of the surgical scene. This helps with cutting lines, identifying drilling positions, finding nerve channels and locating the exact site of the disease during the surgery (3). AR provides this three-dimensional view by combining virtual images with the real-time environment, a significant advantage for the medical field when performing surgery in sensitive areas such as the brain, heart, maxilla and mandible. AR-guided surgery has been successfully implemented in the majority of surgical zones.
However, AR-guided oral and maxillofacial surgery is still the subject of research due to limitations in image registration, lack of accuracy, long processing times, lack of depth perception and poor occlusion handling. Providing an accurate 3D view with less operating time is a vital part of any surgery (4), and the best system can be described as the one which combines accuracy, processing time and depth perception.

Various AR implementation technologies are available in current surgical information systems, broadly divided into three categories: video-based display, see-through display and projection-based display. A video-based display superimposes virtually generated images onto a real-time video stream and generates an augmented 3D view, which helps to improve the viewer's perception of depth, motion and stereo parallax. See-through display technology overlays the images onto a translucent mirror/device for the user's direct view; simply put, a see-through display is an electronic display that allows the user to see what is shown on the glass screen while still being able to see through it. A projection-based display works similarly to a see-through display, overlaying the virtual images via projectors so they can be viewed directly by the users (5).

Fig. 1: (a) Traditional surgery (b) Video-guided (c) AR-guided

Current studies of AR-guided surgery in oral and maxillofacial procedures use a range of techniques and algorithms to improve the accuracy and processing time of the videos. The best reported accuracy is an overlay error (the difference between the projected scene and the actual scene) of 0.90~1.20 mm, at a processing time of 7-10 frames per second. Such overlay error (reduced accuracy) and lack of depth perception can lead to surgical failure; similarly, a slow processing time can delay the surgery. The purpose of this paper is to increase the accuracy of the videos by removing the geometric error, along with improving the processing time. The Iterative Closest Point (ICP) algorithm may finalize a wrong pose if there are too many chosen points with unmatched regions of data, which furthermore slows down the algorithm. This study proposes a new Rotational Matrix and Translation Vector (RMaTV) algorithm to reduce this geometric error (also called alignment error or overlay error). In addition, the system includes features from Hoshi's (6) model to increase the view of the deep surgical areas, while the initial image registration accuracy and the accuracy and speed of online-offline matching remain stable compared with the previous solution.

To provide a seamless AR surgical video, a range of techniques and algorithms has been introduced with accuracy and operating time as major factors. One of these is a head-mounted wearable see-through device developed by Giovanni (7). It uses reference markers and a point-based registration method to reduce the overlay error, although it did not address the removal of fluctuation and provided no solution for when the patient or instrument moves. Ming (8) focused on occlusion removal by presenting an occlusal splint compounded with a fiducial marker as a registration system, and Bouchard (23) used a marker-based registration method which helps to track the position after occlusion. However, both failed to improve the processing time (more than 15 frames per second) in the deep surgical area.
Therefore, these methods offer no further possibilities for improvement. Zinser (9) introduced wafer-less maxillary positioning by developing an image-guided visualization display technique. This solution enables the simulation of soft tissue in an interactive way; it is, however, not focused on overall system accuracy (overlay error of around 1.50 mm). Hongbo (10) analysed and identified the best available techniques in AR-guided oral and maxillofacial surgery with the help of 104 cases, reducing structural image drift (deformation of soft tissue in the real surgical site) with the help of a digital reference frame, although, like the above, this work did not focus on system accuracy. Masutani (11) used the same kind of referential marker to improve soft-tissue simulation in intravascular neurosurgery, but likewise did not focus on accuracy (average overlay error of 1.80 mm). These techniques therefore do not offer further possibilities for improvement. Hideyuki (5) carried out a feasibility study of a see-through three-dimensional integral videography technique to evaluate accuracy and processing time.
Bruellmann (1) clinically validated a K-nearest-neighbor algorithm for detecting and identifying root canals through the treatment of 305 patients. This method uses Euclidean distance-based image segmentation to identify the location of the root canal, thereby improving depth perception, yet it did not focus on accuracy (average overlay error 2 mm); of the 305 cases, the system was able to detect the root canal in only 287 patients. Heiland (12) analyzed the benefits and limitations of intraoperative navigation in Oral and Maxillofacial Surgery (OMS), showing that appropriate image registration techniques improve accuracy. Again, these techniques offer no further possibilities for improvement. Hideyuki (12) & (14) proposed a vision-based marker-less registration technique that eliminates the burden inherent in fiducial and marker-based registration. This system uses a stereo camera and a half-silvered mirror to improve depth perception, while an integral videography technique reduces the processing time. Ferrari (15) introduced a mixed-reality system based on stereoscopic visualization; it increases accuracy but fails to address operating time (average of 3 frames per second). Junchen (13) analyzed integral imaging with AR surgical navigation further. This system uses a 3D calibration algorithm which helps to reduce the initial registration error; the processing time is improved, but accuracy remains around 0.90~1.10 mm overlay error. None of these techniques offer further possibilities for improvement. Yu (10) presented a system for OMS simulation which provides a realistic teaching method for students, using medical image processing and optical image tracking; optical tracking improves depth perception and identifies the root channel accurately, but this is only a simulation and does not help to create an AR view in real time. Wagner (16) proposed a virtual-reality system for orthognathic surgery which uses a marker-based image registration technique and overlays the CT image on a virtually created surgical scene. Pham (19) also introduced an intra-operative system with an image registration technique, but failed to provide accuracy (average overlay error 1.70 mm). Christian (20) proposed a digitally based dental implant system that can accomplish virtual planning and virtual guidance by combining a range of sources; it provides better image registration results but, as above, not better accuracy. Therefore, none of these offers further possibilities for improvement. Nijmeh (17) proposed an image-guided navigation method using the segmented CT scan to develop images that guide the surgery. Hassfeld (18) introduced an intra-operative navigation system which includes a superior image registration technique but does not address occlusion: the resulting video is paused during occlusion. Junchen (2) introduced a video-see-through system which uses the Tracking Learning Detection (TLD) and ICP algorithms, based on Ulrich's (21) method, for image registration; this resolves the initial alignment problem caused by manual adjustment and enables automatic recovery. The TLD algorithm provides a bounding-box tracking method which reduces the matching time by tracking box by box against the segmented CT images.
The ICP algorithm (22) helps to superimpose the segmented images onto the video frames to create an augmented video of the surgical view. This provides an accuracy of 1 mm overlay error and a processing time of 7-10 frames per second, but the system fails to cover the deep surgical area (it can only cover areas up to 60 mm deep). This issue was addressed by the same group of researchers in 2014 (6) by including two stereo cameras and a translucent mirror for the AR display; adding these features to the above model improved the depth perception, which is therefore of significance for the proposed system. In general, a desirable system is one capable of providing the AR display at around 15~16 frames per second with an average overlay error of 0.20~0.60 mm and approximately 100~120 mm depth perception. Although many of the systems discussed above satisfied one or two of these aspects, none covered all three, so the proposed system focuses on providing a solution that addresses all three issues. This paper takes the best features of the Junchen model (2) and concentrates on a particular stage of the ICP algorithm called minimizing the error metric; better results can be achieved by reducing the overlay errors during the surgery.

The rest of the paper is organized as follows. The section called "System Overview" describes the model of the current best method, Junchen (2), and the details of the proposed model, including the flowchart and pseudocode of the proposed formula. The following section, "Results", discusses the various testing techniques used for this algorithm with different samples in the maxilla and mandible areas. The last section, "Discussion", compares the current and proposed system results and concludes the paper.

Materials and methods

System Overview

Current System: This part describes the current system's features (highlighted in blue in Figure 2) and limitations (highlighted in red in Figure 2). The model proposed by Junchen (2) provides a way in which AR can be used with minimal change to the current surgical environment. It delivers an accuracy of 1 mm overlay error and a processing time of 7-10 frames per second, and consists of three major stages (Figure 2): the pre-operative environment, the intra-operative environment and final pose refinement.

Fig. 2: Current best AR system

Pre-operative environment: In the pre-operative environment, the patient's CT scan data are segmented and a hierarchical model or aspect graph (offline phase) is created, as shown in Figure 2. The aspect graph creates different segmented models from the CT scan images which can be matched to the real-time video in this stage, providing different angles and alternative viewpoints. Ulrich's method is used for image registration, removing the burden of manual adjustment during the initial alignment, as this method resolves the initial alignment issues automatically.

Intra-operative environment: In the intra-operative space, a single camera starts recording the surgical scenario. The hierarchical model creates a list of images on 5 levels, stored in a separate file. The real-time video of the surgical scene is captured by a 4K (4000-pixel resolution) camera and sent through the TLD algorithm (Figure 2) to find and segment the exact location of the surgical area, with bounding-box tracking used to reduce the search area and speed up the process.
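For illustration, the search-area reduction that bounding-box tracking provides can be sketched as follows. This is a minimal stand-in for the TLD tracker, not the authors' implementation: an exhaustive sum-of-squared-differences template match is restricted to a small window around the previous detection, so each frame is matched against a few hundred candidate positions rather than the full image. The match_in_box helper and the toy data are hypothetical.

import numpy as np

def match_in_box(frame, template, box, margin=8):
    """Exhaustive SSD template match restricted to `box` plus a margin.

    frame    -- 2-D grayscale image (H x W)
    template -- 2-D grayscale patch (h x w), e.g. a segmented CT region
    box      -- (row, col) of the previous match (the tracker state)
    """
    h, w = template.shape
    r0, c0 = max(box[0] - margin, 0), max(box[1] - margin, 0)
    r1 = min(box[0] + margin, frame.shape[0] - h)
    c1 = min(box[1] + margin, frame.shape[1] - w)
    best, best_pos = np.inf, (r0, c0)
    for r in range(r0, r1 + 1):
        for c in range(c0, c1 + 1):
            ssd = np.sum((frame[r:r+h, c:c+w] - template) ** 2)
            if ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos  # new bounding-box corner for the next frame

# Toy usage: re-find a bright square that moved two pixels between frames.
frame = np.zeros((120, 160)); frame[40:50, 60:70] = 1.0
template = frame[40:50, 60:70].copy()
print(match_in_box(frame, template, box=(38, 58)))   # -> (40, 60)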
The result is compared and matched with the segmented images (online and offline matching) from the pre-operative scene to find the exact nerve channels and tissue areas. However, this model does not support deep surgical areas. It fails to provide depth accuracy because it uses normal display screens for the projection and a single video camera in the surgical area; a single camera cannot handle both operations and is not efficient for analysing the deep areas.

Pose refinement: The best-matched images help to create an accurate 2D model (2D video) with less processing time. With these refined, selected images, the ICP algorithm performs the pose refinement to create an accurate 3D structure (Figure 2). The refined 3D pose and the camera video stream are projected onto the display device, thereby creating an AR video. This model proved an accuracy of 1 mm overlay error, which is widely accepted by surgeons; still, the accuracy can be improved by 0.30~0.40 mm to guide the surgeons more effectively.
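The pose-refinement step that ICP performs here can be illustrated with the standard closed-form solution for the best rigid transform given matched point pairs (the Kabsch/SVD method, a common building block of ICP; the paper does not publish its own code, so this is a generic sketch with synthetic data):

import numpy as np

def best_rigid_transform(model, scene):
    """Least-squares rotation R and translation t mapping model onto scene.

    model, scene -- (N, 3) arrays of already-matched 3-D points.
    """
    mu_m, mu_s = model.mean(axis=0), scene.mean(axis=0)
    H = (model - mu_m).T @ (scene - mu_s)            # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                               # proper rotation only
    t = mu_s - R @ mu_m
    return R, t

# Toy usage: recover a known pose from 50 matched points.
rng = np.random.default_rng(0)
model = rng.normal(size=(50, 3))
a = 0.3
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
scene = model @ R_true.T + np.array([0.5, -0.2, 1.0])
R, t = best_rigid_transform(model, scene)
print(np.allclose(R, R_true), np.round(t, 3))        # True [ 0.5 -0.2  1. ]

Inside ICP this solve alternates with re-matching: correspondences are re-estimated, the transform is re-solved, and the loop repeats until the error metric stops improving.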
The geometric error that generally occurs during the final pose refinement is removed by the Levenberg-Marquardt method shown in Equation 1. When the current estimate is a long way from the correct pose, the algorithm behaves like a steepest-descent strategy; in practice this slows down the system and does not guarantee that the resulting pose will be error-free. The Levenberg-Marquardt algorithm for reducing the geometric error is presented in Table 1, and its flowchart in Figure 3.

Here X = sub-pixel edge point of the 2D image, Z = geometric error, N = normal point for the matched i, k = number of iterations and E = minimized error. (Normal point: after the set of requested images is defined, it is used to make normal points, which are characterized as the average intensities among all the images that include the specific point.)

E = min Σ Zi²(X0, …, Xk−1),  0 ≤ i < N   (1)

Table 1. Levenberg-Marquardt method
Algorithm: Levenberg-Marquardt method to reduce the geometric error
Input: sub-pixel edge points of the 2D image and the geometric error
Output: refined 3D pose without geometric error
BEGIN
Step 1: Once the ICP matching process is completed, go to Step 2.
Step 2: Get the number of iterations (k) from online matching. Input the image set, where 0 < X < k; X → starting edge of the 2-dimensional image.
Step 3: Obtain all the edges throughout the iteration, then minimize the error metric Zi²(X0, …, Xk−1), where 0 ≤ i < N.
Step 4: If an error (Z) is found, go to the steepest-descent strategy: take the error-metric samples and repeat the ICP matching process based on them to remove the geometric/alignment error.
Step 5: Repeat Steps 1-4 until the alignment error is removed.
END

Fig. 3: Flowchart of the Levenberg-Marquardt method
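Equation (1) with Table 1 amounts to a damped least-squares problem. As a minimal sketch, SciPy's generic Levenberg-Marquardt solver (method="lm") can minimize a sum of squared geometric errors Zi over a candidate 2-D pose; it blends steepest-descent and Gauss-Newton behaviour exactly as described above. The pose parametrization and the synthetic point data are illustrative assumptions, not the paper's implementation.

import numpy as np
from scipy.optimize import least_squares

edges = np.random.default_rng(1).normal(size=(30, 2))   # sub-pixel edge points X_i
pose_true = np.array([0.2, 1.5, -0.7])                  # angle, tx, ty
c, s = np.cos(pose_true[0]), np.sin(pose_true[0])
target = edges @ np.array([[c, -s], [s, c]]).T + pose_true[1:]

def geometric_error(pose):
    """Residuals Z_i: offsets between transformed edge points and their matches."""
    c, s = np.cos(pose[0]), np.sin(pose[0])
    moved = edges @ np.array([[c, -s], [s, c]]).T + pose[1:]
    return (moved - target).ravel()

fit = least_squares(geometric_error, x0=np.zeros(3), method="lm")
print(np.round(fit.x, 4))                               # -> [ 0.2  1.5 -0.7]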
Proposed system: A range of techniques from existing AR-guided surgery methods has been reviewed for this article, with the advantages and disadvantages of each method analysed in depth. The main issues are accuracy, processing time, image registration, depth perception and occlusion handling. One method was selected as the best model (2) from the collected list and used as the base for the proposed solution. The proposed solution takes useful features from that model (Figure 2) together with an enhanced ICP algorithm to overcome the geometric error. Furthermore, a range of features is adapted from the second-best solution to improve the depth view: two stereo cameras capture the real-time video and the augmented video is displayed on a translucent mirror. Hoshi (6) provided such an AR view with an integral imaging technique based on see-through display technology, in which the surgical site is equipped with two stereo cameras and a workstation, individual images are registered for CGII (computed generated imaging) rendering, a lens array helps to display the 3D object as images, and an LCD screen is fixed behind the lens array for each pixel.

Spatial transformation (21) maps pixel locations in order to change the relationship between pixels in the moving view. A view matrix, used in the majority of systems, transforms vertices into the required view space, and the registration is performed by this matrix; the remaining process follows the current best model (2). The view-space coordinates can be calculated from the view matrix, where (x, y) is a pixel edge in the image coordinate system, C is the camera centre in world space, and u, v, w are the camera axis directions.
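The view-matrix transformation described above can be sketched with the standard look-at construction; the exact matrix used in (2) and (21) is not reproduced in the text, so the construction below is an assumption based on the quantities named (camera centre C in world space and axes u, v, w):

import numpy as np

def look_at(eye, center, up):
    """Return a 4x4 world-to-view matrix for a camera at `eye` looking at `center`."""
    f = center - eye; f /= np.linalg.norm(f)          # forward axis (w)
    r = np.cross(f, up); r /= np.linalg.norm(r)       # right axis (u)
    u = np.cross(r, f)                                # true up axis (v)
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = r, u, -f  # rotate world into camera axes
    view[:3, 3] = -view[:3, :3] @ eye                 # then translate by camera centre
    return view

# Toy usage: a vertex one unit in front of the camera lands on the view-space -z axis.
V = look_at(eye=np.array([0.0, 0.0, 5.0]),
            center=np.zeros(3), up=np.array([0.0, 1.0, 0.0]))
vertex_world = np.array([0.0, 0.0, 4.0, 1.0])         # homogeneous world coordinates
print(V @ vertex_world)                               # -> [ 0.  0. -1.  1.]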
We propose the Rotational Matrix and Translation Vector (RMaTV) algorithm to remove this geometric error. Accuracy improves by reducing the overlay error between the projected and real scene from 1 mm to 0.30~0.40 mm, and the processing time improves from 7-10 frames per second to 10-13 frames per second.

ICP history: ICP was originally introduced by Chen and Medioni in 1991. However, it may sometimes terminate on the problems listed below: (1) multiple local minima of the error metric, (2) noise and outliers, and (3) partial overlap. Because of these issues the original ICP has been re-developed by several researchers; their work has brought improvements, but ICP continues to be a subject of research to make it more accurate. Its variations can be classified into six stages: selection, matching, weighting, rejection, error-metric assignment and minimizing the error metric.

Area of improvement: The proposed modification works on stage 6, minimizing the error metric. This study considers minimizing the geometric error (also called alignment error), which commonly occurs during pose refinement. ICP may finalize a wrong pose in the video if there are too many chosen points with unmatched regions of data, which also slows down the process. Solutions have been proposed to address this issue, yet so far an accurate minimization has not been achieved.

The proposed system consists of three major stages (Figure 4): the pre-operative environment, the intra-operative environment and final pose refinement.

Pre-operative environment: The patient's CT scan data are segmented and a model hierarchy or aspect graph (offline phase) is created, as shown in Figure 4. The aspect graph creates different segmented models which, in the intra-operative stage, can be matched to the real-time video at different angles and from alternative viewpoints. A CT report is superior here when compared with Magnetic Resonance Imaging (MRI): MRI is capable of identifying soft tissue and is preferable for cancer detection, while CT provides details about the bone and nerve structures.

Fig. 4: Proposed AR system using the enhanced ICP algorithm

Intra-operative environment with enhanced stereo cameras: Two stereo-camera videos are introduced, together with the display arrangement of (6). The hierarchical model creates a list of images on 5 levels, stored in a separate file. The real-time video is sent through the TLD algorithm to find and segment the exact location of the surgical area; once it is found, precise matching with the bounding box begins, and online matching starts analyzing the aspect graphs that were created in the offline phase, searching until the lowest level is reached. The result is matched with the pre-operative scene to find the exact nerve channels and tissue areas. The two stereo cameras are included to improve the depth perception of the videos, and the augmented result is projected onto the translucent mirror through which the surgeons view the patient's body; the superimposed result of the refined pose is reflected to the surgeons with a projection depth of around 100 mm. The simulation follows a temporal protocol in MATLAB, with a subjective quality test on 9 patient samples in total (including maxilla and mandible areas).

Pose refinement algorithm: The best-matched images help to create an accurate 2D video with less processing time. With these refined, selected images, the ICP algorithm carries out the pose refinement to create an accurate 3D structure (Figure 4). The refined 3D pose and the camera video stream are projected onto the display device, thereby creating the augmented reality video. To produce the best pose, RMaTV refines it over a number of iterations. The initial alignment identified by the Ulrich method is (R0, T0), and (X, Y) are the sub-pixel edge points of the 2D image. RMaTV tries to eliminate the geometric error (Equation 2), leading to an improvement in shape (the best-refined pose) at the end of the last iteration (k). Here X & Y = sub-pixel edge points of the 2D image, R0 & T0 = initial alignment, Z = geometric error, N = normal point for the matched i, and k = number of iterations. The inputs are the rotation matrix, a starting estimation to adjust the source to the reference (discretionary), and the criteria for ceasing the iterations. For each iteration,

Ei = [Xi × Zi], i = 0, …, k−1   (2)

where × indicates multiplication and E is the minimized error. The rotation matrix rt (22) takes the standard planar form

rt = [cos ø  −sin ø; sin ø  cos ø]

where ø represents the angle to rotate, and v represents the translation vector. Together these rectify an improper overlay: the rotation is determined by multiplying by rt, and v gives how far the image needs to move in a given direction. Finally, the best-refined pose is produced; whenever ICP matches a featureless region, a further iteration is carried out. In general, once this step is complete, the pose is ready for AR. Before proceeding to the refinement, the error check takes into consideration the difference between the edges associated with un-matched regions and the average normal point; if a difference is found, the selection runs again, and the combined workflow reduces the error whenever it arises.

Why RMaTV? Simply put, the rotation matrix is the basic tool for rotating about any axes of a coordinate dimension, and the translation vector contains the information about how far the image has been wrongly overlaid or registered; with these details, the correction is applied until the error becomes null. The Levenberg-Marquardt method (Table 1) provides only moderate relief: when the present arrangement is nearing the optimal outcome it turns into the Gauss-Newton technique, and when the cycle finds an error in the data it resumes and goes back to perform the matching again, so it is not guaranteed that the resulting pose will be error-free, and the number of loops increases as it gets nearer to the optimum. The proposed method performs a similar kind of check but goes to the RMaTV stage rather than looping back, reducing and removing the error better than the Levenberg-Marquardt method. This is achieved with the assistance of Hoshi's (6) features: with the RMaTV method (Table 2), the mistake is corrected before continuing with the posture, whereas the steepest-descent strategy keeps refining the erroneous frame. The proposed algorithm is presented in Table 2, and its flowchart in Figure 5.

Table 2. Proposed RMaTV algorithm
Algorithm: Rotational Matrix and Translation Vector (RMaTV) method to reduce the geometric error
Input: sub-pixel edge points of the 2D image and the geometric error
Output: refined 3D pose without geometric error
BEGIN
Step 1: Get the number of iterations (k) from online matching. Input the image set, where 0 < X < k; X → edges of the 2-dimensional image.
Step 2: Obtain the rotation matrix values (rt) from the video stream. The full rotation value is approximated as rt ≈ rotation matrix (Xi × Zi), where 0 ≤ i < k.
Step 3: Obtain the translation vector (v) values from the video stream. The full translation vector is approximated as V ≈ translation vector (geometric error i), where 0 ≤ i < k.
Step 4: Check the present alignment difference: (X axis in the 2-dimensional image − Y axis in the 2-dimensional image) × alignment difference.
Step 5: If a difference is found in Step 4, reduce the error by E = current alignment difference + approximated rotation value + approximated translation vector.
END

Fig. 5: Flowchart of the proposed RMaTV method
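As a loose sketch of the loop in Table 2 (under our own reading of the steps; the error measure and the damping factor are illustrative assumptions, not the paper's exact formulas): while an alignment difference remains, the current overlay is corrected with a small rotation matrix and translation vector until the geometric error is numerically null.

import numpy as np

def rot(phi):
    """2-D rotation matrix for angle phi, the standard planar form of rt."""
    return np.array([[np.cos(phi), -np.sin(phi)],
                     [np.sin(phi),  np.cos(phi)]])

def rmatv_refine(points, target, iters=200, step=0.5):
    """Iteratively rotate and translate `points` onto `target` (both (N, 2))."""
    moved = points.copy()
    for _ in range(iters):
        diff = target - moved                          # current alignment difference
        if np.abs(diff).max() < 1e-9:                  # error has become null
            break
        t = step * diff.mean(axis=0)                   # translation-vector correction
        a = moved - moved.mean(axis=0)                 # centred source points
        b = target - target.mean(axis=0)               # centred reference points
        # Rotation-matrix correction: damped closed-form 2-D alignment angle.
        phi = step * np.arctan2(np.sum(a[:, 0] * b[:, 1] - a[:, 1] * b[:, 0]),
                                np.sum(a[:, 0] * b[:, 0] + a[:, 1] * b[:, 1]))
        moved = a @ rot(phi).T + moved.mean(axis=0) + t
    return moved

# Toy usage: remove a wrongly-overlaid rotation and shift from 40 edge points.
pts = np.random.default_rng(2).normal(size=(40, 2))
tgt = pts @ rot(0.1).T + np.array([0.3, -0.2])
print(np.abs(rmatv_refine(pts, tgt) - tgt).max() < 1e-6)   # -> True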
Results

The implementation of the model was carried out using MATLAB R2017a with 8 sample videos and CT scan images from different age groups in the maxilla and mandible areas (Tables 3 and 4), gathered from online resources. The performance of the proposed system was measured in terms of accuracy (overlay error, i.e. the difference between the projected scene and the actual scene) and processing time. Since the simulation was carried out in MATLAB, the overlaid AR video differs slightly from actual experimental results.
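The overlay-error measure used throughout the results can be made concrete with a short sketch: the mean Euclidean distance (in mm) between corresponding landmarks of the projected scene and the actual scene. The landmark arrays below are placeholders, assuming such correspondences can be extracted from the two videos.

import numpy as np

def overlay_error_mm(projected, actual):
    """Mean distance between matched landmarks; both arrays are (N, 2), in mm."""
    return float(np.mean(np.linalg.norm(projected - actual, axis=1)))

projected = np.array([[10.2, 5.1], [22.0, 14.9], [31.8, 24.6]])
actual    = np.array([[10.0, 5.0], [21.7, 15.0], [31.5, 25.0]])
print(round(overlay_error_mm(projected, actual), 3))   # -> 0.347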
The teeth area is divided into six groups, as shown in Figure 6: upper front teeth, upper right molar, upper left molar, lower front teeth, lower right molar and lower left molar. The proposed algorithm was tested on all these groups (results in Table 3), and two mandible samples were also tested successfully (results in Table 4). The tests are grouped into the three system stages: pre-operative, intra-operative and pose refinement.

In the pre-operative stage, CT scan data were collected and segmented to create an aspect graph, which helps to match the real-time video during the real surgery. The aspect graph is generated in less than a minute, and its hierarchy gives the real-time matching multiple possible options.

Fig. 6: Grouping of the dentition

In the intra-operative space, the surgical scene is captured by the two stereo cameras, and the initial alignment issues are resolved by applying Ulrich's method. Building the aspect graph provides a collection of models that can be matched with the real-time images. The TLD tracking method gives the best feature extraction and reduces the operating time; the online and offline matching phases help to extract the most relevant images from the collection, which are then superimposed as the 3D display on the device. The video is sent through TLD to find and segment the exact location of the surgical area: tracking starts once the bounding box is found, and the bounding box reduces the search time to a small region while it tries to detect the target surgical site. The result is compared to find the nerve channels and tissue areas (Figure 7(b)). The resulting ICP refinement produces the final pose, and the outcome supports occlusion handling (Figure 7(c)): even if the view is interrupted by heavy blood flow or other unforeseen situations, the system can still track back and continue the operation. Furthermore, the image registration remains unchanged (Figure 7(a)). The overlay serves the same purpose as the referential-marker techniques used in other systems: the skull overlay displays exactly where the surgeon moves and adjusts accordingly if the patient or instrument moves.

Fig. 7: Intra-operative results: (a) image registration; (b) nerve channels and tissue areas; (c) occlusion handling

However, ICP may identify a wrong pose if too many points are chosen with unmatched data, causing geometric error in the video. RMaTV introduces pose refinement through the rotation matrix and translation vector to remove this occurrence. It aims to increase accuracy by removing the geometric error, and the process reduces the number of loops and iterations compared with the previous solution (explained in Table 2); the enhanced algorithm thus automatically increases speed, generating significantly more frames per second. The system was simulated against an online database and the current solution was compared with the proposed solution; the differences are discussed in terms of overlay error in Figures 8-11, where the overlay error is calculated as the known difference between the projected scene and the actual scene. The dataset is separated into two parts, maxilla (Figures 8 & 9) and mandible (Figures 10 & 11), with data taken from individuals in different age groups ranging from 15 to 70 across the six categories (Figure 6). The proposed system provided better accuracy than the current system, and the measured factors improved significantly, as Figures 8 & 9 reflect across the test stages.

Fig. 8 & 9: Overlay error (mm) and processing speed for different maxilla samples, current system (Wang & Suenaga, 2016) versus proposed system
Fig. 10 & 11: Overlay error (mm) and processing speed for different mandible samples, current system versus proposed system

Table 3: Overlay error and processing speed for maxilla samples across the six dentition groups (male and female patients aged 15-70): the original overlay error of roughly 0.60-1.25 mm is reduced to 0.28-0.40 mm by the proposed system, with a corresponding gain in frames per second.

Table 4: Overlay error and processing speed for two mandibular reconstruction samples (including a patient aged 50): the original overlay error of 0.48-0.65 mm is reduced to 0.27-0.34 mm by the proposed system.

Discussion

The above results show improved registration when the patient or instruments move. The comparison shows that the proposed system improves accuracy to an overlay error of 0.30~0.40 mm and ultimately achieves 10-13 frames per second, while the accuracy and speed of the online-offline matching remain unchanged. In conclusion, this combination of techniques leads to a greatly improved AR system for OMS: the stereo cameras and translucent mirror improve depth perception, the marker-free registration removes the burden of fiducials, and the system obtains a more accurate see-through projection display.
Although a range of systems is available for AR visualization, they have so far failed to provide the accuracy required for surgery. The major limitation explored in this research is that existing systems only produce 7-10 frames per second with an overlay error of around 1 mm, which justifies the addition of RMaTV. The proposed algorithm overcomes (in simulation) the geometric error (also called alignment error) that commonly occurs during pose refinement; some solutions address this issue, but none has so far brought the minimization into an acceptable range. This study demonstrates the capability to eliminate those errors while improving speed. Future work could extend the improvements to the remaining ICP stages, such as selection and rejection, and further convert the results to 3D.

Acknowledgements

This work was supported in part by study support manager Angela Maag, Charles Sturt University, Sydney, Australia.

Conflict of interest

The authors declare no conflict of interest.

Appendix 1: Abbreviations used in this paper

OMS - Oral and Maxillofacial Surgery
AR - Augmented Reality
ICP - Iterative Closest Point
TLD - Tracking Learning Detection
MRI - Magnetic Resonance Imaging
CT - Computed Tomography
3D - Three-Dimensional
2D - Two-Dimensional
