Upper and lower tooth occlusion simulation method and device and electronic equipment
A technology for simulating the occlusion of a patient's upper and lower teeth, applied in the field of image processing, which solves the inconvenience of observing the alignment of a patient's upper and lower teeth and achieves the effect of saving diagnosis and treatment time
Active Publication Date: 2021-12-28
APEIRON SURGICAL CO LTD
Cites: 13 | Cited by: 1
AI-Extracted Technical Summary
Problems solved by technology
[0005] Embodiments of the present invention provide a method, device, electronic device, computer-readable storage medium, and computer program product for simulating the alignment of the upper an...
Method used
A positioning vector is determined from the above positioning coordinates, and the upper and lower tooth interface determined from the mixed product of the positioning vectors carries normal vector information; this normal vector information can then directly determine the upper tooth region and the lower tooth region in the subsequent process, which improves calculation efficiency.
Further, to make it easier for the user to observe the occlusion process of the upper and lower teeth, the user can drag the upper tooth region or the lower tooth region displayed on the screen through an input device such as a touch screen or a mouse, so that the dragged region rotates continuously about the jaw joint axis following the user's drag trajectory. The user can stop dragging once the upper and lower teeth are confirmed to be aligned, obtaining a more accurate occlusion image of the upper and lower teeth.
Specifically, to improve the accuracy of the upper and lower tooth interface, after the plane containing the positioning coordinates is determined, the user can fine-tune the plane through input devices such as a keyboard, mouse, or touch screen. The user can thus calibrate the plane to obtain a more accurate upper and lower tooth interface.
[0130] Through at least three metal balls in the target three-dimensional image, three positioning coordinates can be quickly determined, and no matter which method is used to generate the target three-dimensional image, the m...
Abstract
The invention provides an upper and lower tooth occlusion simulation method and device and electronic equipment. The method comprises the following steps: acquiring a target three-dimensional image comprising a three-dimensional image of a patient's oral cavity and a three-dimensional image of an imaging marker of an oral cavity positioning tool; determining an upper and lower tooth interface in the target three-dimensional image according to the three-dimensional image of the imaging marker; dividing the target three-dimensional image by the upper and lower tooth interface to obtain an upper tooth region and a lower tooth region; and rotating the upper tooth region and/or the lower tooth region about the jaw joint axis in the target three-dimensional image to generate an upper and lower tooth occlusion image. Thus, when a doctor needs to observe the patient, the patient's upper and lower tooth interface can be quickly determined from the three-dimensional image of the imaging marker in the target three-dimensional image, so that the upper tooth region and the lower tooth region can be determined quickly and accurately, rapid occlusion simulation can be realized, and the convenience of observing the patient's upper and lower tooth occlusion during diagnosis and treatment is greatly improved.
Application Domain
Surgical navigation systems, Dentistry
Technology Topic
Engineering, Biomedical engineering +6
Examples
- Experimental program (1)
Example Embodiment
[0092] Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present invention are shown in the drawings, it should be understood that the present invention may be implemented in various forms and should not be limited by the embodiments set forth herein. Rather, these embodiments are provided so that the present invention will be understood more thoroughly and its scope conveyed completely to those skilled in the art.
[0093] Figure 1 is a flowchart of the steps of an upper and lower tooth occlusion simulation method according to an embodiment of the present invention. As shown in Figure 1, the method includes:
[0094] Step 101: acquire a target three-dimensional image, the target three-dimensional image comprising a three-dimensional image of the patient's oral cavity and a three-dimensional image of an imaging marker of an oral cavity positioning tool, the target three-dimensional image being reconstructed from a CBCT image of the patient's oral cavity, the CBCT image being captured while the patient wears the oral positioning tool on the upper or lower teeth.
[0095] When a robot is used to examine or operate on a patient's teeth, or to plan such a procedure, the robot needs to determine the precise position of the teeth in the patient's mouth so that equipment such as probes and robot arms can operate accurately on the patient's teeth. For example, in the planning stage of a dental implant procedure performed by a dental implant robot, the placement of the metal implant replacing the missing tooth must be planned. However, since the patient cannot remain completely still, an oral positioning tool carrying an imaging marker needs to be installed on the patient's teeth. The imaging marker mainly serves to register the three-dimensional image of the patient's oral cavity, i.e., to establish a mapping between the coordinate system of the patient's oral three-dimensional image and the real-space coordinate system, so that the robot arm can operate accurately on the precise position of the patient's teeth. Here, a dental implant is a metallic device implanted into the bone to replace and restore a lost tooth.
[0096] Referring to Figure 2 and Figure 3, Figure 2 is a structural diagram of an oral positioning tool provided in an embodiment of the present invention, and Figure 3 is a schematic installation view of the oral positioning tool according to an embodiment of the present invention. As shown in Figure 2, the oral positioning tool 1171 includes a positioning tool body 11711, a chuck 11716 connected to one end of the tool body, three imaging markers 11714 located near the chuck 11716, and a number of infrared positioning balls 11717 at the other end of the tool body. As shown in Figure 3, the chuck 11716 may be mounted on the patient's tooth 20 and fixedly connected to the patient's tooth 20 by gluing or the like.
[0097] Before using the robot to examine the patient's teeth, the oral positioning tool is installed on the patient's teeth, and the oral cavity containing the oral positioning tool is modeled to generate a three-dimensional oral image of the patient's teeth and the imaging markers. The position of each infrared positioning ball on the oral positioning tool is then acquired by an infrared camera. Because the oral positioning tool is structurally stable, the positional relationship between the imaging markers and the infrared positioning balls is fixed, and once the oral positioning tool is fixed, the positional relationship between the imaging markers and the patient's teeth is also fixed. The position and angle of the patient's teeth can therefore be accurately determined from the spatial positions of the infrared positioning balls, guaranteeing the precision of the robot's operations.
[0098] While the robot operates, the patient needs to wear the oral positioning tool so that the robot can determine the position and angle of the patient's teeth in space. The doctor is therefore very likely to need to observe the occlusion of the patient's upper and lower teeth during the operation in order to treat the patient. For example, during dental implantation, the dental implant robot must drill into the patient's jaw to place an implant; to determine parameters such as the exact drilling position and angle, the doctor needs to analyze the occlusion of the patient's upper and lower teeth. However, with the oral positioning tool mounted, the patient's upper and lower teeth cannot be closed. In this case, a three-dimensional image containing the patient's upper and lower teeth and the oral positioning tool can be acquired, and the occlusion of the patient's upper and lower teeth can be simulated from this target three-dimensional image.
[0099] Specifically, after the oral positioning tool is installed on the patient's teeth, the patient's head can be scanned by a tomographic scanning device such as a Cone Beam CT (CBCT) to obtain the target three-dimensional image. The target three-dimensional image can also be established by other imaging techniques such as X-ray or Magnetic Resonance Imaging (MRI). The embodiments of the present invention do not specifically limit how the target three-dimensional image is established.
[0100] Step 102: determine three positioning coordinates in the target three-dimensional image according to the three-dimensional image of the imaging marker.
[0101] Since the body of the oral positioning tool is typically a flat structure mounted directly on the patient's upper or lower teeth, the body lies substantially in one plane, and the imaging markers are mounted on that body. Therefore, after obtaining the target three-dimensional image containing the images of the patient's upper and lower teeth and of the imaging markers, the plane in which the three-dimensional images of the imaging markers lie can be used to quickly determine the patient's upper and lower tooth interface in the target three-dimensional image.
[0102] Since determining a plane requires at least three coordinates, the three positioning coordinates of the oral positioning tool can be quickly determined from the three-dimensional images of at least three imaging markers in the target three-dimensional image. From these positioning coordinates a plane in the coordinate system of the target three-dimensional image can be calculated, and this plane is the patient's upper and lower tooth interface. It should be noted that the determined positioning coordinates are coordinates in the spatial coordinate system of the target three-dimensional image.
[0103] Specifically, the above positioning coordinates can be determined by manual labeling in the target three-dimensional image. For example, the doctor can rotate the target three-dimensional image displayed on the screen using an input device such as a touch screen or mouse to find an angle at which the three-dimensional images of the imaging markers are easy to observe, and then select positioning points on the oral positioning tool in the model through the input device; the position coordinates of these points in the target three-dimensional image are the positioning coordinates.
[0104] The positioning coordinates can also be determined automatically by the computer through image recognition on the target three-dimensional image. For example, stored image data of the imaging markers can be matched against the three-dimensional images of the imaging markers in the target three-dimensional image, and the position coordinates of the matched feature points in the target three-dimensional image can be taken as the positioning coordinates. It should be noted that when the positioning coordinates are determined from the three-dimensional images of spherical imaging markers, the sphere-center coordinate of each imaging marker in the target three-dimensional image can be calculated and used as a positioning coordinate.
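As a hedged illustration of the automatic approach (not the patent's literal algorithm), the sketch below assumes the metal-ball markers have already been separated from the CBCT volume by a simple intensity threshold, since metal images far more brightly than bone; it labels the connected bright blobs and takes each blob's centroid as one positioning coordinate. The threshold value and the use of scipy.ndimage are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def locate_metal_balls(volume, threshold=3000.0):
    """Estimate positioning coordinates as centroids of bright (metal) blobs.

    volume: 3-D numpy array of CBCT intensities, indexed (z, y, x).
    threshold: illustrative intensity cutoff for metal; tune per scanner.
    Returns an (N, 3) array of blob centroids in voxel coordinates.
    """
    mask = volume > threshold                      # metal is far brighter than bone
    labels, count = ndimage.label(mask)            # connected-component labelling
    centroids = ndimage.center_of_mass(mask, labels, range(1, count + 1))
    return np.asarray(centroids)
```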
[0105] Step 103: determine the upper and lower tooth interface in the target three-dimensional image according to the three positioning coordinates.
[0106] After the positioning coordinates are determined, the equation of the plane passing through the three positioning coordinates can be calculated; the plane it describes in the oral three-dimensional image is the upper and lower tooth interface.
[0107] Oral positioning tools differ in structure: for some tools mounted on the patient's teeth, the imaging markers are not substantially parallel to the surface of the patient's teeth but sit at a large angle, so the upper and lower tooth interface determined from the imaging markers would pass through the upper teeth and/or lower teeth, giving a poor division. For such a tool, correction data for the oral positioning tool can be preset in a database, and two of the three positioning coordinates determined from the imaging marker images can be corrected using this data, so that the plane through the three corrected positioning coordinates is substantially parallel to the patient's tooth surface; the upper and lower tooth interface is then determined from the corrected positioning coordinates.
[0108] Step 104: divide the target three-dimensional image by the upper and lower tooth interface to obtain an upper tooth region and a lower tooth region, both of which are three-dimensional regions in the target three-dimensional image.
[0109] To generate the simulated upper and lower tooth occlusion image, after the upper and lower tooth interface is determined it is also necessary to determine, in the target three-dimensional image, the image region containing the upper teeth and the image region containing the lower teeth.
[0110] Specifically, the upper and lower tooth interface can be used to divide the target three-dimensional image directly: the part of the target three-dimensional image on one side of the interface is determined as the upper tooth region, and the part on the other side of the interface is determined as the lower tooth region.
[0111] Since the upper and lower tooth interface lies between the upper and lower teeth, it is also possible to perform image recognition of dental features in the target three-dimensional image to determine the tooth regions, and then determine the tooth region on one side of the interface as the upper tooth region and the tooth region on the other side as the lower tooth region.
[0112] Step 105: taking the jaw joint axis in the target three-dimensional image as the rotation axis, rotate the upper tooth region and/or the lower tooth region to generate the upper and lower tooth occlusion image.
[0113] When the patient actually bites, the mandible rotates about the jaw joints so that the lower teeth on the mandible approach the upper teeth on the maxilla, completing the occlusion.
[0114] Therefore, it is also necessary to determine, in the target three-dimensional image, the positions of the two jaw joints connecting the patient's mandible to the maxilla, and to connect the two jaw joint positions to determine the jaw joint axis. The jaw joint axis can be expressed as a straight line in the three-dimensional coordinate system.
[0115] Specifically, the jaw joint positions can be determined by manual labeling in the target three-dimensional image. For example, the user can rotate the target three-dimensional image displayed on the screen using an input device such as a touch screen or mouse to find a model angle convenient for observing the jaw joints, and then mark each jaw joint through the input device. It is also possible to apply image recognition technology to the target three-dimensional image and determine the position of each jaw joint automatically.
[0116] Since the position of each jaw joint can be represented by three-dimensional coordinates in the coordinate system of the target three-dimensional image, the position coordinates of the two jaw joints can be acquired, and the two position coordinates determine a unique straight line in the target three-dimensional image; this straight line is the jaw joint axis.
[0117] Further, the upper tooth region and/or the lower tooth region in the target three-dimensional image can be rotated about the jaw joint axis to simulate the actual occlusal movement of the patient's teeth.
[0118] Specifically, the user can enter the rotation angle needed to bring the upper and lower teeth into occlusion and select the tooth region to be rotated; the system then rotates the selected tooth region by the entered angle toward the other tooth region to generate the upper and lower tooth occlusion image. It is also possible to rotate the upper tooth region and/or the lower tooth region about the jaw joint axis automatically, so that the upper and lower teeth approach each other, while checking in the coordinate system of the target three-dimensional image whether the tooth images of the upper and lower teeth intersect; if they intersect, the rotation is stopped, yielding the upper and lower tooth occlusion image.
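A minimal sketch of the automatic variant, under stated assumptions: the upper tooth region is given as a boolean voxel mask, the lower tooth region as voxel coordinates, and rotate_about_axis is a helper applying the Rodrigues-based transform sketched later in this section. The lower region is rotated in small increments about the jaw joint axis until its voxels first overlap the upper mask.

```python
import numpy as np

def close_until_contact(upper_mask, lower_points, axis_point, axis_dir,
                        step_deg=0.5, max_deg=30.0):
    """Rotate lower-tooth points about the jaw joint axis until they first
    intersect the upper-tooth mask; returns the contact angle in degrees.

    upper_mask: boolean (z, y, x) voxel mask of the upper tooth region.
    lower_points: (N, 3) float voxel coordinates of lower-tooth voxels.
    rotate_about_axis: assumed helper (see the Rodrigues sketch below).
    """
    for angle in np.arange(step_deg, max_deg, step_deg):
        moved = rotate_about_axis(lower_points, axis_point, axis_dir,
                                  np.deg2rad(angle))
        idx = np.round(moved).astype(int)
        # keep indices that still fall inside the volume
        ok = np.all((idx >= 0) & (idx < np.array(upper_mask.shape)), axis=1)
        if upper_mask[tuple(idx[ok].T)].any():    # first voxel contact
            return angle
    return max_deg                                # no contact within range
```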
[0119] Further, to make it easier for the user to observe the occlusion of the upper and lower teeth, the user can drag the upper or lower tooth region displayed on the screen through an input device such as a touch screen or mouse, so that the dragged region rotates about the jaw joint axis following the user's drag trajectory, which makes the occlusion process easy to observe. The user can also stop dragging once the upper and lower teeth are observed to be in occlusion, obtaining a more accurate upper and lower tooth occlusion image.
[0120] The embodiment of the present invention discloses an upper and lower tooth occlusion simulation method, including: acquiring a target three-dimensional image comprising a three-dimensional image of the patient's oral cavity and a three-dimensional image of an imaging marker of an oral positioning tool, the target three-dimensional image being reconstructed from a CBCT image of the patient's oral cavity captured while the patient wears the oral positioning tool on the upper or lower teeth; determining three positioning coordinates in the target three-dimensional image according to the three-dimensional images of the imaging markers; determining the upper and lower tooth interface in the target three-dimensional image according to the three positioning coordinates; dividing the target three-dimensional image by the upper and lower tooth interface to obtain an upper tooth region and a lower tooth region, both three-dimensional regions in the target three-dimensional image; and rotating the upper tooth region and/or the lower tooth region about the jaw joint axis in the target three-dimensional image to generate the upper and lower tooth occlusion image. In this way, when the doctor needs to observe the patient, the patient's upper and lower tooth interface can be quickly determined from the three-dimensional images of the imaging markers in the target three-dimensional image, and the upper and lower tooth regions can then be determined quickly and accurately, realizing fast occlusion simulation, greatly saving diagnosis and treatment time, and improving the convenience of observing the patient's upper and lower tooth occlusion during diagnosis and treatment.
[0121] Figure 4 is a flowchart of the steps of another upper and lower tooth occlusion simulation method according to an embodiment of the present invention. As shown in Figure 4, the method includes:
[0122] Step 201: acquire a target three-dimensional image comprising a three-dimensional image of the patient's oral cavity and a three-dimensional image of an imaging marker of the oral positioning tool, the target three-dimensional image being reconstructed from a CBCT image of the patient's oral cavity, the CBCT image being captured while the patient wears the oral positioning tool on the upper or lower teeth.
[0123] After the oral positioning tool is mounted on the patient's teeth, the patient's head is preferably scanned by a tomographic scanning device such as a Cone Beam CT (CBCT) to obtain a target three-dimensional image containing the patient's skull and the three-dimensional images of the imaging markers of the oral positioning tool. Other technical means capable of imaging bone can also be used to obtain the target three-dimensional image.
[0124] Step 202: determine three positioning coordinates in the target three-dimensional image according to the three-dimensional image of the imaging marker.
[0125] Because the oral positioning tool placed in the patient's mouth would otherwise appear in the target three-dimensional image obtained by cone-beam computed tomography of the patient's head and occlude part of the patient's teeth, the oral positioning tool can be manufactured from a material such as plastic that does not image under cone-beam computed tomography, so that the tool body does not interfere with the target three-dimensional image. Metal, however, does image under cone-beam computed tomography, so the imaging markers on the oral positioning tool may adopt a ball structure of metallic material, such as titanium spheres.
[0126] Alternatively, step 202 may further comprise:
[0127] Sub-step 2021: identify the three-dimensional images of at least three imaging markers in the target three-dimensional image, acquire the coordinate positions of three of those imaging marker images, and determine the three positioning coordinates from those coordinate positions; or, in response to a user selection operation on the three-dimensional images of the imaging markers, acquire the coordinate positions of the three three-dimensional image points corresponding to the selection operation, and determine the three positioning coordinates from the coordinate positions of those three points.
[0128] Determining the positions of the metal balls in the target three-dimensional image and determining the positioning coordinates from those positions may refer to step 102 above, and is not repeated in this embodiment of the present invention.
[0129] It should be noted that, to improve accuracy, some oral positioning tools carry more than three metal balls, so the target three-dimensional image contains more than three imaging marker images. In that case, three of the imaging marker images can be selected for determining the three positioning coordinates. For example, the three-dimensional images of the markers can be grouped into triangles, the area of each triangle calculated, and the three markers forming the largest triangle selected for determining the three positioning coordinates, so that the three positioning coordinates are as far apart as possible, improving the accuracy of the determined upper and lower tooth interface. Those skilled in the art can also select three imaging marker images from the larger set in other ways, which this embodiment of the present application does not specifically limit.
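A brief sketch of the largest-triangle selection just described, assuming the candidate marker centers are already available as coordinates (for example from the blob-centroid sketch above):

```python
import numpy as np
from itertools import combinations

def pick_three_markers(centers):
    """From N > 3 candidate marker centers, pick the three spanning the
    largest triangle, so the three positioning coordinates lie as far
    apart as possible and the fitted plane is well-conditioned.

    centers: (N, 3) array of marker-center coordinates.
    """
    best, best_area = None, -1.0
    for i, j, k in combinations(range(len(centers)), 3):
        a, b, c = centers[i], centers[j], centers[k]
        area = 0.5 * np.linalg.norm(np.cross(b - a, c - a))  # triangle area
        if area > best_area:
            best, best_area = (i, j, k), area
    return centers[list(best)]
```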
[0130] With at least three metal balls located in the target three-dimensional image, the three positioning coordinates can be determined quickly, and regardless of how the target three-dimensional image is generated, it will contain the metal-ball images used to determine the positioning coordinates. This not only greatly improves the speed of determining the positioning coordinates but also enhances the applicability of the technical solution.
[0131] Alternatively, sub-step 2021 may further comprise:
[0132] Sub-step A1: perform feature analysis on the target three-dimensional image to identify the three-dimensional images of at least three imaging markers in the target three-dimensional image.
[0133] Alternatively, sub-step A1 may further include:
[0134] Sub-step a1: in response to a user selection operation on the three-dimensional images of the at least three imaging markers, acquire the coordinate positions of the at least three three-dimensional image points corresponding to the selection operation.
[0135] Since the target three-dimensional image contains many pixels, having the computer run recognition over the entire target three-dimensional image consumes considerable computing power and time. To further increase the speed at which the occlusion image is generated, the three-dimensional images of the at least three imaging markers can also be identified from the target three-dimensional image by combining computer recognition with manual input.
[0136] Specifically, the user's selection operation on the at least three imaging marker images can be captured, and the coordinate positions of the at least three three-dimensional image points corresponding to that selection acquired. The user may select the points on the target three-dimensional image through devices such as a mouse or touch screen, or may directly enter the coordinate values of the at least three points through an input device such as a keyboard.
[0137] Sub-step a2: determine at least three recognition regions based on the coordinate positions of the at least three three-dimensional image points.
[0138] Because the metal balls are small, the point coordinates selected by the user can hardly reflect the exact positions of the balls, so each point selected by the user needs to be expanded to form at least three recognition regions. Specifically, a preset expansion radius can be provided, and each point selected by the user taken as a center and expanded to the pixels within the preset radius, obtaining a recognition region. Of course, other means of expanding each point's coordinate position may also be used, which this embodiment of the present application does not specifically limit.
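A minimal sketch of this expansion, assuming the preset radius is expressed in voxels: the clicked point becomes the center of a spherical mask that restricts subsequent feature analysis.

```python
import numpy as np

def recognition_region(shape, seed, radius):
    """Expand a user-clicked point into a spherical recognition region.

    shape: (z, y, x) shape of the target volume.
    seed: clicked voxel coordinate (z, y, x).
    radius: preset expansion radius in voxels (an assumed tuning value).
    Returns a boolean mask restricting subsequent feature analysis.
    """
    zz, yy, xx = np.ogrid[:shape[0], :shape[1], :shape[2]]
    dist2 = (zz - seed[0]) ** 2 + (yy - seed[1]) ** 2 + (xx - seed[2]) ** 2
    return dist2 <= radius ** 2
```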
[0139] Incidentally, the user may also determine the at least three recognition regions directly, for example by marquee selection or by circling them.
[0140] Sub-step a3: perform feature analysis on the at least three recognition regions to identify the three-dimensional images of the at least three imaging markers within them.
[0141] After the at least three recognition regions are determined, feature analysis is performed on each recognition region to match the three-dimensional image of an imaging marker within it. This greatly reduces the number of pixels involved in image recognition and increases the efficiency of identifying the at least three imaging marker images in the target three-dimensional image.
[0142] Sub-step A2: acquire the three-dimensional images of three imaging markers from the three-dimensional images of the at least three imaging markers, and determine the position coordinates of those three imaging marker images.
[0143] Sub-step A3: determine the three positioning coordinates based on the three position coordinates.
[0144] Step 203: determine the upper and lower tooth interface in the target three-dimensional image according to the three positioning coordinates.
[0145] After the positioning coordinates are determined, the upper and lower tooth interface may be determined by the method of step 103, which is not repeated in this embodiment of the present invention.
[0146] Alternatively, step 203 may further comprise:
[0147] Sub-step 2031: construct positioning vectors between adjacent pairs of the three positioning coordinates in the clockwise or counterclockwise direction of the top view of the target three-dimensional image.
[0148] Since the two sides of the upper and lower tooth interface correspond to different tooth structures, namely the upper teeth and the lower teeth, when determining the interface it is also necessary to determine which tooth structure each side of the interface corresponds to.
[0149] Specifically, the triangle formed by connecting the determined positioning coordinates is not equilateral; for example, the three sides of the triangle formed by connecting the three metal balls on the oral positioning tool are unequal. Therefore, once the positioning coordinates are determined from the three metal balls, the orientation of the triangle enclosed by the three positioning coordinates reveals the direction in which the positioning tool is mounted in the oral cavity.
[0150] If the mounting direction of the oral positioning tool is determined to face upward when the target three-dimensional image is upright, the positioning coordinates corresponding to the metal ball positions can be connected in the clockwise direction, constructing a positioning vector between each pair of adjacent positioning coordinates.
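One way to realize the clockwise convention, as a sketch: project the three coordinates onto the horizontal plane of the upright image and test the sign of the z component of the cross product of two edge vectors. The axis convention below (points given as (x, y, z) with z vertical, clockwise judged looking down from above) is an assumption for illustration; the sign test flips with the opposite handedness.

```python
import numpy as np

def order_clockwise_top_view(p0, p1, p2):
    """Return the three positioning coordinates ordered clockwise as seen
    from above. Assumes points are (x, y, z) arrays with z vertical in
    the upright target image.
    """
    e1, e2 = p1 - p0, p2 - p0
    # z component of the cross product of the projected (x, y) edges
    cross_z = e1[0] * e2[1] - e1[1] * e2[0]
    if cross_z > 0:          # counterclockwise under this convention: swap
        p1, p2 = p2, p1
    return p0, p1, p2
```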
[0151] Referring to Figure 5, Figure 5 is a schematic diagram of positioning vectors according to an embodiment of the present invention. As shown in Figure 5, the positioning coordinates A, B, and C corresponding to the metal balls are connected clockwise, giving three positioning vectors a, b, and c.
[0152] Sub-step 2032: determine the upper and lower tooth interface according to the mixed product of the positioning vectors.
[0153] After the positioning vectors are determined, a plane equation can be established as the mixed product of the positioning vectors, and solving that plane equation yields the upper and lower tooth interface. The mixed product, also known as the scalar triple product, is the result of multiplying three vectors.
[0154] Taking Figure 5 as an example, the positioning coordinates A, B and C corresponding to the metal balls are connected in the clockwise direction, giving the three positioning vectors a = (x0, y0, z0), b = (x1, y1, z1) and c = (x2, y2, z2). Writing A = (xA, yA, zA) for one of the positioning coordinates and (x, y, z) for a point on the plane, the following plane equation can be established in mixed-product (determinant) form:

[0155]
    | x - xA   y - yA   z - zA |
    |   x0       y0       z0   |  =  0
    |   x1       y1       z1   |

[0156] Solving the above plane equation yields the equation of the upper and lower tooth interface, thereby determining the interface.
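As a brief sketch (not the patent's literal procedure), expanding the determinant above gives the familiar point-normal form of the plane; in code, the normal is the cross product of two edge vectors of the positioning triangle:

```python
import numpy as np

def interface_plane(A, B, C):
    """Plane through positioning coordinates A, B, C, each an (x, y, z) array.

    Returns (n, d) with the plane written as n . p + d = 0, which is the
    expanded form of the mixed-product determinant above. Connecting the
    points clockwise vs. counterclockwise flips the sign of n.
    """
    n = np.cross(B - A, C - A)       # normal vector = (B-A) x (C-A)
    n = n / np.linalg.norm(n)        # normalize for stable signed distances
    d = -np.dot(n, A)
    return n, d
```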
[0157] A positioning vector is determined from the positioning coordinates, and the upper and lower tooth interface determined from the mixed product of the positioning vectors carries normal vector information; this normal vector information can then directly determine the upper tooth region and the lower tooth region in the subsequent process, improving computational efficiency.
[0158] Step 204: in response to the user's adjustment instruction for the upper and lower tooth interface, adjust the position and/or angle of the upper and lower tooth interface in the target three-dimensional image according to the adjustment instruction.
[0159] The upper and lower tooth interface determined directly from the positioning coordinates may deviate somewhat from the actual interface, so the user can also correct the determined interface to make it more accurate.
[0160] Specifically, to improve the accuracy of the upper and lower tooth interface, after the plane containing the positioning coordinates is determined, the user's input from a keyboard, mouse, touch screen, or other input device can be received to fine-tune the plane, so that the user can calibrate the plane to obtain a more accurate upper and lower tooth interface, improving the accuracy of the determined interface.
[0161] Step 205: divide the target three-dimensional image by the upper and lower tooth interface to obtain the upper tooth region and the lower tooth region.
[0162] Alternatively, step 205 may further comprise:
[0163] Sub-step 2051: calculate the normal vector of the upper and lower tooth interface.
[0164] Since in sub-steps 2031 to 2032 the upper and lower tooth interface is calculated from positioning vectors constructed according to a clockwise or counterclockwise rule, and the interface lies between the upper and lower teeth, the normal vector of the interface can indicate the direction of the teeth on a given side of the interface, distinguishing the side of the upper teeth from the side of the lower teeth.
[0165] It should be noted that determining the positioning vectors in the clockwise versus counterclockwise direction results in the normal vector of the interface indicating different tooth sides. For example, if the positioning vectors are determined in the clockwise manner, the normal vector of the interface indicates the direction of the upper teeth; if they are determined in the counterclockwise manner, the normal vector indicates the direction of the lower teeth.
[0166] Sub-step 2052: according to the direction of the normal vector, determine the part of the target three-dimensional image on one side of the upper and lower tooth interface as the upper tooth region, and the part on the other side as the lower tooth region.
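A minimal sketch of this division, reusing interface_plane from the sketch above: the sign of each voxel's signed distance to the interface tells which side it lies on, and the normal's direction, fixed by the clockwise convention, tells which side is the upper teeth.

```python
import numpy as np

def split_regions(points, n, d):
    """Split voxel coordinates into upper/lower tooth regions by the sign
    of their signed distance to the interface plane n . p + d = 0.

    With the clockwise positioning-vector convention assumed here, the
    normal n points toward the upper teeth; reverse the comparisons if
    the counterclockwise convention is used instead.
    """
    signed = points @ n + d            # signed distance of every point
    upper = points[signed > 0]         # the side the normal points toward
    lower = points[signed < 0]
    return upper, lower
```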
[0167] In the embodiment of the invention, the upper tooth region and the lower tooth region can be determined automatically from the normal vector of the interface, without requiring the user to determine them manually and without resorting to computationally expensive techniques such as large-scale image recognition. This not only improves convenience for the user but also reduces the amount of computation needed to produce the final occlusion image, improving the speed of image output.
[0168] Step 206: determine the positions of the two jaw joints in the target three-dimensional image according to the user's input operation on the target three-dimensional image.
[0169] When the patient actually bites, the mandible rotates about the jaw joints so that the lower teeth on the mandible approach the upper teeth on the maxilla, completing the occlusion.
[0170] Therefore, it is also necessary to determine, in the target three-dimensional image, the positions of the two jaw joints connecting the patient's mandible to the maxilla, and to connect the two jaw joint positions to determine the jaw joint axis. The jaw joint axis may be represented by the equation of a straight line in the three-dimensional coordinate system.
[0171] Specifically, the jaw joint positions can be determined by manual labeling in the target three-dimensional image. For example, the physician can rotate the target three-dimensional image displayed on the screen using an input device such as a touch screen or mouse to find a model angle convenient for observing the jaw joints, and then mark each jaw joint through the input device. It is also possible to apply image recognition technology to the target three-dimensional image and determine the position of each jaw joint automatically.
[0172] Step 207: determine the line connecting the positions of the two jaw joints as the jaw joint axis.
[0173] Since the position of each jaw joint can be represented by three-dimensional coordinates in the coordinate system of the target three-dimensional image, the position coordinates of the two jaw joints can be acquired, and the two position coordinates determine a unique straight line in the target three-dimensional image; this straight line is the jaw joint axis.
[0174] Step 208: determine the region rotation angle and the region to be rotated according to the user's input operation, the region to be rotated comprising the upper tooth region and/or the lower tooth region.
[0175] Before the upper and lower tooth occlusion image is rendered, it is also necessary to determine the region to be rotated and its rotation angle, where the region to be rotated may be the upper tooth region and/or the lower tooth region.
[0176] Specifically, the region to be rotated may be determined according to the user's selection of the upper tooth region and/or the lower tooth region through a touch screen, mouse, or other input device. It can also be determined in other ways, which the embodiments of the present invention do not specifically limit.
[0177] After the region to be rotated is determined, the user can enter the corresponding region rotation angle through a keyboard or similar input device. Since the user may not know in advance the rotation angle that brings the upper and lower teeth fully into occlusion, the user can also drag the upper or lower tooth region displayed on the screen through a touch screen, mouse, or other input device, from which the rotation angle of the region is determined.
[0178] Step 209: offset the sampling lines of the region to be rotated according to the region rotation angle and the jaw joint axis, and render the target three-dimensional image by volume rendering with the offset sampling lines to obtain the upper and lower tooth occlusion image.
[0179] Volume rendering is a technique for generating a two-dimensional image on the screen directly from a three-dimensional data field (i.e., three-dimensional model data). During volume rendering, pixel samples are extracted from the three-dimensional model along sampling lines, and all the pixels a sampling line passes through are composited into the corresponding part of the two-dimensional image. There may be multiple sampling lines, passing through the surface and/or interior of the three-dimensional model, and volume rendering arranges the extracted pixels to generate the two-dimensional image.
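To make the sampling-line idea concrete, here is a heavily simplified sketch, not the patented renderer: one parallel sampling line per screen pixel, nearest-neighbour sampling along the line, and maximum-intensity compositing of the samples into the two-dimensional image.

```python
import numpy as np

def render_mip(volume, n_steps=256):
    """Toy orthographic volume rendering: cast one sampling line per
    (y, x) screen pixel straight through the volume along z and keep
    the maximum sample (maximum-intensity projection compositing).
    """
    z_max = volume.shape[0] - 1
    zs = np.linspace(0, z_max, n_steps)
    idx = np.round(zs).astype(int)          # nearest-neighbour sampling
    samples = volume[idx, :, :]             # (n_steps, y, x) samples
    return samples.max(axis=0)              # composite along each line
```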
[0180] Referring to Figure 6, Figure 6 is a schematic diagram of volume rendering according to an embodiment of the present invention. As shown in Figure 6, volume rendering draws the pixels 50 of the three-dimensional model 30 that the sampling line 40 passes through onto the screen 60 viewed by the user 70.
[0181] Referring to Figure 7, Figure 7 is a comparative schematic diagram of offset sampling lines provided in an embodiment of the present invention. As shown in part 301 of Figure 7, without offsetting the sampling lines, volume rendering of the target three-dimensional image yields a two-dimensional image of unclosed upper and lower teeth. As shown in part 302 of Figure 7, after the sampling lines are offset, a two-dimensional image of the closed upper and lower teeth can be drawn from the offset sampling lines.
[0182] Since generating the upper and lower tooth occlusion image does not require modifying the pixel content of the target three-dimensional image but only adjusting the positions of some pixels, the target three-dimensional image need not be reconstructed: it suffices to adjust the sampling lines used during volume rendering according to the jaw joint axis, the region to be rotated, and the region rotation angle, and the two-dimensional image of the closed upper and lower teeth is obtained directly by volume rendering.
[0183] Generating the closed upper and lower tooth image by volume rendering, drawing only the two-dimensional image of the closed teeth without rebuilding the target three-dimensional image, greatly reduces the computational cost of drawing the occlusion image, increases the speed at which it is produced, and saves chair time.
[0184] Alternatively, step 209 may further comprise:
[0185] Sub-step 2091: establish a rotating coordinate system with the position of one of the jaw joints as the coordinate origin and the jaw joint axis as the rotation axis.
[0186] To generate the occlusion image of the upper and lower teeth, a rotating coordinate system first needs to be established.
[0187] Specifically, the position of one of the two jaw joints may be taken as the coordinate origin, and the rotating coordinate system established with the jaw joint axis as the rotation axis.
[0188] Sub-step 2092: determine, by the Rodrigues formula, the homogeneous matrix of the region to be rotated after it rotates about the rotation axis by the region rotation angle in the rotating coordinate system.
[0189] Rodrigues' rotation formula is a formula for computing the new vector obtained after rotating a given vector in three-dimensional space about a rotation axis by a given angle.
[0190] Using the Rodrigues formula, the transformation produced by rotating the region to be rotated about the rotation axis by the region rotation angle in the rotating coordinate system can be calculated, and this transformation may be expressed as a 4x4 homogeneous matrix.
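A sketch of that construction, assuming the jaw joint axis is given by a point on the axis (the chosen origin) and a direction vector: Rodrigues' formula R = I + sin(t)K + (1 - cos(t))K^2 gives the 3x3 rotation, which is conjugated by translations into a single 4x4 homogeneous matrix. The helper rotate_about_axis, used in the automatic-contact sketch earlier, applies it to points.

```python
import numpy as np

def homogeneous_rotation(axis_point, axis_dir, angle):
    """4x4 homogeneous matrix rotating by `angle` (radians) about the
    line through `axis_point` with direction `axis_dir`, built from
    Rodrigues' rotation formula."""
    k = axis_dir / np.linalg.norm(axis_dir)
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])            # cross-product matrix of k
    R = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)
    M = np.eye(4)
    M[:3, :3] = R
    # translate axis point to the origin, rotate, translate back
    M[:3, 3] = axis_point - R @ axis_point
    return M

def rotate_about_axis(points, axis_point, axis_dir, angle):
    """Apply the homogeneous rotation to an (N, 3) array of points."""
    M = homogeneous_rotation(axis_point, axis_dir, angle)
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (homo @ M.T)[:, :3]
```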
[0191] Sub-step 2093: while rendering the target three-dimensional image by volume rendering, transform the position of the region to be rotated in the target three-dimensional image according to the homogeneous matrix, to obtain the upper and lower tooth occlusion image.
[0192] During the volume rendering of the target three-dimensional image, the homogeneous matrix generated in sub-step 2092 can be used to offset the sampling lines passing through the region to be rotated, and the target three-dimensional image is then rendered by volume rendering with the offset sampling lines to obtain the upper and lower tooth occlusion image.
[0193] The embodiment of the present invention discloses an upper and lower tooth occlusion simulation method, including: acquiring a target three-dimensional image comprising a three-dimensional image of the patient's oral cavity and a three-dimensional image of an imaging marker of an oral positioning tool, the target three-dimensional image being reconstructed from a CBCT image of the patient's oral cavity captured while the patient wears the oral positioning tool on the upper or lower teeth; determining three positioning coordinates in the target three-dimensional image according to the three-dimensional images of the imaging markers; determining the upper and lower tooth interface in the target three-dimensional image according to the three positioning coordinates; dividing the target three-dimensional image by the upper and lower tooth interface to obtain an upper tooth region and a lower tooth region, both three-dimensional regions in the target three-dimensional image; and rotating the upper tooth region and/or the lower tooth region about the jaw joint axis in the target three-dimensional image to generate the upper and lower tooth occlusion image. In this way, when the doctor needs to observe the patient, the patient's upper and lower tooth interface can be quickly determined from the three-dimensional images of the imaging markers in the target three-dimensional image, and the upper and lower tooth regions can then be determined quickly and accurately, realizing fast occlusion simulation, greatly saving diagnosis and treatment time, and improving the convenience of observing the patient's upper and lower tooth occlusion during diagnosis and treatment.
[0194] Corresponding to the upper and lower tooth occlusion simulation method provided by the embodiments of the present invention, and referring to Figure 8, the present invention further provides a structural diagram of an upper and lower tooth occlusion simulation apparatus. In this embodiment, the apparatus may comprise:
[0195] An obtaining module 501, configured to obtain the target three-dimensional image, the target three-dimensional image comprising a three-dimensional image of the patient's oral cavity and a three-dimensional image of an imaging marker of the oral positioning tool, the target three-dimensional image being reconstructed from a CBCT image of the patient's oral cavity captured while the patient wears the oral positioning tool on the upper or lower teeth;
[0196] A coordinate determination module 502, configured to determine three positioning coordinates in the target three-dimensional image according to the three-dimensional image of the imaging marker;
[0197] An interface determination module 503, configured to determine the upper and lower tooth interface in the target three-dimensional image according to the three positioning coordinates;
[0198] A division module 504, configured to divide the target three-dimensional image by the upper and lower tooth interface to obtain an upper tooth region and a lower tooth region, both three-dimensional regions in the target three-dimensional image;
[0199] An occlusion module 505, configured to rotate the upper tooth region and/or the lower tooth region about the jaw joint axis in the target three-dimensional image to generate the upper and lower tooth occlusion image.
[0200] In an embodiment, the coordinate determination module comprises:
[0201] A coordinate determination sub-module, configured to identify the three-dimensional images of at least three imaging markers in the target three-dimensional image, acquire the coordinate positions of three of those imaging marker images, and determine the three positioning coordinates from those coordinate positions; or, in response to a user selection operation on the three-dimensional images of the imaging markers, acquire the coordinate positions of the three three-dimensional image points corresponding to the selection operation and determine the three positioning coordinates from them.
[0202] In an embodiment, the coordinate determination sub-module comprises:
[0203] An identification sub-module, configured to perform feature analysis on the target three-dimensional image to identify the three-dimensional images of at least three imaging markers in the target three-dimensional image;
[0204] A coordinate position sub-module, configured to acquire the three-dimensional images of three imaging markers from the three-dimensional images of the at least three imaging markers and determine the position coordinates of those three imaging marker images;
[0205] A positioning coordinate sub-module, configured to determine the three positioning coordinates based on the three position coordinates.
[0206] In an embodiment, the identification sub-module comprises:
[0207] An image point sub-module, configured to acquire, in response to a selection operation on the three-dimensional images of at least three imaging markers, the coordinate positions of the at least three three-dimensional image points corresponding to the selection operation;
[0208] A recognition region sub-module, configured to determine at least three recognition regions based on the coordinate positions of the at least three three-dimensional image points;
[0209] A region analysis sub-module, configured to perform feature analysis on the at least three recognition regions to identify the three-dimensional images of the at least three imaging markers within them.
[0210] In one embodiment, the interface determination module includes:
[0211] A positioning vector sub-module, configured to construct the positioning vector between each pair of adjacent positioning coordinates in the clockwise or counterclockwise direction of the top view of the three positioning coordinates in the target three-dimensional image;
[0212] An interface determination sub-module, configured to determine the upper and lower tooth interface based on the mixed product of the positioning vectors.
[0213] In one embodiment, the apparatus further comprises:
[0214] An interface adjustment module, configured to, in response to the user's adjustment instruction for the upper and lower tooth interface, adjust the position and/or angle of the upper and lower tooth interface in the target three-dimensional image according to the adjustment instruction.
[0215] In one embodiment, the division module comprises:
[0216] A normal vector sub-module, configured to calculate the normal vector of the upper and lower tooth interface;
[0217] A division sub-module, configured to determine, according to the direction of the normal vector, the part of the target three-dimensional image on one side of the upper and lower tooth interface as the upper tooth region, and the part on the other side as the lower tooth region.
[0218] In one embodiment, the apparatus further comprises:
[0219] A jaw joint position module, configured to determine the positions of the two jaw joints in the target three-dimensional image according to the user's input operation on the target three-dimensional image;
[0220] A jaw joint axis module, configured to determine the line connecting the positions of the two jaw joints as the jaw joint axis.
[0221] In one embodiment, the occlusion module comprises:
[0222] A rotation determination sub-module, configured to determine the region rotation angle and the region to be rotated according to the user's input operation, the region to be rotated comprising the upper tooth region and/or the lower tooth region;
[0223] An occlusion sub-module, configured to offset the sampling lines of the region to be rotated according to the region rotation angle and the jaw joint axis, and to render the target three-dimensional image by volume rendering with the offset sampling lines to obtain the upper and lower tooth occlusion image.
[0224] In one embodiment, the occlusion sub-module comprises:
[0225] A rotating coordinate system sub-module, configured to establish a rotating coordinate system with the position of one of the jaw joints as the coordinate origin and the jaw joint axis as the rotation axis;
[0226] A matrix determination sub-module, configured to determine, by the Rodrigues formula, the homogeneous matrix of the region to be rotated after it rotates about the rotation axis by the region rotation angle in the rotating coordinate system;
[0227] An image sub-module, configured to transform the position of the region to be rotated in the target three-dimensional image according to the homogeneous matrix while rendering the target three-dimensional image by volume rendering, to obtain the upper and lower tooth occlusion image.
[0228] In summary, an embodiment of the present invention provides an upper and lower tooth occlusion simulation apparatus, including: obtaining a target three-dimensional image comprising a three-dimensional image of the patient's oral cavity and a three-dimensional image of an imaging marker of the oral positioning tool, the target three-dimensional image being reconstructed from a CBCT image of the patient's oral cavity captured while the patient wears the oral positioning tool on the upper or lower teeth; determining three positioning coordinates in the target three-dimensional image according to the three-dimensional image of the imaging marker; determining the upper and lower tooth interface in the target three-dimensional image according to the three positioning coordinates; dividing the target three-dimensional image by the upper and lower tooth interface to obtain an upper tooth region and a lower tooth region, both three-dimensional regions in the target three-dimensional image; and rotating the upper tooth region and/or the lower tooth region about the jaw joint axis in the target three-dimensional image to generate the upper and lower tooth occlusion image. In this way, when the doctor needs to observe the patient, the patient's upper and lower tooth interface can be quickly determined from the three-dimensional image of the imaging marker in the target three-dimensional image, and the upper and lower tooth regions can then be determined quickly and accurately, realizing fast occlusion simulation, greatly saving diagnosis and treatment time, and improving the convenience of observing the patient's upper and lower tooth occlusion during diagnosis and treatment.
[0229] Figure 9 is a block diagram of an electronic device 600 according to an exemplary embodiment. For example, the electronic device 600 can be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, fitness device, personal digital assistant, and the like.
[0230] Referring to Figure 9, the electronic device 600 can include one or more of the following components: a processing component 602, a memory 604, a power component 606, a multimedia component 608, an audio component 610, an input/output (I/O) interface 612, a sensor component 614, and a communication component 616.
[0231] The processing component 602 typically controls the overall operation of the electronic device 600, such as operations associated with display, telephone calls, data communication, camera operation, and recording. The processing component 602 can include one or more processors 620 to execute instructions to complete all or part of the steps of the above method. Additionally, the processing component 602 can include one or more modules that facilitate interaction between the processing component 602 and other components. For example, the processing component 602 can include a multimedia module to facilitate interaction between the multimedia component 608 and the processing component 602.
[0232] The memory 604 is configured to store various types of data to support operation of the electronic device 600. Examples of such data include instructions for any application or method operating on the electronic device 600, contact data, phone book data, messages, pictures, video, and the like. The memory 604 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disc.
[0233] The power component 606 provides power to the various components of the electronic device 600. The power component 606 can include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 600.
[0234] The multimedia component 608 includes a screen that provides an output interface between the electronic device 600 and the user. In some embodiments, the screen can include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen can be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide operation but also detect the duration and pressure associated with it. In some embodiments, the multimedia component 608 includes a front camera and/or a rear camera. When the electronic device 600 is in an operation mode such as shooting mode or video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera can be a fixed optical lens system or have focus and optical zoom capability.
[0235] The audio component 610 is configured to output and/or input audio signals. For example, the audio component 610 includes a microphone (MIC); when the electronic device 600 is in an operation mode such as call mode, recording mode, or speech recognition mode, the microphone is used to receive external audio signals. The received audio signal can be further stored in the memory 604 or transmitted via the communication component 616. In some embodiments, the audio component 610 also includes a speaker for outputting audio signals.
[0236] The I/O interface 612 provides an interface between the processing component 602 and peripheral interface modules, which can be a keyboard, a click wheel, buttons, and the like. These buttons can include, but are not limited to: a home button, volume buttons, a start button, and a lock button.
[0237] The sensor component 614 includes one or more sensors for providing state assessments of various aspects of the electronic device 600. For example, the sensor component 614 can detect the open/closed state of the electronic device 600 and the relative positioning of components such as the display and keypad of the electronic device 600; it can also detect a change in position of the electronic device 600 or one of its components, the presence or absence of user contact with the electronic device 600, the orientation or acceleration/deceleration of the electronic device 600, and changes in its temperature. The sensor component 614 can include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 614 can also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 614 can also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
[0238] The communication component 616 is configured to facilitate wired or wireless communication between the electronic device 600 and other devices. The electronic device 600 can access a wireless network based on a communication standard, such as WiFi, a carrier network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component 616 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 616 also includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module can be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
[0239] In an exemplary embodiment, the electronic device 600 can be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the upper and lower tooth occlusion simulation method of the present invention.
[0240] In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, such as the memory 604 including instructions, and the above instructions can be executed by the processor 620 of the electronic device 600 to complete the above method. For example, the non-transitory storage medium can be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, or an optical data storage device.
[0241] Figure 10 is a block diagram of an electronic device 700 according to an exemplary embodiment. For example, the electronic device 700 can be provided as a server. Referring to Figure 10, the electronic device 700 includes a processing component 722, which further includes one or more processors, and memory resources represented by a memory 732 for storing instructions executable by the processing component 722, such as an application. The application stored in the memory 732 can include one or more modules each corresponding to a set of instructions. Further, the processing component 722 is configured to execute instructions to perform the upper and lower tooth occlusion simulation method of the embodiments of the present invention.
[0242] The electronic device 700 may also include a power supply component 726 configured to perform power management of the electronic device 700, a wired or wireless network interface 750 configured to connect the electronic device 700 to a network, and an input/output (I/O) interface 758. The electronic device 700 can operate based on an operating system stored in the memory 732, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or similar.
[0243] Embodiments of the present invention also provide a computer program product, including a computer program, which, when executed by a processor, implements the upper and lower tooth occlusion simulation method.
[0244] Other embodiments of the present invention will be readily apparent to those skilled in the art from consideration of the specification. The present invention is intended to cover any variations, uses, or adaptations that follow the general principles of the invention and include common knowledge or customary techniques in the art not disclosed herein. The specification and examples are to be considered exemplary only, with the true scope and spirit of the invention indicated by the following claims.
[0245] It should be understood that the present invention is not limited to the exact structures described above and shown in the drawings, and that various modifications and changes can be made without departing from its scope. The scope of the invention is limited only by the appended claims.