Lens segmentation method and device and storage medium
A lens-segmentation and shape-template technology, applied in the field of image analysis, which solves the problems of low accuracy and high labor cost in manual segmentation of the lens structure, reducing labor cost and improving accuracy
Inactive Publication Date: 2019-08-27
GUANGZHOU SHIYUAN ELECTRONICS CO LTD +1
Abstract
The embodiment of the invention provides a lens segmentation method and device and a storage medium. The method comprises the steps of extracting a lens image area from an original image; obtaining aninitial lens structure in the lens image area through a preset neural network model; and performing edge smoothing processing on the initial lens structure by adopting a shape template to obtain a segmented lens structure, the shape template being obtained by training a lens sample. By means of the preset neural network model and the shape template, the automatic segmentation of the lens structure can be achieved, so that the accuracy of lens structure segmentation is improved while the labor cost is reduced.
Application Domain
Image enhancement, Image analysis
Technology Topic
Ophthalmology, Network model
Examples
- Experimental program (1)
Example Embodiment
[0049] The technical solutions in the embodiments of the present invention will be clearly and completely described below in conjunction with the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only a part of the embodiments of the present invention, rather than all the embodiments. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work shall fall within the protection scope of the present invention.
[0050] The terms "first" and "second" in the description, claims, and the foregoing drawings of the embodiments of the present invention are used to distinguish similar objects, and are not necessarily used to describe a specific order or sequence. It should be understood that data used in this way can be interchanged under appropriate circumstances, so that the embodiments of the present invention described herein can, for example, be implemented in an order other than that illustrated or described herein. In addition, the terms "including" and "having" and any variations thereof are intended to cover non-exclusive inclusion. For example, a process, method, system, product, or device that includes a series of steps or units is not necessarily limited to the steps or units clearly listed, but may include other steps or units that are not explicitly listed or that are inherent to such processes, methods, products, or equipment.
[0051] At present, cataract grading internationally adopts the LOCS II lens opacity classification standard. This standard relies heavily on human intervention, and doctors with different levels of experience grade the various structures with certain differences. Therefore, accurately segmenting the structures of the lens and automatically calculating the degree of opacity is extremely important.
[0052] Since manually annotating a large number of medical images is a tedious and error-prone task, the embodiments of the present invention provide, on the above basis, a lens segmentation method, device, and storage medium. By means of a preset neural network model and shape templates, automatic segmentation of the lens structure is achieved, improving the accuracy of lens-structure segmentation while reducing labor cost. In particular, the shape template refines the rough boundary of the initial lens structure produced by the preset neural network, thereby obtaining a lens structure with higher accuracy.
[0053] Figure 1 is a flowchart of a lens segmentation method provided by an embodiment of the present invention. This embodiment provides a lens segmentation method, which can be executed by a lens segmentation device implemented in software and/or hardware. For example, the lens segmentation device may include, but is not limited to, electronic equipment such as computers and servers. The server may be a single server, a server cluster composed of several servers, or a cloud computing service center.
[0054] As shown in Figure 1, the lens segmentation method provided in this embodiment includes the following steps:
[0055] S101. Extract a lens image area from the original image.
[0056] The original image may be an actually collected target image. The target image includes not only the lens image area but also other eyeball areas, such as the cornea and the vitreous body. The "lens image area" here is the area where the lens structure to be segmented is actually located. It should be added that the lens image area is smaller than the original image.
[0057] Optionally, this step may include: using Canny edge detection to extract the lens image area from the original image. Extracting the lens image area with Canny edge detection reduces redundant interference information in the original image.
[0058] The Canny edge detector is a multi-stage edge detection algorithm whose goal is to find an optimal edge detection result. Specifically, "optimal" edge detection means:
[0059] (1) Optimal detection: the algorithm identifies as many actual edges in the image as possible, and the probabilities of missing true edges and of falsely detecting non-edges are both as small as possible;
[0060] (2) Optimal positioning criterion: the position of the detected edge point is the closest to the actual edge point, or the detected edge deviates from the true edge of the object to the smallest extent due to the influence of noise;
[0061] (3) One-to-one correspondence between detection points and edge points: The edge points detected by the operator and the actual edge points should correspond one-to-one.
[0062] It should be noted that Canny edge detection is only an example of how to extract the lens image area from the original image; the embodiment of the present invention is not limited to it, and other techniques can also be used to extract the lens image area from the original image.
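As a rough illustration of this extraction step, the sketch below uses a simplified gradient-magnitude edge detector in place of the full Canny pipeline (it omits Gaussian smoothing, non-maximum suppression, and hysteresis thresholding) and crops the original image to the bounding box of the strong edges. The function name `extract_lens_roi`, the threshold value, and the synthetic test image are all hypothetical illustrations, not the implementation of the embodiment.

```python
import numpy as np

def extract_lens_roi(image, threshold=0.2, margin=4):
    """Locate a region of interest via a simplified edge detector.

    A normalised gradient-magnitude threshold stands in for Canny;
    the bounding box of the strong edges (plus a small margin) is
    returned as the cropped image area.
    """
    img = image.astype(np.float64)
    gy, gx = np.gradient(img)          # central-difference gradients
    mag = np.hypot(gx, gy)
    mag /= mag.max() + 1e-12           # normalise to [0, 1]
    ys, xs = np.nonzero(mag > threshold)
    if ys.size == 0:                   # no edges found: return the full image
        return image
    y0 = max(ys.min() - margin, 0)
    y1 = min(ys.max() + margin, img.shape[0] - 1)
    x0 = max(xs.min() - margin, 0)
    x1 = min(xs.max() + margin, img.shape[1] - 1)
    return image[y0:y1 + 1, x0:x1 + 1]

# A synthetic "eye" image: bright ellipse (lens) on a dark background.
h, w = 128, 128
yy, xx = np.mgrid[0:h, 0:w]
synthetic = (((yy - 64) / 20.0) ** 2 + ((xx - 64) / 35.0) ** 2 < 1).astype(np.float64)
roi = extract_lens_roi(synthetic)
print(roi.shape)   # smaller than the 128x128 original
```

Cropping to the edge bounding box mirrors the stated goal of this step: the returned area is smaller than the original image, so downstream processing sees less redundant interference.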
[0063] S102: Obtain the initial lens structure in the lens image area through the preset neural network model.
[0064] The preset neural network model may be a pre-trained U-shaped fully convolutional neural network model. The U-shaped fully convolutional network gives the preset neural network model better stability and scalability, and allows deep learning to learn features more effectively.
[0065] Exemplarily, this step may be specifically: inputting the lens image area into the preset neural network model, and obtaining the output of the preset neural network model as the initial lens structure.
[0066] This embodiment mainly embodies the application of the preset neural network model, that is, how to use the preset neural network model. For the training process, please refer to related instructions, which will not be repeated here.
[0067] S103, using the shape template to perform edge smoothing processing on the initial lens structure to obtain a segmented lens structure.
[0068] Among them, the shape template is obtained by training the lens samples.
[0069] It can be understood that the boundary of the initial lens structure produced by the preset neural network model is irregular, especially in the segmentation of the lens nucleus. Therefore, the embodiment of the present invention uses a shape template to refine the rough boundary of the lens nucleus region and obtain the final segmentation result.
[0070] Referring to Figure 2, which shows the segmentation process of an original image through the above S101 to S103. Here, 1 denotes the AS-OCT image; 2 denotes the region of interest (ROI), that is, the lens. This example divides the lens into three regions: the cortex region, the nucleus region, and the cornea region. 3 denotes the U-shaped fully convolutional neural network, which is used to predict the segmentation regions; 4 denotes the shape template, which is used to refine the rough boundary of the lens nucleus region. As shown in Figure 2, part of the nucleus region is incorrectly classified as cortex region; to solve this problem, a shape template is designed to refine the rough segmentation of the lens nucleus edge.
[0071] The preset neural network model is explained below with reference to Figure 2. The model includes an encoding part (left) and a decoding part (right).
[0072] Each deconvolution layer in the decoding part includes a cascade, taking as input the global information output by the previous deconvolution layer and the information of the corresponding convolutional layer in the encoding part. The deconvolution layer extracts information from the corresponding convolutional layer and fuses it with local features, so as to process the object to be segmented more effectively and avoid local interference.
[0073] The encoding part includes six convolutional layers; each convolutional layer contains two to three sub-convolutional layers, each followed by a ReLU activation function and 2x2 max pooling (MaxPooling). To restore the image effectively and extract features, the decoding part also contains six deconvolution layers; each deconvolution layer contains a cascade of the corresponding feature layer and spatial upsampling, followed by two convolutions and a ReLU activation, and takes as input the global information from the upper layer and the information of the corresponding layer in the encoding path. More generally, a convolutional layer may include a predetermined number of sub-convolutional layers cascaded with each other, with each sub-convolutional layer using ReLU activation and max pooling.
[0074] The preset neural network model uses a six-layer network structure. With a larger input lens image area, a deeper network can extract global information more effectively to segment the lens structure accurately, which suits the current network structure.
[0075] For example, the size of the lens image area is 1024*1024; Conv <3x3> with ReLU means a 3*3 convolution kernel followed by a ReLU activation function. The cross-entropy loss (Cross Entropy Loss) is applied to the output of each side-output layer; using a <1x1> convolution kernel in the side-output layers exploits features at different levels and further improves the segmentation effect.
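To make the six-stage structure concrete, the following sketch merely traces feature-map side lengths through the network for a 1024*1024 input, assuming size-preserving 3x3 convolutions, 2x2 max pooling after each encoder stage, and 2x spatial upsampling in each decoder stage. It is a dimensional illustration of the architecture described above, not the trained model itself.

```python
def unet_spatial_sizes(input_size=1024, depth=6):
    """Trace the feature-map side length through the U-shaped network.

    Assumes 3x3 convolutions with 'same' padding (size-preserving),
    2x2 max pooling per encoder stage, and 2x spatial upsampling per
    decoder stage, as described in the text.
    """
    encoder = [input_size]
    size = input_size
    for _ in range(depth):
        size //= 2                # 2x2 max pooling halves each side
        encoder.append(size)
    decoder = []
    for _ in range(depth):
        size *= 2                 # spatial upsampling doubles each side
        decoder.append(size)
    return encoder, decoder

enc, dec = unet_spatial_sizes()
print(enc)   # [1024, 512, 256, 128, 64, 32, 16]
print(dec)   # [32, 64, 128, 256, 512, 1024]
# Each decoder stage is concatenated (skip connection) with the
# encoder stage of matching size before its two 3x3 convolutions.
```

The trace shows why a six-stage network suits a 1024*1024 input: the bottleneck map is still 16*16, large enough to carry global context, and each decoder stage has an encoder stage of identical size available for the cascade.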
[0076] The U-shaped fully convolutional neural network model is used to predict the corneal, cortical, and nuclear regions of the lens; it trains well on small data sets and avoids overfitting, and its skip connections yield segmentation regions with clear boundaries.
[0077] In this embodiment, the lens image area is first extracted from the original image; then the initial lens structure in the lens image area is obtained through the preset neural network model, and a shape template is used to smooth the edges of the initial lens structure to obtain the segmented lens structure, the shape template being obtained by training lens samples. Automatic segmentation of the lens structure is achieved by means of the preset neural network model and shape template, reducing labor cost while improving the accuracy of lens-structure segmentation.
[0078] In the foregoing embodiment, in a possible implementation manner, the shape template may include a first shape template and a second shape template. The first shape template is used for edge smoothing processing on the anterior lens nucleus of the initial lens structure, and the second shape template is used for edge smoothing processing on the posterior lens nucleus of the initial lens structure.
[0079] It can be understood that there are multiple shape templates. Optionally, S103 (using a shape template to perform edge smoothing on the initial lens structure to obtain the segmented lens structure) may include: calculating the similarity between the multiple shape templates and the initial lens structure; and selecting the shape template with the greatest similarity and performing edge smoothing on the initial lens structure to obtain the segmented lens structure. Exemplarily, calculate the similarity between a plurality of first shape templates and the anterior lens nucleus of the initial lens structure, select the first shape template with the greatest similarity, and perform edge smoothing on the anterior lens nucleus; calculate the similarity between a plurality of second shape templates and the posterior lens nucleus of the initial lens structure, select the second shape template with the greatest similarity, and perform edge smoothing on the posterior lens nucleus to obtain the segmented lens structure.
[0080] Further, the calculating the similarity between the multiple shape templates and the initial lens structure may include:
[0081] For each shape template, the similarity between the shape template and the initial lens structure is obtained through the following steps:
[0082] Calculate the product of the shape template and the normalization parameter of the initial lens structure as the first intermediate value, the normalization parameter being the smallest of all distances obtained as the symmetry axis of the initial lens structure rotates;
[0083] Calculating the sum of the first intermediate value and the preset offset as the second intermediate value;
[0084] Perform boundary coding on the initial lens structure according to formula (1) to obtain the target value;
[0085] Determine the similarity according to the target value and the second intermediate value;
[0086] f(c, θ) = ||c - P_θ||    Formula (1)
[0087] where c represents the coordinates of the center point of the initial lens structure; P_θ represents the coordinates of the intersection of the symmetry axis of the initial lens structure with the boundary of the initial lens structure, θ being the angle of the symmetry axis relative to the reference line (the symmetry axis starts from the reference line and rotates in steps of a preset angle); || || denotes the norm; and {f(c, θ)} represents the target value.
[0088] For example, the number of shape templates is N (N a positive integer), expressed as {f(c_n, θ)}, n = 1, 2, 3, ..., N, where c_n represents the center-point coordinates of shape template n and θ represents the rotation angle of the symmetry axis. The initial lens structure is represented as S_t = {x_j, y_j}, where L is the number of boundary sampling points of the initial lens structure and c denotes its center-point coordinates, obtained from the boundary sampling points. Its normalization parameter is z_t = ||c - p_1||, where p_1 is the intersection of the symmetry axis with the boundary at the smallest of the distances obtained during the rotation of the symmetry axis. For each shape template, the similarity between the shape template and the initial lens structure is obtained through the following steps:
[0089] 1. Calculate the first intermediate value T_n: T_n = {f(c_n, θ)} × z_t.
[0090] 2. Calculate the second intermediate value T′_n: T′_n = T_n + offset, where offset is the preset offset and takes values in {-10, -9, ..., 9, 10}.
[0091] 3. Calculate the target value {f(c,θ)}.
[0092] 4. Determine the similarity from the target value and the second intermediate value. Specifically, calculate the difference between the target value and the second intermediate value, D_n = f(c, θ) - T′_n; the smaller D_n is, the more similar the shape template is to the initial lens structure. In this way, the similarity between each shape template and the initial lens structure is obtained.
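As an illustration of steps 1 to 4, the sketch below encodes a boundary as center-point distances at 5-degree steps (Formula (1)), normalizes by the minimum distance z_t, sweeps the offset over {-10, ..., 10}, and picks the template with the smallest aggregate difference |D_n|. The nearest-direction approximation of P_θ, the summed absolute-difference score, and the synthetic circle/ellipse templates are assumptions made for demonstration, not the embodiment's exact choices.

```python
import numpy as np

def boundary_signature(boundary, angles):
    """Encode a closed boundary as centre-point distances f(c, th).

    boundary: (L, 2) array of (x, y) boundary sampling points.
    For each angle, the boundary point whose direction from the centre
    is nearest stands in for the intersection P_th of the rotated
    symmetry axis with the boundary.
    """
    c = boundary.mean(axis=0)                  # centre point of the structure
    rel = boundary - c
    pt_angles = np.arctan2(rel[:, 1], rel[:, 0]) % (2 * np.pi)
    dists = np.hypot(rel[:, 0], rel[:, 1])
    sig = np.empty(len(angles))
    for i, th in enumerate(angles):
        diff = np.abs((pt_angles - th + np.pi) % (2 * np.pi) - np.pi)
        sig[i] = dists[np.argmin(diff)]        # ||c - P_th||
    return sig

def best_template(templates, boundary, angles, offsets=range(-10, 11)):
    """Return the index of the most similar normalised shape template."""
    target = boundary_signature(boundary, angles)      # target value {f(c, th)}
    z_t = target.min()                                 # normalisation parameter
    best_n, best_score = None, float("inf")
    for n, tpl in enumerate(templates):
        for off in offsets:
            T_prime = tpl * z_t + off                  # T'_n = T_n + offset
            score = np.abs(target - T_prime).sum()     # aggregate of |D_n|
            if score < best_score:
                best_n, best_score = n, score
    return best_n

# Hypothetical templates, normalised so their minimum distance is 1:
angles = np.deg2rad(np.arange(0, 360, 5))              # 5-degree steps
circle_tpl = np.ones_like(angles)
ellipse_tpl = 2.0 / np.sqrt(np.cos(angles) ** 2 + 4.0 * np.sin(angles) ** 2)

t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
circle_boundary = np.stack([30 * np.cos(t), 30 * np.sin(t)], axis=1)
print(best_template([circle_tpl, ellipse_tpl], circle_boundary, angles))  # 0
```

Because the templates are stored normalized, multiplying by z_t rescales them to the size of the structure under test, and the offset sweep absorbs small residual scale differences before the templates are compared.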
[0093] The above embodiments explain how the shape template is used; the following explains how the shape template is obtained by training. Specifically, the shape template is trained as follows:
[0094] According to the boundary point coordinates of the lens sample, obtain the center-point coordinates of the lens sample; obtain the coordinates of the intersection of the symmetry axis of the lens sample with the boundary of the lens sample, where the symmetry axis starts from the reference line and rotates in steps of a preset angle; from the center-point coordinates and the intersection coordinates, obtain the distance from the center point of the lens sample to the intersection; normalize the distance using the normalization parameter of the lens sample to obtain the nucleus boundary of the lens sample, the normalization parameter being the smallest of the distances obtained during the rotation of the symmetry axis; extract the middle region of the nucleus boundary; and cluster the middle regions corresponding to the M lens samples with a preset clustering algorithm to obtain N shape templates, where M and N are positive integers and M is greater than N.
[0095] The lens has an onion-like layered structure, and the lens nucleus is a smooth curved structure. Inspired by this, the lens nucleus structure shown in Figure 3 is encoded by the distances between the center point and the intersections of the symmetry axis with the boundary. Different layers share the same center point and can be distinguished from one another by their distance from the center point.
[0096] Referring to Figure 3, the boundary of lens sample m is denoted S_m = {x_i, y_i}, i = 1, 2, 3, ..., where (x_i, y_i) are the coordinates of the i-th sampling point of lens sample m, and the center-point coordinates of lens sample m are c_m, obtained from the boundary sampling points. The shape template corresponding to lens sample m is defined by the following formula:
[0097] f(c_m, θ) = ||c_m - P_θ||
[0098] where P_θ represents the coordinates of the intersection of the symmetry axis of lens sample m with the boundary of lens sample m, and θ starts from the reference line (the dashed line in Figure 3) and advances in steps of a preset angle of 5 degrees. In this way, the boundaries in different images can be encoded as shown in Figure 4, where the horizontal axis represents θ and the vertical axis represents ||c_m - P_θ||. The normalization parameter z_m = ||c_m - p_m1|| of lens sample m is used to normalize the curve shown in Figure 4, yielding the shape templates shown in Figure 5(a) and Figure 5(b), where Figure 5(a) represents the first shape template (symmetry axis rotated counterclockwise, 0 to 180 degrees) and Figure 5(b) represents the second shape template (symmetry axis rotated counterclockwise, 180 to 360 degrees).
[0099] The middle region of the nucleus boundary is the thick-line portion in Figure 3. The middle regions corresponding to the M lens samples are clustered with the preset clustering algorithm to obtain N shape templates, where M and N are both positive integers and M is greater than N.
[0100] Optionally, the preset clustering algorithm may be any of the following: the K-means algorithm, the fuzzy C-means (FCM) clustering algorithm, and so on. For detailed descriptions of the K-means and FCM algorithms, refer to the related art; they are not repeated here.
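A minimal sketch of this training procedure follows, under two stated assumptions: the center point is taken as the mean of the boundary sampling points, and plain k-means stands in for the preset clustering algorithm (FCM would be an equally valid choice). The synthetic sample generator and all names are illustrative, not the embodiment's data or implementation.

```python
import numpy as np

def normalised_signature(boundary, angles):
    """Centre-distance encoding of one lens-nucleus sample, divided by
    its minimum distance (the normalisation parameter of the sample)."""
    c = boundary.mean(axis=0)          # centre point from the boundary points
    rel = boundary - c
    pt_angles = np.arctan2(rel[:, 1], rel[:, 0]) % (2 * np.pi)
    dists = np.hypot(rel[:, 0], rel[:, 1])
    sig = np.array(
        [dists[np.argmin(np.abs((pt_angles - th + np.pi) % (2 * np.pi) - np.pi))]
         for th in angles])
    return sig / sig.min()

def train_templates(samples, angles, n_templates=2, iters=20, seed=0):
    """Cluster M normalised signatures into N shape templates (M > N).

    Plain k-means stands in for the 'preset clustering algorithm'.
    To mirror the middle-region extraction described above, pass only
    the angles covering the middle arc of the nucleus boundary.
    """
    sigs = np.stack([normalised_signature(s, angles) for s in samples])
    rng = np.random.default_rng(seed)
    centres = sigs[rng.choice(len(sigs), n_templates, replace=False)]
    for _ in range(iters):
        # Assign each signature to its nearest centre, then recompute means.
        labels = np.argmin(((sigs[:, None, :] - centres[None]) ** 2).sum(-1), axis=1)
        for k in range(n_templates):
            if np.any(labels == k):
                centres[k] = sigs[labels == k].mean(axis=0)
    return centres

# M = 6 synthetic samples (3 circle-like, 3 ellipse-like), N = 2 templates.
t = np.linspace(0, 2 * np.pi, 300, endpoint=False)
rng = np.random.default_rng(1)
samples = []
for i in range(6):
    a = 30.0 if i < 3 else 60.0
    noise = rng.normal(0.0, 0.5, t.size)
    samples.append(np.stack([(a + noise) * np.cos(t), (30.0 + noise) * np.sin(t)], axis=1))
angles = np.deg2rad(np.arange(0, 360, 5))
templates = train_templates(samples, angles, n_templates=2)
print(templates.shape)  # (2, 72)
```

Each returned template is a normalized distance curve over θ, directly comparable to the normalized signatures used at matching time, since every signature is divided by its own minimum distance before clustering.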
[0101] Compared with current lens segmentation methods, the present invention has at least the following beneficial effects:
[0102] (1) The present invention designs an automatic lens-structure segmentation method based on deep learning. Because data acquisition is relatively difficult, current lens segmentation relies on manual segmentation, whose consistency and accuracy depend on the experience of the annotator; an automatic segmentation scheme is therefore very meaningful for effective and stable segmentation.
[0103] (2) A shape-template matching method is designed based on the structural characteristics of the lens nucleus, making the segmentation result close to the real physical structure and taking the characteristics of the internal structure of the lens into account. The shape template learns the shapes in the training samples and then corrects the test data, effectively segmenting the structure.
[0104] (3) The U-shaped fully convolutional neural network is used to segment the lens structure. The network can train and learn the features in the data well.
[0105] (4) Strong anti-interference ability and good generalization ability.
[0106] Figure 6 is a schematic structural diagram of a lens segmentation device provided by an embodiment of the present invention. As shown in Figure 6, the lens segmentation device 60 includes an extraction module 61 and a processing module 62, where:
[0107] The extraction module 61 is used to extract the lens image area from the original image.
[0108] The processing module 62 is connected to the extraction module 61 and is used to obtain, through a preset neural network model, the initial lens structure in the lens image area extracted by the extraction module 61, and to perform edge smoothing on the initial lens structure using a shape template to obtain the segmented lens structure, the shape template being obtained by training lens samples.
[0109] Optionally, the shape template may include a first shape template and a second shape template. Wherein, the first shape template is used to perform edge smoothing processing on the anterior lens nucleus of the initial lens structure, and the second shape template is used to perform edge smoothing processing on the posterior lens nucleus of the initial lens structure.
[0110] In the above embodiment, there are multiple shape templates, and when the processing module 62 uses a shape template to perform edge smoothing on the initial lens structure to obtain the segmented lens structure, it is specifically used to: calculate the similarity between the multiple shape templates and the initial lens structure; and select the shape template with the greatest similarity and perform edge smoothing on the initial lens structure to obtain the segmented lens structure.
[0111] Optionally, when the processing module 62 is used to calculate the similarity between the multiple shape templates and the initial lens structure, it is specifically used to:
[0112] For each of the shape templates, the similarity between the shape template and the initial lens structure is obtained through the following steps:
[0113] Calculate the product of the shape template and the normalization parameter of the initial lens structure as the first intermediate value, the normalization parameter being the smallest of all distances obtained as the symmetry axis of the initial lens structure rotates;
[0114] Calculating the sum of the first intermediate value and the preset offset as a second intermediate value;
[0115] Perform boundary coding on the initial lens structure according to formula (1) to obtain a target value;
[0116] Determine the degree of similarity according to the target value and the second intermediate value;
[0117] f(c, θ) = ||c - P_θ||    Formula (1)
[0118] where c represents the coordinates of the center point of the initial lens structure; P_θ represents the coordinates of the intersection of the symmetry axis of the initial lens structure with the boundary of the initial lens structure, θ being the angle of the symmetry axis relative to the reference line (the symmetry axis starts from the reference line and rotates in steps of a preset angle); || || denotes the norm; and {f(c, θ)} represents the target value.
[0119] Further, the shape template can be obtained through training in the following manner:
[0120] Obtaining the coordinates of the center point of the lens sample according to the boundary point coordinates of the lens sample;
[0121] Acquiring the coordinates of the intersection point of the symmetry axis of the lens sample and the boundary of the lens sample, wherein the symmetry axis starts from a reference line and rotates at a preset angle;
[0122] Obtaining the distance from the center point of the lens sample to the intersection point according to the coordinates of the center point and the coordinates of the intersection point;
[0123] The normalization parameter of the lens sample is used to normalize the distance to obtain the nucleus boundary of the lens sample, the normalization parameter being the smallest of the distances obtained during the rotation of the symmetry axis;
[0124] Extract the middle area of the nuclear boundary;
[0125] A preset clustering algorithm is used to cluster the middle regions corresponding to the M lens samples to obtain N shape templates, where M and N are both positive integers, and M is greater than N.
[0126] Wherein, the preset clustering algorithm includes any one of the following clustering algorithms: K-means algorithm, FCM clustering algorithm, and so on.
[0127] In addition, the extraction module 61 may be specifically used for: using canny edge detection technology to extract the lens image area from the original image.
[0128] The lens segmentation device provided in this embodiment first extracts the lens image area from the original image; then obtains the initial lens structure in the lens image area through a preset neural network model and uses a shape template to perform edge smoothing on the initial lens structure to obtain the segmented lens structure, the shape template being obtained by training lens samples. Automatic segmentation of the lens structure is achieved by means of the preset neural network model and shape template, reducing labor cost while improving the accuracy of lens-structure segmentation.
[0129] Figure 7 is a schematic structural diagram of a lens segmentation device provided by another embodiment of the present invention. As shown in Figure 7, the lens segmentation device 70 includes a memory 71, a processor 72, and a computer program stored in the memory 71 and executable by the processor 72. The processor 72 executes the computer program to make the lens segmentation device 70 perform the following operations:
[0130] Extract the lens image area from the original image;
[0131] Obtaining the initial lens structure in the lens image area by preset neural network model;
[0132] A shape template is used to perform edge smoothing processing on the initial lens structure to obtain a segmented lens structure, and the shape template is obtained by training a lens sample.
[0133] It should be noted that the embodiment of the present invention does not limit the number of memories 71 and processors 72; there may be one or more of each, and Figure 7 takes one of each as an example. The memory 71 and the processor 72 may be connected by wire or wirelessly in various ways.
[0134] In some embodiments, the shape template includes a first shape template and a second shape template, wherein the first shape template is used to perform edge smoothing processing on the anterior lens nucleus of the initial lens structure, and the second shape template The template is used to perform edge smoothing processing on the posterior lens nucleus of the initial lens structure.
[0135] Optionally, there are multiple shape templates, and using a shape template to perform edge smoothing on the initial lens structure to obtain the segmented lens structure may include: calculating the similarity between the multiple shape templates and the initial lens structure; and selecting the shape template with the greatest similarity and performing edge smoothing on the initial lens structure to obtain the segmented lens structure.
[0136] Further, calculating the similarity between the multiple shape templates and the initial lens structure may include: for each shape template, obtaining the similarity between the shape template and the initial lens structure through the following steps: calculate the product of the shape template and the normalization parameter of the initial lens structure as the first intermediate value, the normalization parameter being the smallest of all distances obtained as the symmetry axis of the initial lens structure rotates; calculate the sum of the first intermediate value and the preset offset as the second intermediate value; perform boundary coding on the initial lens structure according to Formula (1) to obtain the target value; and determine the similarity from the target value and the second intermediate value.
[0137] Optionally, the shape template can be obtained by training in the following manner: obtain the center-point coordinates of the lens sample from the boundary point coordinates of the lens sample; obtain the coordinates of the intersection of the symmetry axis of the lens sample with the boundary of the lens sample, where the symmetry axis starts from the reference line and rotates in steps of a preset angle; obtain, from the center-point coordinates and the intersection coordinates, the distance from the center point of the lens sample to the intersection; normalize the distance using the normalization parameter of the lens sample to obtain the nucleus boundary of the lens sample, the normalization parameter being the smallest of the distances obtained during the rotation of the symmetry axis; extract the middle region of the nucleus boundary; and cluster the middle regions corresponding to the M lens samples using a preset clustering algorithm to obtain N shape templates, where M and N are both positive integers and M is greater than N.
[0138] The preset clustering algorithm may be either of the following: the K-means algorithm or the fuzzy C-means (FCM) clustering algorithm.
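The clustering of M normalized boundary profiles into N shape templates can be sketched with a minimal K-means (one of the two algorithms named above); the cluster centers serve as the templates. This is a generic K-means, not the patent's specific configuration.

```python
import numpy as np

def kmeans_templates(profiles, n_templates, n_iter=50, seed=0):
    """Minimal K-means over M profile vectors, yielding N cluster
    centers used as shape templates (M > N as required)."""
    X = np.asarray(profiles, dtype=float)
    rng = np.random.default_rng(seed)
    # initialize centers from N distinct samples
    centers = X[rng.choice(len(X), n_templates, replace=False)]
    for _ in range(n_iter):
        # assign each profile to its nearest center
        labels = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(axis=1)
        # move each center to the mean of its assigned profiles
        for k in range(n_templates):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    return centers
```

FCM differs only in using soft (membership-weighted) assignments instead of the hard `argmin` labels.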
[0139] In addition, the aforementioned extraction of the lens image area from the original image may include: extracting the lens image area from the original image using Canny edge detection.
[0140] Based on the above, the lens segmentation device 70 may also output the segmented lens structure. To this end, the lens segmentation device 70 may further include a display screen 73 for outputting the segmented lens structure.
[0141] The display screen 73 may be a capacitive screen, an electromagnetic screen, or an infrared screen. In general, the display screen 73 displays data according to instructions from the processor 72, receives touch operations performed on it, and sends the corresponding signals to the processor 72 or other components of the lens segmentation device 70. Optionally, when the display screen 73 is an infrared screen, it further includes an infrared touch frame arranged around the display screen 73, which can also receive infrared signals and send them to the processor 72 or other components of the lens segmentation device 70.
[0142] An embodiment of the present invention also provides a computer-readable storage medium containing computer-readable instructions. When a processor reads and executes the computer-readable instructions, the processor is caused to perform the steps in any of the above-mentioned embodiments.
[0143] A person of ordinary skill in the art can understand that all or part of the steps in the above method embodiments can be implemented by a program instructing the relevant hardware. The program can be stored in a computer-readable storage medium and, when executed, performs the steps of the foregoing method embodiments. The storage medium includes any medium capable of storing program code, such as read-only memory (ROM), random access memory (RAM), magnetic disks, or optical disks.
[0144] Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.