Three-dimensional display method based on augmented reality, and augmented reality glasses

An augmented reality and three-dimensional display technology, applied in optics, instruments, electrical components, etc. It addresses the problem of poor fusion between virtual and real content, and achieves a good display effect in a device that is easy to carry.

Active Publication Date: 2015-09-02
VR TECH (SHENZHEN) LTD
Cites: 3 | Cited by: 40

AI-Extracted Technical Summary

Problems solved by technology

[0003] The main purpose of the present invention is to propose a three-dimensional display method and augment...

Method used

The augmented reality glasses extract the target object from the left and right image pair by combining the background difference method with the inter-frame difference method: the inter-frame difference method first locates the motion region by differencing adjacent frames, and the background difference method is then applied within that region, which greatly improves the efficiency of target recognition. A scale-invariant feature transform (SIFT) matching algorithm then tracks the extracted target object, quickly ensuring correct recognition. The parallax of the target object is measured from the round-trip phase difference of infrared light actively emitted at the binocular stereo camera, the three-dimensional coordinates of the target object are obtained from this parallax, and the target object is reconstructed in three dimensions; infrared ranging gives high positioning accuracy. The inverted image pair formed on the image sensors is flipped into an upright image pair to match the way people normally view objects. Finally, the augmented reality scene corresponding to the target object is superimposed on the left and right image pair, the superimposed pair is shown in split-screen form, and the split-screen images are projected onto the left and right eyepieces. The fusion of virtual and real content is good, and the glasses are easy to carry.

Abstract

The invention discloses a three-dimensional display method based on augmented reality. The method comprises: obtaining left and right image pairs of a target object under different viewing angles from two different viewpoints using a binocular stereo camera; superimposing an augmented reality scene corresponding to the target object on the left and right image pairs respectively; and displaying the superimposed left and right image pairs in split-screen form and projecting the split-screen left and right image pairs onto the left and right eyepieces correspondingly. The invention also discloses augmented reality glasses. According to the invention, the virtual-real fusion effect is good, and the augmented reality glasses are easy to carry.


Examples

  • Experimental program (1)

Example Embodiment

[0044] It should be understood that the specific embodiments described herein are only used to explain the present invention, but not to limit the present invention.
[0045] As shown in Figure 1, the first embodiment of the present invention proposes a three-dimensional display method based on augmented reality. The three-dimensional display method includes:
[0046] Step S100: using a binocular stereo camera to obtain a pair of left and right images of the target object under different viewing angles from two different viewpoints.
[0047] The augmented reality glasses use a binocular stereo camera to observe the same target object from two different viewpoints, obtaining a pair of left and right images of the target object under different viewing angles. The left and right image pair consists of two mutually independent images, collected by the binocular stereo camera from two different viewpoints, with the target object as the foreground. In this embodiment, after the pair of left and right images is captured from different angles by the binocular stereo camera, the target object in the pair is first extracted by combining the background difference method and the inter-frame difference method; a SIFT (scale-invariant feature transform) matching algorithm then tracks the target object. The round-trip phase difference of infrared light actively emitted at the binocular stereo camera is used to measure the parallax information of the target object, the depth information and three-dimensional coordinates of the target object are obtained from the parallax information, and the target object is reconstructed in three dimensions according to its three-dimensional coordinates. Here, parallax information is the difference in direction when the same target is observed from two points a certain distance apart. Depth information refers to the number of bits used to store each pixel and is also used to measure the color resolution of an image.
[0048] Step S200: Superimpose the augmented reality scene corresponding to the target object on the left and right image pair respectively.
[0049] In the augmented reality interactive environment, the augmented reality glasses construct in advance a virtual scene corresponding to the target object. For example, according to the motion status of the target object, an augmented reality sports scene matching that status is constructed: when the augmented reality glasses track that the target is running, an augmented reality treadmill adapted to the running activity is constructed. The constructed augmented reality scene is stored in an augmented reality database in advance, so that when the motion status of the target object is sensed, the corresponding scene can be called up from the database in time. The augmented reality scene is then superimposed on each of the left and right images acquired by the binocular stereo camera.
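As a concrete illustration of this lookup-and-superimpose step (the patent gives no implementation, so the scene database keyed by motion status, the RGBA-overlay representation, and the function names below are all assumptions), a minimal Python/numpy sketch might look like this:

```python
import numpy as np

def superimpose(image, overlay_rgba):
    """Alpha-blend a pre-rendered AR overlay (RGBA) onto one view of the image pair."""
    alpha = overlay_rgba[..., 3:4].astype(np.float32) / 255.0
    blended = overlay_rgba[..., :3].astype(np.float32) * alpha + image.astype(np.float32) * (1.0 - alpha)
    return blended.astype(np.uint8)

def augment_pair(left_img, right_img, motion_status, scene_db):
    """Call up the scene matching the sensed motion status (e.g. a treadmill scene
    for 'running') and superimpose it on both views of the image pair."""
    overlay = scene_db.get(motion_status)
    if overlay is None:
        return left_img, right_img
    return superimpose(left_img, overlay), superimpose(right_img, overlay)

# Usage with dummy data: a 480x640 image pair and a fully transparent placeholder overlay.
left = np.zeros((480, 640, 3), dtype=np.uint8)
right = np.zeros((480, 640, 3), dtype=np.uint8)
scene_db = {"running": np.zeros((480, 640, 4), dtype=np.uint8)}  # stands in for a rendered treadmill scene
aug_left, aug_right = augment_pair(left, right, "running", scene_db)
```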
[0050] Step S300: Perform split-screen display on the superimposed left and right image pairs, and project the left and right image pairs displayed on the split screen onto the left and right eyepieces accordingly.
[0051] The augmented reality glasses display the left and right image pair on separate left and right screens, and project the image shown on each screen onto the corresponding eyepiece: the image on the left screen is projected onto the left eyepiece, and the image on the right screen onto the right eyepiece. Each eye therefore sees an independent left or right picture, which together form a stereoscopic view.
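How the two augmented views are handed to the left and right screens is not specified in the patent; a common arrangement is a single side-by-side framebuffer whose halves drive the two screens. The sketch below assumes that layout and is only illustrative:

```python
import numpy as np

def split_screen_frame(left_img, right_img):
    """Compose a side-by-side frame: the left half is shown on the left screen and
    projected to the left eyepiece, the right half on the right screen / eyepiece."""
    assert left_img.shape == right_img.shape, "both views must have the same resolution"
    return np.hstack((left_img, right_img))
```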
[0052] The three-dimensional display method based on augmented reality proposed in this embodiment uses a binocular stereo camera to obtain a pair of left and right images of the target object under different viewing angles from two different viewpoints, superimposes the augmented reality scene corresponding to the target object on the left and right image pair, displays the superimposed pair in split-screen form, and projects the split-screen images onto the left and right eyepieces correspondingly. The fusion of virtual and real content is good, and the device is easy to carry.
[0053] As shown in Figure 2, which is a detailed flowchart of the first embodiment of step S100 in Figure 1, in this embodiment step S100 includes:
[0054] Step S110, using a combination of a background difference method and an inter-frame difference method to extract the target object in the left and right image pair.
[0055] The augmented reality glasses extract the target object in the left and right image pair by combining the background difference method with the inter-frame difference method. The inter-frame difference method takes the difference between adjacent frames of an image sequence to extract the motion region in the left and right image pair: several frames are first registered in the same coordinate system, and two images of the same background taken at different moments are then subtracted, which removes the background whose gray level does not change. Because the moving target occupies different positions in two adjacent frames and its gray level differs from that of the background, the target object stands out after the subtraction, so its position in the left and right image pair can be roughly determined. The background difference method detects the target object by subtracting a reference background model from the image sequence. It can provide relatively complete feature data for extracting the target object from the left and right image pair, but it is very sensitive to dynamic scene changes caused by illumination and other external conditions, requires a background-update mechanism in uncontrolled environments, and is not suitable when the binocular stereo camera itself moves or the background gray level changes greatly. In this embodiment, the motion region is first determined with the inter-frame difference method, and the target object is then extracted within that region using the background difference method together with the inter-frame difference method, which greatly improves the efficiency of target recognition.
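The paragraph above describes the combined extraction only in words; the following is a minimal OpenCV/numpy sketch of one plausible reading, applied to a single view of the pair. The threshold value, the bounding-box restriction, and the function names are assumptions, not the patent's implementation:

```python
import cv2
import numpy as np

def extract_target(prev_frame, curr_frame, background, thresh=25):
    """Inter-frame difference first localizes the motion region; the background
    difference is then applied inside that region to segment the target object.
    All three inputs are grayscale uint8 images of the same size."""
    # Inter-frame difference: subtract adjacent frames to find where motion occurred.
    frame_diff = cv2.absdiff(curr_frame, prev_frame)
    _, motion_mask = cv2.threshold(frame_diff, thresh, 255, cv2.THRESH_BINARY)

    ys, xs = np.nonzero(motion_mask)
    if xs.size == 0:
        return None, None  # no motion between the two frames
    x0, x1, y0, y1 = xs.min(), xs.max(), ys.min(), ys.max()

    # Background difference, restricted to the detected motion region: subtracting
    # the reference background model leaves only the foreground target.
    roi = curr_frame[y0:y1 + 1, x0:x1 + 1]
    bg_diff = cv2.absdiff(roi, background[y0:y1 + 1, x0:x1 + 1])
    _, target_mask = cv2.threshold(bg_diff, thresh, 255, cv2.THRESH_BINARY)
    return (x0, y0, x1, y1), target_mask
```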
[0056] Step S120, using a scale-invariant feature transform matching algorithm to track the target object.
[0057] The augmented reality glasses use a SIFT (scale-invariant feature transform) matching algorithm to track the target object. The main idea is to build a target database: the target object is extracted from the left and right images of the first frame, and its feature data after the SIFT transform is stored in the database. Each database entry includes a target label, centroid coordinates, a target coordinate block, and SIFT information; the feature information of each target includes feature point coordinates, feature vectors, and a retention priority level corresponding to each feature vector. Using the target database as an intermediary, the SIFT feature information of the target in the second frame is matched against the database to find the correlation between the two frames and determine the position and trajectory of the target object; the matching relationship between the targets in the database and those in the second frame is then used, according to specific strategies, to update and prune the database. The database then serves as the intermediary for processing subsequent frames. The SIFT algorithm thus consists of two processes, matching and updating: the matching process uses the matching probabilities of the target features to find the same target across two consecutive frames and associate them, while the updating process supplements and refreshes the target database on the basis of the matching, keeping the database similar to the target in the most recent frames and thereby ensuring correct recognition.
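As a rough illustration of the matching half of this scheme (the target-database fields, Lowe's ratio test, and the simple replace-on-good-match update rule below are assumptions; the patent only describes the idea), one could sketch it with OpenCV's SIFT implementation:

```python
import cv2

sift = cv2.SIFT_create()            # available in OpenCV >= 4.4
matcher = cv2.BFMatcher(cv2.NORM_L2)

def build_target_db(first_frame_target_roi):
    """Initialize the target database from the target region of the first frame."""
    keypoints, descriptors = sift.detectAndCompute(first_frame_target_roi, None)
    return {"keypoints": keypoints, "descriptors": descriptors}

def match_and_update(target_db, next_frame, ratio=0.75, min_matches=10):
    """Match the stored SIFT features against the next frame (Lowe's ratio test),
    then refresh the database when the match is reliable, so the stored features
    stay similar to the target's most recent appearance."""
    kp, desc = sift.detectAndCompute(next_frame, None)
    if desc is None or target_db["descriptors"] is None:
        return [], target_db
    knn = matcher.knnMatch(target_db["descriptors"], desc, k=2)
    good = [m[0] for m in knn if len(m) == 2 and m[0].distance < ratio * m[1].distance]
    if len(good) >= min_matches:
        target_db = {"keypoints": kp, "descriptors": desc}  # simple update rule
    return good, target_db
```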
[0058] The three-dimensional display method based on augmented reality proposed in this embodiment uses a binocular stereo camera to obtain a pair of left and right images of the target object under different viewing angles from two different viewpoints, and tracks the target object with a scale-invariant feature transform matching algorithm, which quickly ensures that the target object is recognized correctly and improves the efficiency and accuracy of recognition.
[0059] As shown in Figure 3, which is a detailed flowchart of the second embodiment of step S100 in Figure 1, in this embodiment step S100 includes:
[0060] Step S130: Obtain an inverted image pair of the target object in different viewing angles from two different viewpoints through the binocular stereo camera.
[0061] The augmented reality glasses observe the target object from two different viewpoints through the binocular stereo camera; by the pinhole imaging principle, the target object is imaged on the photosensitive elements, yielding an inverted image pair of the target object under different viewing angles.
[0062] Step S140: Perform a reversal process on the inverted image pair and convert it into an upright image pair.
[0063] The augmented reality glasses photoelectrically convert the inverted images, turning the optical images of the inverted image pair into electrical signals, and then flip the inverted image pair to obtain an upright image pair.
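The flipping itself amounts to a 180-degree rotation of each sensor image; a short OpenCV sketch (the function name is illustrative):

```python
import cv2

def upright_pair(inverted_left, inverted_right):
    """Flip each inverted sensor image around both axes (flipCode=-1), i.e. rotate
    it by 180 degrees, so the pair appears upright."""
    return cv2.flip(inverted_left, -1), cv2.flip(inverted_right, -1)
```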
[0064] In the three-dimensional display method based on augmented reality proposed in this embodiment, an inverted image pair of the target object under different viewing angles is acquired from two different viewpoints through the binocular stereo camera, and the inverted image pair is flipped into an upright image pair, matching the way people normally view objects and producing a better visual effect.
[0065] As shown in Figure 4, which is a detailed flowchart of the third embodiment of step S100 in Figure 1, in this embodiment step S100 includes:
[0066] Step S150: Measure the parallax information of the target object by using the round-trip phase difference of the infrared light actively emitted by the binocular stereo camera.
[0067] The augmented reality glasses measure the parallax information of the target object using the round-trip phase difference of the infrared light actively emitted at the binocular stereo camera. A reflector is placed on the target object, the infrared light actively emitted at each camera of the binocular stereo camera is amplitude-modulated, and the phase delay accumulated by the modulated light over one round trip between each camera and the target object is measured. The distance represented by each of the two phase delays is then computed from the wavelength of the modulated light, and the difference between the two distances gives the parallax information of the target object.
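To make the phase-to-distance conversion concrete, here is a small numerical sketch. The modulation frequency, the assumption that the measured phase is an unwrapped round-trip delay, and the function names are illustrative, not the patent's values:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def distance_from_phase(round_trip_phase_rad, modulation_freq_hz):
    """One modulation wavelength of round-trip path corresponds to 2*pi of phase delay,
    so the one-way distance is (phase / 2*pi) * wavelength / 2."""
    wavelength = C / modulation_freq_hz
    return (round_trip_phase_rad / (2.0 * math.pi)) * wavelength / 2.0

def parallax_from_phases(phase_left, phase_right, modulation_freq_hz):
    """Distance from each camera of the binocular pair to the target; their
    difference is taken as the parallax information of the target object."""
    return (distance_from_phase(phase_left, modulation_freq_hz)
            - distance_from_phase(phase_right, modulation_freq_hz))

# Example: 20 MHz modulation, round-trip phase delays of 1.00 and 0.95 radians.
print(parallax_from_phases(1.00, 0.95, 20e6))   # distance difference in metres
```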
[0068] Step S160: Acquire three-dimensional coordinates of the target object according to the parallax information, and perform three-dimensional reconstruction on the target object according to the three-dimensional coordinates of the target object.
[0069] According to the parallax information, the augmented reality glasses obtain their depth information and the three-dimensional coordinates of the feature points of the target object, and reconstruct the target object in three dimensions from the three-dimensional coordinates of those feature points.
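The patent does not give the reconstruction formula; the standard binocular relation for rectified views, depth Z = f·B/d with disparity d, baseline B, and focal length f, is one plausible way to turn per-point parallax into three-dimensional coordinates. A sketch, with all parameter values made up for illustration:

```python
def triangulate_point(x_left, x_right, y, focal_px, baseline_m, cx, cy):
    """Classic binocular triangulation for rectified views:
    disparity d = x_left - x_right (pixels), depth Z = f * B / d,
    and X, Y follow from the pinhole camera model."""
    d = x_left - x_right
    if d == 0:
        return None          # zero disparity: point effectively at infinity
    Z = focal_px * baseline_m / d
    X = (x_left - cx) * Z / focal_px
    Y = (y - cy) * Z / focal_px
    return X, Y, Z

# A feature point seen at x=400 in the left image and x=380 in the right image,
# with a 700 px focal length, 6 cm baseline and principal point (320, 240):
print(triangulate_point(400, 380, 260, 700.0, 0.06, 320, 240))   # ~ (0.24, 0.06, 2.1) m
```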
[0070] The three-dimensional display method based on augmented reality proposed in this embodiment measures the parallax information of the target object using the phase difference of the infrared light actively emitted at the binocular stereo camera, obtains the three-dimensional coordinates of the target object from the parallax information, and reconstructs the target object in three dimensions from those coordinates. With infrared ranging, the positioning accuracy of the three-dimensional reconstruction is high.
[0071] As shown in Figure 5, which is a schematic flowchart of the second embodiment of the three-dimensional display method based on augmented reality of the present invention, on the basis of the first embodiment the method further includes, after step S300:
[0072] Step S400: If there is interaction with the target object in the left and right image pairs displayed on the split screen, the corresponding augmented reality scene is displayed in the left and right image pairs displayed on the split screen.
[0073] If the augmented reality glasses recognize that the user is interacting with the target object in the split-screen left and right image pair, the corresponding augmented reality scene stored in advance in the augmented reality database is called up and shown within the split-screen left and right images. For example, if the augmented reality glasses recognize that the user's motion is synchronized with the motion status of the target object, the augmented reality sports scene matching that status is called up from the database: when the glasses track that the user and the target object are both running in synchrony, an augmented reality treadmill adapted to running is displayed in the split-screen images.
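A toy sketch of this trigger logic follows; the status labels, the database contents, and the synchronization test are illustrative assumptions:

```python
def scene_for_interaction(user_status, target_status, scene_db):
    """When the user's motion is synchronized with the target object's motion status
    (e.g. both 'running'), return the matching AR scene, such as an AR treadmill,
    to be shown in the split-screen image pair; otherwise return None."""
    if user_status == target_status and target_status in scene_db:
        return scene_db[target_status]
    return None

scene_db = {"running": "ar_treadmill_scene"}                  # placeholder entry
print(scene_for_interaction("running", "running", scene_db))  # -> 'ar_treadmill_scene'
```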
[0074] In the three-dimensional display method based on augmented reality proposed in this embodiment, when the user interacts with the target object in the split-screen left and right image pair, the corresponding augmented reality scene is displayed in the split-screen images, which enhances the user's visual immersion and improves the user experience.
[0075] Referring to Figure 6, the present invention further provides augmented reality glasses. The augmented reality glasses include:
[0076] The acquisition module 10 is used to obtain a pair of left and right images of a target object under different viewing angles from two different viewpoints using a binocular stereo camera;
[0077] The superimposing module 20 is configured to superimpose the augmented reality scene corresponding to the target object on the left and right image pairs respectively;
[0078] The display module 30 is used for split-screen display of the superimposed left and right image pairs, and for correspondingly projecting the left and right image pairs displayed on the split screens onto the left and right eyepieces.
[0079] The acquisition module 10 of the augmented reality glasses uses a binocular stereo camera to observe the same target object from two different viewpoints, obtaining a pair of left and right images of the target object under different viewing angles. The left and right image pair consists of two mutually independent images, collected by the binocular stereo camera from two different viewpoints, with the target object as the foreground. In this embodiment, after the pair of left and right images is captured from different angles by the binocular stereo camera, the target object in the pair is first extracted by combining the background difference method and the inter-frame difference method; a SIFT (scale-invariant feature transform) matching algorithm then tracks the target object. The round-trip phase difference of infrared light actively emitted at the binocular stereo camera is used to measure the parallax information of the target object, the depth information and three-dimensional coordinates of the target object are obtained from the parallax information, and the target object is reconstructed in three dimensions according to its three-dimensional coordinates. Here, parallax information is the difference in direction when the same target is observed from two points a certain distance apart. Depth information refers to the number of bits used to store each pixel and is also used to measure the color resolution of an image.
[0080] In the augmented reality interactive environment, the superimposing module 20 of the augmented reality glasses constructs in advance a virtual scene corresponding to the target object. For example, according to the motion status of the target object, an augmented reality sports scene matching that status is constructed: if the augmented reality glasses track that the target is running, an augmented reality treadmill adapted to the running activity is constructed. The constructed augmented reality scene is stored in an augmented reality database in advance, so that when the motion status of the target object is sensed, the corresponding scene can be called up from the database in time. The augmented reality scene is then superimposed on each of the left and right images acquired by the binocular stereo camera.
[0081] The display module 30 of the augmented reality glasses displays the left and right image pair on separate left and right screens and projects the image shown on each screen onto the corresponding eyepiece: the image on the left screen is projected onto the left eyepiece, and the image on the right screen onto the right eyepiece. Each eye therefore sees an independent left or right picture, which together form a stereoscopic view.
[0082] The augmented reality glasses proposed in this embodiment obtain a pair of left and right images of the target object under different viewing angles from two different viewpoints using a binocular stereo camera, superimpose the augmented reality scene corresponding to the target object on the left and right image pair, display the superimposed pair in split-screen form, and project the split-screen images onto the left and right eyepieces correspondingly. The fusion of virtual and real content is good, and the glasses are easy to carry.
[0083] As shown in Figure 7, which is a schematic diagram of the functional modules of the first embodiment of the acquisition module in Figure 6, the acquisition module 10 includes:
[0084] The extraction unit 11 is configured to extract the target object in the pair of left and right images by using a combination of a background difference method and an inter-frame difference method;
[0085] The tracking unit 12 is configured to use a scale-invariant feature transformation matching algorithm to track the target object.
[0086] The extraction unit 11 of the augmented reality glasses extracts the target object in the left and right image pair by combining the background difference method with the inter-frame difference method. The inter-frame difference method takes the difference between adjacent frames of an image sequence to extract the motion region in the left and right image pair: several frames are first registered in the same coordinate system, and two images of the same background taken at different moments are then subtracted, which removes the background whose gray level does not change. Because the moving target occupies different positions in two adjacent frames and its gray level differs from that of the background, the target object stands out after the subtraction, so its position in the left and right image pair can be roughly determined. The background difference method detects the target object by subtracting a reference background model from the image sequence. It can provide relatively complete feature data for extracting the target object from the left and right image pair, but it is very sensitive to dynamic scene changes caused by illumination and other external conditions, requires a background-update mechanism in uncontrolled environments, and is not suitable when the binocular stereo camera itself moves or the background gray level changes greatly. In this embodiment, the motion region is first determined with the inter-frame difference method, and the target object is then extracted within that region using the background difference method together with the inter-frame difference method, which greatly improves the efficiency of target recognition.
[0087] The tracking unit 12 of the augmented reality glasses uses the SIFT algorithm to track the target object. The main idea is to build a target database: the target object is extracted from the left and right images of the first frame, and its feature data after the SIFT transform is stored in the database. Each database entry includes a target label, centroid coordinates, a target coordinate block, and SIFT information; the feature information of each target includes feature point coordinates, feature vectors, and a retention priority level corresponding to each feature vector. Using the target database as an intermediary, the SIFT feature information of the target in the second frame is matched against the database to find the correlation between the two frames and determine the position and trajectory of the target object; the matching relationship between the targets in the database and those in the second frame is then used, according to specific strategies, to update and prune the database. The database then serves as the intermediary for processing subsequent frames. The SIFT algorithm thus consists of two processes, matching and updating: the matching process uses the matching probabilities of the target features to find the same target across two consecutive frames and associate them, while the updating process supplements and refreshes the target database on the basis of the matching, keeping the database similar to the target in the most recent frames and thereby ensuring correct recognition.
[0088] The augmented reality glasses proposed in this embodiment obtain a pair of left and right images of the target object under different viewing angles from two different viewpoints using a binocular stereo camera, and track the target object with a scale-invariant feature transform matching algorithm, which quickly ensures that the target object is recognized correctly and improves the efficiency and accuracy of recognition.
[0089] As shown in Figure 8, which is a schematic diagram of the functional modules of the second embodiment of the acquisition module in Figure 6, in this embodiment the acquisition module 10 includes:
[0090] The image acquisition unit 13 is configured to acquire an inverted image pair of the target object in different viewing angles from two different viewpoints through the binocular stereo camera;
[0091] The reversing unit 14 is configured to perform reversal processing on the inverted image pair and convert it into an upright image pair.
[0092] The image acquisition unit 13 of the augmented reality glasses observes the target object from two different viewpoints through the binocular stereo camera; by the pinhole imaging principle, the target object is imaged on the photosensitive elements, yielding an inverted image pair of the target object under different viewing angles.
[0093] The inverting unit 14 of the augmented reality glasses photoelectrically converts the inverted images, turning the optical images of the inverted image pair into electrical signals, and then flips the inverted image pair to obtain an upright image pair.
[0094] In the augmented reality glasses proposed in this embodiment, an inverted image pair of the target object under different viewing angles is obtained from two different viewpoints through the binocular stereo camera, and the inverted image pair is flipped into an upright image pair, matching the way people normally view objects and producing a better visual effect.
[0095] As shown in Figure 9, which is a schematic diagram of the functional modules of the third embodiment of the acquisition module in Figure 6, in this embodiment the acquisition module 10 includes:
[0096] The measuring unit 15 is configured to measure the parallax information of the target object by using the round-trip phase difference of the infrared light actively emitted by the binocular stereo camera;
[0097] The constructing unit 16 is configured to obtain the three-dimensional coordinates of the target object according to the disparity information, and perform three-dimensional reconstruction of the target object according to the three-dimensional coordinates of the target object.
[0098] The measuring unit 15 of the augmented reality glasses measures the parallax information of the target object using the phase difference of the infrared light actively emitted at the binocular stereo camera. A reflector is placed on the target object, the infrared light actively emitted at each camera of the binocular stereo camera is amplitude-modulated, and the phase delay accumulated by the modulated light over one round trip between each camera and the target object is measured. The distance represented by each of the two phase delays is then computed from the wavelength of the modulated light, and the difference between the two distances gives the parallax information of the target object.
[0099] The construction unit 16 of the augmented reality glasses obtains the depth information of the augmented reality glasses and the three-dimensional coordinates of the feature points of the target object according to the parallax information, and reconstructs the target object in three dimensions from the three-dimensional coordinates of those feature points.
[0100] The augmented reality glasses proposed in this embodiment measure the parallax information of the target object using the phase difference of the infrared light actively emitted at the binocular stereo camera, obtain the three-dimensional coordinates of the target object from the parallax information, and reconstruct the target object in three dimensions from those coordinates. With infrared ranging, the positioning accuracy of the three-dimensional reconstruction is high.
[0101] Referring further to Figure 6, in this embodiment the display module 30 is also configured to display the corresponding augmented reality scene in the split-screen left and right image pair when the user interacts with the target object in the split-screen left and right image pair.
[0102] If the display module 30 of the augmented reality glasses recognizes that the user is interacting with the target object in the split-screen left and right image pair, the corresponding augmented reality scene stored in advance in the augmented reality database is called up and shown within the split-screen left and right images. For example, if the augmented reality glasses recognize that the user's motion is synchronized with the motion status of the target object, the augmented reality sports scene matching that status is called up from the database: when the glasses track that the user and the target object are both running in synchrony, an augmented reality treadmill adapted to running is displayed in the split-screen images.
[0103] In the augmented reality glasses proposed in this embodiment, when the user interacts with the target object in the split-screen left and right image pair, the corresponding augmented reality scene is displayed in the split-screen images, which enhances the user's visual immersion and improves the user experience.
[0104] The above are only preferred embodiments of the present invention and do not limit the scope of the present invention. Any equivalent structural or process transformation made using the contents of the description and drawings of the present invention, whether applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.

