
791 results for "Joint point" patented technology

Wearable 7-degree-of-freedom upper limb movement rehabilitation training exoskeleton

The invention provides a wearable 7-degree-of-freedom upper limb movement rehabilitation training exoskeleton comprising a supporting rod and an exoskeleton training device fixed on a base. The exoskeleton training device is formed by connecting in series a shoulder adduction/abduction joint, a shoulder flexion/extension joint, a shoulder medial/lateral rotation joint, an elbow flexion/extension joint, an elbow medial/lateral rotation joint, a wrist adduction/abduction joint and a wrist flexion/extension joint. Each joint is directly driven by a motor, with the shoulder and elbow rotation joints additionally fitted with a spur-gear set, so the structure is simple and the response speed is high. Compared with the prior art, the exoskeleton offers more degrees of freedom of movement and is suitable for both standing and seated training. The length of the exoskeleton rods can be adjusted to the patient's height, ensuring wearing comfort, and a limit structure at the joints improves safety. Angle and moment sensors at the joint points acquire kinematic and dynamic data of each joint in real time, making it convenient for physical therapists to analyse the data afterwards and design a training scheme that achieves the best rehabilitation effect.
Owner: ZHEJIANG UNIV
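A minimal Python sketch of the real-time joint-data logging described above, assuming a hypothetical read_sensors() stand-in for the exoskeleton's angle and moment sensor bus (the abstract does not specify an interface):

```python
import time
import numpy as np

JOINTS = ["shoulder_abd", "shoulder_flex", "shoulder_rot",
          "elbow_flex", "elbow_rot", "wrist_abd", "wrist_flex"]

def read_sensors():
    """Hypothetical stand-in for the exoskeleton's sensor bus.
    Returns (angles_rad, torques_Nm) for the 7 joints."""
    return np.random.uniform(-1, 1, 7), np.random.uniform(-5, 5, 7)

def log_kinematics(duration_s=1.0, dt=0.01):
    """Sample each joint at 1/dt Hz and derive angular velocity by
    finite differences, producing the kind of kinematic/dynamic
    stream a therapist could analyse afterwards."""
    prev_angles, _ = read_sensors()
    samples = []
    for _ in range(int(duration_s / dt)):
        time.sleep(dt)
        angles, torques = read_sensors()
        velocity = (angles - prev_angles) / dt   # rad/s
        samples.append((angles, velocity, torques))
        prev_angles = angles
    return samples

if __name__ == "__main__":
    data = log_kinematics(duration_s=0.1)
    print(f"collected {len(data)} samples for joints {JOINTS}")
```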

Pedestrian falling recognition method based on skeleton detection

The invention discloses a pedestrian fall recognition method based on skeleton detection. The method comprises the following steps: S1, acquiring a monitoring-area image with a camera; S2, segmenting the image to obtain the pedestrian body region and detecting the distribution of pedestrian skeleton feature points; S3, analysing that distribution, extracting the joint-point coordinates of preset key parts of the pedestrian's body, and deriving from them the spatial position features of the human joint points and the pedestrian posture geometric quantities; S4, establishing a fall detection model from the joint-point spatial position features and the posture geometric quantities; and S5, using the fall detection model to classify the pedestrian's posture as normal walking or falling, so that pedestrian posture detection and recognition follow from the judgment result. The method can actively detect abnormal events such as pedestrian falls in surveillance video and, combined with an early-warning system, improves the monitoring of pedestrian safety events.
Owner: INST OF INTELLIGENT MFG GUANGDONG ACAD OF SCI
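A toy Python illustration of steps S3-S5, assuming two hand-picked posture geometric quantities (bounding-box aspect ratio and torso inclination) and simple thresholds in place of the patent's trained fall detection model:

```python
import numpy as np

def posture_features(joints):
    """joints: dict mapping key body parts to (x, y) image coordinates.
    Returns two posture geometric quantities: bounding-box aspect ratio
    (wide and short suggests lying down) and torso inclination from
    vertical, in degrees."""
    pts = np.array(list(joints.values()), dtype=float)
    aspect = (np.ptp(pts[:, 0]) + 1e-6) / (np.ptp(pts[:, 1]) + 1e-6)
    torso = np.array(joints["hip"]) - np.array(joints["head"])
    incline = np.degrees(np.arctan2(abs(torso[0]), abs(torso[1]) + 1e-6))
    return aspect, incline

def is_fall(joints, aspect_thr=1.0, incline_thr=60.0):
    """Minimal threshold stand-in for the fall detection model."""
    aspect, incline = posture_features(joints)
    return aspect > aspect_thr or incline > incline_thr

standing = {"head": (100, 40), "hip": (102, 120), "ankle": (101, 200)}
fallen = {"head": (40, 180), "hip": (120, 185), "ankle": (200, 190)}
print(is_fall(standing), is_fall(fallen))   # False True
```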

Multi-feature fusion behavior identification method based on key frame

A multi-feature fusion behavior identification method based on key frames comprises the following steps. First, the joint-point feature vector x(i) of the human body in each video frame is extracted with the OpenPose human-pose estimation library, forming a sequence S = {x(1), x(2), ..., x(N)}. Second, a K-means algorithm yields K final clustering centers c' = {c'_i | i = 1, 2, ..., K}; the frame closest to each clustering center is extracted as a key frame of the video, giving the key-frame sequence F = {F_i | i = 1, 2, ..., K}. The RGB, optical-flow and skeleton information of the key frames is then obtained and preprocessed; the RGB and optical-flow information is fed into a two-stream convolutional network model to obtain higher-level feature expressions, while the skeleton information is fed into a spatio-temporal graph convolutional network model to construct spatio-temporal graph features of the skeleton. Finally, the softmax outputs of the networks are fused to obtain the identification result. This process largely avoids the extra time consumption and loss of accuracy caused by redundant frames, so the information in the video is better exploited to express the behaviors and recognition accuracy is further improved.
Owner: NORTHWEST UNIV(CN)
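A sketch of the key-frame selection step, assuming each frame is represented by a flattened OpenPose joint vector and using scikit-learn's KMeans in place of whatever K-means implementation the patent envisages:

```python
import numpy as np
from sklearn.cluster import KMeans

def select_key_frames(pose_seq, k=8):
    """pose_seq: (N, D) array, one flattened joint-point vector per frame
    (e.g., D = 18 joints x 2 coordinates from OpenPose). Clusters the
    frames and returns the deduplicated, time-ordered indices of the
    frames closest to the K cluster centers: the key-frame sequence F."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pose_seq)
    key_idx = []
    for center in km.cluster_centers_:
        d = np.linalg.norm(pose_seq - center, axis=1)
        key_idx.append(int(d.argmin()))
    return sorted(set(key_idx))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = rng.normal(size=(120, 36))   # 120 frames, 18 joints x (x, y)
    print(select_key_frames(frames, k=5))
```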

Multidimensional weighted 3D recognition method for dynamic gestures

The invention discloses a multidimensional weighted 3D recognition method for dynamic gestures. In the training stage, standard gestures are first segmented to obtain their feature vectors; coordinate-system transformation, normalization, smoothing, downsampling and differencing then yield a feature-vector set for each standard gesture, together with weight values for every joint point and for every dimension of the vector elements, from which a standard-gesture sample library is constructed. In the recognition stage, a multidimensional weighted dynamic time warping algorithm computes the dynamic warping distance between the feature-vector set Ftest of the gesture to be recognized and the feature-vector sets Fc (c = 1, 2, ..., C) of all standard gestures in the library. When the (m, n)-th element S(m, n) of the cost matrix is calculated, the joint-point weights and dimension weights are taken into account, and the joint points and coordinate dimensions that contribute nothing to gesture recognition are removed. This effectively suppresses interference from joint jitter and accidental body movements, strengthens the algorithm's noise immunity, and ultimately improves both the accuracy and the real-time performance of gesture recognition.
Owner: UNIV OF ELECTRONICS SCI & TECH OF CHINA
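A compact sketch of a multidimensional weighted DTW distance, assuming per-joint weights w_joint and per-dimension weights w_dim are already given (the patent derives them during training); zero weights drop uninformative joints or dimensions:

```python
import numpy as np

def weighted_dtw(A, B, w_joint, w_dim):
    """A: (M, J, D) and B: (N, J, D) gesture trajectories over J joint
    points in D coordinate dimensions. w_joint (J,) and w_dim (D,)
    down-weight or zero out joints/dimensions that do not help
    recognition; S is the DTW cost matrix."""
    M, N = len(A), len(B)
    W = np.outer(w_joint, w_dim)                 # (J, D) weight grid
    def dist(a, b):
        return np.sqrt(np.sum(W * (a - b) ** 2))  # weighted L2
    S = np.full((M + 1, N + 1), np.inf)
    S[0, 0] = 0.0
    for m in range(1, M + 1):
        for n in range(1, N + 1):
            S[m, n] = dist(A[m - 1], B[n - 1]) + min(
                S[m - 1, n], S[m, n - 1], S[m - 1, n - 1])
    return S[M, N]

rng = np.random.default_rng(1)
A, B = rng.normal(size=(30, 20, 3)), rng.normal(size=(40, 20, 3))
w_joint = np.ones(20); w_joint[15:] = 0.0        # drop jittery joints
print(weighted_dtw(A, B, w_joint, w_dim=np.ones(3)))
```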

Method for recovering real-time three-dimensional body posture based on multimodal fusion

Status: Inactive | Publication: CN102800126A | Effects: easier motion-capture process, improved stability | Topics: 3D-image rendering, 3D modelling, color image, time domain
The invention relates to a method for recovering real-time three-dimensional body posture based on multimodal fusion. The method recovers the three-dimensional skeleton of a human body by combining depth-map analysis, color identification, face detection and other techniques to obtain the real-world coordinates of the main joint points. From scene depth images and scene color images acquired synchronously at successive moments, the head position is obtained by face detection; the positions of the color-marked limb end points are obtained by color identification; the positions of the elbows and knees are computed from the limb end points and the mapping between the color maps and the depth maps; and the resulting skeleton is smoothed with temporal information so that the person's motion is reconstructed in real time. Compared with conventional near-infrared systems for recovering three-dimensional body posture, the method improves recovery stability and makes the motion-capture process more convenient.
Owner: ZHEJIANG UNIV
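A rough sketch of two of the fused modalities, assuming hypothetical camera intrinsics FX/FY/CX/CY and a depth map registered to the color image: a color-threshold centroid stands in for the color identification of a marked limb end point, and pinhole back-projection stands in for the color-to-depth mapping:

```python
import numpy as np

# Hypothetical depth-camera intrinsics; real values come from calibration.
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5

def find_marker(color_img, lo, hi):
    """Centroid of pixels whose RGB lies in [lo, hi]: a crude stand-in
    for identifying one color-marked limb end point."""
    mask = np.all((color_img >= lo) & (color_img <= hi), axis=-1)
    ys, xs = np.nonzero(mask)
    return (int(xs.mean()), int(ys.mean())) if len(xs) else None

def back_project(u, v, depth_img):
    """Pinhole back-projection of pixel (u, v) to camera-space metres,
    using the registered depth map (depth stored in millimetres)."""
    z = depth_img[v, u] / 1000.0
    return np.array([(u - CX) * z / FX, (v - CY) * z / FY, z])

color = np.zeros((480, 640, 3), np.uint8)
color[200:210, 300:310] = (255, 0, 0)     # simulated red wrist marker
depth = np.full((480, 640), 1500, np.uint16)
uv = find_marker(color, (200, 0, 0), (255, 60, 60))
print(uv, back_project(*uv, depth))
```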

Gesture tracking method for VR headset device and VR headset device

The invention provides a gesture tracking method for a VR headset device, comprising the following steps: acquiring a set of training images; separating the hand depth images; annotating the three-dimensional gesture and forming the original point cloud; computing normal vectors and curvature and performing mean-removal normalization; and building a CNN whose inputs are the normal vectors, curvatures and hand depth images and whose output is the three-dimensional coordinates of a set of joint points including the palm center. The trained CNN is then used as a feature extractor for the three-dimensional gesture: a depth camera acquires real-time action depth images, the extractor processes the normal-vector, curvature and hand-depth information they contain and outputs the three-dimensional joint-point coordinates including the palm center, and the identified three-dimensional gesture is tracked. The invention further discloses a VR headset device. By fusing three-dimensional feature information, the method and device achieve a high model recognition rate.
Owner: GEER TECH CO LTD
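A toy PyTorch stand-in for the described CNN, assuming a 21-joint hand model and 96x96 inputs (neither is stated in the abstract): the depth image, 3-channel normal map and curvature map are stacked into a 5-channel input, and the output is one 3D coordinate per joint point:

```python
import torch
import torch.nn as nn

N_JOINTS = 21   # assumed hand model: palm centre plus finger joints

class GestureNet(nn.Module):
    """Toy regression CNN: 5-channel input (depth + normals + curvature),
    output (x, y, z) for each joint point, palm centre included."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(5, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, N_JOINTS * 3)

    def forward(self, x):
        f = self.features(x).flatten(1)
        return self.head(f).view(-1, N_JOINTS, 3)

depth = torch.rand(1, 1, 96, 96)
normals = torch.rand(1, 3, 96, 96)
curvature = torch.rand(1, 1, 96, 96)
joints = GestureNet()(torch.cat([depth, normals, curvature], dim=1))
print(joints.shape)   # torch.Size([1, 21, 3])
```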

Fast 3D skeleton model detecting method based on depth camera

The invention relates to the field of computer vision, in particular to a fast 3D skeleton model detection method based on a depth camera. The method comprises the following steps: the whole human body is captured with the depth camera, and face detection is performed on the image with an Adaboost algorithm, yielding the depth of the face; the body silhouette is extracted using that face depth; the detected silhouette is verified with a 'convex template' verification algorithm; after verification succeeds, the silhouette is smoothed and its skeleton line is obtained with a thinning algorithm; feature points on the skeleton line are extracted, their number and positions are corrected, and interference points are removed; finally, the corrected feature points are verified and, if verification succeeds, accurate joint points and other features are obtained with a fast joint-point extraction algorithm. The method runs fast, has low computational complexity and adapts to various complex backgrounds, requiring only 5 ms per image frame.
Owner: UNIV OF ELECTRONICS SCI & TECH OF CHINA
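A simplified sketch of the silhouette-to-skeleton-line portion, assuming the face depth is already known (the Adaboost face detection and 'convex template' verification are omitted) and using skimage's skeletonize as the thinning step:

```python
import numpy as np
from skimage.morphology import skeletonize

def body_silhouette(depth_img, face_depth_mm, tol_mm=400):
    """Keep pixels whose depth is within tol of the detected face depth:
    a crude way to seed the body silhouette from the face detection."""
    return np.abs(depth_img.astype(int) - face_depth_mm) < tol_mm

def skeleton_line(mask):
    """Thin the silhouette down to a 1-pixel-wide skeleton line, from
    which candidate joint points (endpoints, branches) can be read."""
    return skeletonize(mask)

depth = np.full((240, 320), 4000, np.uint16)
depth[40:200, 130:190] = 1500            # torso-like blob at face depth
skel = skeleton_line(body_silhouette(depth, face_depth_mm=1500))
print(int(skel.sum()), "skeleton pixels")
```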

Cerebral palsy child rehabilitation training method based on Kinect sensor

Status: Inactive | Publication: CN104524742A | Effects: accurate real-time detection of coordinated actions, improved coordination | Topics: diagnostic recording/measuring, sensors, cerebral paralysis, muscular force
The invention discloses a cerebral palsy child rehabilitation training method based on a Kinect sensor. The method comprises the following steps: S1, acquiring the child's skeleton-point data; S2, conducting limb movement training: after the child performs a movement, the movement is captured and the tilt and lift angles of the joint points of the head, upper limbs and lower limbs are judged; S3, applying robust interactive processing to the child's movement; S4, connecting a game engine and sending it the skeleton data produced in step S3; S5, giving voice feedback on the child's movements to flag non-standard movements and encourage standard ones; and S6, estimating the progress of the child's rehabilitation training. By capturing the child's movement behavior with a Microsoft Kinect during training, the method comprehensively develops the child's cardiovascular endurance, muscular endurance, muscular strength, balance and flexibility.
Owner: HOHAI UNIV CHANGZHOU
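A small Python helper for the angle judgments of step S2, computing joint angles from 3D skeleton coordinates such as a Kinect provides; measuring uplift against the downward vertical is an assumption about how the patent defines the lift angle:

```python
import numpy as np

def joint_angle(a, b, c):
    """Interior angle (degrees) at joint b formed by segments b->a and
    b->c, from 3D skeleton coordinates."""
    u = np.asarray(a, float) - b
    v = np.asarray(c, float) - b
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def arm_lift_angle(shoulder, elbow):
    """Uplift of the upper arm relative to the downward vertical,
    assuming a y-up coordinate frame."""
    return joint_angle(np.asarray(shoulder) + [0, -1, 0], shoulder, elbow)

shoulder, elbow, wrist = (0, 1.4, 2), (0.25, 1.15, 2), (0.5, 1.15, 2)
print(joint_angle(shoulder, elbow, wrist))   # elbow flexion, ~135 deg
print(arm_lift_angle(shoulder, elbow))       # uplift, ~45 deg
```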

Continuous trajectory planning transition method for robot end effector

The invention discloses a continuous trajectory-planning transition method for a robot's end effector. The method comprises the following steps. First, the first and second line segments requiring a continuous trajectory transition are determined, along with the taught point connecting them and the transition distances from that point along each segment. Second, the first transition joint point on the first segment is located from its transition distance, and the second transition joint point on the second segment from its transition distance. Third, an amplitude coefficient, a phase coefficient and a speed-scaling coefficient are computed for each coordinate axis and substituted into a finite-term sine position-planning function to obtain the expression of the transition curve between the first and second transition joint points. The algorithm flow is clear, the computation time is greatly shortened, and the complexity of the robot control system is reduced.
Owner: HANGZHOU WAHAHA PRECISION MACHINERY
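The patent's finite-term sine position-planning function and its coefficients are not given in the abstract, so the sketch below instead blends the two transition joint points with a generic sine easing, just to show the shape of such a corner transition:

```python
import numpy as np

def sine_blend(p_start, p_corner, p_end, d, n=50):
    """Blend two straight segments meeting at the taught corner point.
    The transition starts a distance d before the corner (first
    transition joint point) and ends a distance d after it (second
    transition joint point); the blend follows a C1-smooth sine easing."""
    p_start, p_corner, p_end = map(np.asarray, (p_start, p_corner, p_end))
    u1 = (p_corner - p_start) / np.linalg.norm(p_corner - p_start)
    u2 = (p_end - p_corner) / np.linalg.norm(p_end - p_corner)
    a = p_corner - d * u1          # first transition joint point
    b = p_corner + d * u2          # second transition joint point
    t = np.linspace(0.0, 1.0, n)[:, None]
    s = 0.5 * (1.0 - np.cos(np.pi * t))   # sine easing, 0 -> 1
    line1 = a + (t * d) * u1              # continuation of segment 1
    line2 = b - ((1.0 - t) * d) * u2      # lead-in of segment 2
    return (1.0 - s) * line1 + s * line2

curve = sine_blend((0, 0, 0), (1, 0, 0), (1, 1, 0), d=0.2)
print(curve[0], curve[-1])   # starts on segment 1, ends on segment 2
```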