
129 results about "Time structure" patented technology

Structure Time :> TIME. The structure Time provides an abstract type for representing times and time intervals, and functions for manipulating, converting, writing, and reading them (this definition comes from the Standard ML Basis Library).
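
As a loose illustration only, using Python's standard library rather than the SML Time API, the same idea of an abstract time type, an interval type, and functions to manipulate, convert, write, and read them looks like this:

    from datetime import datetime, timedelta

    # An absolute time and a time interval, analogous to SML Time.time values.
    now = datetime.now()                   # current time, like Time.now ()
    interval = timedelta(seconds=90)       # a 90-second time interval

    later = now + interval                 # manipulating: add an interval to a time
    seconds = interval.total_seconds()     # converting: interval -> real seconds
    text = later.isoformat()               # writing: time -> string
    parsed = datetime.fromisoformat(text)  # reading: string -> time

    assert parsed == later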

Methods and apparatuses for interactive similarity searching, retrieval and browsing of video

Methods are disclosed for interactively selecting video queries, consisting of training images from a video, for a video similarity search, and for displaying the results of that search. The user selects a time interval in the video as a query definition of training images for training an image class statistical model. Time intervals can be as short as one frame or can consist of disjoint segments or shots. A statistical model of the image class defined by the training images is calculated on the fly from feature vectors extracted from transforms of the training images. For each frame in the video, a feature vector is extracted from the transform of the frame, and a similarity measure is calculated from the feature vector and the image class statistical model. The similarity measure is derived from the likelihood that a Gaussian model produced the frame. The similarity is then presented graphically, which allows the time structure of the video to be visualized and browsed. Similarity can be rapidly calculated for other video files as well, enabling content-based retrieval by example. A content-aware video browser featuring interactive similarity measurement is presented. One method for selecting training segments uses mouse click-and-drag operations over a time bar representing the duration of the video; similarity results are displayed as shades in the time bar. Another method selects periodic frames of the video as endpoints for the training segment.
Owner:FUJIFILM BUSINESS INNOVATION CORP +1
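
A minimal sketch of the similarity computation described in the abstract above, assuming per-frame feature vectors have already been extracted from a transform of each frame: a diagonal Gaussian image-class model is fit to the user-selected training interval and every frame is scored by its log-likelihood. All names and the random stand-in features are illustrative, not from the patent.

    import numpy as np

    def fit_gaussian(train_features):
        """Fit a diagonal Gaussian image-class model to training feature vectors."""
        mean = train_features.mean(axis=0)
        var = train_features.var(axis=0) + 1e-6   # small floor keeps variances nonzero
        return mean, var

    def log_likelihood(features, mean, var):
        """Per-frame log-likelihood under the diagonal Gaussian (the similarity score)."""
        diff = features - mean
        return -0.5 * (np.log(2 * np.pi * var) + diff ** 2 / var).sum(axis=1)

    # features: (num_frames, dim) array of transform-based feature vectors
    features = np.random.rand(500, 64)            # stand-in for real frame features
    query = features[120:150]                     # user-selected training interval
    mean, var = fit_gaussian(query)
    similarity = log_likelihood(features, mean, var)  # one score per frame, for the time bar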

Human body action detection and positioning method based on space-time combination

Pending · CN109784269A · Character and pattern recognition · Time structure · Human body · Benefits: efficient use of …; solves the problem of extremely different lengths
The invention discloses a human motion detection and positioning method based on space-time combination. The method takes an untrimmed video as input, divides the video into a plurality of short units of equal length through data preprocessing, carries out random sparse sampling, and extracts spatio-temporal features through a two-stream convolutional neural network. Next, a spatio-temporal joint network judges the intervals in which actions occur, producing a group of action score waveforms; these waveforms are input into a GTAG network, and different thresholds are set to meet different positioning-precision requirements and to obtain action proposal sections of different granularities. All action proposal sections are passed through an action classifier to detect the action types, the time boundaries of the actions are finely corrected by an integrity filter, and human action detection and positioning in complex scenes are achieved. The method can be applied to real scenes with severe occlusion of the human body, changeable postures, and many interfering objects, and handles activity categories with widely differing time structures well.
Owner:CHINA UNIV OF PETROLEUM (EAST CHINA)
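
A minimal sketch of the multi-threshold proposal step described in the abstract above: per-unit action scores are grouped into contiguous above-threshold sections, and looser thresholds yield coarser proposals. This grouping rule is a simplification and does not reproduce the patent's GTAG network.

    import numpy as np

    def proposals_from_scores(scores, threshold):
        """Group contiguous units whose action score exceeds the threshold
        into (start, end) proposal sections (end index is exclusive)."""
        above = scores >= threshold
        proposals, start = [], None
        for i, flag in enumerate(above):
            if flag and start is None:
                start = i
            elif not flag and start is not None:
                proposals.append((start, i))
                start = None
        if start is not None:
            proposals.append((start, len(scores)))
        return proposals

    scores = np.array([0.1, 0.7, 0.9, 0.8, 0.2, 0.1, 0.6, 0.95, 0.4])
    for t in (0.5, 0.7, 0.9):          # different thresholds give different granularities
        print(t, proposals_from_scores(scores, t))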

Semiconductor structure processing using multiple laterally spaced laser beam spots delivering multiple blows

Methods and systems process a semiconductor substrate having a plurality of structures to be selectively irradiated with multiple laser beams. The structures are arranged in a plurality of substantially parallel rows extending in a generally lengthwise direction. The method generates a first laser beam that propagates along a first laser beam axis that intersects a first target location on or within the semiconductor substrate. The method also generates a second laser beam that propagates along a second laser beam axis that intersects a second target location on or within the semiconductor substrate. The second target location is offset from the first target location in a direction perpendicular to the lengthwise direction of the rows by some amount such that, when the first target location is a structure on a first row of structures, the second target location is a structure or between two adjacent structures on a second row distinct from the first row. The method moves the semiconductor substrate relative to the first and second laser axes in a direction approximately parallel to the rows of structures, so as to pass the first target location along the first row to irradiate for a first time selected structures in the first row, and so as to simultaneously pass the second target location along the second row to irradiate for a second time structures previously irradiated by the first laser beam during a previous pass of the first target location along the second row.
Owner:ELECTRO SCI IND INC
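
A small illustrative sketch of the geometric idea above, assuming evenly spaced rows with a known pitch: the second spot is offset perpendicular to the rows by a whole number of pitches, so that while the first spot irradiates a new row for the first time, the second spot delivers a second blow to a row the first spot processed on an earlier pass. All values and names are assumptions, not taken from the patent.

    ROW_PITCH_UM = 10.0   # assumed spacing between adjacent rows, in micrometres

    def second_spot_row(first_row_index, offset_um, row_pitch_um=ROW_PITCH_UM):
        """Row index hit by the second spot when the first spot is on
        first_row_index and the second spot is offset by offset_um
        perpendicular to the rows."""
        return first_row_index + offset_um / row_pitch_um

    # Offset of two full pitches: while the first spot works row 5 for the first
    # time, the second spot revisits row 3, which was processed two passes earlier.
    print(second_spot_row(5, offset_um=-2 * ROW_PITCH_UM))   # -> 3.0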

Method for deeply learning and predicting medical track based on medical records

The invention discloses a method for deep learning and prediction of a medical track based on medical records. The method comprises the following steps: S1, encoding the diagnostic information and intervention information of an admission through an encoding scheme and converting the codes into vectors to acquire a diagnostic information conversion vector (the formula is shown in the description) and an intervention information conversion vector (the formula is shown in the description) separately, and converting the diagnostic and intervention information of one admission into one 2M-dimensional vector [xt, pt]; S2, inputting the vector [xt, pt] into an LSTM model and evaluating the current output value ht to obtain the current disease state; S3, predicting a diagnostic code dt+1 according to the disease state ht and predicting the disease progression through the diagnostic code dt+1; S4, calculating an intervention code st at time t, adding a time structure to the LSTM model, collecting the historical disease states over multiple time ranges, collecting the state of each section along the horizontal time axis, collecting all the disease states, stacking them into a vector (the formula is shown in the description), and feeding the vector into a neural network to predict the future risk result Y.
Owner:莫毓昌
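
A minimal PyTorch sketch of steps S1-S4 described in the abstract above, assuming M diagnosis codes and M intervention codes per admission encoded as M-dimensional vectors; the windowed "time structure", the pooling, and the risk head are simplifications, and all names and sizes are illustrative assumptions, not the patent's exact design.

    import torch
    import torch.nn as nn

    M, HIDDEN = 64, 128          # assumed code-vocabulary size and hidden size

    class MedicalTrackModel(nn.Module):
        def __init__(self):
            super().__init__()
            self.lstm = nn.LSTM(input_size=2 * M, hidden_size=HIDDEN, batch_first=True)
            self.diag_head = nn.Linear(HIDDEN, M)      # predicts next diagnostic code d_{t+1}
            self.risk_head = nn.Linear(3 * HIDDEN, 1)  # risk Y from stacked windowed states

        def forward(self, x, p):
            # S1: one admission becomes a 2M-dimensional vector [x_t, p_t]
            inputs = torch.cat([x, p], dim=-1)           # (batch, T, 2M)
            # S2: the LSTM produces the disease state h_t for each admission
            h, _ = self.lstm(inputs)                     # (batch, T, HIDDEN)
            # S3: predict the next diagnostic code from the current state
            next_diag = self.diag_head(h)                # (batch, T, M) logits
            # S4 (simplified time structure): pool states over three time ranges
            T = h.size(1)
            pooled = torch.cat([h[:, :T // 3].mean(1),
                                h[:, T // 3:2 * T // 3].mean(1),
                                h[:, 2 * T // 3:].mean(1)], dim=-1)
            risk = torch.sigmoid(self.risk_head(pooled))  # future risk result Y
            return next_diag, risk

    model = MedicalTrackModel()
    x = torch.rand(2, 9, M)      # diagnostic vectors x_t for 9 admissions
    p = torch.rand(2, 9, M)      # intervention vectors p_t
    next_diag, risk = model(x, p)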

Video behavior timeline detection method

The invention discloses a video behavior timeline detection method. The method comprises the following steps: performing modeling based on deep learning and time structure, and detecting the video behavior timeline by combining coarse-granularity and fine-granularity detection; extracting temporal and spatial characteristics of the video with a two-stream model on the basis of an existing model, SSN; modeling the time structure of a behavior and dividing a single behavior into three stages; then providing a new feature pyramid capable of effectively extracting the time-boundary information of a video behavior; and finally combining the coarse-granularity and fine-granularity detection to make the detection result more precise. The method has high detection precision, exceeding that of currently published methods; it has a wide application range, is applicable to detecting video clips of interest in an intelligent monitoring system or a human-machine supervision system, facilitates subsequent analysis and processing, and has important application value.
Owner:PEKING UNIV SHENZHEN GRADUATE SCHOOL
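
A minimal sketch of the three-stage time-structure modeling mentioned above, in the spirit of SSN-style structured segment pooling: a candidate behavior segment is split into starting, course, and ending stages, and per-snippet features are pooled per stage. The split ratios, names, and random stand-in features are illustrative assumptions, not taken from the patent.

    import numpy as np

    def three_stage_features(snippet_feats, start, end):
        """Pool per-snippet features into starting / course / ending stage
        descriptors for a candidate behavior spanning [start, end) snippets."""
        length = end - start
        course_begin = start + length // 4         # first quarter  -> starting stage
        course_end = end - length // 4              # last quarter   -> ending stage
        stages = [snippet_feats[start:course_begin],
                  snippet_feats[course_begin:course_end],
                  snippet_feats[course_end:end]]
        return np.concatenate([s.mean(axis=0) for s in stages])  # stacked descriptor

    feats = np.random.rand(100, 32)        # stand-in for two-stream snippet features
    descriptor = three_stage_features(feats, start=20, end=60)
    print(descriptor.shape)                # (96,) = 3 stages x 32-dim features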