2085 results about "Spacetime" patented technology

In physics, spacetime is any mathematical model that fuses the three dimensions of space and the one dimension of time into a single four-dimensional continuum. Spacetime diagrams can be used to visualize relativistic effects, such as why different observers perceive differently where and when events occur.

Internet-based method of and system for monitoring space-time coordinate information and biophysiological state information collected from an animate object along a course through the space-time continuum

Inactive | US6677858B1 | Avoiding shortcoming and drawback | Instruments for road network navigation | Information format | Animation | Wireless data
An Internet-based method of and system for monitoring space-time coordinate information and biophysiological state information collected from an animate object moving along a course through the space-time continuum. The Internet-based system comprises a wireless GSU-enabled client network device affixed to the body of an animate object. The wireless device includes a global synchronization unit (GSU) for automatically generating time and space (TS) coordinate information corresponding to the time and space coordinates of the animate object with respect to a globally referenced coordinate system, as the animate object moves along a course through the space-time continuum. The device also includes a biophysiological state sensor affixed to the body of the animate object, for automatically sensing the biophysiological state of the animate object and generating biophysiological state information indicative of the sensed biophysiological state of the animate object along its course. The wireless device also includes a wireless data transmitter for transmitting the TS coordinate information and the biophysiological state information through free space. A TS-stamping based tracking server wirelessly receives the TS coordinate information and the biophysiological state information, and stores them as the animate object moves along its course. An Internet information server serves Internet-based documents containing the collected TS coordinate and biophysiological state information. An Internet-enabled client system enables authorized persons to view the served Internet-based documents and monitor the collected TS coordinate and biophysiological state information, for various purposes.
Owner:REVEO
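The entry above describes a wearable device that pairs GSU time-and-space stamps with biophysiological readings and transmits them wirelessly to a tracking server. Below is a minimal sketch of what such a TS-stamped record might look like; the field names, units, and JSON payload format are illustrative assumptions, not details taken from the patent.

```python
# Minimal sketch (not the patented implementation): a TS-stamped record that a
# GSU-enabled wearable might transmit to a tracking server. All field names and
# the JSON wire format are illustrative assumptions.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class TSBioRecord:
    device_id: str        # identifier of the wearable client device
    timestamp_utc: float  # time coordinate from the global synchronization unit
    latitude: float       # space coordinates in a globally referenced system
    longitude: float
    altitude_m: float
    heart_rate_bpm: int   # example biophysiological state reading
    body_temp_c: float    # another example reading

    def to_payload(self) -> bytes:
        """Serialize the record for wireless transmission to the tracking server."""
        return json.dumps(asdict(self)).encode("utf-8")

record = TSBioRecord("collar-042", time.time(), 40.7128, -74.0060, 10.0, 78, 38.5)
payload = record.to_payload()  # would be sent over the wireless data link
```

A real device would stream such payloads over its wireless data link to the TS-stamping tracking server, which stores them and serves them to authorized viewers.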

Method and apparatus for determination and visualization of player field coverage in a sporting event

A method and apparatus for deriving an occupancy map reflecting an athlete's coverage of a playing area based on real-time tracking of a sporting event. The method according to the present invention includes a step of obtaining a spatio-temporal trajectory corresponding to the motion of an athlete and based on real-time tracking of the athlete. The trajectory is then mapped over the geometry of the playing area to determine a playing-area occupancy map indicating the frequency with which the athlete occupies certain areas of the playing area, or the time spent by the athlete in certain areas of the playing area. The occupancy map is preferably color coded to indicate different levels of occupancy in different areas of the playing area, and the color-coded map is then overlaid onto an image (such as a video image) of the playing area. The apparatus according to the present invention includes a device for obtaining the trajectory of an athlete, a computational device for obtaining the occupancy map based on the obtained trajectory and the geometry of the playing area, and devices for transforming the map to a camera view, generating a color-coded (or other visually differentiable) version of the occupancy map, and overlaying the color-coded map on a video image of the playing area. In particular, the spatio-temporal trajectory may be obtained by an operation on a video image of the sporting event, in which motion regions in the image are identified and feature points on the regions are tracked as they move, thereby defining feature paths. The feature paths, in turn, are associated in clusters, which generally correspond to the motion of some portion of the athlete (e.g., arms, legs, etc.). The collective plurality of clusters (i.e., the trajectory) corresponds to the motion of the athlete as a whole.
Owner:LUCENT TECH INC +1
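The abstract above reduces an athlete's spatio-temporal trajectory to a playing-area occupancy map. The sketch below shows one plausible way to do that binning, assuming a rectangular grid over the field and (x, y, t) trajectory samples; the field dimensions, grid resolution, and dwell-time weighting are invented for illustration and are not the patented algorithm.

```python
# Minimal sketch (assumptions, not the patented method): build an occupancy map
# by binning a tracked athlete's (x, y, t) trajectory onto a grid of the playing
# area and accumulating the time spent in each cell.
import numpy as np

def occupancy_map(trajectory, field_size=(105.0, 68.0), grid=(52, 34)):
    """trajectory: iterable of (x, y, t) samples in field coordinates (meters, seconds).
    Returns a 2D array whose cells hold the fraction of total time spent there."""
    counts = np.zeros(grid)
    pts = sorted(trajectory, key=lambda p: p[2])
    for (x0, y0, t0), (_, _, t1) in zip(pts, pts[1:]):
        i = min(int(x0 / field_size[0] * grid[0]), grid[0] - 1)
        j = min(int(y0 / field_size[1] * grid[1]), grid[1] - 1)
        counts[i, j] += t1 - t0  # dwell time attributed to the current cell
    total = counts.sum()
    return counts / total if total > 0 else counts

# The normalized map could then be color coded (e.g., as a heat map), warped into
# the camera view, and overlaid on the video image, as the abstract describes.
```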

Clustering method based on mobile object spatiotemporal information trajectory subsections

The invention discloses a clustering method based on trajectory subsections of moving-object spatiotemporal information. The method introduces the three attributes of time, speed and direction, and provides a similarity calculation formula over them for analyzing the internal and external structure of a moving object's trajectory. First, the trajectory is divided into a number of trajectory subsections according to its spatial density; the similarity of trajectory subsections is then judged by calculating their differences in space, time, speed and direction; finally, on the basis of the first clustering result, trajectory subsections in non-significant clusters are deleted or merged into adjacent significant clusters, so that the overall movement pattern is revealed by the spatial form of the clusters. The method improves the clustering result and offers greater application value; a spatial quadtree is used to index the trajectory subsections, which greatly improves clustering efficiency on large-scale trajectory data sets, and trajectories can be clustered effectively.
Owner:胡宝清
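The method above compares trajectory subsections by their differences in space, time, speed and direction. Below is a rough sketch of such a combined similarity measure; the feature definitions, weights, and the conversion from cost to similarity are assumptions made for illustration, not the formula disclosed in the patent.

```python
# Minimal sketch (illustrative assumptions only): a combined similarity between two
# trajectory subsections using spatial distance plus differences in time, speed,
# and direction, in the spirit of the clustering method above. Weights are made up.
import math

def segment_features(seg):
    """seg: list of (x, y, t) points. Returns (midpoint, mean time, mean speed, heading)."""
    (x0, y0, t0), (x1, y1, t1) = seg[0], seg[-1]
    dist = math.hypot(x1 - x0, y1 - y0)
    duration = max(t1 - t0, 1e-9)
    midpoint = ((x0 + x1) / 2, (y0 + y1) / 2)
    return midpoint, (t0 + t1) / 2, dist / duration, math.atan2(y1 - y0, x1 - x0)

def segment_similarity(a, b, w_space=1.0, w_time=0.5, w_speed=0.5, w_dir=1.0):
    (ma, ta, va, ha), (mb, tb, vb, hb) = segment_features(a), segment_features(b)
    d_space = math.hypot(ma[0] - mb[0], ma[1] - mb[1])
    d_dir = abs(math.atan2(math.sin(ha - hb), math.cos(ha - hb)))  # wrapped angle difference
    cost = w_space * d_space + w_time * abs(ta - tb) + w_speed * abs(va - vb) + w_dir * d_dir
    return 1.0 / (1.0 + cost)  # higher means more similar
```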

Content-based matching of videos using local spatio-temporal fingerprints

A computer-implemented method for deriving a fingerprint from video data is disclosed, comprising the steps of receiving a plurality of frames from the video data; selecting at least one key frame from the plurality of frames, the at least one key frame being selected from two consecutive frames of the plurality of frames that exhibit a maximal cumulative difference in at least one spatial feature of the two consecutive frames; detecting at least one 3D spatio-temporal feature within the at least one key frame; and encoding a spatio-temporal fingerprint based on mean luminance of the at least one 3D spatio-temporal feature. The at least one spatial feature can be intensity. The at least one 3D spatio-temporal feature can be at least one Maximally Stable Volume (MSV). Also disclosed is a method for matching video data to a database containing a plurality of video fingerprints of the type described above, comprising the steps of calculating at least one fingerprint representing at least one query frame from the video data; indexing into the database using the at least one calculated fingerprint to find a set of candidate fingerprints; applying a score to each of the candidate fingerprints; selecting a subset of candidate fingerprints as proposed frames by rank ordering the candidate fingerprints; and attempting to match at least one fingerprint of at least one proposed frame based on a comparison of gradient-based descriptors associated with the at least one query frame and the at least one proposed frame.
Owner:SRI INTERNATIONAL
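The fingerprinting method above selects key frames where consecutive frames show a maximal cumulative difference in a spatial feature such as intensity. The sketch below illustrates only that key-frame selection step under simple assumptions (grayscale frames, a fixed top-k); the MSV detection, fingerprint encoding, and database matching steps are not reproduced.

```python
# Minimal sketch (assumption-laden): pick key frames where the cumulative absolute
# intensity difference between consecutive frames is largest, which is the key-frame
# criterion named in the abstract above.
import numpy as np

def select_key_frames(frames, top_k=5):
    """frames: list of 2D grayscale arrays (H, W). Returns indices of key frames."""
    diffs = [float(np.abs(a.astype(np.float32) - b.astype(np.float32)).sum())
             for a, b in zip(frames, frames[1:])]
    # rank frame transitions by their cumulative intensity change, largest first
    order = sorted(range(len(diffs)), key=lambda i: diffs[i], reverse=True)
    return [i + 1 for i in order[:top_k]]  # index of the later frame of each pair
```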

Space-time attention based video classification method

Active | CN107330362A | Improve classification performance | Accurate time-domain saliency information | Character and pattern recognition | Attention model | Time domain
The invention relates to a space-time attention based video classification method, which comprises the steps of extracting frames and optical flows from training video and video to be predicted, and stacking a plurality of optical flows into a multi-channel image; building a space-time attention model, wherein the space-time attention model comprises a space-domain attention network, a time-domain attention network and a connection network; training the three components of the space-time attention model jointly, so that the effects of the space-domain attention and the time-domain attention are improved simultaneously, yielding a space-time attention model that accurately models space-domain saliency and time-domain saliency and is applicable to video classification; and using the learned space-time attention model to extract space-domain saliency and time-domain saliency for the frames and optical flows of the video to be predicted, performing prediction, and integrating the prediction scores of the frames and the optical flows to obtain the final semantic category of the video to be predicted. According to the method, the space-domain attention and the time-domain attention can be modeled simultaneously, and their cooperation can be fully exploited through joint training, thereby learning more accurate space-domain saliency and time-domain saliency and improving the accuracy of video classification.
Owner:PEKING UNIV
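The classification method above jointly trains a space-domain attention network, a time-domain attention network and a connection network. The sketch below is a toy analogue of that idea in PyTorch, not the patented architecture: it weights spatial positions within each frame's feature map, weights time steps across the clip, and classifies the fused clip feature. All layer choices and dimensions are invented.

```python
# Minimal sketch (a rough analogue, not the patented network): a toy space-time
# attention module over per-frame feature maps of a video clip.
import torch
import torch.nn as nn

class ToySpaceTimeAttention(nn.Module):
    def __init__(self, feat_dim=256, num_classes=101):
        super().__init__()
        self.spatial_attn = nn.Conv2d(feat_dim, 1, kernel_size=1)  # saliency per location
        self.temporal_attn = nn.Linear(feat_dim, 1)                # saliency per time step
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        # x: (batch, time, feat_dim, H, W) frame or optical-flow feature maps
        b, t, c, h, w = x.shape
        maps = x.reshape(b * t, c, h, w)
        s = torch.softmax(self.spatial_attn(maps).reshape(b * t, -1), dim=-1)  # spatial weights
        pooled = (maps.reshape(b * t, c, -1) * s.unsqueeze(1)).sum(-1)         # (b*t, c)
        pooled = pooled.reshape(b, t, c)
        a = torch.softmax(self.temporal_attn(pooled).squeeze(-1), dim=1)       # temporal weights
        clip = (pooled * a.unsqueeze(-1)).sum(1)                               # (b, c)
        return self.classifier(clip)
```

In the patented method, frame (appearance) and optical-flow (motion) streams would each be scored separately and their prediction scores fused to give the final semantic category, as the abstract describes.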