189 results about "Stream network" patented technology

Methods for interactive visualization of spreading activation using time tubes and disk trees

Methods for displaying results of a spreading activation algorithm and for defining an activation input vector for the spreading activation algorithm are disclosed. A planar disk tree is used to represent the generalized graph structure being modeled in a spreading activation algorithm. Activation bars on some or all nodes of the planar disk tree, in the dimension perpendicular to the disk tree, encode the final activation level resulting at the end of N iterations of the spreading activation algorithm. The number of nodes for which activation bars are displayed may be a predetermined number, a predetermined fraction of all nodes, or determined by a predetermined activation level threshold. The final activation levels resulting from activation spread through more than one flow network corresponding to the same generalized graph are displayed as color-encoded segments on the activation bars. Content, usage, topology, or recommendation flow networks may be used for spreading activation. The difference between spreading activation through different flow networks corresponding to the same generalized graph may be displayed by subtracting the resulting activation patterns from each network and displaying the difference. The spreading activation input vector is determined by continually measuring the dwell time that the user's cursor spends on a displayed node. Activation vectors at various intermediate steps of the N-step spreading activation algorithm are color encoded onto nodes of disk trees within time tubes. The activation input vector and the activation vectors resulting from all N steps are displayed in a time tube having N+1 planar disk trees. Alternatively, a periodic subset of all N activation vectors is displayed, or a subset showing planar disk trees representing large changes in activation levels or phase shifts is displayed while planar disk trees representing smaller changes in activation levels are not displayed.
Owner:XEROX CORP
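
The N-step spread described in the abstract can be sketched as a simple iteration over a node-to-neighbour graph, where each step keeps a snapshot of the activation vector (one snapshot per disk tree in the time tube). This is a minimal illustration, not the patented method: the function name, the uniform fan-out split, and the retention factor `alpha` are all assumptions.

```python
def spread_activation(graph, input_vector, alpha=0.5, steps=3):
    """Iteratively spread activation over a flow network.

    graph: dict mapping node -> list of neighbour nodes
    input_vector: dict mapping node -> initial activation level
    alpha: fraction of a node's activation passed on to its neighbours
    Returns one activation snapshot per step (steps + 1 in total,
    mirroring the N+1 planar disk trees of the time tube).
    """
    activation = {n: input_vector.get(n, 0.0) for n in graph}
    history = [dict(activation)]
    for _ in range(steps):
        nxt = {n: 0.0 for n in graph}
        for node, level in activation.items():
            neighbours = graph[node]
            if neighbours:
                share = alpha * level / len(neighbours)  # uniform fan-out
                for m in neighbours:
                    nxt[m] += share
                nxt[node] += (1.0 - alpha) * level       # retained portion
            else:
                nxt[node] += level  # isolated nodes keep their activation
        activation = nxt
        history.append(dict(activation))
    return history
```

With this split, total activation is conserved across steps, so the per-step snapshots remain directly comparable when color-encoded side by side.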

Intelligent fault diagnosis method based on deep adversarial domain self-adaption

The invention provides an intelligent fault diagnosis method based on deep adversarial domain self-adaption. The method comprises the steps of: collecting vibration signals of a rotating machine under different working conditions through a sensor, and carrying out signal segmentation of the data set under each working condition through a moving time window; extracting discriminative features from the data sets; constructing a deep adversarial domain adaptive network by combining a feature extractor and a domain discriminator, and extracting domain-invariant features under the two working conditions; adopting the training strategy of the adversarial network to jointly train the two-stream network model until the model converges, and using the trained category classifier to identify the bearing health state of the target-domain data set lacking fault labels. According to the method, fault diagnosis is carried out on the working condition with insufficient data information by means of the working condition with rich data information, and migration of diagnosis knowledge is completed. Meanwhile, a deep learning network is constructed, dependence on expert knowledge in traditional diagnosis methods is overcome, and an effective tool is provided for reducing the cost of future intelligent fault diagnosis systems.
Owner:XI AN JIAOTONG UNIV
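
The data-preparation step above (segmenting a vibration signal with a moving time window) can be sketched as follows; the function name and parameters are illustrative assumptions, not taken from the patent.

```python
def sliding_windows(signal, window_size, stride):
    """Segment a 1-D vibration signal into fixed-length samples
    using a moving time window. Each window becomes one training
    sample for the downstream feature extractor."""
    if window_size > len(signal):
        return []
    return [signal[i:i + window_size]
            for i in range(0, len(signal) - window_size + 1, stride)]
```

Overlapping windows (stride smaller than the window size) are a common way to enlarge the training set for the data-poor working condition.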

Cross-view gait identification device based on dual-flow generation confrontation network and training method

The invention belongs to the field of computer vision and pattern recognition, and particularly relates to a cross-view gait identification device based on a dual-flow generation confrontation network and a training method, aiming to solve the problem of low cross-view gait identification accuracy. The method specifically comprises steps in which a global-flow gait energy image at a standard angle is learned through a global-flow generation confrontation network model, and local-flow gait energy images at standard angles are learned by utilizing three local-flow generation confrontation network models. Global gait characteristics can be learned by the global-flow model; on the basis of the global-flow model, the local-flow networks are added, so local gait characteristics can be learned. Gait details can be restored by adding pixel-level constraints to the generator of the dual-flow generation confrontation network, and by fusing the global and local gait characteristics, gait identification accuracy can be improved. The method is advantaged in that strong robustness to gait images is realized and the cross-view gait identification problem can be better solved.
Owner:INST OF AUTOMATION CHINESE ACAD OF SCI

Behavior recognition method and system based on attention mechanism double-flow network

Inactive · CN111462183A · Effects: take advantage of; improve the accuracy of behavior recognition · Classifications: image enhancement; image analysis · Concepts: time domain; RGB image
The invention provides a behavior recognition method and system based on an attention mechanism double-flow network, and belongs to the technical field of behavior recognition, and the method comprises the steps: dividing an obtained whole video segment into a plurality of video segments with the same length, extracting an RGB image and an optical flow gray-scale image of each frame of each video segment, and carrying out the preprocessing of the RGB images and the optical flow gray-scale images; carrying out random sampling on the preprocessed image to obtain an RGB image and an optical flow grayscale image of each video clip; extracting appearance features and time dynamic features of the sampled images by using a double-flow network model introducing an attention mechanism, fusing the appearance features and the time dynamic features according to the types of a time domain network and a space domain network respectively, and performing weighted fusion on a fusion result of the time domain network and a fusion result of the space domain network to obtain an identification result of the whole video. According to the invention, the video data can be fully utilized, the local key features of the video frame image can be better extracted, the foreground area where the action occurs is highlighted, the influence of irrelevant information in the background environment is inhibited, and the behavior recognition accuracy is improved.
Owner:SHANDONG UNIV
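
The final weighted fusion of the two streams' class scores can be sketched as a simple late-fusion step; the function names and the example weight are assumptions for illustration, not values from the patent.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of class logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def fuse_two_stream(spatial_logits, temporal_logits, spatial_weight=0.4):
    """Weighted late fusion of the spatial (RGB) and temporal
    (optical-flow) stream scores; returns the fused per-class
    probabilities and the predicted class index."""
    ps = softmax(spatial_logits)
    pt = softmax(temporal_logits)
    fused = [spatial_weight * a + (1.0 - spatial_weight) * b
             for a, b in zip(ps, pt)]
    return fused, fused.index(max(fused))
```

Giving the temporal stream a larger weight is a common choice in two-stream pipelines, since optical flow often carries more discriminative motion cues; the weight would normally be tuned on a validation split.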

A double-flow network pedestrian re-identification method combining the apparent characteristics and the temporal-spatial distribution

The invention discloses a double-flow network pedestrian re-identification method combining apparent characteristics and temporal-spatial distribution. The method mainly comprises the following steps: extracting the apparent characteristics of pedestrian images by using a deep neural network and calculating the apparent similarity of image pairs; learning the spatio-temporal distribution model of a training dataset by a Gaussian-smoothing-based statistical method; obtaining the final similarity by combining the apparent similarity and the spatio-temporal probability with a joint measurement method based on logistic smoothing; and sorting the final similarity to get the pedestrian re-identification result. The main contributions comprise: (1) proposing a pedestrian re-identification framework based on a dual-stream network which combines apparent features and spatio-temporal distribution; (2) proposing a new spatio-temporal learning method based on Gaussian smoothing; (3) proposing a new joint similarity measurement method based on logistic smoothing. The experimental results show that the Rank-1 accuracy of the proposed method on the DukeMTMC-reID and Market1501 datasets is increased from 83.8% and 91.2% to 94.4% and 98.0% respectively, a significant performance improvement over other methods.
Owner:SUN YAT SEN UNIV
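
One way to read the joint measurement step is as a product of logistically smoothed scores: the appearance similarity and the spatio-temporal transition probability are each squashed into (0, 1) before being multiplied, so that an extreme value in one cue cannot dominate the other. The sketch below is an assumption about the general shape of such a measure; the parameters `lam` and `gamma` are hypothetical, not values from the patent.

```python
import math

def logistic(x, lam=1.0, gamma=5.0):
    """Logistic smoothing: maps a raw score into (0, 1) while
    damping the influence of extreme values."""
    return 1.0 / (1.0 + lam * math.exp(-gamma * x))

def joint_similarity(appearance_sim, st_probability, lam=1.0, gamma=5.0):
    """Combine appearance similarity with the Gaussian-smoothed
    spatio-temporal transition probability into one ranking score."""
    return logistic(appearance_sim, lam, gamma) * logistic(st_probability, lam, gamma)
```

Candidates are then sorted by this joint score to produce the final re-identification ranking.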

Behavior recognition method based on space-time attention enhancement feature fusion network

Active · CN111709304A · Effects: enhanced ability to extract valid channel features; improves the problem of feature overfitting · Classifications: character and pattern recognition; neural architectures · Concepts: frame sequence; machine vision
The invention discloses a behavior recognition method based on a space-time attention enhancement feature fusion network, and belongs to the field of machine vision. The method adopts a network architecture based on an appearance-flow and motion-flow double-flow network, called a space-time attention enhancement feature fusion network. Whereas a traditional double-flow network adopts simple feature or score fusion for its different branches, here an attention-enhanced multi-layer feature fusion flow is constructed as a third branch to supplement the double-flow structure. Meanwhile, aiming at the problem that a traditional deep network neglects modeling of channel characteristics and cannot fully utilize the mutual relation between channels, channel attention modules at different levels are introduced to establish the mutual relation between channels and enhance the expression capability of the channel characteristics. In addition, time sequence information plays an important role in segmentation fusion, and the representativeness of important time sequence features is enhanced by performing time sequence modeling on the frame sequence. Finally, the classification scores of the different branches are subjected to weighted fusion.
Owner:JIANGNAN UNIV

RGB-D multi-mode fusion person detection method based on asymmetric double-flow network

The invention discloses an RGB-D multi-modal fusion person detection method based on an asymmetric double-flow network, and belongs to the field of computer vision and image processing. The method comprises the steps of RGB-D image acquisition, depth image preprocessing, RGB feature extraction and depth feature extraction, RGB multi-scale fusion and depth multi-scale fusion, multi-modal feature channel reweighting, and multi-scale person prediction. According to the method, an asymmetric RGB-D double-flow convolutional neural network model is designed to solve the problem that a traditional symmetric RGB-D double-flow network is prone to causing depth feature loss. Multi-scale fusion structures are designed for the two RGB-D flows respectively, so multi-scale information complementation is achieved. A multi-modal reweighting structure is constructed: the RGB and depth feature maps are combined, and a weight is assigned to each combined feature channel so that the model automatically learns the contribution proportion of each channel. Person classification and frame regression are carried out by using the multi-modal features, so the accuracy of person detection is improved while real-time performance is ensured, and the robustness of detection under low illumination at night and person occlusion is enhanced.
Owner:BEIJING UNIV OF TECH
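
The channel-reweighting step can be sketched in squeeze-and-excitation style: concatenate the RGB and depth channels, pool each channel to a scalar descriptor, and gate the channel by a sigmoid of that descriptor. In the sketch below, `gains` stands in for the learned fully-connected layer that the patent implies; the function names and values are illustrative assumptions.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def reweight_channels(rgb_channels, depth_channels, gains):
    """Concatenate RGB and depth feature channels, then rescale each
    channel by a per-channel gate: channel_k * sigmoid(gain_k * mean_k).
    `gains` is a stand-in for learned weights."""
    channels = rgb_channels + depth_channels  # channel-wise concatenation
    out = []
    for ch, gain in zip(channels, gains):
        mean = sum(ch) / len(ch)        # squeeze: global average pooling
        gate = sigmoid(gain * mean)     # excite: per-channel gate in (0, 1)
        out.append([v * gate for v in ch])
    return out
```

Because the gate is computed over the combined RGB+depth channel set, the model can learn to rely more on depth channels in low-light scenes and more on RGB channels otherwise.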

Action detection method based on asymmetric multi-flow

The invention discloses an action detection method based on asymmetric multi-stream, which comprises the following steps: extracting RGB images and optical flow from a prior video, and training to obtain a trained RGB image single-stream network and an optical flow single-stream network; extracting image flow characteristic information and optical flow characteristic information of each frame in the prior video, and training an asymmetric double-flow network by combining action labels; respectively extracting image flow characteristic information and optical flow characteristic information of each frame in the target video to be detected through the trained RGB image single-flow network and optical flow single-flow network, obtaining segment characteristics of the target video, inputting the segment characteristics into the trained asymmetric double-flow network, and calculating to obtain a video classification vector; selecting potential actions from the video classification vectors to obtain an action recognition sequence of the potential actions; and completing the action detection through the action recognition sequence. The method considers the asymmetry between the image flow and the optical flow, and can improve the accuracy of action recognition and action detection.
Owner:XI AN JIAOTONG UNIV
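
The final step, turning per-frame class decisions into detected actions, can be sketched as merging runs of consecutive identical labels into (label, start, end) segments. This is a generic post-processing sketch under the assumption that a background class exists; names and the background convention are not from the patent.

```python
def action_segments(frame_labels, background=0):
    """Merge consecutive frames with the same predicted class into
    (label, start_frame, end_frame) segments, skipping background."""
    segments = []
    i, n = 0, len(frame_labels)
    while i < n:
        j = i
        while j < n and frame_labels[j] == frame_labels[i]:
            j += 1                      # extend the run of equal labels
        if frame_labels[i] != background:
            segments.append((frame_labels[i], i, j - 1))
        i = j
    return segments
```

In practice the per-frame labels would come from thresholding the video classification vectors, and very short runs are often discarded as noise.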

Laparoscopic surgery stage automatic recognition method and device based on double-flow network

The automatic recognition method and device for the laparoscopic surgery stage based on the double-flow network can meet the requirements of the recognition task, achieve end-to-end training optimization of the network, and greatly improve the recognition accuracy of the laparoscopic surgery stage. The method comprises the following steps: acquiring a laparoscopic cholecystectomy video to obtain a video key frame sequence; preliminarily extracting visual features of N images at the same time by utilizing a shared convolutional neural network (CNN), and taking the obtained feature map as the input of a subsequent double-flow network structure; respectively extracting time correlation information and deep visual semantic information of the video sequence by using the double-flow network structure, in which the visual branch following the shared CNN further extracts the deep visual semantic information, and the time sequence branch fully captures the time correlation information of the adjacent N images by using three-dimensional convolution and non-local convolution; the deep visual semantic information and the time correlation information extracted by the double-flow network structure supplement each other, and an operation stage recognition result is obtained by using the fused features.
Owner:BEIJING INSTITUTE OF TECHNOLOGY

Load flow calculation method based on VSC internal correction equation matrix and alternate iteration method under augmented rectangular coordinates

The invention relates to a load flow calculation method based on a VSC internal correction equation matrix and an alternate iteration method under augmented rectangular coordinates. Calculation efficiency can be improved by utilizing the linear relationship between the node voltage and the node injection current in the augmented rectangular coordinate model and the extremely high sparsity of the Jacobian matrix under that model. A correction equation and a Jacobian matrix for internal load flow calculation of a VSC (voltage source converter) under augmented rectangular coordinates are provided; a VSC internal power flow model is established under augmented rectangular coordinates; the VSC is decoupled from the AC network and the DC network; active power control is adjusted while the original reactive power control is adopted, and the network structure is kept unchanged. An AC/DC decoupling power flow algorithm model under augmented rectangular coordinates is finally obtained: only one loop of DC-side power flow calculation and one loop of alternate iterative calculation between the VSC and the AC network need to be carried out, the scale of the Jacobian matrix is reduced, and the convergence and calculation efficiency of power flow calculation are improved.
Owner:FUZHOU UNIV
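
The correction-equation step at the heart of any Newton-based load flow solves J * dx = -f(x) at each iteration. A one-unknown toy version (solving the branch power equation P = V1*V2*B*sin(theta) for the angle theta) illustrates the mechanics; the function name, parameter values, and the single-branch simplification are assumptions for illustration, not the patented multi-terminal AC/DC model.

```python
import math

def newton_branch_angle(p_target, v1=1.0, v2=1.0, b=10.0,
                        tol=1e-10, max_iter=50):
    """Solve P = V1*V2*B*sin(theta) for theta by Newton-Raphson:
    a scalar stand-in for the correction-equation step J*dx = -f(x)
    used in full load flow calculation."""
    theta = 0.1  # flat-start-style initial guess
    for _ in range(max_iter):
        mismatch = v1 * v2 * b * math.sin(theta) - p_target
        if abs(mismatch) < tol:
            break
        jacobian = v1 * v2 * b * math.cos(theta)  # df/dtheta
        theta -= mismatch / jacobian              # Newton correction
    return theta
```

In the full method the scalar `jacobian` becomes the sparse Jacobian matrix whose reduced scale and high sparsity the abstract credits for the efficiency gain.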

Double-flow network behavior recognition method based on multi-level spatial-temporal feature fusion enhancement

The invention discloses a double-flow network behavior recognition method based on multi-level spatial-temporal feature fusion enhancement. The method adopts a network architecture based on a space-time double-flow network, called a multi-level space-time feature fusion enhancement network. The method aims at solving the problems that the effect of shallow features is ignored and the complementary features of a double-flow network cannot be fully utilized, because a traditional double-flow network fuses the category probability distributions of the two flows only at the last layer. A multi-level spatial-temporal feature fusion module is provided, and multi-depth-level mixed features are captured through the spatial-temporal feature fusion module at different depth levels of the two flows so as to make full use of the double-flow network. In addition, treating all features in the network equally weakens the effect of those features that contribute greatly to classification; the method therefore provides a grouping attention enhancement module that automatically enhances the saliency of effective areas and channels on the features. Finally, the robustness of the behavior recognition model is further improved by collecting the classification results of the double-flow network and the feature fusion.
Owner:JIANGNAN UNIV

Three-branch network behavior identification method based on multipath space-time feature enhanced fusion

The invention discloses a three-branch network behavior identification method based on multipath space-time feature enhanced fusion. The method adopts a network framework based on a space-time double-flow network, called a multipath space-time feature enhanced fusion network. The method aims at solving the problems that double-flow information is not fully utilized because a double-flow network only fuses top-layer space-time features, and that feature fusion interaction is insufficient because the feature fusion stage is located behind a global sampling layer. A compact bilinear algorithm is utilized to perform dimension reduction on the multi-layer corresponding spatial-temporal features from the double-flow network before fusion, so the interaction between fusion features is increased and the fusion effect is enhanced while the memory required by the fusion features is reduced. Besides, a multi-scale channel-space attention module is provided in the fusion flow, which enhances effective features in the fusion features and suppresses invalid features. Finally, long-term time information in the video is captured by combining the idea of the time segment network (TSN), and the robustness of the behavior recognition model is further improved.
Owner:JIANGNAN UNIV
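
The TSN idea mentioned at the end (capturing long-term time information) can be sketched as segment-based sampling with an averaging consensus: split the video into equal spans, take one representative snippet from each, and average their class scores. The function name and the middle-frame sampling rule are illustrative assumptions; TSN itself samples snippets randomly during training.

```python
def tsn_consensus(video_scores, num_segments=3):
    """TSN-style long-term modeling: split per-frame class scores into
    `num_segments` equal spans, take one representative (here the
    middle frame of each span), and average them as the consensus."""
    n = len(video_scores)
    span = n // num_segments
    picks = [video_scores[k * span + span // 2] for k in range(num_segments)]
    num_classes = len(picks[0])
    return [sum(p[c] for p in picks) / num_segments
            for c in range(num_classes)]
```

Because the segments cover the whole video, the consensus score reflects long-range temporal structure that a single dense snippet would miss.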