
1672 results about "Network output" patented technology

A network basic input output system (NetBIOS) is a system service that operates at the session layer of the OSI model and controls how applications residing on separate hosts/nodes communicate over a local area network. NetBIOS is an application programming interface (API), not a networking protocol, contrary to a common misconception.

Fault-tolerant, multi-network detour router system for text messages, data, and voice

The invention provides a method and system for fault-tolerant communication. It utilizes three wide-area networks: the cell phone network, the internet, and the telephone network (PSTN). The method and system monitor the wide-area networks and send warning messages if they cannot be accessed from a local site. Primary failure conditions relate to the access and use of the three wide-area networks; secondary failure conditions include power outages. The possible fault conditions include 'telephone out', 'internet out', 'wireless radio out', and 'power out'. If any of these failure conditions is detected, the method and system alert the user and redirect voice, text-message, and data traffic via a detour over a different wide-area network in order to avoid the failure.
A method and system are disclosed that route information over one of multiple networks in a fault-tolerant manner. In case of network failure, a start-detour router switches the information flow onto another network and forwards it to an end-detour router, where the flow is switched back to the originally intended network. Cell radios, telephones, and the internet are used together to provide redundancy and fail-over under 'telephone out', 'internet out', and 'cell phone out' conditions. The method is particularly strong at supporting communication even with dynamically assigned addresses, assuring that remote users receive monitoring and alarm information in a timely manner.
Owner:LIN GAO +2
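
A minimal Python sketch of the detour-selection logic described above. The probe function, network names, and thresholds are illustrative assumptions, not the patent's design; real monitoring of the PSTN and cell radio would query hardware status, which is stubbed out here.

```python
import socket

NETWORKS = ["pstn", "internet", "cell"]

def internet_up(host="8.8.8.8", port=53, timeout=2.0):
    """Crude reachability probe: try to open a TCP connection."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def network_status():
    # PSTN and cell probes are stubbed; a real system would query a
    # line modem and a cellular radio for dial tone / registration.
    return {"pstn": True, "internet": internet_up(), "cell": True}

def choose_route(preferred, status):
    """Preferred network if healthy, else the first healthy detour
    network, else None (the 'all networks out' alarm case)."""
    if status.get(preferred):
        return preferred
    for net in NETWORKS:
        if net != preferred and status.get(net):
            return net
    return None

status = network_status()
route = choose_route("internet", status)
if route is None:
    print("ALARM: all wide-area networks are out")
elif route != "internet":
    print(f"WARNING: 'internet out' detected, detouring via {route}")
```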

Calculation apparatus and method for accelerator chip accelerating deep neural network algorithm

The invention provides a calculation apparatus and method for an accelerator chip accelerating a deep neural network algorithm. The apparatus comprises a vector addition processor module, a vector function value calculator module, and a vector multiplier-adder module. The vector addition processor module performs vector addition or subtraction and/or the vectorized operation of the pooling-layer algorithm in the deep neural network; the vector function value calculator module performs the vectorized operation of nonlinear values in the algorithm; and the vector multiplier-adder module performs vector multiplication and addition operations. The three modules execute programmable instructions and interact to calculate the neuron values and network output of a neural network, as well as the synaptic weight variations representing the influence of input-layer neurons on output-layer neurons. Each of the three modules contains an intermediate value storage region and performs read and write operations on main memory. In this way, the frequency of intermediate-value reads and writes to main memory is reduced, the energy consumption of the accelerator chip is lowered, and the problems of data misses and replacement during data processing are avoided.
Owner:INST OF COMPUTING TECH CHINESE ACAD OF SCI
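
A toy NumPy emulation of the three-module datapath, under illustrative assumptions: the `scratchpad` dict stands in for the on-chip intermediate value storage regions, so intermediate results avoid round trips to "main memory".

```python
import numpy as np

scratchpad = {}  # stand-in for the on-chip intermediate value storage

def vector_mul_add(w, x, bias):      # vector multiplier-adder module
    scratchpad["mac"] = w @ x + bias
    return scratchpad["mac"]

def vector_func(x):                  # vector function value calculator module
    scratchpad["act"] = np.tanh(x)   # vectorized nonlinear values
    return scratchpad["act"]

def vector_add(a, b):                # vector addition processor module
    scratchpad["sum"] = a + b        # also covers pooling-style reductions
    return scratchpad["sum"]

# One neuron-layer evaluation: multiply-accumulate, then nonlinearity.
# Intermediate results live in the scratchpad instead of main memory.
w = np.random.randn(4, 8)
x = np.random.randn(8)
out = vector_func(vector_mul_add(w, x, np.zeros(4)))
print(out.shape)  # (4,)
```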

Pedestrian detection and tracking method based on accelerated region convolutional neural network

The invention relates to a pedestrian recognition and tracking method based on an accelerated region convolutional neural network. First, training and testing data are collected at night by a robot equipped with an infrared camera and preprocessed as required to obtain a training dataset and a testing dataset; ground-truth target positions are then labeled on all training and testing photos and recorded in a sample file. Next, the accelerated region convolutional neural network is constructed and trained with the training dataset, and the final probability of belonging to a pedestrian region, together with the bounding box of that region, is computed from the network output using a non-maximum suppression algorithm. The accuracy of the network is tested with the testing dataset to obtain a network model meeting the requirements. Photos collected by the robot at night are then input to the accelerated region convolutional neural network model, which outputs online, in real time, the probability of belonging to a pedestrian region and the bounding box of that region. With this method, pedestrians in infrared images can be effectively recognized, and real-time tracking of pedestrian targets in infrared video can be achieved.
Owner:DONGHUA UNIV
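
The post-processing step above relies on standard non-maximum suppression; a generic NumPy implementation (not the patent's own code) is sketched below.

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """boxes: (N, 4) as [x1, y1, x2, y2]; returns indices kept."""
    x1, y1, x2, y2 = boxes.T
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]          # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # Intersection of the top box with all remaining boxes.
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= iou_thresh]  # suppress heavy overlaps
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], float)
print(nms(boxes, np.array([0.9, 0.8, 0.7])))  # indices 0 and 2 survive
```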

Image semantic segmentation method based on super-pixel edges and fully convolutional network

The invention proposes an image semantic segmentation method based on super-pixel edges and a fully convolutional network, solving the technical problem of low accuracy in existing image semantic segmentation methods. The method comprises: constructing a training sample set, a testing sample set, and a verification sample set; training, testing, and verifying a fully convolutional network that outputs pixel-level semantic marks; carrying out semantic segmentation on a to-be-segmented image using the verified fully convolutional network to obtain pixel-level semantic marks; carrying out BSLIC super-pixel segmentation on the to-be-segmented image; and semantically marking the BSLIC super-pixels with the pixel-level semantic marks to obtain a segmentation result that combines the super-pixel edges with the high-level semantic information output by the fully convolutional network. The segmentation accuracy of the original fully convolutional network is thereby retained while accuracy along fine edges is improved, enhancing overall image segmentation accuracy. The method can be applied to classification, identification, and tracking applications requiring target detection.
Owner:XIDIAN UNIV
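
One plausible way to combine super-pixel edges with the FCN output, as the abstract describes, is to assign each super-pixel the majority pixel-level mark inside it. The NumPy sketch below assumes the BSLIC segmentation is already computed; it is an illustration, not the patent's exact fusion rule.

```python
import numpy as np

def label_superpixels(fcn_labels, superpixels):
    """fcn_labels, superpixels: (H, W) int arrays; returns fused labels."""
    fused = np.empty_like(fcn_labels)
    for sp in np.unique(superpixels):
        mask = superpixels == sp
        fused[mask] = np.bincount(fcn_labels[mask]).argmax()  # majority vote
    return fused

fcn = np.array([[0, 0, 1], [0, 1, 1]])   # pixel-level semantic marks
sp = np.array([[0, 0, 1], [0, 1, 1]])    # super-pixel ids
print(label_superpixels(fcn, sp))
```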

Generative adversarial network-based pixel-level portrait cutout method

The invention discloses a generative adversarial network-based pixel-level portrait cutout method, solving the problem that massive, costly-to-produce datasets are needed to train and optimize a network in the field of automatic matting. The method comprises the steps of: presetting a generative network and a judgment network in an adversarial learning mode, wherein the generative network is a deep neural network with skip connections; inputting a real image containing a portrait to the generative network, which outputs a person-and-scene segmentation image; inputting first and second image pairs to the judgment network, which outputs a judgment probability, and determining the loss functions of the generative network and the judgment network; adjusting the configuration parameters of the two networks by minimizing the values of their loss functions to finish training the generative network; and inputting a test image to the trained generative network to generate the person-and-scene segmentation image, converting the generated image into a probability matrix, and finally inputting the probability matrix to a conditional random field for further optimization. With this method, the number of required training images is substantially reduced, and both efficiency and segmentation precision are improved.
Owner:XIDIAN UNIV
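
A minimal PyTorch sketch of adversarial losses on (image, segmentation) pairs of the kind described above. The tiny stand-in discriminator and all tensor shapes are illustrative assumptions, not the patent's architecture; the skip-connected generator itself is assumed rather than reproduced.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def gan_losses(D, real_img, seg_real, seg_fake):
    """D scores (image, segmentation) pairs; returns (d_loss, g_loss)."""
    real_pair = torch.cat([real_img, seg_real], dim=1)  # first image pair
    fake_pair = torch.cat([real_img, seg_fake], dim=1)  # second image pair
    d_real = D(real_pair)
    d_fake = D(fake_pair.detach())
    # Judgment network: push real pairs toward 1, generated pairs toward 0.
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    # Generative network: make the judgment network score fake pairs as real.
    g_loss = F.binary_cross_entropy_with_logits(D(fake_pair),
                                                torch.ones_like(d_real))
    return d_loss, g_loss

# Tiny stand-in discriminator and random tensors, for demonstration only.
D = nn.Sequential(nn.Conv2d(4, 8, 3), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))
img = torch.randn(2, 3, 64, 64)
d_loss, g_loss = gan_losses(D, img, torch.rand(2, 1, 64, 64),
                            torch.rand(2, 1, 64, 64))
print(float(d_loss), float(g_loss))
```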

Driving scene classification method based on convolutional neural network

The invention discloses a driving scene classification method based on a convolutional neural network. The method comprises the following steps: collecting road environment video images; classifying traffic scenes and building a traffic scene recognition database; extracting sample images of different driving scenes from the database, performing feature extraction and multiple convolution training on the sample images through a deep convolutional neural network, rasterizing the pixels and connecting them into a vector, inputting the vector into a conventional neural network, and obtaining the convolutional neural network output, thereby achieving deep learning of different driving scenes; optimizing the parameters of the constructed convolutional neural network structure to obtain a trained convolutional neural network classifier, adjusting the traffic scene recognition model, and selecting the optimal model as the standard traffic scene recognition model; and collecting images of the to-be-detected traffic scene in real time and inputting them into the traffic scene recognition model for recognition of the road environment scene.
Owner:JILIN UNIV
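
A schematic PyTorch version of the pipeline just described: convolutional feature extraction, rasterization of the feature maps into a vector, and a conventional fully connected network. Layer sizes and the number of scene classes are assumptions for illustration.

```python
import torch
import torch.nn as nn

class SceneClassifier(nn.Module):
    def __init__(self, num_scenes=6):
        super().__init__()
        self.features = nn.Sequential(            # multiple convolutions
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(          # conventional network
            nn.Flatten(),                          # rasterize into a vector
            nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, num_scenes),
        )

    def forward(self, x):                          # x: (N, 3, 64, 64)
        return self.classifier(self.features(x))

logits = SceneClassifier()(torch.randn(1, 3, 64, 64))
print(logits.shape)  # torch.Size([1, 6])
```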

Remote sensing image classification method based on attention mechanism deep Contourlet network

The invention discloses a remote sensing image classification method based on an attention-mechanism deep Contourlet network. The method comprises the steps of: building a remote sensing image library and obtaining a training sample set and a test sample set; setting up a Contourlet decomposition module and building a convolutional neural network model, grouping the convolution layers in the model in pairs to form convolution modules, applying an attention mechanism, and performing data enhancement on the merged feature map through a channel attention module; carrying out iterative training; performing global contrast normalization on the remote sensing images to be classified by computing the average intensity of each whole image and normalizing by it; and inputting the normalized unknown remote sensing image into the trained convolutional neural network model and classifying it to obtain the network's output classification result. The method combines Contourlet decomposition with a deep convolutional network and introduces a channel attention mechanism, so that the advantages of deep learning and the Contourlet transform can be exploited simultaneously.
Owner:XIDIAN UNIV
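
The channel attention module could be realized as a squeeze-and-excitation block; the PyTorch sketch below shows one common form of such a module, assumed here rather than taken from the patent.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                  # x: (N, C, H, W)
        w = x.mean(dim=(2, 3))             # squeeze: global average pool
        w = self.fc(w)                     # excitation: per-channel weights
        return x * w[:, :, None, None]     # reweight the merged feature map

y = ChannelAttention(32)(torch.randn(2, 32, 8, 8))
print(y.shape)  # torch.Size([2, 32, 8, 8])
```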

Video classification method and model training method and device thereof, and electronic equipment

The invention provides a video classification method, a model training method and device therefor, and electronic equipment. The training method comprises the following steps: extracting initial features of a plurality of video frames through a convolutional neural network; extracting final features of the plurality of video frames from the initial features through a recurrent neural network; inputting the final features into an output network and outputting a prediction result for the multiple video frames; determining a loss value for the prediction result through a preset loss function; and training the initial model according to the loss value until the parameters in the initial model converge, to obtain a video classification model. By combining the convolutional neural network and the recurrent neural network, the method greatly reduces the amount of computation and improves model training and recognition efficiency. At the same time, the association information between video frames is taken into account during feature extraction, so the extracted features accurately represent the video types and the accuracy of video classification is improved.
Owner:BEIJING KINGSOFT CLOUD NETWORK TECH CO LTD +1
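
A skeleton of the CNN-plus-recurrent model in PyTorch: per-frame initial features from a small CNN, final features from a GRU over the frame sequence, and an output network. All module sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class VideoClassifier(nn.Module):
    def __init__(self, num_classes=10, feat=64, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(                  # initial per-frame features
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, feat),
        )
        self.rnn = nn.GRU(feat, hidden, batch_first=True)  # final features
        self.out = nn.Linear(hidden, num_classes)          # output network

    def forward(self, clips):                      # clips: (N, T, 3, H, W)
        n, t = clips.shape[:2]
        f = self.cnn(clips.flatten(0, 1)).view(n, t, -1)
        _, h = self.rnn(f)                         # last hidden state
        return self.out(h[-1])

pred = VideoClassifier()(torch.randn(2, 8, 3, 32, 32))
print(pred.shape)  # torch.Size([2, 10])
```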

Egocentric vision in-air handwriting and in-air interaction method based on cascaded convolutional neural network

The invention discloses an egocentric vision in-air handwriting and in-air interaction method based on a cascaded convolutional neural network. The method comprises the steps of: S1, obtaining training data; S2, designing a deep convolutional neural network for hand detection; S3, designing a deep convolutional neural network for gesture classification and fingertip detection; S4, cascading the first-level and second-level networks, cropping a region of interest from the foreground bounding rectangle output by the first-level network to obtain a foreground region containing the hand, and using this foreground region as the input of the second-level convolutional network for fingertip detection and gesture identification; S5, judging the identified gesture, and if it is a single-finger gesture, outputting its fingertip position and carrying out temporal smoothing and interpolation between points; and S6, using the fingertip coordinates sampled over continuous multiple frames to carry out character identification. The invention provides a complete in-air handwriting and interaction algorithm that achieves accurate and robust fingertip detection and gesture classification, thereby realizing egocentric-vision in-air handwriting and in-air interaction.
Owner:SOUTH CHINA UNIV OF TECH
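
Two of the cascade's glue steps lend themselves to short NumPy sketches: cropping the second-level input from the first-level bounding rectangle (S4), and temporal smoothing of the fingertip track (S5). The function names, margin, and smoothing factor below are illustrative assumptions.

```python
import numpy as np

def crop_roi(frame, box, margin=0.1):
    """frame: (H, W, 3); box: (x1, y1, x2, y2) from the first-level net."""
    h, w = frame.shape[:2]
    x1, y1, x2, y2 = box
    mx, my = int((x2 - x1) * margin), int((y2 - y1) * margin)  # pad the box
    return frame[max(0, y1 - my):min(h, y2 + my),
                 max(0, x1 - mx):min(w, x2 + mx)]

def smooth_track(points, alpha=0.6):
    """Exponential temporal smoothing of fingertip coordinates (T, 2)."""
    smoothed = [points[0]]
    for p in points[1:]:
        smoothed.append(alpha * p + (1 - alpha) * smoothed[-1])
    return np.stack(smoothed)

frame = np.zeros((120, 160, 3), np.uint8)
print(crop_roi(frame, (40, 30, 80, 90)).shape)        # padded hand region
track = smooth_track(np.cumsum(np.random.randn(20, 2), axis=0))
print(track.shape)  # (20, 2)
```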

Multi-modal emotion recognition method based on fusion attention network

The invention discloses a multi-modal emotion recognition method based on a fusion attention network. The method comprises: extracting high-dimensional features of the three modalities of text, vision, and audio, and aligning and normalizing them at the word level; inputting them into a bidirectional gated recurrent unit (GRU) network for training; extracting the state information output by the bidirectional GRU network in the three single-modality sub-networks to calculate the degree of correlation of the state information among the modalities; calculating the attention distribution of the modalities at each moment, which serves as the weight of the state information at that moment; and weighting and averaging the state information of the three modality sub-networks with the corresponding weights to obtain a fusion feature vector that is input to a fully connected network. The to-be-identified text, vision, and audio are input into the trained bidirectional GRU network of each modality to obtain the final emotion intensity output. The method resolves the problem of uniform weighting across modalities during multi-modal fusion and improves emotion recognition accuracy under multi-modal fusion.
Owner:ZHEJIANG UNIV OF TECH
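
A minimal NumPy sketch of attention-weighted fusion at a single time step: per-modality GRU states are scored, the scores are softmax-normalized into an attention distribution, and the states are averaged under those weights. The scoring vector `u` is an illustrative stand-in for learned parameters; the real method derives the weights from cross-modal correlations.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fuse_modalities(states, u):
    """states: (3, d) text/vision/audio GRU states; u: (d,) scorer."""
    scores = states @ u            # relevance score per modality
    attn = softmax(scores)         # attention distribution over modalities
    return attn @ states, attn     # weighted-average fusion vector

rng = np.random.default_rng(0)
fused, attn = fuse_modalities(rng.normal(size=(3, 16)), rng.normal(size=16))
print(fused.shape, attn.round(3))  # (16,) and three weights summing to 1
```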