
65 results about "Sound classification" patented technology


Sleep snoring sound classification and detection method and system based on deep learning

The invention discloses a sleep snoring sound classification and detection method based on deep learning. The method mainly comprises collecting a patient's sleep sound signals throughout the night with a sensor, detecting the sound sections in those signals, and obtaining a sound-section map; applying deep learning to classify each sound section as snoring or non-snoring and retaining only the pure snoring recognition results; then applying deep learning again to sort the pure snoring results into four snore types, completing automatic recognition and detection of snoring in patients with obstructive sleep apnea-hypopnea syndrome (OSAHS); and, from the recognition and detection results, counting the number of snores of each type over the whole night and deriving the patient's overnight AHI index. The invention further discloses a detection system implementing the method. The method and system can effectively and accurately evaluate whether a snoring subject is affected and how severe the condition is, providing a data reference for patients with OSAHS.
Owner:SOUTH CHINA UNIV OF TECH
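
The pipeline above is essentially segment-detect, two-stage classify, then count. A minimal sketch follows, assuming a simple energy threshold for sound-section detection and stub functions in place of the patent's two deep-learning classifiers; the events-per-hour figure is only a stand-in for how an AHI-style index could be derived from the counts.

```python
import numpy as np

def detect_sound_segments(signal, sr, frame_ms=50, energy_thresh=0.01):
    """Energy-based detection of sound sections in an all-night recording."""
    frame_len = max(1, int(sr * frame_ms / 1000))
    n_frames = len(signal) // frame_len
    rms = np.array([np.sqrt(np.mean(signal[i * frame_len:(i + 1) * frame_len] ** 2))
                    for i in range(n_frames)])
    segments, start = [], None
    for i, active in enumerate(rms > energy_thresh):
        if active and start is None:
            start = i * frame_len
        elif not active and start is not None:
            segments.append((start, i * frame_len))
            start = None
    if start is not None:
        segments.append((start, n_frames * frame_len))
    return segments

def is_snore(segment):
    """Placeholder for the first deep-learning stage (snore vs. non-snore)."""
    return True

def snore_type(segment):
    """Placeholder for the second deep-learning stage (four snore types)."""
    return "type_1"

def overnight_snore_statistics(signal, sr, hours_of_sleep):
    """Count classified snores per type and report an events-per-hour figure."""
    counts = {}
    for start, end in detect_sound_segments(signal, sr):
        seg = signal[start:end]
        if is_snore(seg):
            label = snore_type(seg)
            counts[label] = counts.get(label, 0) + 1
    return counts, sum(counts.values()) / hours_of_sleep

# Toy usage: random noise standing in for a night of recorded sleep audio.
counts, per_hour = overnight_snore_statistics(np.random.randn(16000) * 0.05, 16000, 8.0)
print(counts, per_hour)
```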

Programmable electronic stethoscope devices, algorithms, systems, and methods

A digital electronic stethoscope includes an acoustic sensor assembly, a signal processor and data storage system, and an output device. The acoustic sensor assembly has a body sensor portion and an ambient sensor portion: the body sensor portion makes acoustically coupled contact with a subject, while the ambient sensor portion faces away from the body sensor portion so as to capture environmental noise proximate to it. The signal processor and data storage system communicates with the acoustic sensor assembly to receive detection signals, including an auscultation signal comprising the body target sound and a noise signal; the output device communicates with the signal processor and data storage system to provide an output signal or information derived from it. The signal processor and data storage system includes a noise reduction system that removes both stationary and non-stationary noise from the detection signal to provide a clean auscultation signal substantially free of distortions, as well as an auscultation sound classification system that receives the clean auscultation signal and classifies it as at least one of a normal breath sound or an abnormal breath sound.
Owner:THE JOHNS HOPKINS UNIV SCHOOL OF MEDICINE
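
As a rough illustration of the two-microphone idea, the sketch below subtracts an estimate of the ambient channel's spectrum from the body channel (simple spectral subtraction) and then applies a placeholder normal/abnormal decision; the over-subtraction factor, the energy-based decision rule, and the synthetic signals are assumptions, not the device's actual noise-reduction or classification algorithms.

```python
import numpy as np
from scipy.signal import stft, istft

def denoise(body_sig, ambient_sig, sr, nperseg=512, over_sub=1.5):
    """Spectral subtraction: remove the ambient microphone's magnitude spectrum."""
    _, _, B = stft(body_sig, fs=sr, nperseg=nperseg)
    _, _, A = stft(ambient_sig, fs=sr, nperseg=nperseg)
    mag = np.maximum(np.abs(B) - over_sub * np.abs(A), 0.0)
    _, clean = istft(mag * np.exp(1j * np.angle(B)), fs=sr, nperseg=nperseg)
    return clean

def classify_breath_sound(clean_sig):
    """Placeholder for the normal / abnormal breath-sound classifier."""
    return "abnormal" if np.mean(clean_sig ** 2) > 1e-3 else "normal"

sr = 4000
body = np.random.randn(sr * 2) * 0.01      # two seconds of synthetic auscultation audio
ambient = np.random.randn(sr * 2) * 0.01   # synthetic environmental noise
print(classify_breath_sound(denoise(body, ambient, sr)))
```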

Holographic video monitoring system and method for directional picture capture based on a sound classification algorithm

Inactive | CN109547695A | Avoid the lack of intelligent image capture | Realize holographic video surveillance system | Television system details | Color television details | Mel-frequency cepstrum | Support vector machine
The invention provides a holographic video monitoring system and method for directional picture capture based on a sound classification algorithm. The system comprises a front-end collection system, a transmission device, a central control platform and a display/recording device. The front-end collection system collects on-site audio and video data and transmits them to the central control platform through the transmission device. The central control platform performs noise reduction and sound classification on the audio data using a support vector machine recognition algorithm based on Mel-frequency cepstral coefficients, segments and extracts the audio data the user requires, sends that audio together with the corresponding video data to the display/recording device, and directionally captures and magnifies the corresponding video picture when a specific sound is selected. The display/recording device synchronously plays the monitoring data of the system in real time, can recall the monitoring data of any time period, and plays the video picture corresponding to the captured and magnified specific sound.
Owner:SHANDONG JIAOTONG UNIV
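
A minimal sketch of the MFCC-plus-SVM classification step is given below; the class names, training clips, and labels are invented for illustration, and the noise reduction, segmentation, and camera-steering logic of the patent are not shown.

```python
import numpy as np
import librosa
from sklearn.svm import SVC

def mfcc_features(y, sr, n_mfcc=13):
    """Mean MFCC vector over a clip: a simple fixed-length summary feature."""
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).mean(axis=1)

sr = 16000
classes = ["glass_break", "scream", "background"]        # illustrative sound classes
clips = [np.random.randn(sr) for _ in range(30)]         # stand-in training clips
labels = np.arange(len(clips)) % len(classes)            # stand-in labels
clf = SVC(kernel="rbf").fit([mfcc_features(c, sr) for c in clips], labels)

query = mfcc_features(np.random.randn(sr), sr)
print("point the camera toward:", classes[int(clf.predict([query])[0])])
```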

Cochlear implantation sound scene identification system and method

The present invention discloses a cochlear implantation sound scene identification system and method. The system comprises a foreground/background classifier, a foreground feature extraction module, a foreground identification network, a background feature extraction module, a background identification network, a comprehensive scene determination module and a program selector. The foreground/background classifier separates the input sound signals into foreground and background sound and outputs the processed signals. Signals classified as foreground sound are fed to the foreground feature extraction module, which extracts sound features and outputs a foreground feature array to the foreground identification network; signals classified as background sound are fed to the background feature extraction module, which extracts sound features and outputs a background feature array to the background identification network. Comprehensive analysis then outputs the specific classification of the current scene, and an output program is selected. Compared with a traditional scene identification system, the system and method can identify more sound scenes.
Owner:ZHEJIANG NUROTRON BIOTECH
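
A toy sketch of the two-branch structure follows; the energy gate, the FFT features, and the tiny rule-based "networks" below are placeholders standing in for the patent's classifiers, and the scene labels and program mapping are invented.

```python
import numpy as np

def is_foreground(frame):
    """Crude energy gate standing in for the foreground/background classifier."""
    return np.sqrt(np.mean(frame ** 2)) > 0.02

def foreground_features(frame):  return np.abs(np.fft.rfft(frame))[:32]
def background_features(frame):  return np.abs(np.fft.rfft(frame))[:32]

def foreground_net(feat):  return "speech" if feat[:8].sum() > feat[8:].sum() else "music"
def background_net(feat):  return "quiet" if feat.sum() < 1.0 else "street"

def classify_scene(frames):
    """Run both branches frame by frame, then make a comprehensive scene decision."""
    fg_votes, bg_votes = [], []
    for frame in frames:
        if is_foreground(frame):
            fg_votes.append(foreground_net(foreground_features(frame)))
        else:
            bg_votes.append(background_net(background_features(frame)))
    fg = max(set(fg_votes), key=fg_votes.count) if fg_votes else "none"
    bg = max(set(bg_votes), key=bg_votes.count) if bg_votes else "none"
    return f"{fg} over {bg}"   # the program selector would map this label to a CI program

frames = [np.random.randn(256) * 0.05 for _ in range(20)]
print(classify_scene(frames))
```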

Far-field sound classification method and device

The embodiment of the invention provides a far-field sound classification method. The method comprises the steps of: building a far-field sound classification relationship through the self-learning capability of an artificial intelligence model, the relationship being built on data augmentation and a convolutional neural network that uses multi-scale information; acquiring a voice signal in the target area; performing feature extraction on the voice signal based on its amplitude information to obtain a spectrogram; and inputting the spectrogram into the learned far-field sound classification relationship to obtain a classification result. The audio data used for sound classification are matched to the signal distribution a microphone receives in a real environment; interference factors such as noise and reverberation are handled and sound classification is carried out with data augmentation, so that the model's training data better fit the data distribution of the real environment, better robustness is obtained, and the accuracy of the sound classification task is improved.
Owner:慧言科技(天津)有限公司 +1
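
The sketch below illustrates the training-side ingredients the abstract names: data augmentation (added noise plus a toy reverberation tail), a log spectrogram computed from amplitude information, and a small convolutional network with parallel kernel sizes as a stand-in for "multi-scale information". The augmentation parameters, network shape, and class count are all assumptions; this is not the patented model.

```python
import numpy as np
import torch
import torch.nn as nn

def augment(clean, snr_db=10.0, rir_len=800):
    """Add scaled noise and convolve with a toy exponential reverberation tail."""
    noise = np.random.randn(len(clean))
    noise *= np.sqrt(np.mean(clean ** 2) / (np.mean(noise ** 2) * 10 ** (snr_db / 10)))
    rir = np.exp(-np.linspace(0, 6, rir_len)) * np.random.randn(rir_len)
    return np.convolve(clean + noise, rir)[: len(clean)]

def log_spectrogram(y, n_fft=512, hop=128):
    """Log-magnitude spectrogram computed from the signal's amplitude information."""
    frames = [y[i:i + n_fft] for i in range(0, len(y) - n_fft, hop)]
    mag = np.abs(np.fft.rfft(np.array(frames) * np.hanning(n_fft), axis=1))
    return np.log(mag + 1e-6).T                          # shape (freq, time)

class MultiScaleCNN(nn.Module):
    """Parallel convolution branches with different kernel sizes, then a linear head."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(1, 8, kernel_size=k, padding=k // 2) for k in (3, 5, 7)]
        )
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(24, n_classes))

    def forward(self, x):                                # x: (batch, 1, freq, time)
        feats = torch.cat([torch.relu(b(x)) for b in self.branches], dim=1)
        return self.head(feats)

spec = log_spectrogram(augment(np.random.randn(16000)))
logits = MultiScaleCNN()(torch.tensor(spec, dtype=torch.float32)[None, None])
print("predicted class:", int(logits.argmax(dim=1)))
```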

Broiler feed intake detection system based on audio technology

Pending | CN112331231A | High precision | Avoiding the current situation of manually measuring group feed intake | Speech analysis | Aviculture | Support vector machine | Sound classification
The invention discloses a broiler feed intake detection system based on audio technology. The system comprises a sound collection chamber, a switch, an upper computer and a server. The sound collection chamber collects broiler pecking audio data; the switch transmits the pecking audio data; the upper computer, connected to the server, reads the audio data at regular intervals; a broiler pecking sound classification and recognition model based on a one-class support vector machine (OC-SVM) running on the server divides the sound into pecking and non-pecking sound, using power spectral density as the recognition feature to judge pecking and non-pecking accurately; and the broiler feed intake is obtained from the relationship between the number of pecks and the feed intake. Taking audio detection technology as the carrier, the method analyzes and determines the relationship between pecking count and feed intake during feeding, and calculates the broilers' feed intake by exploiting the high correlation between the two.
Owner:NANJING AGRICULTURAL UNIVERSITY
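
A minimal sketch of the pecking-sound detector follows, assuming Welch power-spectral-density features and a one-class SVM fitted on pecking-only clips; the clips, the feature size, and the grams-per-peck calibration are invented for illustration and are not the patent's actual relationship between peck count and feed intake.

```python
import numpy as np
from scipy.signal import welch
from sklearn.svm import OneClassSVM

def psd_features(clip, sr, n_bins=64):
    """Log power-spectral-density feature vector (Welch estimate)."""
    _, pxx = welch(clip, fs=sr, nperseg=256)
    return np.log(pxx[:n_bins] + 1e-12)

sr = 8000
peck_clips = [np.random.randn(sr // 4) for _ in range(40)]   # stand-in pecking clips
detector = OneClassSVM(nu=0.1, gamma="scale").fit([psd_features(c, sr) for c in peck_clips])

def count_pecks(clips):
    feats = [psd_features(c, sr) for c in clips]
    return int(np.sum(detector.predict(feats) == 1))          # +1 = pecking, -1 = other

def estimated_feed_intake(n_pecks, grams_per_peck=0.012):     # hypothetical calibration
    return n_pecks * grams_per_peck

pecks = count_pecks([np.random.randn(sr // 4) for _ in range(100)])
print(f"{pecks} pecks -> about {estimated_feed_intake(pecks):.1f} g of feed")
```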

Method for selecting a TV semi-mute playback mode

Inactive | CN101262578A | Achieve semi-silent state | Achieve complete silence function | Television system details | Color television details | Key pressing | Temporary variable
The invention relates to a method for selecting the semi-mute playback mode of a television. The method comprises the following steps: (1) entering a sound classification menu from the guide main menu and configuring a semi-mute interface; (2) configuring the navigation and direction keys of the TV remote control so that the left and right navigation keys correspond to the channel classification menu; (3) setting the semi-mute function to "on" to enable it; (4) on the first press of the mute key, recording the current volume value in a temporary variable, then halving it and writing it to the corresponding register so that the volume drops immediately by half, achieving the semi-mute state; pressing the mute key again clears the volume value to zero and writes it to the register, achieving full mute; (5) when the semi-mute function is set to "off", the mute key keeps its original behavior and pressing it produces full mute directly. The method allows the volume to be reduced quickly and conveniently and is easy to use.
Owner:TIANJIN SAMSUNG ELECTRONICS DISPLAY
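
The volume-handling logic in step (4) can be illustrated with a small state machine; the mock register and the toggle behaviour below are a sketch of the description above, not the television's actual firmware.

```python
class HalfMuteControl:
    def __init__(self, volume=40, half_mute_enabled=True):
        self.register = volume            # stands in for the TV volume register
        self.saved_volume = None          # the "temporary variable" in the abstract
        self.half_mute_enabled = half_mute_enabled

    def press_mute(self):
        if not self.half_mute_enabled:    # menu option "off": ordinary full mute
            self.register = 0
        elif self.saved_volume is None:   # first press: save volume, then halve it
            self.saved_volume = self.register
            self.register = self.register // 2
        else:                             # second press: clear the register (full mute)
            self.register = 0
            self.saved_volume = None

tv = HalfMuteControl(volume=40)
tv.press_mute(); print(tv.register)       # 20  (semi-mute)
tv.press_mute(); print(tv.register)       # 0   (full mute)
```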

A system and method for generating a status output based on sound emitted by an animal

The disclosure relates to a system for generating a status output based on sound emitted by an animal. The system comprises a client (102), a server (104) and a database (106). The database (106) is accessible (107) by the server (104) and contains historic sound data pertaining to the animal (302) or to animals of the same type. The client (102) comprises circuitry (110) configured to: detect (202) sound emitted (308, 312) by the animal (302); record (204) the detected sound (308, 312); analyze (206) the recorded sound to detect whether the sound (308, 312) comprises a specific sound characteristic out of a plurality of possible sound characteristics, wherein the sound characteristic includes at least one of the intensity, frequency and duration of the detected sound; and transmit (208), in response to detecting that the sound (308, 312) comprises the specific sound characteristic, the recorded sound to the server (104). The server (104) comprises circuitry (122) configured to: receive (210) the recorded sound; classify (212) the recorded sound by comparing one or more of its sound characteristics with the historic sound data in the database (106); and generate (214) the status output based on the classification of the recorded sound. A method (200) for generating a status output based on sound emitted by an animal is also provided.
Owner:SONY CORP
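
The client/server split can be sketched as below: the client transmits a recording only when a measured characteristic (here, RMS intensity) crosses a threshold, and the server labels it by its nearest match against stored historic feature vectors. The threshold, features, labels, and historic data are invented for illustration.

```python
import numpy as np

HISTORIC = {                               # per-label mean feature vectors (invented)
    "distress": np.array([0.20, 800.0]),   # [RMS intensity, dominant frequency in Hz]
    "content":  np.array([0.05, 300.0]),
}

def characteristics(sound, sr):
    """Measure intensity and dominant frequency of a recording."""
    intensity = float(np.sqrt(np.mean(sound ** 2)))
    spectrum = np.abs(np.fft.rfft(sound))
    dominant = float(np.fft.rfftfreq(len(sound), 1 / sr)[np.argmax(spectrum)])
    return np.array([intensity, dominant])

def client_should_transmit(sound, sr, intensity_thresh=0.1):
    return characteristics(sound, sr)[0] > intensity_thresh

def server_classify(sound, sr):
    feat = characteristics(sound, sr)
    label = min(HISTORIC, key=lambda k: np.linalg.norm(feat - HISTORIC[k]))
    return f"status: animal sounds {label}"

sr = 8000
sound = 0.3 * np.sin(2 * np.pi * 800 * np.arange(sr) / sr)   # loud 800 Hz tone
if client_should_transmit(sound, sr):
    print(server_classify(sound, sr))
```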

Waterside rescue robot based on SLAM technology and deep learning

The invention discloses a waterside rescue robot based on SLAM technology and deep learning. The robot comprises an image information acquisition module, a sound information acquisition module, an information processing module, a control module, a motion module, a transmission/receiving module and a rescue alarm module. The image information acquisition module collects map information of the surrounding environment and image information of the water surface environment; the sound information acquisition module collects sound information of the water surface environment; the information processing module receives the map, image and sound information and performs recognition using SLAM based on the ROS system together with target detection and sound classification algorithms based on deep convolutional neural networks; the control module outputs motion control signals; the motion module responds to the motion control information; the transmission/receiving module handles data transmission; and the rescue alarm module, connected to the information processing module, sends a rescue signal. The waterside rescue robot of this embodiment can patrol rapidly, detect a sudden drowning situation, and promptly notify rescuers, making it efficient and intelligent.
Owner:WUYI UNIV
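
A toy sketch of how the information-processing module might combine the two recognition paths the abstract names (image-based target detection and sound classification) into a rescue decision; both detectors below are placeholders rather than the robot's deep convolutional networks, and the decision rule is an assumption.

```python
import numpy as np

def detect_drowning_in_image(frame):
    """Placeholder for the deep-CNN target detector on camera frames."""
    return bool(frame.mean() > 0.8)              # stand-in decision rule

def classify_sound(clip):
    """Placeholder for the deep-CNN sound classifier (e.g. 'cry_for_help' vs 'ambient')."""
    return "cry_for_help" if np.abs(clip).max() > 0.5 else "ambient"

def patrol_step(frame, clip):
    """Raise the alarm when either branch reports distress, otherwise keep patrolling."""
    if detect_drowning_in_image(frame) or classify_sound(clip) == "cry_for_help":
        return "RESCUE_ALARM"                    # rescue alarm module is notified
    return "CONTINUE_PATROL"

frame = np.random.rand(120, 160)                 # toy camera frame
clip = np.random.randn(8000) * 0.1               # toy one-second audio clip
print(patrol_step(frame, clip))
```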