
Baby crying sound translation method based on sound feature recognition

A sound-feature and infant-care technology, applied in the field of baby cry translation, that can solve problems such as low efficiency, the lack of a uniform reference standard, and declining nursing quality, and that achieves the effect of improving care quality and efficiency while reducing misjudgment and delayed judgment.

Active Publication Date: 2018-12-21
HENAN POLYTECHNIC UNIV
20 Cites, 6 Cited by

AI Technical Summary

Problems solved by technology

This method relies to a certain extent on the experience of the caregivers, but young parents and other caregivers rarely receive professional training, so caregivers are often inexperienced or differ in experience, and there is no relatively uniform reference standard.
This traditional empirical judgment has the following disadvantages: 1. Because the caregiver does not grasp the baby's physiological or psychological needs in time, the quality of care is reduced and efficiency is low; 2. The baby's medical treatment may be delayed.




Embodiment Construction

[0021] The present invention will be further described below in conjunction with specific embodiments. The exemplary embodiments and their descriptions serve to explain the invention and are not intended to limit it.

[0022] As shown in Figure 1, the baby cry translation method based on sound feature recognition of this embodiment comprises the following specific steps:

[0023] A hand-held precision sound pickup can be placed 10 cm above the baby's mouth to collect 1 s long clips of baby crying, and all of the collected crying clips are pre-processed. The pre-processing uses a MINI DSP audio processor, a DSP voice noise-reduction algorithm, and an LD-2L filter noise-reduction and current-noise anti-jamming device to perform voice noise reduction and filter-based noise reduction on all baby crying sound clips.
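The pre-processing above is done with dedicated hardware; as a rough software analogue only, the sketch below band-pass filters and lightly noise-gates a 1 s clip. The sampling rate, band edges, and gate threshold are assumptions, not values taken from the patent.

import numpy as np
from scipy.signal import butter, filtfilt

FS = 16000              # assumed sampling rate (Hz)
LOW, HIGH = 250, 4000   # assumed pass band covering most infant-cry energy (Hz)

def preprocess(clip: np.ndarray) -> np.ndarray:
    """Band-pass filter and lightly noise-gate a 1 s mono crying clip."""
    b, a = butter(4, [LOW, HIGH], btype="band", fs=FS)
    filtered = filtfilt(b, a, clip)          # zero-phase band-pass filtering
    rms = np.sqrt(np.mean(filtered ** 2))
    # crude noise gate: suppress samples far below the clip's RMS level
    return np.where(np.abs(filtered) > 0.1 * rms, filtered, 0.0)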

[0024] These sound signals need to be analyzed and processed before they are input into the BP neural network...
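The analysis step is not spelled out in this excerpt, and the 16 characteristic parameters named in the abstract (tone, pitch, loudness, energy, frequency, frequency co-occurrence matrix) are not defined here, so the sketch below uses generic NumPy stand-ins for a few of them and zero-pads the rest; it is illustrative only, not the patented feature set.

import numpy as np

FS = 16000  # assumed sampling rate (Hz)

def extract_features(clip: np.ndarray) -> np.ndarray:
    """Map a preprocessed 1 s clip to a fixed 16-dimensional feature vector."""
    spectrum = np.abs(np.fft.rfft(clip))
    freqs = np.fft.rfftfreq(len(clip), d=1.0 / FS)
    energy = float(np.sum(clip ** 2))                          # short-time energy
    loudness = float(np.sqrt(np.mean(clip ** 2)))              # RMS as a loudness proxy
    centroid = float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-9))  # spectral centroid
    zcr = float(np.mean(np.abs(np.diff(np.sign(clip)))) / 2)   # zero-crossing rate proxy
    # rough pitch estimate: autocorrelation peak searched in roughly 80-600 Hz
    ac = np.correlate(clip, clip, mode="full")[len(clip) - 1:]
    lo, hi = int(FS / 600), int(FS / 80)
    lag = lo + int(np.argmax(ac[lo:hi]))
    pitch = FS / lag
    feats = np.array([energy, loudness, centroid, zcr, pitch])
    # remaining entries (e.g. co-occurrence-matrix statistics) are left as zeros here
    return np.pad(feats, (0, 16 - len(feats)))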



Abstract

The invention discloses a baby crying sound translation method based on sound feature recognition. According to the differences in crying sound characteristics of babies in different physiological states, crying sound characteristic parameters of babies in different physiological states are extracted by computer sound processing technology; the crying sound characteristic parameters comprise 16 characteristic parameters, 11 of which cover tone, pitch, loudness, energy, frequency and the frequency co-occurrence matrix. On the basis of a BP neural network algorithm, crying sound segments of babies in different physiological states are collected, the segments are denoised and filtered, a correspondence is established between the differences in baby crying sound characteristics and six physiological states (hunger, drowsiness, pain, boredom, fear and discomfort), and recognition results for these six states are provided. The characteristic parameters extracted from any baby crying sound segment are input into the trained BP neural network and a recognition result is obtained from its output layer, so that baby care quality and efficiency are improved and misjudgment and delayed judgment in baby care are reduced.
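As a minimal sketch of the classification stage described above, the snippet below trains a backpropagation network (scikit-learn's MLPClassifier as a generic BP-network stand-in) to map a 16-dimensional cry feature vector to one of the six physiological states; the hidden-layer size, activation, and training data are assumptions, not parameters from the patent.

import numpy as np
from sklearn.neural_network import MLPClassifier

STATES = ["hunger", "drowsiness", "pain", "boredom", "fear", "discomfort"]

def train_cry_classifier(X: np.ndarray, y: np.ndarray) -> MLPClassifier:
    """Train on (n_samples, 16) feature vectors X with integer labels y in 0..5."""
    net = MLPClassifier(hidden_layer_sizes=(32,), activation="logistic",
                        max_iter=2000, random_state=0)
    net.fit(X, y)
    return net

def translate_cry(net: MLPClassifier, features: np.ndarray) -> str:
    """Return the predicted physiological state for one 16-dimensional feature vector."""
    return STATES[int(net.predict(features.reshape(1, -1))[0])]

# usage with hypothetical labelled data and the sketches above:
# net = train_cry_classifier(X_train, y_train)
# print(translate_cry(net, extract_features(preprocess(clip))))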

Description

Technical Field

[0001] The invention relates to the technical field of sound recognition, in particular to a baby cry translation method based on sound feature recognition.

Background Technique

[0002] In the traditional process of caring for infants and young children, since infants do not yet have the ability to speak, their physiological needs and emotional expression mainly rely on the experience and judgment of the guardians, who observe the infant's expressions, appearance and somatosensory characteristics. This method relies to a certain extent on the experience of the caregivers, but young parents or other caregivers rarely receive professional training, resulting in insufficient or varying experience, and there is no relatively uniform reference standard. This traditional empirical judgment has the following disadvantages: 1. Because the caregiver does not grasp the baby's physiological or psychological needs in time, the quality of care is reduced and efficiency is low; 2. The baby's medical treatment may be delayed...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G10L15/16, G10L15/18, G10L15/26, G10L17/26, G10L25/03, G10L25/30, G10L25/63
CPC: G10L15/16, G10L15/1822, G10L17/26, G10L25/03, G10L25/30, G10L25/63, G10L15/26, Y02T90/00
Inventor: 邓小伟, 聂彦合, 叶广课, 韩明君, 殷帅军, 王勋龙
Owner: HENAN POLYTECHNIC UNIV