
Method, apparatus and device for achieving co-location of voices and images and medium

A sound and image technology, applied in the field of co-location of sound and images, which addresses the problems of a weak sense of presence and poor video playback effect.

Active Publication Date: 2019-01-11
SHENZHEN SKYWORTH RGB ELECTRONICS CO LTD
Cites 10 · Cited by 17

AI Technical Summary

Problems solved by technology

[0002] In current electronic display products, such as large-size LCD TVs, the video image is presented on the display screen while the video sound is produced by speakers installed elsewhere in the TV. Because the sound and the corresponding image are not in the same position, the video playback effect is poor and the user's sense of presence when watching is weak.



Examples


Embodiment 1

[0048] Figure 1 is a schematic flowchart of the method for achieving co-location of sound and images provided by Embodiment 1 of the present invention. The method can be applied to electronic products with large display screens, such as televisions of 65 inches and above; on smaller screens the distance between the speakers (that is, between the sound sources) is relatively short, so the effect of co-locating sound and image is not pronounced. The method is suited to playing videos whose sound features have obvious directional attributes, for example videos containing characters who speak, quarrel or sing, videos containing animals that make sounds, or videos containing objects that produce knocking sounds (such as ironwork or electric welding)...

Embodiment 2

[0101] Figure 3 is a schematic flowchart of a method for achieving co-location of sound and images provided by Embodiment 2 of the present invention. On the basis of the above embodiments, this embodiment describes the process of restoring the sound of the sound source. As shown in Figure 3, the method includes the following steps:

[0102] 310. Decode the currently played video to obtain image data and sound data corresponding to the currently played video.

[0103] 320. Call an image recognition interface to perform image recognition based on the image data to obtain corresponding image features, and call a sound recognition interface to perform sound recognition based on the sound data to obtain corresponding sound features.

[0104] 330. Determine, based on the image features, whether a sound source exists in the currently playing video; if a sound source exists, continue to step 340...
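Steps 310 to 340 can be sketched as a small pipeline. The recognizer interfaces, the feature database, and the feature names below are illustrative assumptions; the patent does not name concrete APIs.

```python
from dataclasses import dataclass

@dataclass
class DecodedVideo:
    """Toy stand-in for a decoded video: lists of image and sound samples."""
    image_data: list
    sound_data: list

def locate_sound_source(video, recognize_image, recognize_sound, feature_db):
    """Hypothetical sketch of Embodiment 2, steps 310-340."""
    # 310: decode the currently played video into image data and sound data
    image_data, sound_data = video.image_data, video.sound_data
    # 320: call the recognition interfaces to obtain the corresponding features
    image_features = recognize_image(image_data)
    sound_features = recognize_sound(sound_data)
    # 330: judge from the image features whether a sound source is on screen
    for feature in image_features:
        if feature in feature_db:          # preset image-feature database
            # 340 onward: fetch the source's on-screen position and pair it
            # with the recognized sound features for later matching
            return feature_db[feature], sound_features
    return None  # no sound source: ordinary playback

# Usage with toy identity recognizers and a one-entry feature database
db = {"person_mouth_open": (0.25, 0.5)}   # feature -> (x, y) screen position
video = DecodedVideo(image_data=["person_mouth_open"], sound_data=["speech"])
result = locate_sound_source(video, lambda i: i, lambda s: s, db)
# result == ((0.25, 0.5), ["speech"])
```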

Embodiment 3

[0127] Figure 6 is a schematic structural diagram of a device for achieving co-location of sound and images provided in Embodiment 3 of the present invention. As shown in Figure 6, the device includes: an identification module 610, a sound source judgment module 620, an acquisition module 630, a sound judgment module 640 and a control module 650.

[0128] The identification module 610 is used to perform image recognition and sound recognition on the currently played video to obtain the image features and sound features corresponding to the video; the sound source judgment module 620 is used to judge, based on the image features, whether a sound source exists in the currently playing video; the acquisition module 630 is used, if a sound source exists, to obtain from the preset image feature database, based on the image features, the sound source of the currently playing video in the current v...
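The control module 650 must turn the sound source's on-screen position into control signals for the speakers. The patent text available here does not specify how; one common approach, sketched below as an assumption, is constant-power amplitude panning across a horizontal speaker array, with all positions normalized to the range 0 to 1.

```python
import math

def speaker_gains(x_src, speaker_positions):
    """Toy constant-power panning: weight each speaker by proximity to the
    sound source's horizontal screen position, then normalize so the sum of
    squared gains equals 1 (constant total power)."""
    weights = [1.0 / (abs(x_src - x) + 1e-3) for x in speaker_positions]
    norm = math.sqrt(sum(w * w for w in weights))
    return [w / norm for w in weights]

# Source at the left-of-center of the screen, three speakers left/center/right
gains = speaker_gains(0.25, [0.0, 0.5, 1.0])
# The speakers nearest the source receive the largest gains
```

This is only one plausible realization of "controlling the speaker corresponding to the position information"; the actual patent may drive a single physical speaker behind the screen region instead.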



Abstract

The embodiments of the invention disclose a method, an apparatus, a device and a medium for achieving co-location of sound and images. The method comprises the steps of: performing image recognition and sound recognition on a currently playing video to obtain the image features and sound features corresponding to the video; determining, based on the image features, whether the video contains a sound production source; if so, obtaining the position information of the sound production source on the current video display screen; determining, based on the sound features, whether the video contains a sound matching the sound production source; and if so, generating a control signal according to the position information of the sound production source on the screen, so that the sound-producing element corresponding to that position restores the sound; wherein a preset image feature database is constructed in advance according to the currently playing video. Through this technical scheme, the playback effect of the video can be improved, bringing viewers a stronger sense of presence.
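The matching step in the abstract, deciding whether the recognized sound plausibly belongs to the on-screen sound production source, can be sketched as a lookup table. The source categories and sound labels below are illustrative assumptions, not taken from the patent.

```python
# Toy check for "does the recognized sound match the on-screen source?".
# Source types and sound labels are hypothetical examples.
PLAUSIBLE_SOUNDS = {
    "person": {"speech", "singing", "quarreling"},
    "animal": {"bark", "meow", "chirp"},
    "object": {"knocking", "welding"},
}

def sound_matches_source(source_type, sound_feature):
    """Return True if the sound feature is one the source could emit."""
    return sound_feature in PLAUSIBLE_SOUNDS.get(source_type, set())

match = sound_matches_source("person", "speech")   # a person can speak
```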

Description

Technical Field

[0001] The embodiments of the present invention relate to the technical field of smart TVs, and in particular to a method, apparatus, device and medium for achieving co-location of sound and images.

Background

[0002] In current electronic display products, such as large-size LCD TVs, the video image is presented on the display screen while the video sound is produced by speakers installed elsewhere in the TV. Because the sound and the corresponding image are not in the same position, the video playback effect is poor and the user's sense of presence when watching is weak.

Contents of the Invention

[0003] The present invention provides a method, apparatus, device and medium for achieving co-location of sound and images, through which the co-location of sound and image can be effectively realized and the playback effect of video improved. [0004] In order to ...

Claims


Application Information

IPC (IPC8): H04N21/43; H04N21/439; H04N21/44; H04N21/4402; H04N21/442
CPC: H04N21/4307; H04N21/4394; H04N21/4398; H04N21/44008; H04N21/440218; H04N21/44213
Inventor 赵新科
Owner SHENZHEN SKYWORTH RGB ELECTRONICS CO LTD