77 results for "Audiovisual technology" patents

The proliferation of audiovisual communications technologies, including sound, video, lighting, display and projection systems, is evident in every sector of society: in business, education, government, the military, healthcare, retail environments, worship, sports and entertainment, hospitality, restaurants, and museums.

E-commerce development intranet portal

An intranet providing a multiple-carrel public-access kiosk is disclosed. The intranet provides free access to foreign and domestic informational e-commerce intranet sites as well as e-mail and public-service educational and informational materials. The kiosk accepts anonymous pre-paid cards issued by a local franchisee of a network of e-commerce intranets that includes the local intranet. The franchisee owns or leases kiosks and also provides a walk-in e-commerce support center where e-commerce support services and goods, such as pre-paid accounts for access to paid services at a kiosk, can be purchased. The paid services provided by the carrel include video-conference and chat-room time, playing and/or copying audio-visual materials such as computer games and music videos, and international e-commerce purchase support services such as customs and currency exchange. The third-party-sponsored public service materials include audio-visual instructional materials in local dialects that introduce the user to the kiosk's services and provide training in standard business software programs. Sponsors include pop-up market research questions in the sponsored public service information and receive clickstream data correlated with the user's intranet ID and the answers to the demographic questions originally answered by the user.
Owner: DE FABREGA INGRID PERSCKY
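
To make the correlation step concrete, here is a minimal Python sketch of joining kiosk clickstream events with the anonymous intranet ID and the stored demographic and survey answers delivered to sponsors; all class and field names are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical records; names are illustrative, not the patent's own data model.
@dataclass
class ClickEvent:
    intranet_id: str      # anonymous ID from the pre-paid card
    url: str
    timestamp: float

@dataclass
class UserProfile:
    intranet_id: str
    demographics: Dict[str, str]                                   # original demographic answers
    survey_answers: Dict[str, str] = field(default_factory=dict)   # pop-up market research answers

def correlate(events: List[ClickEvent], profiles: Dict[str, UserProfile]) -> List[dict]:
    """Join each clickstream event with the demographic and survey answers
    recorded for the same anonymous intranet ID."""
    report = []
    for ev in events:
        profile = profiles.get(ev.intranet_id)
        if profile is None:
            continue  # unknown card ID; nothing to correlate
        report.append({
            "intranet_id": ev.intranet_id,
            "url": ev.url,
            "timestamp": ev.timestamp,
            "demographics": profile.demographics,
            "survey_answers": profile.survey_answers,
        })
    return report
```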

Robot audiovisual system

Disclosed is a robot visuoauditory system that processes data in real time to track an object by both vision and audition, integrates visual and auditory information about the object so that it can be tracked without fail, and visualizes the real-time processing. In the system, the audition module (20), in response to sound signals from microphones, extracts pitches, separates the sound sources from each other, and locates them so as to identify each sound source as at least one speaker, thereby extracting an auditory event (28) for each speaker. The vision module (30), on the basis of an image taken by a camera, identifies and locates each such speaker by face, thereby extracting a visual event (39). The motor control module (40), which turns the robot horizontally, extracts a motor event (49) from the rotary position of the motor. The association module (60), which controls these modules, forms an auditory stream (65) and a visual stream (66) from the auditory, visual and motor events, and then associates these streams with each other to form an association stream (67). The attention control module (64) effects attention control by planning the course in which to control the drive motor, e.g., upon locating the sound source for the auditory event and the face for the visual event, thereby determining the direction in which each speaker lies. The system also includes a display (27, 37, 48, 68) for displaying at least a portion of the auditory, visual and motor information. The attention control module (64) servo-controls the robot on the basis of the association stream or streams.
Owner: JAPAN SCI & TECH CORP
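
The module structure described above (auditory, visual and motor events merged into streams, which are then associated and used for attention control) can be sketched roughly as follows. The class names, fields and the simple direction-matching rule are assumptions for illustration, not the patent's actual method.

```python
from dataclasses import dataclass
from typing import List, Optional

# Illustrative event types; the numbers (28, 39, 49, 67) follow the abstract's labels.
@dataclass
class AuditoryEvent:        # (28) pitch and estimated sound-source direction per speaker
    direction_deg: float
    pitch_hz: float

@dataclass
class VisualEvent:          # (39) face identity and direction of each located speaker
    direction_deg: float
    face_id: str

@dataclass
class MotorEvent:           # (49) current rotary position of the drive motor
    position_deg: float

@dataclass
class AssociationStream:    # (67) auditory and visual information bound to one speaker
    direction_deg: float
    face_id: Optional[str]

def associate(auditory: List[AuditoryEvent],
              visual: List[VisualEvent],
              motor: MotorEvent,
              tolerance_deg: float = 10.0) -> List[AssociationStream]:
    """Pair auditory and visual events whose world directions (motor position
    plus event direction) agree within a tolerance, forming association streams."""
    streams = []
    for a in auditory:
        a_dir = motor.position_deg + a.direction_deg
        match = None
        for v in visual:
            if abs(a_dir - (motor.position_deg + v.direction_deg)) <= tolerance_deg:
                match = v
                break
        streams.append(AssociationStream(direction_deg=a_dir,
                                         face_id=match.face_id if match else None))
    return streams

def attention_target(streams: List[AssociationStream]) -> Optional[float]:
    """Attention control sketch: prefer a stream with both a sound source and a
    face (a localized speaker) and return the direction the motor should turn to."""
    for s in streams:
        if s.face_id is not None:
            return s.direction_deg
    return streams[0].direction_deg if streams else None
```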

Man-machine interaction method and system for online education based on artificial intelligence

Status: Pending | Publication No.: CN107958433A | Benefit: solves the problem of poor learning effect | Topics: data processing applications, speech recognition, personalization, online learning
The invention discloses a man-machine interaction method and system for online education based on artificial intelligence, and relates to digitalized visual and acoustic technology in the field of electronic information. The system comprises a subsystem that can recognize the emotion of an audience and an intelligent session subsystem. In particular, the two subsystems are combined with an online education system, thereby better presenting personalized teaching content to the audience. The system starts from improving the vividness of man-machine interaction in online education. The emotion recognition subsystem judges the learning state of a user from the user's facial expression while watching a video, and the intelligent session subsystem then carries out machine Q&A interaction. The emotion recognition subsystem classifies the emotions of the audience into seven types: anger, aversion, fear, sadness, surprise, neutrality, and happiness. The intelligent session subsystem adjusts the corresponding course content according to the different emotions and carries out machine Q&A interaction, thereby enabling the teacher-student interaction and feedback of a conventional class to be presented online and making the online class more personalized.
Owner: JILIN UNIV
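
A minimal sketch of the control flow the abstract describes: seven emotion classes drive course-content adjustment and machine Q&A. The emotion classifier is left as a stub, and the emotion-to-action mapping is an illustrative assumption rather than the patent's implementation.

```python
from enum import Enum, auto

class Emotion(Enum):
    ANGER = auto()
    AVERSION = auto()
    FEAR = auto()
    SADNESS = auto()
    SURPRISE = auto()
    NEUTRALITY = auto()
    HAPPINESS = auto()

def recognize_emotion(frame) -> Emotion:
    """Stub for the emotion-recognition subsystem: in the patent this judges the
    learner's state from facial expression while the learner watches the video."""
    raise NotImplementedError("replace with a real facial-expression classifier")

def adjust_course(emotion: Emotion) -> str:
    """Intelligent session subsystem sketch: choose a follow-up action per emotion.
    The mapping below is illustrative, not taken from the patent."""
    if emotion in (Emotion.ANGER, Emotion.AVERSION, Emotion.SADNESS):
        return "slow down, insert an easier worked example, and ask a check-in question"
    if emotion == Emotion.FEAR:
        return "offer a hint and repeat the previous explanation"
    if emotion == Emotion.SURPRISE:
        return "pause for a short machine Q&A exchange on the current point"
    if emotion == Emotion.HAPPINESS:
        return "advance to the next, more challenging section"
    return "continue the lesson as planned"  # NEUTRALITY
```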