911 results about "Interaction function" patented technology

The interaction() function computes a factor that represents the interaction of the given factors. The result of interaction() is always unordered.
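
The description refers to the interaction() function from base R. A minimal Python analogue of the same behaviour, with pandas categoricals standing in for R factors (the data and column names are illustrative):

```python
# Minimal Python/pandas analogue of R's interaction(): combine two categorical
# columns into a single unordered categorical whose levels are the pairwise
# combinations. The data and column names here are illustrative.
import pandas as pd

df = pd.DataFrame({
    "treatment": ["a", "a", "b", "b"],
    "dose": ["low", "high", "low", "high"],
})

# Join the level labels with "." the way interaction() does by default,
# then mark the result as an unordered categorical.
combined = df["treatment"].astype(str) + "." + df["dose"].astype(str)
df["interaction"] = pd.Categorical(combined, ordered=False)

print(df["interaction"].cat.categories)  # ['a.high', 'a.low', 'b.high', 'b.low']
```
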

Imaging interactive numerical control turning automatic programming method and system

The invention provides a graphical, interactive, automatic numerical control (NC) turning programming method and system intended to improve programming efficiency and NC code quality and to promote rapid product process realization (RPPR) and integrated product and process development (IPPD). The method comprises the steps of: reading in the blank and part drawings, removing redundant information, and checking that the geometry is correct; providing real-time graphical interaction so that the part machining-surface information can be corrected, including the machining surface type, exact geometric information and surface roughness; and executing the corresponding system function module, automatic programming, auxiliary programming or mixed programming, according to the user's selection. The system comprises a CAD data input module connected to an automatic programming module, an auxiliary programming module and a mixed programming module, each of which is connected to an automatic toolpath layout module that in turn feeds an NC code generation module.
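
A hypothetical sketch of the module pipeline described above: CAD input, a user-selected programming mode, toolpath layout and NC code generation. All names and the dispatch logic are illustrative assumptions, not the patented implementation:

```python
# Hypothetical sketch of the pipeline described above: the CAD data input
# module feeds a user-selected programming module, whose output passes through
# automatic toolpath layout and NC code generation. All names are illustrative.
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple


@dataclass
class MachiningSurface:
    kind: str                           # e.g. "outer turning", "grooving"
    contour: List[Tuple[float, float]]  # (z, x) points describing the surface
    roughness_ra: float                 # target surface roughness in micrometres


def read_cad_data(path: str) -> List[MachiningSurface]:
    # Stand-in for the CAD data input module: read blank and part drawings,
    # drop redundant entities and validate the geometry.
    return [MachiningSurface("outer turning", [(0.0, 20.0), (50.0, 20.0)], 1.6)]


def automatic_programming(surfaces):   # fully automatic process planning
    return [("rough_turn", s) for s in surfaces]


def auxiliary_programming(surfaces):   # user confirms or edits each operation
    return automatic_programming(surfaces)


def mixed_programming(surfaces):       # combination of the two modes above
    return automatic_programming(surfaces)


MODES: Dict[str, Callable] = {
    "automatic": automatic_programming,
    "auxiliary": auxiliary_programming,
    "mixed": mixed_programming,
}


def generate_nc_program(path: str, mode: str) -> List[str]:
    surfaces = read_cad_data(path)      # CAD data input module
    operations = MODES[mode](surfaces)  # programming module chosen by the user
    # Automatic toolpath layout + NC code generation, heavily simplified here.
    return [f"; {op} on {s.kind}, Ra {s.roughness_ra}" for op, s in operations]


print(generate_nc_program("part.dxf", "automatic"))
```
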
Owner:TSINGHUA UNIV

File creation process, file format and file playback apparatus enabling advanced audio interaction and collaboration capabilities

A file creation process, file format and playback device are provided that enable an interactive and, if desired, collaborative music playback experience by combining or retrofitting an ‘original song’ with a MIDI time grid, the MIDI score of the song and other data in a synchronized fashion. The invention enables a music interaction platform that takes little time to learn and requires very little skill, knowledge or talent to use, and is designed to bring ‘mixing music’ to the average person. The premier capability the file format provides is that any two bars, multiples of bars or pre-designated ‘parts’ from any two songs can be mixed with both tempo and bar-by-bar synchronization in a non-linear, drag-and-drop fashion (and therefore almost instantaneously). The file format provides many further interaction capabilities, however, such as remixing MIDI tracks from the original song back in with the song. In the preferred embodiment the playback means is a software application on a handheld portable device with a multi-touch screen user interface, such as an iPhone. A single user can musically interact with the device and associated ‘original songs’, or interactively collaborate with other users in like fashion, either in the same room or over the Internet. The advanced interactive functionality the file format enables, in combination with the unique features of the iPhone as a playback device, such as the multi-touch screen and accelerometer, allows further intuitive and enhanced music interaction. The object of the invention is to make music interaction (mixing, for example) a regular activity for the average person.
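
A small sketch of the tempo and bar bookkeeping such a format implies: converting bar indices into playback positions so bars from two songs at different tempos can be lined up. The arithmetic is standard; the fields and numbers are illustrative, not taken from the patent:

```python
# Sketch of the bar-level bookkeeping such a format implies. Bar duration in
# seconds = beats_per_bar * 60 / bpm; the numbers and field names below are
# illustrative, not values defined by the patented file format.
from dataclasses import dataclass


@dataclass
class SongGrid:
    bpm: float            # tempo taken from the MIDI time grid
    beats_per_bar: int    # e.g. 4 for a 4/4 song
    grid_offset_s: float  # time of bar 0 within the original audio

    def bar_duration(self) -> float:
        return self.beats_per_bar * 60.0 / self.bpm

    def bar_start_time(self, bar: int) -> float:
        return self.grid_offset_s + bar * self.bar_duration()

    def playback_rate_to_match(self, other: "SongGrid") -> float:
        # Speed multiplier so this song's bars line up with the other's tempo.
        return other.bpm / self.bpm


song_a = SongGrid(bpm=120.0, beats_per_bar=4, grid_offset_s=0.25)
song_b = SongGrid(bpm=96.0, beats_per_bar=4, grid_offset_s=1.10)

# Drop bar 8 of song B onto bar 16 of song A, keeping them bar-synchronized.
print(song_a.bar_start_time(16))              # where the drop lands in song A
print(song_b.bar_start_time(8))               # where the source bar starts in song B
print(song_b.playback_rate_to_match(song_a))  # 1.25 -> play song B 25% faster
```
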
Owner:ODWYER SEAN PATRICK

Mobile terminal iris recognition device with human-computer interaction mechanism and method

The invention provides a mobile-terminal iris recognition device with a human-computer interaction mechanism. The device comprises a human-computer interaction module, a multi-spectral iris image optical acquisition module, an iris image analysis and processing module, a feedback control module and a power supply module. The human-computer interaction module lets the user configure the device during iris image acquisition and processing and, together with the device, realizes the human-computer interaction function; the multi-spectral iris image optical acquisition module acquires the user's iris image; and the feedback control module feeds the results of the iris image analysis and processing module back to the multi-spectral iris image optical acquisition module, so as to adjust its imaging parameters, and also feeds those results back to the human-computer interaction module. With the device and method, the miniaturization, mobility and usability of iris recognition devices can be improved.
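
A schematic sketch of the feedback loop described above, assuming a simple quality score and exposure/gain adjustment; the module interfaces, parameter names and thresholds are illustrative, not the patented design:

```python
# Illustrative closed-loop control: analyze each captured iris frame and feed
# an adjustment back to the imaging module until the quality is acceptable.
# The quality score, parameter names and thresholds are assumptions.
from dataclasses import dataclass


@dataclass
class ImagingParams:
    exposure_ms: float
    gain: float
    led_wavelength_nm: int  # active band of the multi-spectral illuminator


def image_quality(frame) -> float:
    # Placeholder for the analysis/processing module's focus + contrast score.
    return 0.0 if frame is None else 0.9


def adjust(params: ImagingParams, quality: float) -> ImagingParams:
    # Feedback control module: nudge exposure and gain while quality is low.
    if quality < 0.7:
        params.exposure_ms = min(params.exposure_ms * 1.2, 33.0)
        params.gain = min(params.gain + 1.0, 16.0)
    return params


def capture_loop(camera, max_tries: int = 5):
    params = ImagingParams(exposure_ms=8.0, gain=2.0, led_wavelength_nm=850)
    for _ in range(max_tries):
        frame = camera.capture(params)  # multi-spectral acquisition module
        q = image_quality(frame)        # analysis/processing module
        if q >= 0.7:
            return frame                # good enough for recognition
        params = adjust(params, q)      # feedback to the imaging parameters
    return None
```
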
Owner:BEIJING IRISKING

Intelligent interaction system and method

The invention relates to an intelligent interaction system and method. The system includes an audio receiving module, a real-time processing module and an execution module: the audio receiving module receives audio information input by a user, the real-time processing module performs parallel online real-time processing on the audio information, and the execution module executes the corresponding operation according to recognition results passed on by the real-time processing module. The parallel online real-time processing includes the following steps: classification processing and type-specific recognition processing are performed on the audio information in parallel; if a credible classification type is obtained before the audio input ends, recognition processing for the other classification types is terminated; and the recognition result corresponding to the credible classification type is obtained and transmitted to the execution module. With the intelligent interaction system and method of the invention, the user can use audio recognition and voice interaction functions easily and quickly, and the user experience is enhanced.
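
A rough sketch of the parallel-recognition-with-early-cancellation idea using Python's concurrent.futures; the recognizer names, confidence threshold and cancellation policy are illustrative assumptions, not the patented implementation:

```python
# Rough sketch: run several type-specific recognizers in parallel and stop the
# others once one classification becomes credible. Recognizer names, the
# confidence threshold and the results are illustrative placeholders.
import concurrent.futures as cf
import time


def recognize(kind: str, audio: bytes):
    # Stand-in for a type-specific recognizer (command word, dictation, chat).
    time.sleep({"command": 0.1, "dictation": 0.5, "chat": 0.4}[kind])
    confidence = 0.95 if kind == "command" else 0.3
    return kind, confidence, f"<{kind} result>"


def classify_and_recognize(audio: bytes, threshold: float = 0.8):
    kinds = ["command", "dictation", "chat"]
    with cf.ThreadPoolExecutor(max_workers=len(kinds)) as pool:
        futures = {pool.submit(recognize, k, audio): k for k in kinds}
        for fut in cf.as_completed(futures):
            kind, confidence, text = fut.result()
            if confidence >= threshold:
                # A credible classification arrived: cancel the remaining work
                # and hand the result to the execution module.
                for other in futures:
                    other.cancel()
                return kind, text
    return None, None


print(classify_and_recognize(b"...audio..."))
```
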
Owner:IFLYTEK (BEIJING) CO LTD

Method and system for interaction of video elements and web page elements in web pages

The invention discloses a method for the interaction of video elements and web page elements in web pages. The method comprises the following steps: in-video elements and web page elements are created by a video control module and a web page control module in a component control box, the interaction relationships between the in-video elements and the web page elements are defined, the in-video elements, the web page elements and the interaction relationships between them are stored as a resource description file, and the resource description file is transmitted to a server database; a web browser loads and parses the resource description file by accessing the server database, plays the video in the web page, constructs all the in-video elements and web page elements, and presents the interaction relationships between them. The invention also discloses a system for the interaction of video elements and web page elements in web pages. With the method and system, videos and web pages can be edited much more easily, and the interaction function between in-video elements and web page elements can be realized.
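
One plausible shape for such a resource description file, sketched as JSON built and parsed in Python; the field names and the interaction rule format are assumptions, not the format defined by the invention:

```python
# Sketch of a resource description file tying in-video elements, web page
# elements and the interactions between them. Field names are assumptions.
import json

resource_description = {
    "video": {"src": "https://example.com/clip.mp4"},
    "video_elements": [
        {"id": "product_tag", "appears_at": 12.5, "region": [0.6, 0.2, 0.2, 0.1]},
    ],
    "page_elements": [
        {"id": "buy_panel", "selector": "#buy-panel"},
    ],
    "interactions": [
        # When the in-video element is clicked, show the page element.
        {"source": "product_tag", "event": "click",
         "target": "buy_panel", "action": "show"},
    ],
}

# The editing side serializes the description and uploads it to the server
# database; the browser side downloads and parses it, then wires up handlers.
serialized = json.dumps(resource_description)
parsed = json.loads(serialized)
for rule in parsed["interactions"]:
    print(f"on {rule['event']} of {rule['source']}: {rule['action']} {rule['target']}")
```
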
Owner:MENG ZHIPING

Voice interaction smart home system and voice interaction method

The invention relates to the technical field of smart home, in particular to a smart home system capable of voice interaction and a voice interaction method. The smart home system comprises a hardware terminal and a server terminal. The hardware terminal consists of a gateway, a ZigBee router and ZigBee terminal nodes, and multiple appliances or sensors can be connected to the ZigBee terminal nodes. Each ZigBee terminal node has a microphone voice input module that acquires the user's analog voice signal and converts it to digital information, which is transmitted to the ZigBee coordinator through the ZigBee router. The gateway consists of the ZigBee coordinator, a Wi-Fi module, a voice processing module and a control command list module. Compared with the prior art, the smart home system has the following advantages: the operating speed of voice recognition can be markedly improved with only a slight increase in hardware cost; the system has a self-learning function, which saves the early-stage work of pre-programming defaults for various voice service environments; and the accuracy of voice recognition is improved by the spectral analysis method adopted in the voice recognition module.
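
A simplified sketch of the gateway-side flow implied by the control command list module: recognized text is matched against the command list and the matched command is forwarded to a ZigBee terminal node. The phrases, node ids and the substring matching rule are illustrative assumptions:

```python
# Simplified sketch of the gateway's control-command-list lookup. The command
# phrases, node ids and the keyword matching rule are illustrative assumptions.
COMMAND_LIST = {
    "turn on the light":  {"node": 0x12, "cluster": "on_off", "value": 1},
    "turn off the light": {"node": 0x12, "cluster": "on_off", "value": 0},
    "close the curtain":  {"node": 0x27, "cluster": "shade",  "value": 0},
}


def send_zigbee(node: int, cluster: str, value: int) -> None:
    # Placeholder for the ZigBee coordinator sending a frame via the router.
    print(f"-> node 0x{node:02x}: {cluster} = {value}")


def handle_recognized_text(text: str) -> bool:
    """Match recognized speech against the command list; return True on a hit."""
    text = text.lower().strip()
    for phrase, cmd in COMMAND_LIST.items():
        if phrase in text:
            send_zigbee(cmd["node"], cmd["cluster"], cmd["value"])
            return True
    return False  # unknown phrase: a candidate for the self-learning step


handle_recognized_text("please turn on the light")
```
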
Owner:SHANGHAI GOLD SOFTWARE DEV

Navigation system for intracerebral hemorrhage puncture surgery based on medical image model reconstruction and localization

Inactive · CN109223121A · Benefits: easy to observe; effectively assists minimally invasive oriented puncture operations for cerebral hemorrhage · Technology: surgical needles, surgical navigation systems, model reconstruction, display device
An embodiment of the invention discloses a navigation system for intracerebral hemorrhage puncture surgery based on medical image model reconstruction and localization. The navigation system includes a host computer, an AR device and a naked-eye 3D display device. The host computer receives the CT/MRI data of patients with intracerebral hemorrhage, reconstructs models from the multiple image groups, aligns the reconstructed models with each other, obtains a three-dimensional virtual model of the patient's skull and hematoma, and shows the model on the naked-eye 3D display device. Based on the 3D virtual model and the puncture angle and depth information, the host computer simulates the surgical navigation and exports it to the AR device. The AR device is used for feature point detection, spatial registration between the virtual hematoma and the real hematoma, real-time catheter tracking, holographic display of the virtual hematoma and virtual catheter, and voice and somatosensory interaction with the physician. The embodiment of the invention can reduce medical cost and provides a human-computer interaction function.
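
The spatial registration between the virtual and the real hematoma over matched feature points is, generically, a rigid point-set alignment. The sketch below uses the standard Kabsch/SVD method as an illustration of that generic step only; it is not claimed to be the patented method, and the point data is made up:

```python
# Standard rigid registration of matched 3-D feature points (Kabsch/SVD).
# This illustrates the generic virtual-to-real registration step only; it is
# not the patented method, and the point data below is made up.
import numpy as np


def rigid_register(src: np.ndarray, dst: np.ndarray):
    """Return rotation R and translation t such that dst ~= src @ R.T + t."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t


# Virtual-model feature points vs. the same points detected on the patient.
virtual_pts = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]], float)
rot = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)  # 90 deg about z
real_pts = virtual_pts @ rot.T + np.array([5.0, 2.0, 1.0])

R, t = rigid_register(virtual_pts, real_pts)
aligned = virtual_pts @ R.T + t
print(np.allclose(aligned, real_pts))  # True: virtual model mapped onto real space
```
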
Owner:GUANGZHOU DIKA VISION TECHNOLOGY CO LTD

Intelligent weeding robot system and control method based on depth vision

The invention discloses an intelligent weeding robot system based on depth vision, and belongs to the technical field of intelligent robots. The system includes an industrial personal computer, a power module, a cart body, crawler wheels, a hydraulic claw and weeding equipment. The crawler wheels, driven by a drive-wheel drive device, are installed on the cart body. The hydraulic claw and the weeding equipment are mounted on the cart body through a weeding mechanical arm device used to adjust their postures. The drive-wheel drive device, the hydraulic claw and the weeding equipment are all powered by the power module and controlled by the industrial personal computer. The system further includes a depth camera module for measuring the relative positions of weeds, crops and the cart body. The system has the advantage that the interaction function of two mechanical arms can be achieved through the interactive work of a single mechanical arm; at the same time, a precise weeding method is adopted that can effectively reduce secondary damage to crops and greatly promote the application and development of intelligent weeding robots while ensuring weeding efficiency and accuracy.
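
As a sketch of the depth-camera measurement implied above, the pinhole model back-projects a detected weed pixel and its depth into camera coordinates and then into the cart frame; the intrinsics and camera-to-cart offset below are illustrative assumptions, and any rotation between the two frames is ignored for brevity:

```python
# Back-project a detected weed's pixel (u, v) and depth d into 3-D camera
# coordinates with the pinhole model, then shift into the cart frame.
# Intrinsics and the camera-to-cart offset below are illustrative values.
import numpy as np

FX, FY = 615.0, 615.0   # focal lengths in pixels (assumed)
CX, CY = 320.0, 240.0   # principal point (assumed)
CAM_TO_CART = np.array([0.30, 0.0, 0.45])  # camera position on the cart (m)


def pixel_to_cart(u: float, v: float, depth_m: float) -> np.ndarray:
    """Pinhole back-projection followed by a fixed camera-to-cart offset."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    z = depth_m
    return np.array([x, y, z]) + CAM_TO_CART


weed_xyz = pixel_to_cart(402.0, 288.0, 0.85)   # weed centre at 0.85 m depth
crop_xyz = pixel_to_cart(250.0, 300.0, 0.90)   # nearest crop plant
print(weed_xyz, np.linalg.norm(weed_xyz - crop_xyz))  # clearance check before weeding
```
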
Owner:ANHUI AGRICULTURAL UNIVERSITY