
662 results about "MIDI" patented technology

MIDI (/ˈmɪdi/; short for Musical Instrument Digital Interface) is a technical standard that describes a communications protocol, digital interface, and electrical connectors that connect a wide variety of electronic musical instruments, computers, and related audio devices for playing, editing and recording music. A single MIDI link through a MIDI cable can carry up to sixteen channels of information, each of which can be routed to a separate device or instrument. This could be sixteen different digital instruments, for example.
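The sixteen-channel mechanism can be sketched in a few lines. Below is a minimal Python helper (not taken from any of the patents listed here) that packs a MIDI Note On message; it shows how the channel number occupies the low nibble of the status byte, which is why a single cable can address sixteen instruments.

```python
# Sketch: packing a MIDI channel voice message. The channel (0-15)
# sits in the low nibble of the status byte; data bytes are 7-bit.

def note_on(channel: int, pitch: int, velocity: int) -> bytes:
    """Build a 3-byte MIDI Note On message (status 0x9n)."""
    if not (0 <= channel <= 15 and 0 <= pitch <= 127 and 0 <= velocity <= 127):
        raise ValueError("channel must be 0-15; pitch/velocity 0-127")
    return bytes([0x90 | channel, pitch, velocity])

# Middle C at moderate velocity on channel 10 (index 9):
msg = note_on(channel=9, pitch=60, velocity=100)
```

A receiving device filters on that low nibble, so each of the sixteen channels can be routed to a separate instrument.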

Systems and Methods for Portable Audio Synthesis

Status: Inactive · Patent: US20080156178A1 · Advantages: create efficiently; efficiently stored and/or processed · Classifications: gearworks; musical toys · Concepts: audio synthesis; display device
Systems and methods for creating, modifying, interacting with and playing music are provided, particularly systems and methods employing a top-down process, where the user is provided with a musical composition that may be modified and interacted with and played and/or stored (for later play). The system preferably is provided in a handheld form factor, and a graphical display is provided to display status information and graphical representations of musical lanes or components, which preferably vary in shape as musical parameters and the like are changed for particular instruments or musical components such as a microphone input or audio samples. An interactive auto-composition process preferably is utilized that employs musical rules and preferably a pseudo-random number generator, which may also incorporate randomness introduced by the timing of user input or the like, so that the user may quickly begin creating desirable music in accordance with one or a variety of musical styles, modifying the auto-composed (or previously created) musical composition either for a real-time performance and/or for storing and subsequent playback. In addition, an analysis process flow is described for using pre-existing music as input(s) to an algorithm that derives music rules, which may be used as part of a music style in a subsequent auto-composition process. The present invention also makes use of node-based music generation as part of a system and method to broadcast and receive music data files, which are then used to generate and play music. By incorporating the music generation process into a node-subscriber unit, the bandwidth-intensive systems of conventional techniques can be avoided. Consequently, the bandwidth can also be used for additional features such as node-to-node and node-to-base music data transmission.
The present invention is characterized by the broadcast of relatively small data files that contain various parameters sufficient to describe the music to the node/subscriber music generator. In addition, problems associated with audio synthesis in a portable environment are addressed in the present invention by providing systems and methods for performing audio synthesis in a manner that simplifies design requirements and/or minimizes cost, while still providing quality audio synthesis features targeted for a portable system (e.g., portable telephone). In addition, problems associated with the tradeoff between overall sound quality and memory requirements in a MIDI sound bank are addressed in the present invention by providing systems and methods for a reduced memory size footprint MIDI sound bank.
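As a rough illustration of the small-data-file idea (field names, values, and the JSON encoding here are hypothetical, not taken from the patent), each node could regenerate identical music locally from a few broadcast parameters plus a shared PRNG seed, rather than receiving bandwidth-intensive audio:

```python
# Hypothetical sketch: a compact broadcast "music data file" whose
# parameters are sufficient for each node to generate the music itself.
import json
import random

params = json.loads('{"style": "ambient", "seed": 42, "tempo": 80, '
                    '"scale": [0, 2, 4, 7, 9], "bars": 2}')

# Same seed on every node -> every node regenerates the same notes.
rng = random.Random(params["seed"])
root = 60  # middle C
notes = [root + rng.choice(params["scale"]) for _ in range(params["bars"] * 4)]
```

The payload is a few dozen bytes instead of an audio stream, which is the bandwidth saving the abstract describes.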
Owner:MEDIALAB SOLUTIONS

Gesture synthesizer for electronic sound device

A MIDI-compatible gesture synthesizer is provided for use with a conventional music synthesizer to create musically realistic sounding gestures. The gesture synthesizer is responsive to one or more user-controllable input signals, and includes several transfer function models that may be user-selected. One transfer function models properties of muscles using Hill's force-velocity equation to describe the non-linearity of muscle activation. A second transfer function models the cyclic oscillation produced by opposing effects of two force sources representing the cyclic oppositional action of muscle systems. A third transfer function emulates the response of muscles to internal electrical impulses. A fourth transfer function provides a model representing and altering the virtual trajectory of gestures. A fifth transfer function models visco-elastic properties of muscle response to simulated loads. The gesture synthesizer outputs continuous pitch data, tone volume and tone timbre information. The continuous pitch data is combined with discrete pitch data provided by the discrete pitch generator within the conventional synthesizer, and the combined signal is input to a tone generator, along with the tone volume and tone timbre information. The tone generator outputs tones that are user-controllable in real time during performance of a musical gesture.
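A minimal sketch of the first transfer function, assuming the classic form of Hill's force-velocity relation, (F + a)(v + b) = (F0 + a)·b. The constants and the mapping from controller velocity to gesture intensity are illustrative, not the patent's actual parameters:

```python
# Sketch: Hill's force-velocity equation as a control transfer function.
# Fast controller motion yields less "force" (gesture intensity) than
# slow motion, mimicking real muscle. Constants a, b, F0 are made up.

def hill_force(v: float, F0: float = 1.0, a: float = 0.25, b: float = 0.25) -> float:
    """Solve (F + a)(v + b) = (F0 + a) * b for F at shortening velocity v."""
    return ((F0 + a) * b) / (v + b) - a

slow = hill_force(0.1)  # slow gesture: high intensity
fast = hill_force(1.0)  # fast gesture: intensity falls off non-linearly
```

Such a curve could shape how a continuous controller signal is translated into, e.g., pitch-bend depth or tone volume before being output over MIDI.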
Owner:LONGO NICHOLAS

File creation process, file format and file playback apparatus enabling advanced audio interaction and collaboration capabilities

A file creation process, file format and playback device are provided that enable an interactive and, if desired, collaborative music playback experience for the user(s) by combining or retrofitting an ‘original song’ with a MIDI time grid, the MIDI score of the song and other data in a synchronized fashion. The invention enables a music interaction platform that requires a small amount of time to learn and very little skill, knowledge or talent to use, and is designed to bring ‘mixing music’ to the average person. The premier capability that the file format provides is the ability for any two bars, multiples of bars or pre-designated ‘parts’ from any two songs to be mixed in both tempo and bar-by-bar synchronization in a non-linear drag-and-drop fashion (and therefore almost instantaneously). The file format provides many further interaction capabilities, however, such as remixing MIDI tracks from the original song back in with the song. In the preferred embodiment the playback means is a software application on a handheld portable device which utilizes a multitouch-screen user interface, such as an iPhone. A single user can musically interact with the device and associated ‘original songs’, or interactively collaborate with other users in like fashion, either whilst in the same room or over the Internet. The advanced interactive functionality the file format enables, in combination with the unique features of the iPhone (as a playback device) such as the multitouch screen and accelerometer, enables further intuitive and enhanced music interaction capabilities. The object of the invention is to make music interaction (mixing, for example) a regular activity for the average person.
Owner:ODWYER SEAN PATRICK

Cybernetic 3D music visualizer

A 3D music visualization process employing a novel method of real-time reconfigurable control of 3D geometry and texture, using blended control combinations of software oscillators, computer keyboard and mouse, audio spectrum, control recordings and the MIDI protocol. The method includes a programmable visual attack, decay, sustain and release (V-ADSR) transfer function applicable to all degrees of freedom of the 3D output parameters, enhancing even binary control inputs with continuous and aesthetic spatio-temporal symmetries of behavior. A “Scene Nodes Graph” for authoring content acts as a hierarchical, object-oriented graphical interpreter for defining 3D models and their textures, as well as flexibly defining how the control source blend(s) are connected or “Routed” to those objects. An “Auto-Builder” simplifies Scene construction by auto-inserting and auto-routing Scene Objects. The Scene Nodes Graph also includes means for real-time modification of the control scheme structure itself, and supports direct real-time keyboard/mouse adjustment of all parameters of all input control sources and all output objects. Dynamic control schemes are also supported, such as control sources modifying the Routing and parameters of other control sources. An auto-scene-creator feature allows automatic scene creation by exploiting the full range of the visualizer's set of variables to create a nearly infinite set of scenes. A Realtime-Network-Updater feature allows multiple local and/or remote users to simultaneously co-create scenes in real time and effect the changes in a networked community environment wherein universal variables are interactively updated in real time, thus enabling scene co-creation in a global environment. In terms of human subjective perception, the method creates, enhances and amplifies multiple forms of both passive and interactive synesthesia.
The method utilizes transfer functions providing multiple forms of applied symmetry in the control feedback process, yielding an increased level of perceived visual harmony and beauty. The method enables a substantially increased number of both passive and human-interactive interpenetrating control/feedback processes that may be simultaneously employed within the same audio-visual perceptual space, while maintaining distinct recognition of each and reducing the threshold of human ergonomic effort required to distinguish them even when so coexistent. Taken together, these novel features of the invention can be employed (by means of considered Scene content construction) to realize an increased density of “orthogonal features” in cybernetic multimedia content. This furthermore increases the maximum number of human players who can simultaneously participate in shared interactive music visualization content while each still retains relatively clear perception of their own control/feedback parameters.
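The V-ADSR idea can be sketched as a standard attack-decay-sustain-release envelope driving a visual parameter, so that even a binary input (key down / key up) produces a smooth trajectory. This is an illustrative implementation with made-up timings, not the patent's code:

```python
# Sketch: a V-ADSR envelope. A binary gate (pressed until gate_off)
# is converted into a continuous value in [0, 1] suitable for driving
# any 3D output parameter (scale, rotation, brightness, ...).

def v_adsr(t: float, gate_off: float, attack: float = 0.1, decay: float = 0.2,
           sustain: float = 0.6, release: float = 0.3) -> float:
    """Envelope value at time t (seconds); the gate is released at gate_off."""
    if t >= gate_off:                       # release: linear fade from held level
        level_at_off = v_adsr(gate_off - 1e-9, gate_off, attack, decay, sustain, release)
        return max(0.0, level_at_off * (1.0 - (t - gate_off) / release))
    if t < attack:                          # attack: ramp 0 -> 1
        return t / attack
    if t < attack + decay:                  # decay: ramp 1 -> sustain
        return 1.0 - (1.0 - sustain) * (t - attack) / decay
    return sustain                          # sustain plateau while gate held
```

Sampling this per frame turns an on/off MIDI note into the "continuous and aesthetic spatio-temporal" motion the abstract describes.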
Owner:VASAN SRINI +2

Method and device for incorporating additional information into main information through electronic watermarking technique

Two data units are selected from main information, such as MIDI data, into which additional information is to be incorporated, and a difference between the respective values of the two data units is calculated. A particular data segment to be incorporated into one of the MIDI data units is selected from a group of data of the additional information. The size of the data segment to be incorporated into one of the data units may be either one bit or two or more bits. Substitute data to replace the content of one MIDI data unit is generated on the basis of a predetermined function using, as variables, the calculated difference and a value of the particular data segment, and the content of the data unit corresponding to a predetermined one of the two MIDI data units is replaced by the generated substitute data. Thus, through such an electronic watermarking technique, any desired additional information can be incorporated into the MIDI data without changing the MIDI data format. In another implementation, data of encoding information, representative of an encoding procedure, are incorporated dispersedly into particular data units belonging to a predetermined first data group of the main information, and data belonging to a predetermined second data group of the main information are encoded by the encoding procedure represented by the encoding information.
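A heavily simplified sketch of the pair-difference idea (the parity scheme below is illustrative, not the patented function): one bit is hidden by nudging the second value of a pair so that the parity of their difference carries the bit, leaving the MIDI data format untouched.

```python
# Sketch: embed one watermark bit in a pair of MIDI byte values by
# substituting the second value so (a - b) mod 2 encodes the bit.

def embed_bit(a: int, b: int, bit: int) -> tuple[int, int]:
    """Return (a, b') with (a - b') % 2 == bit; b changes by at most 1."""
    if (a - b) % 2 == bit:
        return a, b
    return a, b + 1 if b < 127 else b - 1  # stay inside the 7-bit MIDI range

def extract_bit(a: int, b: int) -> int:
    return (a - b) % 2
```

Because only a byte value is substituted, any MIDI player still reads the stream normally; only a decoder that knows the function recovers the hidden bits.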
Owner:YAMAHA CORP

Systems and methods for creating, modifying, interacting with and playing musical compositions

Systems and methods for creating, modifying, interacting with and playing music are provided, particularly systems and methods employing a top-down process, where the user is provided with a musical composition that may be modified and interacted with and played and/or stored (for later play). The system preferably is provided in a handheld form factor, and a graphical display is provided to display status information and graphical representations of musical lanes or components, which preferably vary in shape as musical parameters and the like are changed for particular instruments or musical components such as a microphone input or audio samples. An interactive auto-composition process preferably is utilized that employs musical rules and preferably a pseudo-random number generator, which may also incorporate randomness introduced by the timing of user input or the like, so that the user may quickly begin creating desirable music in accordance with one or a variety of musical styles, modifying the auto-composed (or previously created) musical composition either for a real-time performance and/or for storing and subsequent playback. The graphic information preferably is customizable by a user, such as by way of a companion software program, which preferably runs on a PC and is coupled to the system via an interface such as a USB port. A modified MIDI representation of music preferably is employed, for example one in which musical rule information is embedded in MIDI pitch data, and in which sound samples may be synchronized with MIDI events in a desirable and more optimal manner. The system architecture preferably includes a microprocessor for controlling the overall system operation. A synthesizer/DSP preferably is provided in order to generate audio streams. Non-volatile memory preferably is provided for storing sound banks.
Removable non-volatile storage/memory preferably is provided to store configuration files, song lists and samples, and optionally sound bank optimization or sound bank data. A codec preferably is provided for receiving microphone input and for providing audio output. A radio tuner preferably is provided so that output from the radio tuner may be mixed, for example, with auto-composed songs created by the system, which preferably includes a virtual radio mode of operation.
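The interactive auto-composition loop can be sketched as musical rules (a style's allowed scale and step count, here invented for illustration) fed to a PRNG whose seed folds in user-input timing, matching the abstract's description of timing-derived randomness:

```python
# Sketch: rule-based auto-composition seeded partly by user timing.
# The style table and its values are hypothetical.
import random
import time

STYLE_RULES = {"techno": {"scale": [0, 3, 5, 7, 10], "steps": 16}}

def auto_compose(style: str, user_event_time: float) -> list[int]:
    """Generate one bar of MIDI pitches for the given style."""
    rules = STYLE_RULES[style]
    rng = random.Random(int(user_event_time * 1000))  # timing contributes entropy
    root = 48  # C3
    return [root + rng.choice(rules["scale"]) for _ in range(rules["steps"])]

pattern = auto_compose("techno", time.time())
```

Every note stays inside the style's scale, so any seed (and thus any user timing) still yields music consistent with the chosen style's rules.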
Owner:MEDIALAB SOLUTIONS

System and method for video assisted music instrument collaboration over distance

A novel system and method of video assisted music instrument collaboration over distance are provided, enabling a musician to play a music instrument at one location and have the played music recreated by a music instrument at another location. The system and method can be used to provide distance education for musical instrument instruction; in this case, each student and instructor has an end point which can connect to other end points in the system to exchange music data, preferably MIDI data, and videoconferencing data through a data network such as the Internet. The system and method can also be used for performances wherein a musician at a first end point plays an instrument and music data, representing the music played, is transferred to a second end point, where the music played at the first end point is reproduced and one or more other musicians at the second end point play along with the reproduced music in a musical performance. Preferably, each end point includes a music processing engine which buffers data received from another end point to remove the effects of transmission delays and jitter, to discard overly delayed data, and to prevent damage to the music instrument at the end point due to undue network delays. Further, the music processing engine can inform the users when network performance is responsible for improper and/or undesired music playback by the instrument at the end point. This buffering by the music processing engine can also allow the synchronization of a videoconferencing system between the end points with the playing of music by the instruments at the end points.
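The buffering behavior described in the abstract can be sketched as a jitter buffer: hold incoming events briefly so playback is smooth, and discard events that arrive too late to play safely. The class name and the delay/lateness thresholds are illustrative assumptions:

```python
# Sketch: a jitter buffer for networked MIDI events. Incoming events
# are delayed slightly to absorb jitter; overly late events are dropped
# rather than sent to the instrument.
import heapq

class MusicJitterBuffer:
    def __init__(self, delay: float = 0.15, max_late: float = 0.5):
        self.delay = delay        # playback delay absorbing jitter (seconds)
        self.max_late = max_late  # events older than this are discarded
        self.heap: list[tuple[float, bytes]] = []  # (scheduled_time, event)

    def push(self, sent_at: float, now: float, event: bytes) -> bool:
        """Accept an event; returns False if it was too late and dropped."""
        if now - sent_at > self.max_late:
            return False
        heapq.heappush(self.heap, (sent_at + self.delay, event))
        return True

    def pop_due(self, now: float) -> list[bytes]:
        """Release all events whose scheduled playback time has arrived."""
        due = []
        while self.heap and self.heap[0][0] <= now:
            due.append(heapq.heappop(self.heap)[1])
        return due
```

Tracking the drop rate from `push` would also give the engine the signal it needs to tell users that the network, not the performer, is responsible for faulty playback.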
Owner:DIAMOND JAMES DR +1