A 3D music visualization process employing a novel method of real-time reconfigurable control of 3D geometry and texture, using blended control combinations of software oscillators, computer keyboard and mouse, audio spectrum, control recordings, and the MIDI protocol.
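By way of illustration only, a weighted blend of normalized control sources might be sketched in Python as follows; all identifiers (lfo, blend, audio_band, midi_cc) are hypothetical and are not drawn from the disclosure:

    import math, time

    def lfo(freq_hz):
        """Software oscillator: a sine LFO normalized to [0, 1]."""
        return lambda t: 0.5 + 0.5 * math.sin(2 * math.pi * freq_hz * t)

    def blend(sources, weights):
        """Weighted mix of control sources, sampled at time t."""
        def sample(t):
            return sum(w * s(t) for s, w in zip(sources, weights)) / sum(weights)
        return sample

    # Stubs standing in for an audio-spectrum band and a MIDI controller;
    # keyboard/mouse input or a control recording could be blended the same way.
    audio_band = lambda t: 0.3
    midi_cc = lambda t: 0.7
    rotation = blend([lfo(0.25), audio_band, midi_cc], [0.5, 0.3, 0.2])
    print(rotation(time.time()))  # a value in [0, 1] driving, e.g., a rotation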
The method includes a programmable visual attack, decay, sustain, and release (V-ADSR) transfer function applicable to all degrees of freedom of the 3D output parameters, enhancing even binary control inputs with continuous and aesthetic spatio-temporal symmetries of behavior.
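A minimal sketch of such a V-ADSR envelope, assuming a simple piecewise-linear shape (the disclosure does not specify the curve), illustrates how a binary gate such as a key press becomes a continuous parameter trajectory:

    def v_adsr(attack, decay, sustain, release):
        """Return envelope(t_on, t_off) -> level in [0, 1]."""
        def envelope(t_on, t_off=None):
            if t_off is None:                    # gate still held
                if t_on < attack:                # attack: ramp 0 -> 1
                    return t_on / attack
                if t_on < attack + decay:        # decay: 1 -> sustain
                    frac = (t_on - attack) / decay
                    return 1.0 - frac * (1.0 - sustain)
                return sustain                   # sustain plateau
            # gate released: linear fall from sustain to zero
            return max(0.0, sustain * (1.0 - t_off / release))
        return envelope

    env = v_adsr(attack=0.1, decay=0.3, sustain=0.6, release=0.8)
    scale = 1.0 + env(0.25)  # e.g., modulate a model's scale while a key is held

The same envelope can in principle be applied to any degree of freedom (position, rotation, scale, hue, texture parameters), which is what gives even on/off inputs a smooth, symmetric temporal character.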
A "Scene Nodes Graph" for authoring content acts as a hierarchical, object-oriented graphical interpreter for defining 3D models and their textures, and for flexibly defining how the control-source blend(s) are connected, or "Routed," to those objects. An "Auto-Builder" simplifies Scene construction by auto-inserting and auto-routing Scene Objects. The Scene Nodes Graph also provides means for real-time modification of the control-scheme structure itself, and supports direct real-time keyboard/mouse adjustment of all parameters of all input control sources and all output objects.
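A hypothetical data-structure sketch (the disclosed Scene Nodes Graph is a graphical authoring tool; this Python class merely illustrates hierarchical nodes with Routing):

    class SceneNode:
        def __init__(self, name, **params):
            self.name, self.params = name, dict(params)
            self.children, self.routes = [], []    # routes: (source, param)

        def add(self, child):
            self.children.append(child)
            return child

        def route(self, source, param):
            """Connect ("Route") a control-source callable to a parameter."""
            self.routes.append((source, param))

        def update(self, t):
            for source, param in self.routes:
                self.params[param] = source(t)      # re-evaluated each frame
            for child in self.children:
                child.update(t)

    scene = SceneNode("root")
    cube = scene.add(SceneNode("cube", rotation=0.0, hue=0.0))
    cube.route(lambda t: (t * 0.1) % 1.0, "rotation")  # oscillator -> rotation
    scene.update(t=2.0)

Because Routing entries are ordinary data in such a representation, they can themselves be added, removed, or re-pointed at run time, which is the basis for the real-time structural modification described above.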
Dynamic control schemes are also supported, such as control sources modifying the Routing and parameters of other control sources. An Auto-Scene-Creator feature enables automatic scene creation by sweeping the visualizer's full set of variables across their ranges, yielding a nearly infinite set of distinct scenes.
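One way to picture such a dynamic scheme is a slow meta-oscillator that continuously rewrites the frequency parameter of a faster oscillator; the sketch below is hypothetical and purely illustrative:

    import math

    # Meta-source: a slow oscillator in [0, 1].
    slow = lambda t: 0.5 + 0.5 * math.sin(2 * math.pi * 0.05 * t)

    # Fast source whose frequency parameter is itself driven by the meta-source,
    # i.e., one control source modifying a parameter of another.
    def fast(t):
        freq = 0.5 + 2.0 * slow(t)
        return 0.5 + 0.5 * math.sin(2 * math.pi * freq * t)

    print(fast(1.0))  # the meta-source reshapes the fast oscillator over time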
A Realtime-Network-Updater feature allows multiple local and/or remote users to simultaneously co-create scenes in a networked community environment, wherein universal variables are interactively updated in real time, enabling scene co-creation on a global scale.
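A toy sketch of the shared-state idea follows; the wire format and variable names are hypothetical, and any real implementation would add networking, conflict handling, and authentication:

    import json, time

    universal_vars = {"cube.rotation": 0.0, "cube.hue": 0.5}

    def make_update(name, value, user):
        return json.dumps({"var": name, "value": value,
                           "user": user, "ts": time.time()})

    def apply_update(message):
        msg = json.loads(message)   # in practice, received from a peer
        universal_vars[msg["var"]] = msg["value"]

    # A remote co-creator's change is applied to every client's scene state.
    apply_update(make_update("cube.hue", 0.8, user="remote_player"))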
In terms of human subjective perception, the method creates, enhances, and amplifies multiple forms of both passive and interactive synesthesia. Its transfer functions provide multiple forms of applied symmetry in the control-feedback process, yielding an increased level of perceived visual harmony and beauty. The method enables a substantially increased number of passive and human-interactive interpenetrating control/feedback processes to be employed simultaneously within the same audio-visual perceptual space, while maintaining distinct recognition of each and reducing the human ergonomic effort required to distinguish them even when so coexistent. Taken together, these novel features of the invention can be employed, by means of considered Scene content construction, to realize an increased density of "orthogonal features" in cybernetic multimedia content.
multimedia content. This furthermore increases the maximum number of human players who can simultaneously participate in shared interactive 
music visualization content while each still retaining relatively clear 
perception of their own control / feedback parameters.