Systems, methods, and structures for combining virtual reality with a real-time environment by fusing captured real-time video data with real-time 3D environment renderings to create a combined, or fused, environment. Video imagery is captured in RGB or HSV color coordinate systems and processed to determine which areas should be made transparent, or have other color modifications applied, based on sensed cultural features, electromagnetic spectrum values, and/or sensor line of sight.
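For illustration only, the following is a minimal sketch of how such transparency determination might be performed in the HSV color space, assuming OpenCV (`cv2`) and NumPy; the function name and threshold values are illustrative assumptions and are not taken from the source.

```python
import cv2
import numpy as np

def transparency_mask_hsv(frame_bgr, lower_hsv, upper_hsv):
    """Return an alpha mask (0 = transparent, 255 = opaque) for a video frame.

    Pixels whose HSV values fall inside [lower_hsv, upper_hsv] are treated as
    a predesignated "window" region and marked transparent so that a 3D
    environment rendering can show through.
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    window_region = cv2.inRange(hsv, lower_hsv, upper_hsv)  # 255 inside the designated color range
    alpha = cv2.bitwise_not(window_region)                  # invert: 0 where the "window" is
    return alpha

# Illustrative thresholds: treat a saturated green region as the see-through "window".
lower = np.array([40, 80, 80], dtype=np.uint8)
upper = np.array([85, 255, 255], dtype=np.uint8)
```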
The sensed features can also include electromagnetic radiation characteristics such as color, infrared, and ultraviolet light values; cultural features can include patterns of these characteristics, such as objects recognized using edge detection.
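As a sketch of the edge-detection step only, the example below uses Canny edge detection and contour extraction from OpenCV; the helper name, blur kernel, and thresholds are assumptions chosen for illustration, not details specified in the source.

```python
import cv2

def detect_object_outlines(frame_bgr, low_threshold=50, high_threshold=150):
    """Find candidate object outlines in a frame using Canny edge detection.

    Returns the edge map and external contours, which downstream logic could
    match against known patterns of cultural features.
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)           # suppress sensor noise before edge detection
    edges = cv2.Canny(blurred, low_threshold, high_threshold)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return edges, contours
```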
The processed image is then overlaid on, and fused into, a 3D environment to combine the two data sources into a single scene, creating an effect whereby a user can look through predesignated areas, or “windows,” in the video image to see into a 3D simulated world and/or see other enhanced or reprocessed features of the captured image.
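One way such fusion could be realized is straightforward alpha compositing of the processed video frame over the rendered 3D scene; the sketch below assumes NumPy arrays of matching size and the alpha mask produced above, and is an illustrative assumption rather than the claimed implementation.

```python
import numpy as np

def fuse_video_over_render(frame_bgr, render_bgr, alpha_mask):
    """Composite the processed video frame over a rendered 3D scene.

    alpha_mask is a single-channel uint8 image: 255 keeps the video pixel,
    0 reveals the 3D render (the "window" effect); intermediate values blend.
    """
    alpha = alpha_mask.astype(np.float32)[..., None] / 255.0
    fused = alpha * frame_bgr.astype(np.float32) + (1.0 - alpha) * render_bgr.astype(np.float32)
    return fused.astype(np.uint8)
```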