[0440] By enforcing a front-to-back traversal of meshes, terminating traversal at occlusion boundaries, and employing hierarchical spatial subdivision, the algorithm is designed to achieve output-sensitive performance even for densely occluded environments.
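The traversal described above can be sketched as follows. This is a hedged, minimal illustration only: the `Node` structure, the 1-D "screen-span" occlusion model, and the interval-coverage test are simplifying assumptions for exposition, not the data structures of the actual algorithm.

```python
# Sketch of output-sensitive, front-to-back hierarchical traversal with
# termination at occlusion boundaries. The 1-D span model is an assumption.

class Node:
    def __init__(self, depth, span, children=None):
        self.depth = depth            # distance of the node's bounds from the viewpoint
        self.span = span              # screen-space extent as a 1-D interval (lo, hi)
        self.children = children or []

def traverse(node, occluded_spans, visible):
    """Visit nodes front to back; stop descending once a node is fully occluded."""
    lo, hi = node.span
    # Terminate traversal at occlusion boundaries: skip fully covered subtrees.
    if any(a <= lo and hi <= b for a, b in occluded_spans):
        return
    if not node.children:
        visible.append(node)
        occluded_spans.append(node.span)   # leaf geometry becomes an occluder
        return
    # Front-to-back order guarantees occluders are registered before occludees.
    for child in sorted(node.children, key=lambda c: c.depth):
        traverse(child, occluded_spans, visible)
```

Because near geometry is registered as an occluder before far geometry is examined, work done is proportional to what is visible rather than to scene size, which is the output-sensitive property the paragraph names.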
[0137] In exemplary embodiments, a system includes a server having a memory to store information indicating at least one navigation cell that represents part of a navigable space of a computer-generated modeled environment. The server is further configured to send said information representing said navigation cell to said client computing device upon determination that said at least one navigation cell is reachable via the navigable space from a predicted client viewpoint location. The system further includes a client computing device having a processor configured to determine a location in the navigable space using said information.
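The server-side decision in this embodiment can be illustrated with a reachability query over navigation cells. The adjacency graph of cells and the hop budget below are illustrative assumptions; the embodiment only requires that a cell be reachable through the navigable space from the predicted viewpoint.

```python
from collections import deque

# Sketch: a navigation cell's data is sent to the client when the cell is
# reachable (within some budget) from the predicted client viewpoint location.
# The adjacency dict and max_hops budget are assumptions for illustration.

def reachable_cells(adjacency, predicted_cell, max_hops):
    """Return all cells reachable from predicted_cell within max_hops moves (BFS)."""
    seen = {predicted_cell}
    frontier = deque([(predicted_cell, 0)])
    while frontier:
        cell, hops = frontier.popleft()
        if hops == max_hops:
            continue
        for nxt in adjacency.get(cell, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, hops + 1))
    return seen
```

A cell outside the reachable set need not be transmitted, since the client viewpoint cannot arrive there through the navigable space in the prediction window.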
[0115] Such a practical method of precision-controlled PVS determination could be used in conjunction with delta-PVS and intermediate representation schemes which reduce storage costs and facilitate visibility-based streaming prefetch. This visibility-based streaming prefetch method would allow the user to quickly begin interacting with a massive textured 3D model because initially only the geometry, texture, and other graphic elements visible in the vicinity of the user's initial location would be delivered. This initial data is typically a small fraction of the entire graphical database for the modeled environment. This method would significantly decrease the waiting time for interactivity when compared to existing methods, such as MPEG-4 Part 11 (VRML or X3D), which do not specify an efficient, visibility-based prefetch streaming approach. Such existing methods typically either require the entire database to be downloaded before interactivity begins or, alternatively, are subject to visibility errors (e.g., the sudden appearance of objects) during user navigation.
[0148] The method further includes determining a visual salience of said first set and said second set of graphics information, said visual salience representing a likelihood that the client computing device is tracking an object moving in said navigable space, said visual salience being a function of a current client viewpoint and one or more view direction vectors extending from said current client viewpoint. The method further includes sending said second set of graphics information during said first period upon determination that said visual salience of said first set and said second set of graphics information is below a predetermined value. The method also includes sending said first set of graphics information upon determination that said visual salience is greater than or equal to said predetermined value.
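One way to realize the salience function of this embodiment is sketched below. Modeling salience as the alignment (cosine) between the view direction vector and the direction from the current viewpoint to the moving object, and the 0.9 threshold, are illustrative assumptions, not the claimed formulation.

```python
import math

# Hedged sketch of the visual-salience gate: salience approximates how likely
# the viewer is tracking a moving object, modeled here as the cosine between
# the view direction and the viewpoint-to-object direction (an assumption).

def salience(viewpoint, view_dir, object_pos):
    to_obj = [o - v for o, v in zip(object_pos, viewpoint)]
    norm = math.hypot(*to_obj) or 1.0
    to_obj = [c / norm for c in to_obj]
    return sum(a * b for a, b in zip(view_dir, to_obj))   # cosine of angle

def choose_payload(viewpoint, view_dir, object_pos, threshold=0.9):
    """Send the full-detail (first) set when salience meets the threshold,
    else the second set, per the gating rule described in the text."""
    s = salience(viewpoint, view_dir, object_pos)
    return "first_set" if s >= threshold else "second_set"
```

The gate lets the server spend bandwidth on high-salience content, on the assumption that detail off the tracked line of sight is less likely to be perceptually resolved.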
[0459] This delta-PVS method represents an efficient codec for visibility-based streaming of out-of-core geometry and texture information in which the dynamic occluding or exposing silhouette contours (for the viewcell-to-viewcell transitions) are identified and labeled in an off-line, precomputed encoding; and the resulting labeled contours, along with other hint information, are used to rapidly construct a PVS / visibility map (or deltaG submesh data) from an existing PVS / visibility map at runtime. This codec allows for a distributed client-server implementation in which the storage / transmission costs can be selectively decreased at the expense of increased runtime compute costs.
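The runtime side of this codec can be sketched as a delta update to the current PVS. The dict-of-deltas encoding below, with "plus" (newly exposed) and "minus" (newly occluded) submesh lists per viewcell transition, is an illustrative assumption standing in for the labeled silhouette contours and hint information of the actual encoding.

```python
# Sketch: construct the next viewcell's PVS from the existing PVS by applying
# precomputed per-transition deltas. The encoding shown is an assumption; in
# the method described, labeled contours and hints drive this construction.

def next_pvs(current_pvs, transition_deltas, transition):
    """Advance the PVS across a viewcell-to-viewcell transition."""
    delta = transition_deltas[transition]
    return (current_pvs | set(delta["plus"])) - set(delta["minus"])
```

Because only deltas cross the wire, storage and transmission costs drop, while the client pays the runtime cost of reconstructing each PVS — the trade-off the paragraph describes.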
[0460] In addition, a perception-based encoding strategy is used to encode low level-of-detail (LOD) geometric and texture information during periods when the deltaG+ submesh information is not delivered to the client in time to generate a complete PVS for the current viewcell / viewpoint. This strategy exploits the fact that the human visual system cannot fully resolve information that is presented to it for less than approximately 1000 milliseconds. This approach allows a relatively perceptually lossless performance degradation to occur during periods of low spatiotemporal visibility coherence: a situation that challenges the performance of both the codec and the human visual system in similar ways.
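The fallback rule above reduces to a small decision, sketched here under stated assumptions: the function names, the string return values, and the use of expected on-screen exposure time as the perceptual criterion are illustrative, with only the roughly 1000 ms resolution window taken from the text.

```python
# Sketch of the perception-based fallback: when deltaG+ submesh data is late,
# encode low-LOD geometry and texture instead of stalling. The ~1000 ms
# window comes from the text; everything else here is an assumption.

RESOLUTION_WINDOW_MS = 1000   # approximate time the eye needs to fully resolve detail

def select_lod(delta_arrived_in_time):
    """Pick the LOD to encode for the current viewcell / viewpoint."""
    return "full" if delta_arrived_in_time else "low"

def degradation_is_perceptually_lossless(expected_exposure_ms):
    """Low-LOD substitution goes unnoticed when exposure stays under the window."""
    return expected_exposure_ms < RESOLUTION_WINDOW_MS
```

Low spatiotemporal coherence means content is exposed only briefly, which is exactly when the low-LOD substitute falls inside the resolution window and the degradation stays near-imperceptible.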
[0110] The goal of out-of-core rendering systems is to allow uninterrupted exploration of very large, detailed environments that cannot fit in core memory. Implemented effectively, this streaming approach can eliminate the frequent interruptions caused by traditional loading schemes in which entire sections (e.g., levels) of the environment are loaded each time the next level is reached. Subdividing a complex 3D model into distinct “levels” drastically simplifies the loading and display of the graphics information, but it forces the user to experience a series of disjoint locations, separated by load times that often disrupt the coherence of the experience.