Optimizing Disparity for Motion in Depth
Abstract
Beyond the careful design of stereo acquisition equipment and rendering algorithms, disparity post-processing has recently received much attention; one of its key tasks is to compress the originally large disparity range to avoid viewing discomfort. The perception of dynamic stereo content, however, relies on reproducing the full disparity-time volume that a scene point undergoes in motion. This volume can be strongly distorted by a manipulation that is concerned only with changing disparity at one instant in time, even if the temporal coherence of that change is maintained. We propose an optimization, based on a perceptual model of temporal disparity changes, that preserves the stereo motion of content subjected to an arbitrary disparity manipulation. Furthermore, we introduce a novel 3D warping technique to create stereo image pairs that conform to the optimized disparity map. The paper concludes with perceptual studies of motion reproduction quality and task performance in a simple game, showing how our optimization can achieve both viewing comfort and faithful stereo motion.
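The abstract does not spell out the optimization itself. As a rough illustration only, the Python sketch below poses a toy version of the trade-off for a single scene point: keep per-frame disparities close to the compressed values while reproducing the original frame-to-frame disparity changes. The function name `preserve_motion_in_depth`, the weights `w_depth` and `w_motion`, and the plain finite-difference motion term are assumptions made for this sketch; they stand in for, and do not reproduce, the paper's perceptual model of temporal disparity changes or its full image-space optimization.

```python
import numpy as np

def preserve_motion_in_depth(d_orig, d_comp, w_depth=1.0, w_motion=10.0):
    """Least-squares toy sketch (not the paper's objective): find a disparity
    trajectory d_out for one scene point that stays close to the compressed
    disparities d_comp while reproducing the temporal disparity changes of
    the original trajectory d_orig.

    d_orig, d_comp : 1-D arrays of per-frame disparities (same length).
    Returns the optimized per-frame disparities d_out.
    """
    n = len(d_orig)
    # Depth term: d_out[t] should stay near the compressed disparity d_comp[t].
    A_blocks = [np.sqrt(w_depth) * np.eye(n)]
    b_blocks = [np.sqrt(w_depth) * np.asarray(d_comp, dtype=float)]
    # Motion term: d_out[t+1] - d_out[t] should match d_orig[t+1] - d_orig[t].
    D = np.diff(np.eye(n), axis=0)          # (n-1) x n finite-difference matrix
    A_blocks.append(np.sqrt(w_motion) * D)
    b_blocks.append(np.sqrt(w_motion) * np.diff(np.asarray(d_orig, dtype=float)))
    A = np.vstack(A_blocks)
    b = np.concatenate(b_blocks)
    d_out, *_ = np.linalg.lstsq(A, b, rcond=None)
    return d_out

# Example: a point sweeping through depth, naively compressed to 40% of its range.
t = np.linspace(0.0, 1.0, 60)
d_orig = 30.0 * np.sin(2.0 * np.pi * t)     # original disparity (pixels)
d_comp = 0.4 * d_orig                       # naive range compression
d_out = preserve_motion_in_depth(d_orig, d_comp)
```

In the paper, this balance between limited disparity range and faithful disparity change is instead driven by a perceptual model and applied to whole disparity maps, with the resulting maps realized by the proposed 3D warping.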
Supporting Information
Please note: Wiley-Blackwell Publishing is not responsible for the content or functionality of any supplementary materials supplied by the authors. Any queries (other than missing material) should be directed to the corresponding author for the article.
Filename | Description
---|---
CGF_12160_sm_video-anaglyph.avi (32.5 MB) | Supporting info item
CGF_12160_sm_video-sidebyside.avi (56 MB) | Supporting info item