Multiview Acquisition Systems
Laurent Lucas, Céline Loscos, Yannick Remion

Summary
Multiview acquisition relates to the capture of synchronized video data representing different viewpoints of a single scene. The devices and systems used in multiview acquisition are designed to cover several perspectives of a single, often fairly restricted, physical space and to exploit the redundancy between the resulting images for specific aims. Depending on the final application, the number, layout and settings of the cameras can vary greatly. The most common configurations available today include binocular systems, lateral or directional multiview systems, and global or omnidirectional multiview systems. This chapter introduces these main configurations in a purely video multiview capture context, illustrated by notable practical systems and their uses. It also provides links to databases giving access to media produced by devices in each category.
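As a purely illustrative aside, the sketch below shows one possible way to describe the three configuration families named above (binocular, lateral or directional, and global or omnidirectional) as a small data structure when cataloguing capture rigs. All names here (RigFamily, MultiviewRig, baseline_mm) are hypothetical and do not come from the chapter; this is a minimal sketch under those assumptions, not an implementation described in the text.

```python
# Hypothetical sketch: modeling the multiview rig families listed in the summary.
# None of these class or field names come from the chapter itself.
from dataclasses import dataclass
from enum import Enum, auto


class RigFamily(Enum):
    """The three common configurations mentioned in the summary."""
    BINOCULAR = auto()        # two cameras with a stereoscopic baseline
    DIRECTIONAL = auto()      # lateral/directional multi-camera array
    OMNIDIRECTIONAL = auto()  # global capture surrounding the scene


@dataclass
class MultiviewRig:
    family: RigFamily
    num_cameras: int
    baseline_mm: float | None = None  # inter-camera spacing, if the rig is linear
    synchronized: bool = True         # multiview capture assumes synchronized frames

    def validate(self) -> None:
        # A binocular system has exactly two viewpoints by definition.
        if self.family is RigFamily.BINOCULAR and self.num_cameras != 2:
            raise ValueError("a binocular rig must have exactly 2 cameras")
        # Any multiview setup needs at least two distinct viewpoints.
        if self.num_cameras < 2:
            raise ValueError("multiview acquisition needs at least 2 viewpoints")


# Example: an 8-camera lateral array with 65 mm spacing between adjacent cameras.
rig = MultiviewRig(RigFamily.DIRECTIONAL, num_cameras=8, baseline_mm=65.0)
rig.validate()
```

Such a description only captures the coarse layout; in practice each configuration would also carry per-camera calibration and synchronization parameters.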