DanmuVis: Visualizing Danmu Content Dynamics and Associated Viewer Behaviors in Online Videos
S. Chen
Key Laboratory of Machine Perception (Ministry of Education), and School of AI, Peking University, Beijing, China
National Engineering Laboratory for Big Data Analysis and Application, Peking University, Beijing, China
S. Li
Key Laboratory of Machine Perception (Ministry of Education), and School of AI, Peking University, Beijing, China
National Engineering Laboratory for Big Data Analysis and Application, Peking University, Beijing, China
Y. Li
Key Laboratory of Machine Perception (Ministry of Education), and School of AI, Peking University, Beijing, China
National Engineering Laboratory for Big Data Analysis and Application, Peking University, Beijing, China
J. Zhu
School of Design, Jiangnan University, Wuxi, Jiangsu, China
J. Long
School of Design, Jiangnan University, Wuxi, Jiangsu, China
S. Chen
School of Data Science, Fudan University, Shanghai, China
J. Zhang
College of Intelligence and Computing, Tianjin University, Tianjin, China
X. Yuan (Corresponding Author)
Key Laboratory of Machine Perception (Ministry of Education), and School of AI, Peking University, Beijing, China
National Engineering Laboratory for Big Data Analysis and Application, Peking University, Beijing, China
Beijing Engineering Technology Research Center of Virtual Simulation and Visualization, Peking University, Beijing, China
Xiaoru Yuan ([email protected]) is the corresponding author.
Abstract
Danmu (danmaku) is a social media service unique to online videos, especially popular in Japan and China, that lets viewers write comments while watching a video. The danmu comments are overlaid on the video screen and synchronized with the associated video time, reflecting viewers' thoughts about the corresponding video clip. This paper introduces an interactive visualization system for analyzing danmu comments and associated viewer behaviors across a collection of videos, with detailed exploration of a single video available on demand. Viewers' watching behaviors are identified by comparing the video time and post time of their danmu comments. The system supports analyzing danmu content and viewer behaviors against both video time and post time to gain insights into viewers' online participation and perceived experience. Our evaluations, including usage scenarios and user interviews, demonstrate the effectiveness and usability of the system.
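To make the video-time versus post-time comparison concrete, the following is a minimal illustrative Python sketch, not the authors' implementation: the Danmu record fields (user, video_time, post_time), the watching_behaviors function, and the labeling heuristic are assumptions introduced here purely for exposition.

# Sketch (assumed data model): infer coarse watching behaviors by comparing
# the video time and post time of consecutive danmu from the same viewer.
from dataclasses import dataclass
from datetime import datetime
from itertools import groupby

@dataclass
class Danmu:
    user: str              # viewer identifier
    video_time: float      # position in the video (seconds) where the comment is anchored
    post_time: datetime    # wall-clock time when the comment was posted

def watching_behaviors(comments, tolerance=30.0):
    """Label transitions between a viewer's consecutive danmu as linear viewing,
    skipping forward, or rewinding (a rough heuristic, not the paper's method)."""
    behaviors = {}
    for user, group in groupby(
        sorted(comments, key=lambda d: (d.user, d.post_time)), key=lambda d: d.user
    ):
        group = list(group)
        labels = []
        for prev, cur in zip(group, group[1:]):
            elapsed = (cur.post_time - prev.post_time).total_seconds()
            advanced = cur.video_time - prev.video_time
            if advanced < 0:
                labels.append("rewind")          # video position moved backwards
            elif advanced > elapsed + tolerance:
                labels.append("skip_forward")    # video advanced faster than real time allows
            else:
                labels.append("linear")          # roughly continuous playback
        behaviors[user] = labels
    return behaviors

The key idea the sketch illustrates is that, under continuous playback, the gap in video time between two danmu from the same viewer cannot exceed the gap in their post times, so larger or negative gaps reveal skipping or rewinding.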