Workshop on Highly Diverse Cameras and Displays for Mixed and Augmented Reality
Scene and human sensing have long been key topics in the AR/MR research community, and vision-based approaches have undoubtedly paved the way for recent AR/MR advances. Researchers have established sophisticated methods for scene geometry reconstruction, object recognition, head and eye tracking, and rendering virtual objects on perspective displays. Thanks to these efforts, AR/MR has spread to a wide audience and reached a certain level of maturity. To bring AR/MR to the next level, we need concrete ways to combine these achievements. One practical approach is to exploit the diverse types of cameras and displays that already exist in AR/MR environments or are brought into them by participants. From this point of view, this workshop aims to give researchers an opportunity to discuss experiences and findings on vision-based approaches, from cameras to displays, toward future collaborations.
Topics of interest include, but are not limited to:
This workshop is a follow-up to HDC2017, held as part of the WACV 2017 workshops. While HDC2017 focused mainly on recognition and sensing of human activity using Highly Diverse Cameras (HDC), HDCD4MAR calls for papers on a broader range of technologies enabled by HDC and/or displays in MAR contexts, which also includes the topics discussed at HDC2017.
Deadline for Workshop Papers: July 10th, 2017 (extended from July 3rd)
Notification for Workshop Papers: August 7th, 2017
Camera-Ready Deadline for Workshop Papers: August 28th, 2017
Submitted papers will be peer-reviewed. Accepted papers will be published in IEEE Xplore as part of the ISMAR 2017 Adjunct Proceedings.
Page length: 2 - 8 pages
Format: PDF, prepared using the ISMAR 2017 paper template
Submission site: https://easychair.org/conferences/?conf=hdcd4mar
NOTE: At least one author must register for the conference and present the work.
Time | Session |
---|---|
10:00 | Opening |
10:15 | Keynote: Diverse Cameras for AR/MR Applications, Prof. Hajime Nagahara (Osaka University). Hajime Nagahara is a professor at the Institute for Datability Science, Osaka University. He received his Ph.D. in system engineering from Osaka University in 2001. He was a research associate of the Japan Society for the Promotion of Science from 2001 to 2003, an assistant professor at the Graduate School of Engineering Science, Osaka University, from 2003 to 2010, and an associate professor in the Faculty of Information Science and Electrical Engineering at Kyushu University from 2010 to 2017. He was a visiting researcher at Columbia University in 2007-2008 and 2016-2017. His research areas are computational photography and computer vision. He received an ACM VRST 2003 Honorable Mention Award, the IPSJ Nagao Special Researcher Award in 2012, an ICCP 2016 Best Paper Runner-up Award, and the SSII Takagi Award in 2016. |
11:15 | Break |
11:40 | Workshop paper presentations: #1 "Pseudo-Dolly-In Video Generation Combining 3D Modeling and Image Reconstruction" by Hidehiko Shishido, Kazuki Yamanaka, Yoshinari Kameda, and Itaru Kitahara; #2 "An Instant See-Through Vision System Using a Wide Field-of-View Camera and a 3D-Lidar" by Kei Oishi, Shohei Mori, and Hideo Saito |
12:30 | Lunch Break |
13:30 | Oral presentations from ISMAR 2017 posters: #1 "CoVAR: Mixed-Platform Remote Collaborative Augmented and Virtual Realities System with Shared Collaboration Cues" by Thammathip Piumsomboon, Arindam Dey, Barrett Ens, Gun Lee, and Mark Billinghurst; #2 "Augmented Things: Enhancing AR Applications leveraging the Internet of Things and Universal 3D Object Tracking" by Jason Rambach, Alain Pagani, and Didier Stricker |
14:10 | Break |
14:30 | Oral presentations from ISMAR 2017 posters: #3 "Halo3D: A Technique for Visualizing Off-Screen Points of Interest in Mobile Augmented Reality" by Patrick Perea, Denis Morand, and Laurence Nigay; #4 "Position Estimation of a Strongly Occluded Object by Using an Auxiliary Point Cloud in Occluded Space" by Shinichi Sumiyoshi; #5 "Depth Map Interpolation Using Perceptual Loss" by Ilya Makarov, Vladimir Aliev, Olga Gerasimova, and Pavel Polyakov |
15:30 | Break |
16:00 | Workshop paper presentations: #3 "Diminished reality for privacy protection by hiding pedestrians in motion image sequences using Structure from Motion" by Kentaro Yagi, Kunihiro Hasegawa, and Hideo Saito; #4 "Semantic Object Selection and Detection for Diminished Reality based on SLAM with Viewpoint Class" by Yoshikatsu Nakajima, Shohei Mori, and Hideo Saito |
17:00 | Closing |
Full Professor, Department of Information and Computer Science, Keio University. He received his Ph.D. in Electrical Engineering from Keio University, Japan, in 1992. He is a member of the ISMAR Steering Committee and a Special Advisory Chair of ISMAR 2017.
JSPS Research Fellow (PD) at Keio University. He received his Ph.D. in Engineering from Ritsumeikan University, Japan, in 2016.
Lecturer at the College of Information Science and Engineering, Ritsumeikan University. He received his Ph.D. from the Nara Institute of Science and Technology, Japan, in 2006.
Technical Committee on Plenoptic Time-Space Technology (PoTS), IEICE-ISS
This workshop is also co-organized by the JST CREST projects "A Framework PRINTEPS to Develop Practical Artificial Intelligence" and "Analyzing Human Attention and Behavior via Collective Visual Sensing for the Creation of Life Innovation".