Recent improvements in 3D sensing technologies have led to a remarkable increase in the use of 3D data. 3D information has found tremendous use in Autonomous Driving, 3D Mapping, Quality Control, Drones and UAVs, and Robot Guidance, to name but a few application domains. These applications typically fuse different modalities such as range images, stereo triangulations, structure-from-motion reconstructions, and laser scans. A common, flexible representation underlying all of these is the point cloud. Thus, in many applications that rely on multiple 3D acquisitions, good registration of point clouds is a prerequisite. Yet, when unstructured dense scans of large scenes are of concern, establishing the alignment in a fully automatic manner is far from trivial -- a difficulty that is exacerbated when the scans in question may undergo locally non-rigid deformations due to miscalibration of the capturing device or object movement.
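To make the registration primitive above concrete, here is a minimal sketch of the closed-form rigid alignment step (the Kabsch/Procrustes solution at the heart of ICP-style pipelines), assuming correspondences between the two clouds are already known; all names are illustrative and this is not any particular speaker's method:

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q.

    P, Q: (N, 3) arrays of corresponding points. This is the closed-form
    Kabsch/Procrustes step used inside ICP-style registration loops.
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)       # centroids
    H = (P - cP).T @ (Q - cQ)                     # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against reflections (det = -1 solutions)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

# Toy check: recover a known rotation about z and a translation.
rng = np.random.default_rng(0)
P = rng.standard_normal((100, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([1.0, -2.0, 0.5])
Q = P @ R_true.T + t_true
R, t = rigid_align(P, Q)
```

In real large-scale scans the correspondences are of course unknown; establishing them robustly despite symmetries and self-similarities is exactly the hard part this workshop addresses.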
In such a complex scenario, researchers are now taking on the challenge of accurately auto-stitching tens of millions of structured or unstructured points that contain symmetries and self-similarities and that do not admit scan-order constraints. This workshop is dedicated to exploring the theoretical and practical aspects of obtaining multi-view global alignment and registration of scans captured by any 3D data modality. Our main objective is to gather industry experts, academic researchers, and practitioners of 3D data acquisition and scene reconstruction into a lively environment for discussing the methodologies and challenges raised by the emergence of large-scale 3D reconstruction applications; as a targeted-topic venue, this workshop offers participants a unique opportunity to network with a diverse but focused research community.
The official call for papers can be found here: Download Call For Papers (CfP) PDF
Our invited speakers come from top research institutions and companies around the globe, and are leading figures in the topics covered by the workshop. This diverse selection will prove valuable for academic as well as industry researchers and practitioners. Both practical and theoretical aspects of multiview 3D computer vision will be covered by the invited lecturers.
* With these dates, we hope to allow a sufficient time window for re-submission of the unlucky ICCV papers.
We have a packed and exciting day ahead of us!
Good morning everybody! We gladly welcome you to our first workshop, MVR3D 2017.
Christopher Zach (PhD 2007, TU Graz) is currently a principal research scientist in the computer vision group at Toshiba Research Europe. Prior to that, he held post-doctoral and senior researcher positions at UNC-Chapel Hill (2008–2009), ETH Zürich (2009–2011) and Microsoft Research Cambridge (2012–2014). His main research interests are structure from motion, dense 3D reconstruction from images, convex methods in computer vision and real-time computer vision.
In the first part of my presentation, I review several methods to detect false positive matches between images before they lead to distorted 3D models, which are difficult to rectify afterwards. These false positive matches are due to "perceptual aliasing" and occur frequently in man-made environments. Several complementary cues allow us to identify these false positives. I also describe situations where perceptual aliasing can actually help to improve the reconstructed model. In the second part, I will describe recent efforts to bypass several stages of a typical 3D reconstruction pipeline, and to push the envelope of initialization-free bundle adjustment. Empirical evidence (and early-stage theoretical understanding) strongly suggests that the secret to obtaining faithful 3D models lies mainly in using mostly outlier-free correspondences. Thus, the often theoretically unsatisfactory steps of obtaining a sufficiently good initial 3D model (and camera poses) from pairwise image matches may not be necessary in many cases, and one can directly apply a suitable version of bundle adjustment from random initial values.
Vladlen Koltun is the Director of the Intel Visual Computing Lab. He received a PhD in 2002 for new results in theoretical computational geometry, spent three years at UC Berkeley as a postdoc in the theory group, and joined the Stanford Computer Science faculty in 2005 as a theoretician. He switched to research in visual computing in 2007 and joined Intel as a Principal Researcher in 2015 to establish the Visual Computing Lab.
We are proudly serving Italian coffee!
We look forward to novel and exciting oral presentations of a subset of the accepted papers. A slot of 15 minutes is allocated per presentation, and we advise speakers to reserve 3 minutes for questions.
Andrew Fitzgibbon is a scientist with HoloLens at Microsoft, Cambridge, UK. He is best known for his work on 3D vision, having been a core contributor to the Emmy-award-winning 3D camera tracker “boujou” (www.boujou.com) and Kinect for Xbox 360, but his interests are broad, spanning computer vision, graphics, machine learning, and even a little neuroscience. He has published numerous highly-cited papers, and received many awards for his work, including ten “best paper” prizes at various venues, the Silver medal of the Royal Academy of Engineering, and the BCS Roger Needham award. He is a fellow of the Royal Academy of Engineering, the British Computer Society, and the International Association for Pattern Recognition. Before joining Microsoft in 2005, he was a Royal Society University Research Fellow at Oxford University, having previously studied at Edinburgh University, Heriot-Watt University, and University College, Cork.
Sparse Gauss-Newton optimization, or “Bundle Adjustment”, is a crucial tool of 3D reconstruction. It is considered common knowledge that bundle adjustment has a small basin of convergence and needs a good initialization. For example, there are well known benchmark sequences where initialization from a random starting point using any of the mainstream bundle adjustment packages leads to essentially 0% chance of convergence to the known best optima. However, this situation is changing. In 2011, Okatani and others re-introduced the VarPro method to matrix factorization (after some “dark ages”, for which I take some blame for promulgating in 2005). This conferred remarkable improvements on the basin of convergence of that problem, which is, of course, the same problem as affine bundle adjustment. More recently, my student John Hong, working with Zach, Cipolla, and me, has shown how to bring the advantages of VarPro to projective bundle adjustment. I shall describe the main components of this work, and then speculate on future directions.
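The VarPro idea referenced in the abstract can be illustrated on the matrix-factorization problem it mentions (which is, as noted, affine bundle adjustment). The sketch below is only a hedged toy illustration of the variable-elimination step, not the projective method of Hong et al.: for each candidate factor U, the other factor V is solved in closed form, so the objective depends on U alone and the problematic coupling between the two blocks of unknowns disappears.

```python
import numpy as np

def varpro_objective(U, M):
    """Reduced objective f(U) = min_V ||M - U V^T||_F^2.

    V is eliminated in closed form (the Variable Projection idea): the
    optimal V^T solves the least-squares system U V^T ~= M, so f measures
    the energy of M outside the column space of U.
    """
    Vt = np.linalg.lstsq(U, M, rcond=None)[0]   # optimal V^T for this U
    return np.linalg.norm(M - U @ Vt) ** 2, Vt

# Toy rank-2 problem: the reduced objective is (near) zero for any U
# spanning the true column space, and positive for a random subspace.
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 2))
B = rng.standard_normal((20, 2))
M = A @ B.T                                     # exactly rank 2
f_true, _ = varpro_objective(A @ rng.standard_normal((2, 2)), M)  # same span
f_rand, _ = varpro_objective(rng.standard_normal((30, 2)), M)     # random span
```

Because f depends only on the subspace spanned by U (not on its scaling or basis), the reduced landscape is much better behaved, which is the intuition behind the enlarged basin of convergence discussed in the talk.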
The smell of that delicious Italian cuisine is irresistible.
Konrad Schindler received the Diplomingenieur (M.Tech) degree in photogrammetry from Vienna University of Technology, Austria, in 1999, and a PhD in computer science from Graz University of Technology, Austria, in 2003. He has worked as a photogrammetric engineer in private industry, and has held researcher positions at Graz University of Technology, Monash University, and ETH Zurich. He became assistant professor of Image Understanding at TU Darmstadt in 2009, and since 2010 has been a tenured professor of Photogrammetry and Remote Sensing at ETH Zurich. His research interests lie in the fields of computer vision, photogrammetry, and remote sensing, with a focus on image understanding and 3D reconstruction. Konrad was president of ISPRS Technical Commission III “Photogrammetric Computer Vision and Image Analysis” for the period 2012-2016. He has received several awards, including the U. V. Helava Award 2012 for the best paper published in the ISPRS Journal 2008-2011 (with A. Ess, B. Leibe and L. Van Gool), and an honorable mention for the Marr Prize at ICCV 2013 (with C. Vogel and S. Roth).
Those heated discussions and further networking...
Dr. Alex Bronstein was born in 1980. He received the B.Sc. and M.Sc. degrees (both summa cum laude) from the Department of Electrical Engineering in 2002 and 2005, and the Ph.D. from the Department of Computer Science, Technion, in 2007. Until 2016, Dr. Alex Bronstein was an Associate Professor in the School of Electrical Engineering at Tel Aviv University. In 2016, he joined the Department of Computer Science at the Technion, also as an Associate Professor. His main research interests are theoretical and computational methods in metric geometry and their application to problems in computer vision, pattern recognition, shape analysis, computer graphics, imaging and image processing, and machine learning. He has authored over 120 publications in leading journals and conferences, over two dozen patents and patent applications, and the book Numerical Geometry of Non-Rigid Shapes (published by Springer). His h-index is 38. Alex Bronstein is an alumnus of the Technion Excellence Program and the Academy of Achievement, and a member of the IEEE. His research has been recognized by numerous awards, including the Kasher prize (2002), the Thomas Schwartz award (2002), the Hershel Rich Technion Innovation award (2003), the Gensler counter-terrorism prize (2003), the Copper Mountain Conference on Multigrid Methods Best Paper award (2005), the Adams Fellowship (2006), the Krill Prize by the Wolf Foundation (2012), and a European Research Council (ERC) Starting Grant (2013). Highlights of his research were featured on CNN, in SIAM News, and in Prof. Guillermo Sapiro's Science Lecture "One small step for Gromov, one giant leap for shape analysis", given in Oslo on the occasion of awarding Prof. Mikhail Gromov the 2009 Abel Prize, considered the "Nobel of Math". Besides scientific awards, Alex received the Technion Humanities and Arts Department prize (2001) for the translation of Shakespearean sonnets into Italian.
He co-chaired the IEEE International Workshop on Non-rigid Shapes and Deformable Image Alignment (NORDIA) in 2008-2011 and the International Conference on Scale Space and Variational Methods in Computer Vision (SSVM) in 2011, served as the program chair of the Eurographics Workshop on 3D Object Retrieval (3DOR) in 2012 and as area chair of the IEEE Asian Conference on Computer Vision (ACCV) in 2010, and has participated in the program committees of major conferences in his field. Dr. Bronstein held visiting appointments at Politecnico di Milano (2008), Stanford University (2009), Verona University (2010, 2014), and Duke University (from 2014). In addition to his academic activities, he was a co-founder of the Silicon Valley startup Novafora, Inc., where he served from 2004 to 2009 as a scientist and Vice President of video technology, leading a group of researchers and engineers in developing novel Internet-scale video analysis technologies. Dr. Bronstein was one of the inventors and developers of the 3D sensing technology at the foundation of the Israeli startup Invision, subsequently acquired by Intel Corporation in 2012 and distributed under the RealSense brand.
The need to compute correspondence between three-dimensional objects is a fundamental ingredient in numerous computer vision and graphics tasks. In this talk, I will show how several geometric notions related to the Laplacian spectrum provide a set of tools for efficiently calculating correspondence between deformable shapes. I will also show how this framework combined with recent ideas in deep learning promises to bring correspondence problems to new levels of accuracy.
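As a small, hedged illustration of the Laplacian-spectrum machinery behind this line of work (this is a generic toy sketch, not the speaker's specific method): the low-frequency eigenvectors of a shape's graph Laplacian form a pose-invariant embedding in which correspondences between deformable shapes can be searched.

```python
import numpy as np

def laplacian_eigenbasis(W, k):
    """First k eigenpairs of the unnormalized graph Laplacian L = D - W.

    W: symmetric (N, N) adjacency/weight matrix approximating the shape's
    intrinsic geometry. The low-frequency eigenvectors give a deformation-
    invariant spectral embedding of the vertices.
    """
    L = np.diag(W.sum(axis=1)) - W
    vals, vecs = np.linalg.eigh(L)   # eigenvalues in ascending order
    return vals[:k], vecs[:, :k]

# Toy example: a path graph of 10 vertices (a "1D shape").
N = 10
W = np.zeros((N, N))
for i in range(N - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
vals, vecs = laplacian_eigenbasis(W, 3)
# The first eigenvalue is 0 (constant eigenvector); the second, "Fiedler"
# eigenvector varies monotonically along the path, recovering its intrinsic
# ordering independently of how the shape is embedded in space.
```

On real meshes one would use a discretization of the Laplace-Beltrami operator instead of this simple graph Laplacian, but the principle — matching shapes through their shared low-frequency eigenbasis — is the same.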
We are proudly serving Italian coffee!
Radu Patrice Horaud holds the position of director of research at INRIA Grenoble Rhône-Alpes, France. He is the founder and director of the PERCEPTION team. Radu’s research interests cover computational vision, audio signal processing, audio-visual scene analysis, machine learning, and robotics. He is the author of over 200 scientific publications. Radu pioneered work in computer vision using range data (or depth images) and developed a number of principles and methods at the crossroads of computer vision and robotics. In 2006, he started to develop audio-visual fusion and recognition techniques in conjunction with human-robot interaction. He is an area editor for CVIU (Elsevier), a member of the advisory board of the IJRR (Sage), and an associate editor for the IJCV (Kluwer-Springer). In 2001 he was program co-chair of the IEEE Eighth ICCV, and in 2015 he was program co-chair of the 17th ACM International Conference on Multimodal Interaction (ICMI’15). Radu Horaud was the scientific coordinator of the European Marie Curie network VISIONTRAIN (2005-2009) and of the STREP projects POP (2006-2008) and HUMAVIPS (2010-2013), and the principal investigator of a collaborative project between INRIA and Samsung’s Advanced Institute of Technology (SAIT) on computer vision algorithms for 3D television (2010-2013). In 2013 he was awarded an ERC Advanced Grant for his five-year project VHIA (2014-2019). In 2015 he received a three-year grant (jointly with Florence Forbes) from the Xerox University Affairs Committee.
Andreas Nüchter is professor of computer science (telematics) at the University of Würzburg. Before summer 2013, he headed the Automation group at Jacobs University Bremen as an assistant professor. Prior to that, he was a research associate at the University of Osnabrück. Further past affiliations include the Fraunhofer Institute for Autonomous Intelligent Systems (AIS, Sankt Augustin), the University of Bonn, from which he received the diploma degree in computer science in 2002 (best paper award by the German society of informatics (GI) for his thesis), and Washington State University. He holds a doctorate degree (Dr. rer. nat.) from the University of Bonn. His thesis was shortlisted for the EURON PhD award. Andreas works on robotics and automation, cognitive systems and artificial intelligence. His main research interests include reliable robot control, 3D environment mapping, 3D vision, and laser scanning technologies, resulting in fast 3D scan matching algorithms that enable robots to perceive and map their environment in 3D, representing the pose with 6 degrees of freedom. The capabilities of these robotic SLAM approaches were demonstrated at RoboCup Rescue competitions, ELROB and several other events. He is a member of the GI and the IEEE.
Mobile laser scanning places high demands on the accuracy of the positioning systems and the calibration of the measurement system. The talk describes a general framework for calibrating mobile sensor platforms that estimates all configuration parameters for any arrangement of positioning sensors. In addition, we present a novel Continuous-time Simultaneous Localization and Mapping (SLAM) algorithm that corrects the system position at every point in time along its trajectory, while simultaneously improving the quality and precision of the entire acquired point cloud. The talk demonstrates the capabilities of these algorithms on a wide variety of datasets, ranging from underground mining to improving Google's Cartographer results.
Luc Robert graduated from Ecole Polytechnique in 1988, and obtained his PhD in 3D computer vision, in 1993, from the National Research Institute for Computer Sciences and Automatics (Inria, France). After a 1-year post-doc at Carnegie Mellon University (Pittsburgh, USA) working on 3D vision for autonomous vehicles, he joined Inria as a research scientist in 1995. In 1998 he co-founded REALVIZ, a startup bringing technology from the lab to the industry of digital content creation. During the following ten years he led the development of the REALVIZ technology and products, that became industry leaders on the markets of 3D digital effects and panoramic photography. After the acquisition of REALVIZ by Autodesk in 2008, Luc drove reality capture technology development for the 123D Catch and ReCap products. In 2016, he joined the Bentley Systems team in charge of products and technology related to reality capture.
Until next time!
MVR3D 2017 is enabled by our generous sponsors.
Your papers are in great hands! MVR3D 2017 is proudly backed by the following program committee composed of very influential computer vision researchers:
Of course this is not the entire list. Stay tuned for more.
Here are the diligent people behind MVR3D 2017.
Paper submission is through CMT.
Yes. Please check the awards section.
Our workshop is held in conjunction with the International Conference on Computer Vision (ICCV) 2017. Workshop-inclusive ICCV registrations cover the attendance fees for the workshop. You are welcome to participate!
Follow the recent happenings here.