This intensive course will be held in four blocks. It is limited to 20 participants (first-come, first-served).
Mobile devices have grown into a major computing platform that impacts everyday life. The latest generation of devices offers powerful graphics and computing capabilities together with a rich set of input sensors, including cameras, GPS, and orientation sensors. This course will introduce concepts, methodologies, and tools for developing interactive 3D graphics applications on mobile devices that connect to the user's real-world surroundings and location. After successfully completing the course, you will be able to develop your own interactive 3D mobile application.
The course will focus on the technologies and concepts required to build mobile applications using 3D graphics and computer vision algorithms on mobile devices. The following topics will be discussed in the course: introduction to mobile device programming with iOS and Android, introduction to OpenGL ES 2.0, basics of position tracking with mobile devices, accessing and using sensor information from GPS and orientation sensors, visual tracking methods for camera input, and introduction to a 3D visual tracking API (QCAR).
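To give a flavor of the position-tracking topic above, the following is a minimal sketch (not course material, and all coordinates are made-up examples) of computing the distance and bearing from the user's GPS position to a point of interest -- the kind of calculation an augmented reality display needs in order to place a marker relative to the user.

```java
// Sketch: great-circle distance and bearing between two GPS fixes.
// An AR overlay can combine the bearing with the device's compass
// heading to decide where on screen a point of interest should appear.
public class GeoUtil {
    static final double EARTH_RADIUS_M = 6371000.0; // mean Earth radius

    // Great-circle distance via the haversine formula, in meters.
    public static double distanceMeters(double lat1, double lon1,
                                        double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
    }

    // Initial bearing from point 1 to point 2, degrees clockwise from north.
    public static double bearingDegrees(double lat1, double lon1,
                                        double lat2, double lon2) {
        double p1 = Math.toRadians(lat1), p2 = Math.toRadians(lat2);
        double dLon = Math.toRadians(lon2 - lon1);
        double y = Math.sin(dLon) * Math.cos(p2);
        double x = Math.cos(p1) * Math.sin(p2)
                 - Math.sin(p1) * Math.cos(p2) * Math.cos(dLon);
        return (Math.toDegrees(Math.atan2(y, x)) + 360.0) % 360.0;
    }

    public static void main(String[] args) {
        // Hypothetical example coordinates for illustration only.
        double d = distanceMeters(47.069, 15.450, 47.071, 15.438);
        double b = bearingDegrees(47.069, 15.450, 47.071, 15.438);
        System.out.printf("distance %.0f m, bearing %.0f deg%n", d, b);
    }
}
```

On a real device, the two fixes would come from the platform's location API rather than hard-coded values; the math itself is platform-independent.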
Hands-on training will be a large part of the course. Throughout the course, teams of two students will develop a set of example applications on real devices. The applications will revolve around implementing augmented reality user interfaces and displays, which require integrating visual and sensor information with 3D graphics. The APIs and SDKs will be taught in lab sessions where the basics for developing your own application will be demonstrated. Bring your own phone and build the best demo application of the lab!
Each of the four blocks will consist of one or two basic lectures, one or two lab lectures, and a presentation session in which students present the results they have developed since the previous block.