General Project Areas
I am interested in all areas of immersive and non-immersive virtual reality systems. This includes real-time rendering, model generation, systems programming, distributed application building, collaborative interaction, usability engineering for VE systems, and presence.
I will potentially supervise projects in the following areas:
- Augmented & mixed reality systems
- Real-time computer graphics
- Telepresence robots
- Experiments on single user interaction in HMD and/or CAVE
- Experiments on presence
- Implementing and testing collaborative VE applications
- Component-ware for VE systems
- Implementation of 3D applications using Web3D technologies
- Integration of haptic, audio and visual presentation in the CAVE
- Game-focussed projects
I also presented some project suggestions from Oculus and the BBC. These can be found on Moodle as they are confidential to UCL.
Specific Project Suggestions
The suggestions at the top are related to ongoing research projects.
~Zero Latency Virtual Reality
At UCL we have built an ultra-low latency virtual reality system (see Sebastian Friston’s page). We have several project suggestions around improving this system or evaluating it.
- Evaluate task performance when latency can be altered at very low levels.
- Measure the impact of rendering artefacts on the display (e.g. straight lines are not straight on the panel if you are moving your head).
- Evaluate the system with different types of display panel (e.g. OLED, CRT, G-Sync).
- Compare this to alternative compensation methods such as asynchronous time-warping or space-warping.
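One way to run the latency-alteration experiments above is to inject controlled delay by buffering tracking samples before they reach the renderer. A minimal sketch in Python (the class name and frame-based granularity are assumptions; a real system would delay by time, at sub-millisecond resolution):

```python
from collections import deque

class LatencyInjector:
    """Delay head-tracking samples by a fixed number of frames so that
    extra latency can be injected under experimental control."""

    def __init__(self, delay_frames):
        self.buf = deque()
        self.delay = delay_frames  # frames of added latency (assumed granularity)

    def push(self, sample):
        """Submit the newest tracking sample; return the delayed one."""
        self.buf.append(sample)
        if len(self.buf) > self.delay:
            return self.buf.popleft()
        return self.buf[0]  # before the buffer fills, repeat the oldest sample

injector = LatencyInjector(delay_frames=2)
delayed = [injector.push(s) for s in [0, 1, 2, 3, 4]]
```

The same buffer, parameterised per condition, lets an experiment sweep latency while everything else in the pipeline is held constant.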
A few years ago we built the AR Rift system, and we have now built a second version for the Oculus CV1. This is a prototype video see-through augmented reality system. Unlike optical see-through systems such as HoloLens, it allows occlusion between real and virtual objects. There are a few project suggestions:
- Run presence experiments using this system to test how we alter perception of the real world.
- Use the stereo input (plus a depth camera) to do a better job of occluding the user's hands.
- Do latency compensation on the video.
Immersive 3D Editing
A frequently requested project is to build an immersive editor. There are now many such editors or prototypes (e.g. Oculus Medium, Tilt Brush). Nothing yet comes close to the simplicity and vision of the concept video World Builder by Bruce Branit. If you want to work in this area, I would think hard about doing something creative with the user interface, or about using templates or machine-learning techniques to make editing very fast.
Online 3D Editing
Our Open3D project is pioneering online editing of massive 3D models. One of our demonstrations is an online editable model of London. We are interested in developing new user-interfaces for editing. You could work on one of the following:
- Build an augmented-reality editor. Allow users on the streets to capture photographs and mark up buildings. From these inputs our existing backend server could then generate full 3D models that the user can compare to the model.
- Write a geo-location game to capture data for the model. Not all users will want to edit models directly, so write a game that facilitates data capture instead.
One of our large research efforts in the next couple of years will be to develop next-generation telepresence systems where a person in a virtual reality system controls a remote robot that can physically interact with objects. We have done preliminary work on this in a couple of funded projects and also an MSc project last year.
There are several related proposals:
- Build an ultra-low latency stereo video telepresence system. That is, use a stereo camera mounted on our Kuka robot arm, controlled directly by tracking from an HMD. We can model the end-to-end latency of the system and re-project the video so that it is apparently low latency. Stereo video telepresence dates back to the 1960s and was in fact an inspiration for Ivan Sutherland to develop the first head-mounted display. However, previous systems have had >50 ms latency; we can now make the latency extremely low for a limited range of movement (<1 ms apparent latency should be possible).
- We have a new telepresence robot, a Robothespian RT4, one of the first of its kind. Can we teleoperate this robot, mapping the gestures of someone in a VR system onto the robot, while also displaying to the user in the VR system the forces that the robot is experiencing? Unlike the previous robot, the RT4 can pick up and move small objects.
- Create a mobile panoramic capture system to enable remote assistance. That is, a person in a VR system can collaborate with a remote person via real-time panoramic video captured from a mobile robot. We are inspired by a new system from the University of Tokyo/Sony Japan called “JackIn Head”, but think that a separate mobile capture platform is more useful than a head-mounted camera system. We have a couple of different platforms that can carry cameras, including an AnyBot.
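To give a flavour of the re-projection in the first proposal: if the end-to-end latency is known, the head pose at scan-out can be predicted and the received video shifted to match. A minimal sketch of the rotation-only case, assuming a pinhole camera and purely horizontal (yaw) motion; the function name and default parameters are illustrative:

```python
import math

def timewarp_shift_px(yaw_render, yaw_scanout, fov_deg=90.0, width_px=1080):
    """Horizontal pixel shift that re-projects an already-captured frame to
    account for yaw accumulated between capture time and display scan-out
    (rotation-only, pinhole-camera approximation)."""
    # focal length in pixels from the horizontal field of view
    focal_px = (width_px / 2) / math.tan(math.radians(fov_deg) / 2)
    return focal_px * math.tan(yaw_scanout - yaw_render)

# e.g. 1 degree of yaw accumulated over the system's end-to-end latency
shift = timewarp_shift_px(0.0, math.radians(1.0))
```

A full system would do this per scan line (or with a homography for arbitrary rotation), but the calculation above captures why a small, well-modelled latency can be hidden almost entirely.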
These projects are ideal for someone interested in combining software engineering with graphics, networking and user interfaces.
Implement a prototype collaborative virtual environment on top of my Yther research prototype. Yther is a proposal of how to separate concerns in a large-scale distributed virtual environment. If you are intrigued by concepts such as Gibsonian Metaverses, or have recently read Ready Player One, this project might be for you.
We have an ongoing project with HSSMI/Ford about using VR to train assembly tasks. We are looking for students to help with a virtual reality demonstrator for training someone to do part of the assembly. This is a chance to work with new consumer virtual reality systems. The main development will be user interaction techniques for manipulating objects with physical simulation (e.g. joining objects, screw threads, etc.).
Most of the current interest in systems such as Oculus Rift, VIVE and Project Morpheus revolves around their use for games. However, much of what goes on in games is the social interaction around the games. When you can be represented as an expressive avatar inside the game, much of the interaction might revolve around how you are represented and can express emotion. There are various strategies for this project, such as taking a game and adding expressive avatars to it, in order to see how this affects game-play, or creating a game where expressive interaction is necessary in order to succeed.
I have a variety of confidential project suggestions around 3D printing: new things to do with current high-end customisable 3D printers. Experiment with new printing strategies, new 3D modelling tools or tools to assess prints.
We have a beautiful audio simulation of water that runs in the CAVE. We would like someone to build an interesting water simulator that allows a user to dive in and out of the water. This is more of a creative project, suitable for someone who wants to learn more about OpenGL and real-time graphics.
We are interested in novel displays of various types. In this project we would build a new display type and demonstrate it. This project would suit someone who itches to make physical artefacts that integrate a broad range of camera and display techniques. We would use the new Engineering MakeSpace. You should not be afraid of using a saw or a soldering iron! The project could go in a number of directions but some suggestions:
- Build an intangible mist-based projection system. Can you turn this type of display (Fog Projection by Yagi et al. at SIGGRAPH Asia 2011) into an interactive display?
- Dynamic camera/projector systems. Can you track a moving or deforming surface sufficiently well to project on to it?
- Telepresence display using projection on to physical mannequins. Can you take multiple video streams of a person’s head and create a realistic impression of them by projecting on to a proxy head (e.g. a shop dummy’s head)?
- Lenticular display systems. We have a 16 view lenticular display. Can you make it interactive?
All ideas along these lines will be entertained as potential projects.
Content-Based Routing for Collaborative Virtual Environments (MSc CS/MSc CGVI/MSc NS)
Massively multi-player games such as World of Warcraft or Second Life use client-server networking. There are several reasons for this, not least of which is the security of the online world. However, for fast-paced interaction, peer-to-peer networking is much better suited. This project will build an experimental messaging system for a game that uses a peer-to-peer network, based on the kind of overlay networks found in file-sharing applications. The key will be to automatically re-connect peers based on interest.
Note that the programming aspect of this project will need C/C++ and some familiarity with Linux as well as Windows. There is little graphics programming in this project.
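The core of interest-based reconnection can be stated simply: peers hold direct links only while their avatars fall inside each other's area of interest, and the overlay is recomputed as avatars move. A toy sketch (in Python for brevity, though the project itself would be in C/C++; the circular interest model and all names are assumptions):

```python
import math

def interest_links(peers, radius):
    """Return the set of peer pairs that should keep a direct connection:
    here, pairs whose avatars lie within `radius` of each other (a toy
    circular area-of-interest model)."""
    links = set()
    ids = sorted(peers)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            (ax, ay), (bx, by) = peers[a], peers[b]
            if math.hypot(ax - bx, ay - by) <= radius:
                links.add((a, b))
    return links

# three avatars: A and B are near each other, C is far away
positions = {"A": (0, 0), "B": (3, 4), "C": (100, 100)}
links = interest_links(positions, radius=10)
```

Re-running this as positions change, and diffing the result against the current link set, yields the connect/disconnect events the overlay must issue; the interesting engineering is doing that incrementally and at scale.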
Tessellations of Space (BSc/MSci/MSc CGVI)
Much is known about tessellations of the plane with small sets of tiles. Famous tilings include the Penrose tilings and the Eternity puzzle. A lot less is known about tilings in 3D. Working with the puzzle designer and geometer Robert Reid, we will explore some simple classes of geometric shape that fill space in interesting ways.
This project is suitable for a student with a mathematical inclination, but the work is mainly 3D visualisation and model construction in a CAD package or Matlab. There may be an opportunity to make some physical models on a rapid-prototyping machine.
3D Interaction with Eye Gaze (BSc/MSci/MSc CGVI)
In the CAVE we have a 3D eye-tracker built into one set of shutter glasses. In this project, you will create a system for interacting with the virtual world using eye gaze. This involves selecting and manipulating objects with eye gaze, but also communicating with other people through eye gaze. This might involve, for example, annotating objects with recent eye-gaze behaviour.
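The selection half of this can start from a simple gaze ray pick: intersect the ray reported by the eye tracker with object bounding spheres and take the nearest hit. A minimal sketch, with the data layout and all names as assumptions:

```python
import math

def gaze_pick(origin, direction, objects):
    """Return the name of the nearest object whose bounding sphere is hit
    by the gaze ray. `direction` must be a unit vector; `objects` maps
    name -> (centre, radius)."""
    best, best_t = None, math.inf
    for name, (centre, radius) in objects.items():
        oc = [c - o for c, o in zip(centre, origin)]
        t = sum(a * b for a, b in zip(oc, direction))  # distance along the ray
        if t < 0:
            continue  # sphere centre is behind the viewer
        nearest = [o + t * d for o, d in zip(origin, direction)]
        dist2 = sum((c - p) ** 2 for c, p in zip(centre, nearest))
        if dist2 <= radius ** 2 and t < best_t:
            best, best_t = name, t
    return best

scene = {"teapot": ((0, 0, 5), 1.0), "lamp": ((3, 0, 5), 0.5)}
picked = gaze_pick((0, 0, 0), (0, 0, 1), scene)  # gazing straight ahead
```

In practice gaze is noisy, so dwell-time filtering or scoring candidates over a window of recent fixations would sit on top of a raw pick like this.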
My students have done the following types of project in the past:
- AR for make-up
- New distributed server systems for large-scale multi-player games
- Collaboration between HMD and tablet
- Low latency telepresence
- Impact of locomotion techniques on memory
- Streaming OpenGL
- Procedural city generation
- Procedural facade generation
- Shape grammars for building layout
- Novel marker-based tracking system for AR
- Music synthesizer for a multi-touch table
- Image-based imposters for city models
- 3D interaction on a multi-touch table
- Motor-imagery-based brain-computer interfaces
- Active appearance models for face tracking
- Image-based modelling of building facades
- Reproducing the rubber hand illusion in the CAVE
- Brain computer interface for a multi-touch table
- Brain computer interface for the CAVE
- Procedural clutter
- Animating building destruction
- Automatic camera placement
- Real-time gesture capture from video for the CAVE
- Football game
- Game level layout editor
- Online multi-player games
- Relief map rendering for cities
- Latency impact in networked games
- Presence in augmented reality experiences
- 3D Widgets for the CAVE
- Effective occlusion culling
- Physics simulation on the ReaCTor
- Mixed-Reality Games using GPS
- Time critical rendering for PDAs
- Simulating water effects on surfaces
- Shape grammars for building facade generation
- 3D Departmental Browser Service
- Computing Texture Maps with Photon Tracing
- Visibility for 3D Games
- Virtual Kitchen Designer
- A Fast Splat Based Volume Renderer
- Multicast Support for Massive Scale Games
- Extensible Online 3D Gaming Worlds
- Extensible 2D Vector Shape Editor
- Better Buildings (Making 3D Models of London from Maps)
- 3D Graph Layout Visualisations
- A Virtual Tate Gallery
- Virtual Paddington Station
- Streaming Download Server for 3D Games
- Partitioning Strategies for Online Games
- Handheld Maps for Navigation in the ReaCTor
- Experiments on Presence in the ReaCTor
- Visualisation and Population of New Computer Science Building
- Scene Graph Partitioning Strategies for Graphical Load Balancing
- Client/Server Streaming of 3D Graphical Objects
- Partitioning scheme for Network Games
- PDA Interfaces for the CAVE
- Vision Based Modelling of Buildings from Photographs
- Music Visualisation
- Occlusion Culling for Massive Urban Models
- Imposters for Massive Urban Models
- Animation of People Inside Buildings
- Evolving Garden
- Visibility computation inside buildings
- Cluster rendering for the CAVE
- Cellular texturing for buildings
- Worlds in miniature on a hand-held computer
- Vision-based modelling
- Mixed-Reality Interfaces