General Project Areas
I am interested in all areas of immersive and non-immersive virtual reality systems. This includes real-time rendering, model generation, systems programming, distributed application building, collaborative interaction, usability engineering for VE systems, and presence.
I will potentially supervise projects in the following areas:
- Augmented & mixed reality systems
- Real-time computer graphics
- Telepresence robots
- Social virtual reality
- Experiments on single user interaction in HMD and/or CAVE
- Experiments on presence, immersion and embodiment
- Game-focused projects (but I will try to dissuade you from actually writing a game)
Most of the project suggestions are written as individual projects, but many of them can work as group projects as well. In particular, the projects that involve system building (e.g. telepresence robot, city-scale AR and metaverse protocols) can be split into several complementary aspects.
Note that most of the programming-oriented projects require the students to have taken or be taking the relevant graphics course options (COMP0027 Computer Graphics). If a project specifies work in the ReaCTor (CAVE) or one of the head-mounted displays (Oculus or Vive) you will almost certainly be using Unity and C#.
Not all the projects require programming.
Specific Project Suggestions
The suggestions at the top are related to ongoing research projects.
VR Controller Design
I have some confidential project suggestions around controller design. Would suit someone interested in procedural design and/or mechatronics. Please contact me.
Games as Experiments
I have some confidential project suggestions around competitive games as experimental platforms. Please contact me.
Social VR Platform
My top priority for projects in the academic year 2020/2021 is building tools, or running experiments, on top of our new Social VR platform. Topics can include:
- Server extensions, such as implementations of scalability algorithms (individual or group project, programming focussed)
- Building social VR experiments (several independent individual projects, but could operate as a team with some common tools; suitable for MSc HCI students as well as others)
- Adding agent-based simulations for non-player characters (some AI modelling, probably a group project)
- Adding functionality for better personal avatars (e.g. based on facial tracking, voice tracking, physiological tracking) (individual or group project, programming focussed)
AR versus VR
Prior work at UCL has shown the efficacy of virtual training for tasks that involve manipulation of objects within arm's reach. Extend this to explore different scenarios of display, including different types of augmented reality (e.g. using HoloLens, MagicLeap, Varjo XR1).
This task is mostly focused on assessing user experience and thus has little need for programming unless you want to get involved in building the user interface itself.
Smart Cities
Cities are becoming smarter, with large amounts of data now captured about the physical environment and human interactions. London has a large resource of data about things as diverse as traffic, demographics and reservoir levels. Projects in this area would pick one or more data sources and devise novel ways of visualising the data using augmented reality and/or virtual reality.
We have built a few different research prototypes over the years that use 3D models of cities as a resource for smart city applications. For example, a very early demonstration (2004) was a visualisation of crowd-sourced pollution data in 3D models procedurally generated from map data. All the resources now exist to do this at a much higher fidelity, or for on-the-street AR visualisation. For your own project, pick a data source and think about how you can visualise it to make it accessible to the general public. In particular, given that most smartphones are tracked by GPS (and other methods), and systems such as SkyHook give rough positions for many other types of device on the Internet, we can use map information to find data that is nearby or especially relevant.
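As a minimal sketch of the "nearby data" idea, the snippet below filters geo-tagged readings by great-circle distance from the user's GPS position. The function names, field names and the 250 m default radius are illustrative assumptions, not part of any existing codebase:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS84 points, in metres."""
    R = 6371000.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def nearby(readings, user_lat, user_lon, radius_m=250.0):
    """Keep only the readings within radius_m of the user's position."""
    return [r for r in readings
            if haversine_m(r["lat"], r["lon"], user_lat, user_lon) <= radius_m]
```

An AR client could call something like `nearby(...)` each time the GPS fix updates, and hand the surviving readings to the renderer.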
Longitudinal Social VR
We have recently run a very successful pilot study (originally an MSc HCIE project, now an ACM VRST paper) of groups of family and friends using social VR for an extended period. We would like to extend this study to a longer period, and potentially more users.
This project is more suited to someone interested in human-computer interaction and user studies. There will be little programming unless we get agreement to add some monitoring to one of the social platforms.
~Zero Latency Virtual Reality
At UCL we have built an ultra-low latency virtual reality system (see Sebastian Friston’s page). We have several project suggestions around improving this system or evaluating it.
- Evaluate performance of tasks where you can alter the latency at very low levels.
- Measure the impact of rendering artefacts on the display (e.g. straight lines are not straight on the panel if you are moving your head).
- Evaluate the system with different types of display panel (e.g. OLED, CRT, g-sync, etc.).
- Compare this to alternative compensation methods such as asynchronous time-warping or space-warping.
- Re-run Mike Meehan’s heart-rate response experiment.
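As a rough illustration of the compensation methods mentioned above, here is a yaw-only, small-angle sketch of image-space re-projection; a real time-warp applies a full 3D rotation in the compositor, and all names and parameters here are illustrative:

```python
def timewarp_shift_px(rendered_yaw_deg, latest_yaw_deg, fov_deg, width_px):
    """Horizontal pixel shift that re-projects a frame rendered for
    rendered_yaw_deg so it matches a head that is now at latest_yaw_deg.
    Small-angle, yaw-only approximation: pixels-per-degree is assumed
    uniform across the panel, which ignores projection distortion."""
    px_per_deg = width_px / fov_deg
    return (latest_yaw_deg - rendered_yaw_deg) * px_per_deg
```

The residual error after the shift is what the user actually perceives as latency, which is why very low apparent latency is achievable even when the render pipeline itself is slower.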
Hand and Forearm Scanning
Virtual reality can exploit the user’s proprioceptive knowledge of their movements: when you move your hand or head, you understand the likely visual implications. While the motion of the user is well captured, we would like to better capture the individual user so that we can represent them in VR.
To this end, as a first step, we would like to build a scanner for hands and forearms. It should capture and fit a 3D model of the hand, and capture good texture maps for the surface. The goal would be to make a 3D model that can be rigged and tracked in real-time inside a VR system.
Suitable for someone who would like to work with hardware as well as software. Will need familiarity with computer vision and real-time graphics.
Hands and Keyboards
If VR is going to be used for long periods of time, one feature we would like to enable is “normal” office work inside VR. Various demonstrations have been made of enabling typing in virtual reality by real-time modelling of the fingers, calibration of video tracking, etc.
In this project, you would develop low-latency ways of tracking the fingers and investigate how the latency or calibration of tracking affects typing speed.
There are some interesting interface issues to overcome, so this project would suit someone who was interested in a good engineering challenge that would lead to an experiment with users. High potential for being able to write up the results afterwards for a paper.
Smell Display
Probably more suited to someone who likes engineering and building physical systems (perhaps an MRes project) …
Build a smell display for a virtual reality system of your choice. The smells should be controllable from a 3D scene based on user location.
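One simple control scheme for driving scent channels from the 3D scene is distance-based falloff around emitter locations. A minimal sketch (the function name, tuple layout and linear falloff are illustrative assumptions):

```python
import math

def scent_intensity(user_pos, emitters):
    """Map the user's position to an intensity (0..1) per scent channel.
    emitters: list of (channel, position, radius) tuples; intensity falls
    off linearly from 1 at the emitter to 0 at its radius. Overlapping
    emitters on the same channel take the strongest value."""
    levels = {}
    for channel, pos, radius in emitters:
        d = math.dist(user_pos, pos)
        level = max(0.0, 1.0 - d / radius)
        levels[channel] = max(levels.get(channel, 0.0), level)
    return levels
```

The returned levels could then be mapped to, say, fan speeds or valve duty cycles in whatever hardware the project ends up building.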
Virtual Hand Illusion
We have now run a few experiments on the virtual hand illusion: the participant in an immersive virtual reality can start to believe that the virtual body that they see represents themselves, and then they treat it as their body. In particular, if you threaten the virtual body, the participant will flinch or react.
We are interested in exploring this further. We have a new result that is extremely interesting, but contradictory. We would like someone to help run a new set of experiments to build a theory about why this contradiction occurs.
This project is more suited for someone interested in experimental methodologies, data analysis and statistics. There will be very little programming.
AR Rift
A few years ago we built the AR Rift system, and we have now built a second version for the Oculus CV1. This is a prototype video augmented reality system. Unlike optical see-through systems such as HoloLens, it allows occlusion between real and virtual objects. A few project suggestions:
- Run presence experiments using this system to test how we alter perception of the real world.
- Do run-time relighting of virtual objects with respect to real lighting (we have a recent paper on this).
- Use the stereo input (plus depth camera) to do a better job of occlusion of the hands of the user.
- Do latency compensation on the video.
- Investigate integrating mobile bio-signal monitoring to measure human reaction.
Pervasive Augmented Reality
There is a wave of augmented reality systems coming. Computer vision systems already recognize objects and people; soon they will map cities. We would look at how services could be built to allow shared applications between users, and between businesses and users. Examples:
- Build a document sharing system for multiple HoloLens users, so that they can collaborate synchronously and asynchronously.
- Write a campus-scale sticky-note system that can be experienced cross-platform (HoloLens, ARKit-enabled phones, etc.).
- Write a toolkit to track and visualize smart sensor readings on the HoloLens.
Online 3D Editing
Our Open3D project is pioneering online editing of massive 3D models. One of our demonstrations is an online editable model of London. We are interested in developing new user-interfaces for editing. You could work on one of the following:
- Build an augmented-reality editor. Allow users on the streets to capture photographs and mark up buildings. From these inputs our existing backend server could then generate full 3D models that the user can compare to the model.
- Write a geo-location game to capture data for the model. Not all users will want to edit models. Write a geo-location game that facilitates data capture.
Telepresence Robots
One of our large research efforts in the next couple of years will be to develop next-generation telepresence systems in which a person in a virtual reality system controls a remote robot that can physically interact with objects. We have done preliminary work on this in a couple of funded projects and also an MSc project last year.
There are several related proposals:
- Build an ultra-low latency stereo video telepresence system. That is, use a stereo camera mounted on our Kuka robot arm, controlled directly by tracking from an HMD. We can model the end-to-end latency of the system and re-project the video so that it is apparently low latency. Stereo video telepresence dates back to the 1960s and was in fact an inspiration for Ivan Sutherland to develop the first head-mounted display. However, previous systems have had >50 ms latency; we can now make it extremely low latency for a limited range of movement (<1 ms apparent latency should be possible).
- We have a new telepresence robot, a Robothespian RT4, one of the first of its kind. Can we teleoperate this robot, mapping gesture of someone in a VR system to the robot, but also displaying to the user in the VR system the forces that the robot is experiencing? Unlike the previous robot, the RT4 can pick up and move small objects.
- Create a mobile panoramic capture system to enable remote assistance. That is, a person in a VR system can collaborate with a remote person via real-time panoramic video captured from a mobile robot. We are inspired by a new system from University of Tokyo/Sony Japan called “JackIn Head”, but think that a separate mobile capture system is more useful than a head-mounted camera system. We have a couple of different platforms that can carry cameras, including an AnyBot.
These projects are ideal for someone interested in combining software engineering with graphics, networking and user interface design.
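As a minimal illustration of direct head-tracking control, the sketch below maps HMD yaw/pitch to clamped pan/tilt commands for a camera-carrying arm. The function name and the joint limits are illustrative; the real robot has its own limits and control interface:

```python
def head_to_pan_tilt(head_yaw_deg, head_pitch_deg,
                     pan_limits=(-170, 170), tilt_limits=(-30, 60)):
    """Map HMD orientation to pan/tilt commands for a camera mount,
    clamped to the robot's joint limits (limits here are made up for
    illustration). Angles are in degrees."""
    def clamp(v, lo, hi):
        return max(lo, min(hi, v))
    return (clamp(head_yaw_deg, *pan_limits),
            clamp(head_pitch_deg, *tilt_limits))
```

In a real system this mapping would run per tracking sample, with the residual between the commanded and achieved pose handed to the video re-projection stage.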
Metaverse Protocols
Implement a prototype collaborative virtual environment on top of my Yther research prototype. Yther is a proposal for how to separate concerns in a large-scale distributed virtual environment. If you are intrigued by concepts such as Gibsonian Metaverses, or have recently read Ready Player One, this project might be for you.
Metaverse Platforms
Investigate the potential of various WebVR standards by doing an analysis of the capabilities of each platform (e.g. Project Sansar, VRChat, Improbable). For example, build a Virtual UCL in one or more of these systems to critique what types of programming are easy, and what types of feature are missing.
Expressive Avatars in Games
Most of the current interest in systems such as Oculus Rift, VIVE and Project Morpheus revolves around their use for games. However, much of what goes on in games is the social interaction around the games. When you can be represented as an expressive avatar inside the game, much of the interaction might revolve around how you are represented and can express emotion. There are various strategies for this project, such as taking a game and adding expressive avatars to see how this affects game-play, or creating a game where expressive interaction is necessary in order to succeed.
3D Printing
I have a variety of confidential project suggestions around 3D printing: new things to do with current high-end customisable 3D printers. Experiment with new printing strategies, new 3D modelling tools, or tools to assess prints.
Novel Displays
We are interested in novel displays of various types. In this project we would build a new display type and demonstrate it. This project would suit someone who itches to make physical artefacts that integrate a broad range of camera and display techniques. We would use the new Engineering MakeSpace. You should not be afraid of using a saw or a soldering iron! The project could go in a number of directions, but some suggestions:
- Build an intangible mist-based projection system. Can you turn this type of display (Fog Projection by Yagi et al. at SIGGRAPH Asia 2011) into an interactive display?
- Dynamic camera/projector systems. Can you track a moving or deforming surface sufficiently well to project onto it?
- Telepresence display using projection onto physical mannequins. Can you take multiple videos of a person’s head and create a realistic impression of them by projecting onto a proxy head (e.g. a shop dummy’s head)?
- Lenticular display systems. We have a 16 view lenticular display. Can you make it interactive?
All ideas along these lines will be entertained as potential projects.
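As a starting point for making the 16-view lenticular display interactive, here is a simplified sketch of column-interleaving views for a lenticular panel. Real panels typically interleave at sub-pixel granularity under slanted lenticules, so treat this as an assumption-laden illustration:

```python
def interleave_views(views, n_views=16):
    """Column-interleave n_views equally sized images for a lenticular
    panel: output column x is taken from view (x mod n_views), so each
    lenticule presents a different view to each eye position.
    Images are row-major 2D lists of pixel values."""
    h, w = len(views[0]), len(views[0][0])
    return [[views[x % n_views][y][x] for x in range(w)] for y in range(h)]
```

Making the display interactive then reduces to re-rendering the view set from tracked viewer positions and re-interleaving each frame.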
Tessellations of Space (BSc/MSci/MSc CGVI)
Much is known about tessellations of the plane with small sets of tiles; famous tilings include the Penrose tilings and the Eternity puzzle. A lot less is known about tiling in 3D. Working with the puzzle designer and geometer Robert Reid, we will explore some simple classes of geometric shape that fill space in interesting ways.
This project is suitable for a student with a mathematical inclination, but the work is mainly 3D visualisation and model construction in a CAD package or Matlab. There may be an opportunity to make some physical models on a rapid-prototyping machine.
3D Interaction with Eye Gaze (BSc/MSci/MSc CGVI)
In the CAVE we have a 3D eye-tracker built into one set of shutter glasses. In this project, you will create a system to interact with the virtual world using eye gaze. This involves both selecting and manipulating objects with eye gaze, and communicating with other people through eye gaze. This might involve, for example, annotating objects with recent eye gaze behaviour.
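A common way to implement gaze selection is to cast a ray from the eye along the gaze direction and pick the nearest object the ray hits. A minimal sketch using bounding spheres (all names are illustrative; in Unity this would typically be done with Physics.Raycast against colliders):

```python
import math

def pick_by_gaze(eye, gaze_dir, objects):
    """Return the name of the nearest object whose bounding sphere the
    gaze ray hits, or None. objects: list of (name, centre, radius);
    gaze_dir need not be unit length."""
    n = math.sqrt(sum(c * c for c in gaze_dir))
    d = [c / n for c in gaze_dir]
    best = None
    for name, centre, radius in objects:
        oc = [centre[i] - eye[i] for i in range(3)]
        t = sum(oc[i] * d[i] for i in range(3))      # distance along the ray
        if t < 0:
            continue                                  # object is behind the eye
        closest2 = sum(oc[i] * oc[i] for i in range(3)) - t * t
        if closest2 <= radius * radius and (best is None or t < best[0]):
            best = (t, name)
    return best[1] if best is not None else None
```

Dwell-time filtering would be layered on top of this, since raw gaze samples are too jittery to trigger selection directly.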
My students have done the following types of project in the past:
- Social VR for groups
- Virtual hand illusions
- AR for city modelling
- VR for city exploration
- Online 3D editor in HTML
- Glove-based interaction
- Measuring video latency
- Virtual hand illusion
- AR for make up
- New distributed server systems for large-scale multi-player games
- Collaboration between HMD and tablet
- Low latency telepresence
- Impact of locomotion techniques on memory
- Streaming OpenGL
- Procedural city generation
- Procedural facade generation
- Shape grammars for building layout
- Novel marker-based tracking system for AR
- Music synthesizer for a multi-touch table
- Image-based imposters for city models
- 3D interaction on a multi-touch table
- Motor-imagery-based brain-computer interfaces
- Active appearance models for face tracking
- Image-based modelling of building facades
- Reproducing the rubber hand illusion in the CAVE
- Brain computer interface for a multi-touch table
- Brain computer interface for the CAVE
- Procedural clutter
- Animating building destruction
- Automatic camera placement
- Real-time gesture capture from video for the CAVE
- Football game
- Game level layout editor
- Online multi-player games
- Relief map rendering for cities
- Latency impact in networked games
- Presence in augmented reality experiences
- 3D Widgets for the CAVE
- Effective occlusion culling
- Physics simulation on the ReaCTor
- Mixed-Reality Games using GPS
- Time critical rendering for PDAs
- Simulating water effects on surfaces
- Shape grammars for building facade generation
- 3D Departmental Browser Service
- Computing Texture Maps with Photon Tracing
- Visibility for 3D Games
- Virtual Kitchen Designer
- A Fast Splat Based Volume Renderer
- Multicast Support for Massive Scale Games
- Extensible Online 3D Gaming Worlds
- Extensible 2D Vector Shape Editor
- Better Buildings (Making 3D Models of London from Maps)
- 3D Graph Layout Visualisations
- A Virtual Tate Gallery
- Virtual Paddington Station
- Streaming Download Server for 3D Games
- Partitioning Strategies for Online Games
- Handheld Maps for Navigation in the ReaCTor
- Experiments on Presence in the ReaCTor
- Visualisation and Population of New Computer Science Building
- Scene Graph Partitioning Strategies for Graphical Load Balancing
- Client/Server Streaming of 3D Graphical Objects
- Partitioning scheme for Network Games
- PDA Interfaces for the CAVE
- Vision Based Modelling of Buildings from Photographs
- Music Visualisation
- Occlusion Culling for Massive Urban Models
- Imposters for Massive Urban Models
- Animation of People Inside Buildings
- Evolving Garden
- Visibility computation inside buildings
- Cluster rendering for the CAVE
- Cellular texturing for buildings
- Worlds in miniature on a hand-held computer
- Vision-based modelling
- Mixed-Reality Interfaces