General Project Areas
I am interested in all areas of immersive and non-immersive virtual reality systems. This includes real-time rendering, model generation, systems programming, distributed application building, collaborative interaction, usability engineering for VE systems, and presence.
I will potentially supervise projects in the following areas:
- Augmented & mixed reality systems
- Real-time computer graphics
- Telepresence robots and haptics
- Social virtual reality
- Experiments on single user interaction in HMD and/or CAVE
- Experiments on presence, immersion and embodiment
- Game-focused projects (but I will try to dissuade you from actually writing a game)
Most of the project suggestions are written as individual projects, but many of them can work as group projects as well. In particular, the projects that involve system building (e.g. the telepresence robot, city-scale AR and metaverse protocols) can be split into several complementary parts.
Specific Project Suggestions
The suggestions at the top are related to ongoing research projects.
Social VR Platform
My top priority for projects in the academic year 2023/2024 is building tools or running experiments on top of our new social VR platform, Ubiq. Topics can include:
- Server extensions, such as implementations of scalability algorithms (individual or group project, programming focussed)
- Building social VR experiments (several independent individual projects, but these could operate as a team with some common tools; suitable for MSc HCI students as well as others)
- Adding agent-based simulations for non-player characters (some AI modelling, probably a group project)
- Adding functionality for better personal avatars (e.g. based on facial tracking, voice tracking, physiological tracking) (individual or group project, programming focussed)
- Games as experiments – this is a confidential suggestion
- Building a virtual conference centre
If you are interested in social VR/metaverses, we have many other suggestions.
Full-Body Avatar Animation
Within Ubiq we share avatars of users. We can use our own CC0 assets, or Microsoft’s open source RocketBox avatars. There is an urgent need for open source animation software that can animate the full body of a user from only head and hand tracking. Currently we, like many other academic groups and developers, use a commercial asset called FinalIK. This project would assess the various algorithms for leg animation that have appeared in the past couple of years and build a simple, configurable system that could be released as open source (though this decision is ultimately up to the student!). There is then the potential to run user studies evaluating how users feel about their representation and how other users rate the animations.
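To give a flavour of the core building block involved, here is a minimal analytic two-bone IK solve (the law-of-cosines approach commonly used for knees and elbows), reduced to 2D. The function name and conventions are an illustrative sketch, not Ubiq's or FinalIK's actual API; a real system adds pole-vector control for knee direction and pelvis estimation from the head pose.

```python
import math

def two_bone_ik(hip, foot, l1, l2):
    """Analytic two-bone IK in 2D (e.g. a leg in the sagittal plane).

    Given hip and target foot positions plus thigh (l1) and shin (l2)
    lengths, return (thigh_angle, knee_angle): the world-space thigh
    direction and the interior knee angle that place the foot on the
    target, clamping when the target is out of reach.
    """
    dx, dy = foot[0] - hip[0], foot[1] - hip[1]
    dist = math.hypot(dx, dy)
    # Clamp the target distance to the chain's reachable annulus.
    dist = min(max(dist, abs(l1 - l2) + 1e-9), l1 + l2 - 1e-9)
    # Law of cosines: interior angle at the knee.
    knee = math.acos((l1 * l1 + l2 * l2 - dist * dist) / (2 * l1 * l2))
    # Angle between the thigh and the hip-to-foot line.
    alpha = math.acos((l1 * l1 + dist * dist - l2 * l2) / (2 * l1 * dist))
    thigh = math.atan2(dy, dx) + alpha  # knee bent to one fixed side
    return thigh, knee
```

The comparison work in the project would be between solvers like this, FABRIK-style iterative solvers, and the learned leg-animation models that have appeared recently.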
LLM-Driven Characters
If you are interested in Large Language Models (LLMs), there is an enormous amount of proof-of-concept work exploring how a full character can be animated to create interactive characters in a mixed-reality setting. We have our own demonstration, Ubiq-Genie, which combines speech-to-text, ChatGPT with some careful prompt engineering, text-to-speech and a basic animation engine. We have further work extending this that is under peer review. Projects could take up any idea in this area and run with it (e.g. how to generate speech with emotion, longitudinal modelling of the relationship with speakers, better procedural animation, etc.).
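A pipeline like this is easy to prototype as a chain of swappable stages, which is exactly what makes it a good project vehicle. A sketch follows; the stage names and signatures are hypothetical, and Ubiq-Genie's real interfaces differ.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

# Hypothetical stage types: each is a black box you can swap out to
# experiment with, e.g., emotional TTS or better procedural animation.
SpeechToText = Callable[[bytes], str]    # microphone audio -> transcript
ChatModel = Callable[[str], str]         # transcript -> reply text
TextToSpeech = Callable[[str], bytes]    # reply text -> audio
Animator = Callable[[bytes], List[str]]  # audio -> viseme/gesture events

@dataclass
class CharacterPipeline:
    stt: SpeechToText
    chat: ChatModel
    tts: TextToSpeech
    animate: Animator

    def turn(self, mic_audio: bytes) -> Tuple[bytes, List[str]]:
        """One conversational turn: user audio in, reply audio plus
        animation events out, ready to stream to the avatar."""
        transcript = self.stt(mic_audio)
        reply = self.chat(transcript)
        audio = self.tts(reply)
        return audio, self.animate(audio)

# Dummy stages showing the shape of the data flow.
demo = CharacterPipeline(
    stt=lambda audio: "hello",
    chat=lambda text: "You said: " + text,
    tts=lambda text: text.encode("utf-8"),
    animate=lambda audio: ["mouth_open"] * (len(audio) // 8),
)
```

A project would replace one or more dummy stages with real services and study the effect (latency, perceived quality, relationship over time).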
Avatar Eye Animation
Modern HMDs include eye-tracking, mostly to support foveated rendering. Unfortunately, on many consumer HMDs the eye-tracking is not available directly to programmers; however, we do have HMDs where it is. If we can animate the eyes of an avatar, what impact does it have on users? Can eye-tracking data be used directly? Can we use eye-tracking to inform user interaction? (There is a little recent literature on each of these.)
Telepresence Robot
One of our large research efforts over the next couple of years will be to develop next-generation telepresence systems in which a person in a virtual reality system controls a remote robot that can physically interact with objects. We have done preliminary work on this in a couple of funded projects, and also in a pair of MSc projects last year.
There are several related proposals:
- We have existing systems for streaming video from a stereo camera mounted on a fast pan-tilt head, and separately from 360° cameras. One project, for someone more interested in graphics, could look at latency compensation and blending of the three video sources.
- Build a lower latency control system for our pan-tilt head
- Look at alternative video sources including a novel hardware suggestion (suitable for someone who doesn’t mind hacking hardware).
Novel Haptics
We have an increasing interest in novel forms of haptics. We recently introduced the concepts of docking haptics and auxetic surface shape displays. We have a number of projects (some confidential as they involve industrial collaborators), but we are interested in the following types of project:
- In-hand haptics using shape-changing systems that dock with other devices
- Extending controllers with multidimensional vibrotactile feedback
- Worn devices
- Interfaces based on mobile robots
These projects are suitable for MSc students doing robotics programmes, or MSc CGVI students who want to work on the software and simulation (with some help from the team on the hardware interfaces).
Tactile Interface for Learning, Communication and Control
Kurv is a wearable tactile interface for the hands. It uses stimulation to help users learn a new skill ‘automatically’; examples of accelerated learning include playing music, touch-typing and Braille.
We are interested in exploring novel tactile applications complementary to the audio and visual modalities in AR/VR, using a 3D graphics engine such as Unity or Unreal. Potential applications include:
- Training for e-sports
- Music learning
- Assistive tech
Stable Diffusion Models for VR Content Creation
A very broad suggestion, but this developer is doing really interesting things using diffusion models to create mixed-reality scenes. See their Twitter feed for more examples. They are using the deforum stable diffusion notebook/code. The goal would be to do something similar, perhaps focusing on high-resolution models, navigation through the environments, the role of transitions, etc. Any suggestions considered.
AR versus VR
Prior work at UCL has shown the efficacy of virtual training for tasks that involve manipulation of objects within arm’s reach. Extend this to explore different display scenarios, including different types of augmented reality (e.g. using HoloLens, MagicLeap, Varjo XR3). Alternatively, look at the potential of AR in some unusual settings.
This task is mostly focused on assessing user experience and thus has little need for programming unless you want to get involved in building the user interface itself.
Smart Cities
Cities are becoming smarter, with lots of data now captured about the physical environment and human interactions. London has a large resource of data on things as diverse as traffic, demographics and reservoir levels. Projects in this area would pick one or more data sources and come up with novel ways of visualising the data using augmented reality and/or virtual reality.
We have built a few different research prototypes over the years using 3D models of cities as a resource for smart city applications. For example, a very early demonstration (2004) was a visualisation of crowd-sourced pollution data in 3D models procedurally generated from map data. All the resources now exist to do this at a much higher fidelity, or for on-the-street AR visualisation.
We have an internal platform for looking at mixed-reality collaboration, so the project could be around how to interact with data about the city in AR or VR.
Alternatively, see how such applications could be built on existing platforms such as Niantic’s Lightship, OpenARCloud, Meta’s Project Aria, etc.
Experience of Social VR
We have recently run a very successful pilot study (originally an MSc HCIE project, now an ACM VRST paper) of groups of family and friends using social VR for an extended period. We would like to extend this study to a longer period, and potentially more users.
This project is more suited to someone interested in human-computer interaction and user studies. There will be little programming unless we get agreement to add some monitoring to one of the social platforms.
VR Controller Design
I have some confidential project suggestions around controller design. These would suit someone interested in procedural design and/or mechatronics. Please contact me.
~Zero Latency Virtual Reality
At UCL we have built an ultra-low latency virtual reality system (see Sebastian Friston’s page). We have several project suggestions around improving this system or evaluating it.
- Evaluate performance of tasks where you can alter the latency at very low levels.
- Measure the impact of rendering artefacts on the display (e.g. straight lines are not straight on the panel if you are moving your head).
- Evaluate the system with different types of display panel (e.g. OLED, CRT, g-sync, etc.).
- Compare this to alternative compensation methods such as asynchronous time-warping or space-warping.
- Re-run Mike Meehan’s heart-rate response experiment.
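On the time-warp comparison above: rotation-only asynchronous time-warp reprojects an already-rendered frame through the homography H = K·R·K⁻¹, where R is the head rotation accumulated since render time and K the camera intrinsics. A small pure-Python sketch (the intrinsics values in the test are illustrative, not those of any particular HMD):

```python
import math

def matmul3(a, b):
    """3x3 matrix product (row-major nested lists)."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def timewarp_homography(fx, fy, cx, cy, yaw):
    """Homography H = K * R * K^-1 that re-projects a frame rendered
    before the head rotated by `yaw` radians about the vertical axis."""
    c, s = math.cos(yaw), math.sin(yaw)
    k = [[fx, 0.0, cx], [0.0, fy, cy], [0.0, 0.0, 1.0]]
    k_inv = [[1.0 / fx, 0.0, -cx / fx],
             [0.0, 1.0 / fy, -cy / fy],
             [0.0, 0.0, 1.0]]
    r = [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]  # yaw about +y
    return matmul3(k, matmul3(r, k_inv))
```

Rotation-only warping hides head-rotation latency but cannot correct for translation, which is what space-warping variants add; that distinction is exactly what the comparison project would measure.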
Hand and Forearm Scanner
Virtual reality can exploit the user’s proprioceptive knowledge of their own movements: when you move your hand or head, you understand the likely visual implications. While the motion of the user is well captured, we would like to better capture the individual user so that we can represent them in VR.
To this end, as a first step, we would like to build a scanner for hands and forearms. It should capture and fit a 3D model of the hand, and capture good texture maps for the surface. The goal would be to make a 3D model that can be rigged and tracked in real-time inside a VR system.
Suitable for someone who would like to work with hardware as well as software. It will need familiarity with computer vision and real-time graphics.
Hands and Keyboards
If VR is going to be used for long periods of time, one feature we would like to enable is “normal” office work inside VR. Various demonstrations have been made of enabling typing in virtual reality by real-time modelling of the fingers, calibration of video tracking, etc.
In this project, you would develop low-latency ways of tracking the fingers and investigate how tracking latency or tracking calibration affects typing speed.
There are some interesting interface issues to overcome, so this project would suit someone interested in a good engineering challenge leading to an experiment with users. There is high potential to write the results up as a paper afterwards.
Smell Display
Build a smell display for a virtual reality system of your choice. The smells should be controllable from a 3D scene based on user location.
Probably more suited to someone who likes engineering and building physical systems (perhaps an MRes project).
Virtual Hand Illusion
We have now run a few experiments on the virtual hand illusion: a participant in an immersive virtual reality can come to believe that the virtual body they see represents themselves, and then they treat it as their body. In particular, if you threaten the virtual body, the participant will flinch or react.
We are interested in exploring this further. We have a new result that is extremely interesting but contradictory, and we would like someone to help run a new set of experiments to build a theory about why this contradiction occurs.
This project is more suited for someone interested in experimental methodologies, data analysis and statistics. There will be very little programming.
Video See-Through AR
A few years ago we built the AR Rift system, and we have now built a second version for the Oculus CV1. This is a prototype video augmented reality system: it allows occlusion between real and virtual objects, unlike optical see-through systems such as HoloLens. There are a few project suggestions:
- Run presence experiments using this system to test how we alter perception of the real world.
- Do run-time relighting of virtual objects with respect to real lighting (we have a recent paper on this)
- Use the stereo input (plus depth camera) to do a better job of occlusion of the hands of the user.
- Do latency compensation on the video.
- Investigate integrating mobile bio-signal monitoring to measure human reaction.
Pervasive Augmented Reality
There is a wave of augmented reality systems coming. Computer vision systems already recognise objects and people; soon they will map cities. This project would look at how services could be built to allow shared applications between users, and between businesses and users. Examples:
- Build a document sharing system for multiple HoloLens users, so that they can collaborate synchronously and asynchronously.
- Write a campus-scale sticky-note system that can be experienced cross-platform (HoloLens, ARKit-enabled phones, etc.).
- Write a toolkit to track and visualise smart sensor readings on the HoloLens.
Online 3D Editing
Our Open3D project is pioneering online editing of massive 3D models. One of our demonstrations is an online editable model of London. We are interested in developing new user-interfaces for editing. You could work on one of the following:
- Build an augmented-reality editor. Allow users on the street to capture photographs and mark up buildings. From these inputs, our existing backend server could generate full 3D models that the user can compare to the existing model.
- Write a geo-location game to capture data for the model. Not all users will want to edit models directly; a geo-location game could facilitate data capture instead.
Metaverse Protocol (BSc/MSci/MSc CGVI)
Implement a prototype collaborative virtual environment on top of my Yther research prototype or our Ubiq system. If you are intrigued by concepts such as Gibsonian Metaverses, or have recently read Ready Player One, this project might be for you.
- Look at code migration between services, that is, how worlds authored in different environments can interact.
- Build a proof of concept inter-world “location service” and support it in a “presence protocol”
- Look at code and data persistence in large-scale systems.
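The "location service" idea above can be prototyped very small; the hard parts are federation, trust and scale. A toy sketch follows (all names are hypothetical, not part of Ubiq or Yther):

```python
from dataclasses import dataclass
from typing import Dict, Optional

# Hypothetical record shape for a user's current whereabouts.
@dataclass
class Location:
    world_uri: str  # which world/service hosts the user right now
    room_id: str    # instance within that world

class LocationService:
    """Toy inter-world location service: the 'presence protocol' part is
    just publish/lookup of where a user currently is, so that a friend
    in one world can find and join them in another."""

    def __init__(self):
        self._where: Dict[str, Location] = {}

    def publish(self, user_id: str, loc: Location) -> None:
        self._where[user_id] = loc

    def lookup(self, user_id: str) -> Optional[Location]:
        return self._where.get(user_id)
```

A real project would replace the in-memory dictionary with a distributed store and define how different world implementations authenticate their publishes.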
WebVR and Social VR Platforms
Investigate the potential of various WebVR standards and platforms by analysing the capabilities of each (e.g. Project Sansar, VRChat, Improbable). For example, build a Virtual UCL in one or more of these systems to critique what types of programming are easy and what types of feature are missing.
3D Printing
I have a variety of confidential project suggestions around 3D printing: new things to do with current high-end customisable 3D printers. Experiment with new printing strategies, new 3D modelling tools, or tools to assess prints.
Novel Displays
We are interested in novel displays of various types. In this project we would build a new display type and demonstrate it. It would suit someone who itches to make physical artefacts integrating a broad range of camera and display techniques. We would use the new Engineering MakeSpace, so you should not be afraid of using a saw or a soldering iron! The project could go in a number of directions, but some suggestions:
- Build an intangible mist-based projection system. Can you turn this type of display (Fog Projection by Yagi et al. at Siggraph Asia 2011) into an interactive display?
- Dynamic camera/projector systems. Can you track a moving or deforming surface sufficiently well to project onto it?
- Telepresence display using projection onto physical mannequins. Can you take multiple videos of a person’s head and create a realistic impression of them by projecting onto a proxy head (e.g. a shop dummy’s head)?
- Lenticular display systems. We have a 16-view lenticular display. Can you make it interactive?
All ideas along these lines will be entertained as potential projects.
Tessellations of Space (BSc/MSci/MSc CGVI)
Much is known about tessellations of the plane with small sets of tiles; famous examples include the Penrose tilings and the Eternity puzzles. A lot less is known about tilings in 3D. Working with the puzzle designer and geometer Robert Reid, we will explore some simple classes of geometric shape that fill space in interesting ways.
This project is suitable for a student with a mathematical inclination, but the work is mainly 3D visualisation and model construction in a CAD package or MATLAB. There may be an opportunity to make some physical models on a rapid-prototyping machine.
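As a concrete entry point, the truncated octahedron is a classic space-filler: one copy per point of the body-centred cubic (BCC) lattice tiles space exactly, since the BCC Voronoi cell is the truncated octahedron. A short sketch generating those centres for visualisation (the scaling and function name are illustrative):

```python
import itertools

def bcc_centres(n):
    """Centres of a truncated-octahedron tiling of space: the points of
    a body-centred cubic lattice, i.e. integer coordinates that are all
    even (cubic corners) or all odd (cell centres), within [-n, n] on
    each axis."""
    return [(x, y, z)
            for x, y, z in itertools.product(range(-n, n + 1), repeat=3)
            if x % 2 == y % 2 == z % 2]
```

Instancing a truncated-octahedron mesh at each point in a CAD package is a quick way to sanity-check a tiling before moving on to less regular shapes.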
Past Projects
My students have done the following types of project in the past:
- Video for telepresence systems
- 360° video streaming
- Augmented reality for driver warnings
- Photogrammetry in VR
- VR game performance
- Face and eye animation
- Longitudinal social VR exploration
- Virtual hand illusions
- AR for city modelling
- VR for city exploration
- Online 3D editor in HTML
- Glove-based interaction
- Measuring video latency
- Virtual hand illusion
- AR for make up
- New distributed server systems for large-scale multi-player games
- Collaboration between HMD and tablet
- Low latency telepresence
- Impact of locomotion techniques on memory
- Streaming OpenGL
- Procedural city generation
- Procedural facade generation
- Shape grammars for building layout
- Novel marker-based tracking system for AR
- Music synthesizer for a multi-touch table
- Image-based imposters for city models
- 3D interaction on a multi-touch table
- Motor-imagery-based brain-computer interfaces
- Active appearance models for face tracking
- Image-based modelling of building facades
- Reproducing the rubber hand illusion in the CAVE
- Brain computer interface for a multi-touch table
- Brain computer interface for the CAVE
- Procedural clutter
- Animating building destruction
- Automatic camera placement
- Real-time gesture capture from video for the CAVE
- Football game
- Game level layout editor
- Online multi-player games
- Relief map rendering for cities
- Latency impact in networked games
- Presence in augmented reality experiences
- 3D Widgets for the CAVE
- Effective occlusion culling
- Physics simulation on the ReaCTor
- Mixed-Reality Games using GPS
- Time critical rendering for PDAs
- Simulating water effects on surfaces
- Shape grammars for building facade generation
- 3D Departmental Browser Service
- Computing Texture Maps with Photon Tracing
- Visibility for 3D Games
- Virtual Kitchen Designer
- A Fast Splat Based Volume Renderer
- Multicast Support for Massive Scale Games
- Extensible Online 3D Gaming Worlds
- Extensible 2D Vector Shape Editor
- Better Buildings (Making 3D Models of London from Maps)
- 3D Graph Layout Visualisations
- A Virtual Tate Gallery
- Virtual Paddington Station
- Streaming Download Server for 3D Games
- Partitioning Strategies for Online Games
- Handheld Maps for Navigation in the ReaCTor
- Experiments on Presence in the ReaCTor
- Visualisation and Population of New Computer Science Building
- Scene Graph Partitioning Strategies for Graphical Load Balancing
- Client/Server Streaming of 3D Graphical Objects
- Partitioning scheme for Network Games
- PDA Interfaces for the CAVE
- Vision Based Modelling of Buildings from Photographs
- Music Visualisation
- Occlusion Culling for Massive Urban Models
- Imposters for Massive Urban Models
- Animation of People Inside Buildings
- Evolving Garden
- Visibility computation inside buildings
- Cluster rendering for the CAVE
- Cellular texturing for buildings
- Worlds in miniature on a hand-held computer
- Vision-based modelling
- Mixed-Reality Interfaces