Our paper, “Learning to Forecast and Refine Residual Motion for Image-to-Video Generation,” was published at the European Conference on Computer Vision (ECCV 2018).
We are developing computational models of autonomous virtual humans with applications in computer animation, intelligent virtual agents, and interactive narratives.
We develop models for simulating realistic, believable crowds. To this end, we identify and address fundamental limitations in how individuals in a crowd are represented and controlled. Our research results have widespread application in visual effects, games, urban planning, and architectural design, as well as disaster and security simulation.
We are developing enhanced computer-aided design pipelines that empower architects, urban planners, and designers to account for how human crowds inhabit and occupy functional spaces.
We develop computational tools that help end users create and experience compelling, interactive digital stories. These tools mitigate the complexity of authoring immersive digital stories without sacrificing authorial precision.