Unsupervised conversion of 3D models for interactive metaverses

Jeff Terrace, Ewen Cheslack-Postava, Philip Levis, Michael J. Freedman

Research output: Contribution to journal › Conference article › peer-review



A virtual-world environment becomes a truly engaging platform when users have the ability to insert 3D content into the world. However, arbitrary 3D content is often not optimized for real-time rendering, limiting the ability of clients to display large scenes consisting of hundreds or thousands of objects. We present the design and implementation of an automatic, unsupervised conversion process that transforms 3D content into a format suitable for real-time rendering while minimizing loss of quality. The resulting progressive format includes a base mesh, allowing clients to quickly display the model, and a progressive portion for streaming additional detail as desired. Sirikata, an open virtual world platform, has processed over 700 models using this method.
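The abstract describes a two-part output: a base mesh the client can render immediately, plus a progressive portion streamed afterward to add detail. The sketch below illustrates that idea in miniature; the `ProgressiveMesh` class and its record format (one new vertex plus the faces that use it) are illustrative assumptions, not the paper's actual encoding.

```python
# Sketch of progressive mesh streaming as described in the abstract:
# a client first displays a small base mesh, then applies streamed
# refinement records to add detail on demand. The record layout here
# is a simplified stand-in for the paper's progressive format.

class ProgressiveMesh:
    def __init__(self, base_vertices, base_faces):
        # Base mesh: enough geometry to display the model immediately.
        self.vertices = list(base_vertices)
        self.faces = list(base_faces)

    def apply_refinement(self, record):
        # Each streamed record adds one vertex and the new faces
        # that reference it (by vertex index).
        new_vertex, new_faces = record
        self.vertices.append(new_vertex)
        self.faces.extend(new_faces)

# Base mesh: a single triangle, renderable right away.
mesh = ProgressiveMesh(
    base_vertices=[(0, 0, 0), (1, 0, 0), (0, 1, 0)],
    base_faces=[(0, 1, 2)],
)

# A streamed refinement adds a fourth vertex and one new triangle.
mesh.apply_refinement(((1, 1, 0), [(1, 3, 2)]))
print(len(mesh.vertices), len(mesh.faces))  # 4 2
```

In this scheme the client can stop consuming refinement records at any point and still hold a valid, displayable mesh, which is what lets large scenes with hundreds or thousands of objects load incrementally.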

Original language: English (US)
Article number: 6298517
Pages (from-to): 902-907
Number of pages: 6
Journal: Proceedings - IEEE International Conference on Multimedia and Expo
State: Published - 2012
Event: 2012 13th IEEE International Conference on Multimedia and Expo, ICME 2012 - Melbourne, VIC, Australia
Duration: Jul 9, 2012 - Jul 13, 2012

All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications
  • Computer Science Applications


Keywords

  • 3D Meshes
  • 3D Models
  • Content Conditioning
  • Metaverses
  • Texture Mapping
  • Virtual Worlds
