Learning how to match fresco fragments

Thomas Funkhouser, Hijung Shin, Corey Toler-Franklin, Antonio García Castañeda, Benedict Brown, David Dobkin, Szymon Rusinkiewicz, Tim Weyrich

Research output: Contribution to journal › Article › peer-review

44 Scopus citations


One of the main problems faced during reconstruction of fractured archaeological artifacts is sorting through a large number of candidate matches between fragments to find the relatively few that are correct. Previous computer methods for this task provided scoring functions based on a variety of properties of potential matches, including color and geometric compatibility across fracture surfaces. However, they usually consider only one or at most a few properties at once, and therefore provide match predictions with very low precision. In this article, we investigate a machine learning approach that computes the probability that a match is correct based on the combination of many features. We explore this machine learning approach for ranking matches in three different sets of fresco fragments, finding that classifiers based on many match properties can be significantly more effective at ranking proposed matches than scores based on any single property alone. Our results suggest that it is possible to train a classifier on match properties in one dataset and then use it to rank predicted matches in another dataset effectively. We believe that this approach could be helpful in a variety of cultural heritage reconstruction systems.
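The core idea of the abstract, combining many per-match properties into a single probability and ranking candidates by it, can be sketched with a simple probabilistic classifier. The feature names, training data, and fragment-pair labels below are purely illustrative and are not taken from the paper; a minimal logistic-regression ranker in pure Python, under those assumptions:

```python
import math

def sigmoid(z):
    """Map a real-valued score to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Fit weights w and bias b by gradient descent on log loss."""
    n_feat = len(X[0])
    w = [0.0] * n_feat
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            for j in range(n_feat):
                w[j] -= lr * err * xi[j]
            b -= lr * err
    return w, b

def rank_matches(candidates, w, b):
    """Sort candidate matches by predicted probability of being correct."""
    scored = [(sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b), name)
              for name, x in candidates]
    return sorted(scored, reverse=True)

# Hypothetical per-match features, each normalized so higher = more
# compatible: [fracture-surface fit, color compatibility, contour agreement]
train_X = [[0.9, 0.8, 0.7], [0.8, 0.9, 0.6], [0.2, 0.3, 0.1], [0.1, 0.2, 0.3]]
train_y = [1, 1, 0, 0]  # 1 = correct match, 0 = incorrect

w, b = train_logistic(train_X, train_y)

# Rank proposed matches from a different (synthetic) fragment set,
# mirroring the cross-dataset use suggested in the abstract.
candidates = [("frag3-frag9", [0.85, 0.70, 0.75]),
              ("frag1-frag4", [0.15, 0.25, 0.20])]
ranking = rank_matches(candidates, w, b)
```

The classifier learned on one (synthetic) set is applied to candidates from another, and the highest-probability pairs surface first; the paper evaluates this kind of cross-dataset transfer on real fresco fragments.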

Original language: English (US)
Article number: 7
Journal: Journal on Computing and Cultural Heritage
Issue number: 2
State: Published - Nov 2011

All Science Journal Classification (ASJC) codes

  • Conservation
  • Information Systems
  • Computer Science Applications
  • Computer Graphics and Computer-Aided Design


Keywords

  • Cultural heritage
  • Computer-assisted fresco reconstruction
  • Machine learning
  • Shape matching


