Adaptive hashing for model counting

Jonathan Kuck, Tri Dao, Shengjia Zhao, Burak Bartan, Ashish Sabharwal, Stefano Ermon

Research output: Contribution to conference › Paper › peer-review

9 Scopus citations

Abstract

Randomized hashing algorithms have seen recent success in providing bounds on the model count of a propositional formula. These methods repeatedly check the satisfiability of a formula subject to increasingly stringent random constraints. Key to these approaches is the choice of a fixed family of hash functions that strikes a good balance between computational efficiency and statistical guarantees for a hypothetical worst-case formula. In this paper we propose a scheme where the family of hash functions is chosen adaptively, based on properties of the specific input formula. Akin to adaptive importance sampling, we use solutions to the formula (found during the bounding procedure of current methods) to estimate properties of the solution set, which guides the construction of random constraints. Additionally, we introduce an orthogonal variance reduction technique that is broadly applicable to hashing-based methods. We empirically show that, when combined, these approaches lead to better lower bounds on existing benchmarks, with a median improvement factor of 2^13 over 1,198 propositional formulas.
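The core hashing idea the abstract builds on can be sketched in a few lines: add random parity (XOR) constraints to the formula and check whether it stays satisfiable; if it survives m constraints with good probability, roughly 2^m models exist. The sketch below is an illustrative toy under stated assumptions, not the paper's adaptive construction: it brute-forces over an explicit solution set instead of calling a SAT solver, and the names (`hashing_lower_bound`, `random_xor_constraint`) are ours.

```python
import random

def random_xor_constraint(n, rng):
    # A random parity constraint over n boolean variables: each variable is
    # included independently with probability 1/2, plus a random parity bit b.
    # The constraint is satisfied when (XOR of included variables) == b.
    mask = [rng.random() < 0.5 for _ in range(n)]
    b = rng.random() < 0.5
    return mask, b

def satisfies(assignment, constraint):
    mask, b = constraint
    parity = False
    for bit, used in zip(assignment, mask):
        if used:
            parity ^= bit
    return parity == b

def has_solution(solutions, constraints):
    # Stand-in for a SAT-solver call: does any model of the original formula
    # also satisfy all the added parity constraints?
    return any(all(satisfies(s, c) for c in constraints) for s in solutions)

def hashing_lower_bound(solutions, n, trials=20, seed=0):
    # Heuristic sketch: find the largest m such that the formula remains
    # satisfiable under m random XOR constraints in a majority of trials,
    # and report 2^m. (The actual methods attach rigorous probabilistic
    # guarantees to this kind of test; this toy does not.)
    rng = random.Random(seed)
    best_m = 0
    for m in range(1, n + 1):
        successes = sum(
            has_solution(solutions, [random_xor_constraint(n, rng) for _ in range(m)])
            for _ in range(trials)
        )
        if successes * 2 >= trials:
            best_m = m
        else:
            break
    return 2 ** best_m

# Toy formula over 6 variables whose models are exactly the assignments with
# the first bit false: 32 models, so the estimate should land near 2^5.
solutions = [
    tuple(((i >> k) & 1) == 1 for k in range(6)) for i in range(64) if i % 2 == 0
]
estimate = hashing_lower_bound(solutions, 6)
```

The paper's contribution, per the abstract, is in choosing the *family* of such constraints adaptively from solutions already found, rather than fixing it in advance as this sketch does.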

Original language: English (US)
State: Published - 2019
Externally published: Yes
Event: 35th Conference on Uncertainty in Artificial Intelligence, UAI 2019 - Tel Aviv, Israel
Duration: Jul 22, 2019 – Jul 25, 2019

Conference

Conference: 35th Conference on Uncertainty in Artificial Intelligence, UAI 2019
Country/Territory: Israel
City: Tel Aviv
Period: 7/22/19 – 7/25/19

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
