Proc. 1st Intl. Work. on Comparative Empirical Evaluation of Reasoning Systems (COMPARE'12)
It has become accepted wisdom that regular comparative evaluation of reasoning
systems helps to focus research, identify relevant problems, bolster development,
and advance the field in general. Benchmark libraries and competitions
are two popular approaches to this end. The number of competitions has been
growing rapidly in recent years. At the moment, we are aware of about a dozen benchmark
collections and two dozen competitions for reasoning systems of different
kinds. It is time to compare notes.
What are the proper empirical approaches and criteria for effective comparative
evaluation of reasoning systems? What are the appropriate hardware and
software environments? How can the usability of reasoning systems be assessed, in particular
of systems that are used interactively? How should benchmarks and problem collections
be designed, acquired, structured, published, and used?
The aim of the workshop was to advance comparative empirical evaluation
by bringing together current and future competition organizers and participants,
maintainers of benchmark collections, as well as practitioners and the general
scientific public interested in the topic.
We sincerely thank all the authors who submitted their work for
consideration. All submitted papers were peer-reviewed, and we would like to
thank the Program Committee members as well as the additional referees for
their great effort and professional work in the review and selection process. Their
names are listed on the following pages. We are deeply grateful to our invited
speakers - Leonardo de Moura (Microsoft Research) and Cesare Tinelli (University
of Iowa) - for accepting the invitation to address the workshop participants.
We thank Sarah Grebing for her help in organizing the workshop and compiling these proceedings.