A submission can be either a set of benchmark instances or a benchmark instance generator.
A generator is a program to be launched in a Linux environment with some scaling parameters and, when applicable, a random seed. For instance, a random 3-SAT generator will take two scaling parameters (the number of variables and the number of clauses) plus a random seed parameter. A generator must write the generated instance to standard output in the DIMACS format (see below). The command-line parameters used must appear in the first comment lines of the instance.
c
c start with comments
c
c
p cnf 5 3
1 -5 4 0
-1 5 3 4 0
-3 -4 0
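A generator meeting the requirements above can be sketched as follows. This is a minimal illustration, not a required interface: the script name gen3sat.py, the function generate, and the exact comment wording are assumptions; only the DIMACS output format and the rule that the command-line parameters appear in the leading comment lines come from the submission rules.

```python
#!/usr/bin/env python3
"""Hypothetical random 3-SAT generator: gen3sat.py <vars> <clauses> <seed>."""
import random
import sys

def generate(n_vars: int, n_clauses: int, seed: int) -> list[str]:
    """Return the lines of a random 3-SAT instance in DIMACS format."""
    rng = random.Random(seed)  # same seed => same instance, as required
    lines = [
        # Echo the command-line parameters in the first comment lines.
        f"c generated by: gen3sat.py {n_vars} {n_clauses} {seed}",
        f"p cnf {n_vars} {n_clauses}",
    ]
    for _ in range(n_clauses):
        # Pick 3 distinct variables; negate each with probability 1/2.
        lits = [v if rng.random() < 0.5 else -v
                for v in rng.sample(range(1, n_vars + 1), 3)]
        lines.append(" ".join(map(str, lits)) + " 0")
    return lines

if __name__ == "__main__":
    n, m, s = map(int, sys.argv[1:4])
    print("\n".join(generate(n, m, s)))
```

Varying only the seed yields many essentially different benchmarks for the same scaling parameters, which is exactly what the rules below demand of random-category generators.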
This category is limited to the usual random k-SAT instances. Only generator submissions are allowed for random instances. Any generator must be able to produce many essentially different benchmarks for the same scaling parameters (by varying the random seed).
Instances encoding "application" problems in CNF. This category covers instances from domains such as model checking, planning, bioinformatics, etc. The instances must encode concrete problems of practical interest.
These instances are typically large (containing up to tens of millions of variables and clauses). The motivation behind this category is to provide hints about the kinds of applications for which SAT solvers may be useful.
Instances that are often designed to give SAT solvers a hard time, or that otherwise represent problems challenging for typical SAT solvers (including, e.g., instances arising from difficult puzzle games).
Only instances known to be either SATISFIABLE or UNSATISFIABLE should be submitted to this category.
A set of instances should be submitted as a single tar or (g)zip archive. Several different sets of similar instances should be partitioned into logical subdirectories within the archive. Each instance file should be given a logical name, including the most important parameter used for generating the instance (when applicable).
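The archive layout described above can be sketched as follows; all directory and file names here are illustrative assumptions, not required names. The sketch writes one small instance into a logical subdirectory, names the file after its generation parameters, and packs everything into a single gzipped tar archive.

```python
"""Sketch of packaging a benchmark set; all names are illustrative."""
import tarfile
from pathlib import Path

root = Path("benchmarks")
# Group similar instances into logical subdirectories, one file per
# instance, named after its key generation parameters (here: 5 variables,
# 3 clauses, seed 1).
(root / "random-3sat").mkdir(parents=True, exist_ok=True)
(root / "random-3sat" / "rand3sat-v5-c3-s1.cnf").write_text(
    "p cnf 5 3\n1 -5 4 0\n-1 5 3 4 0\n-3 -4 0\n")

# Pack the whole set into a single gzipped tar archive for submission.
with tarfile.open("benchmarks.tar.gz", "w:gz") as tar:
    tar.add(root)
```

The same layout works with plain zip; the rules only require that the whole set arrive as one tar or (g)zip archive.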
When submitting either a set of instances or a generator, we suggest that, at minimum, a README file be included in the archive, describing the problem domain that the instances encode, the parameters used, the expected hardness of the instances, etc.
Ideally, instead of a plain README file, we hope that benchmark submitters will provide a short PDF file (around 1-2 pages) describing the problem domain, the generation process, the parameters used, the expected hardness, etc. As with competition solver submission descriptions, benchmark submission descriptions will be made available on the competition results website.