Prospective competitors may register on the web page and download the dataset. The dataset consists of two parts: the training data, a set of labeled samples with known classification, and the test data, a set of unlabeled samples with unknown classification. Competitors may analyze the labeled samples, build classifiers, and try to classify the unlabeled samples from the test data file.
Results of the classification should be sent in a text file. For every sample in the test file, the file should contain a line with the result given as a set of sid:prob pairs, where:

sid - the subject's identifier
prob - the calculated probability (in the range 0-1) that the sample belongs to sid

The number of sid:prob pairs is not specified. It may be just one sid:1 for strong classifiers, or a list of all sids and their probabilities for weak classifiers. We encourage competitors to send full sets of sid:prob pairs to enable participation in metrics other than just simple accuracy. The probabilities don't need to sum up to 1.
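As a rough illustration, a submission line could be produced as below. This is only a sketch: it assumes each line holds space-separated sid:prob pairs for the corresponding test sample, and the helper name `format_line` is hypothetical; the exact line layout is the one specified on the competition page.

```python
def format_line(predictions):
    """Format one result line from a dict mapping subject id -> probability.

    Pairs are emitted as sid:prob, highest probability first (an assumed
    convention; the probabilities need not sum to 1).
    """
    ordered = sorted(predictions.items(), key=lambda pair: -pair[1])
    return " ".join(f"{sid}:{prob:.4f}" for sid, prob in ordered)

# A weak classifier listing several candidate subjects for one test sample:
line = format_line({"s07": 0.61, "s13": 0.27, "s02": 0.12})
print(line)  # s07:0.6100 s13:0.2700 s02:0.1200
```

A strong classifier would instead write a single pair such as `s07:1` on each line.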
The main metric used for evaluation will be ACC1. ACC1 is defined as the ratio of the number of test samples classified correctly to the number of all test samples. A sample is classified correctly when the correct subject identifier receives the highest probability. ACC1 will be used to rank submissions; however, several additional metrics will be evaluated as well.
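The ACC1 definition above can be sketched as follows. The function name and data layout are illustrative, and how ties in the highest probability are broken is an assumption not stated in the rules.

```python
def acc1(predictions, truth):
    """ACC1: fraction of samples whose true subject gets the highest probability.

    predictions: list of dicts, one per test sample, mapping sid -> probability.
    truth: list of correct subject ids, aligned with predictions.
    (Ties are resolved by max(), which keeps the first maximal sid it sees.)
    """
    correct = sum(1 for probs, sid in zip(predictions, truth)
                  if max(probs, key=probs.get) == sid)
    return correct / len(truth)

preds = [{"s1": 0.7, "s2": 0.3}, {"s1": 0.4, "s2": 0.6}, {"s1": 0.9}]
print(acc1(preds, ["s1", "s2", "s1"]))  # 1.0
```

Note that only the ranking of probabilities matters for ACC1, which is why submissions need not normalize them to sum to 1.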
It is possible to send more than one submission, but the number of submissions is limited to one per day.
Authors of the best algorithms will be awarded (details will be available soon) and invited to take part in the preparation of a monograph about eye movement biometrics. Test results and descriptions of the methodologies will be published on this web page and will be presented during the IEEE International Joint Conference on Biometrics (IJCB 2014). We also encourage participants to publish the results of their work as separate publications, as was done during the previous edition.