BioEye 2015

Competition on Biometrics via Eye Movements

An official competition of BTAS 2015


Competition Procedure

The competition includes two phases: the development phase and the evaluation phase.

Development Phase

During the development phase, the participants may download the development datasets from the Database page. They can use these fully labeled data to develop and test their algorithms. The development datasets contain all recordings from half of the subjects (i.e., 153 subjects for the RAN_30min and TEX_30min datasets, and 37 subjects for the RAN_1year and TEX_1year datasets).

Evaluation Phase

During the evaluation phase, the participants may download the evaluation datasets from the Database page. They should apply the solutions they developed during the development phase and submit the results. The evaluation datasets contain all recordings from the other half of the subjects (i.e., 153 subjects for the RAN_30min and TEX_30min datasets, and 37 subjects for the RAN_1year and TEX_1year datasets). In the evaluation datasets, one recording is labeled (gallery) and the other one is unlabeled (probe).

Required Task

In each submission, the participants should upload their result files for the evaluation datasets, assigning an ID to each of the unlabeled recordings. Attention! None of the subjects in the evaluation datasets is also a member of the development datasets. Consequently, each assigned ID should correspond to one of the IDs of the labeled recordings contained in the evaluation datasets.

In every submission, the participants should upload a separate result file for each of the four datasets. The result files can be uploaded in the Submission page.

File Format of a Submission

The result files uploaded by a participant should be .txt files (with names of your choice) and should contain an identified ID for each of the unlabeled recordings (files). The format should be as in the following example:

ID_xxx_1 : ID_004
ID_xxx_2 : ID_007
ID_xxx_3 : ID_255
ID_xxx_4 : ID_101
...etc.
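As a sketch, a result file in this format could be produced as follows (the probe names and predicted IDs below are purely illustrative, and `write_result_file` is a hypothetical helper, not part of the competition tooling):

```python
# Hypothetical mapping from unlabeled (probe) recording names to predicted IDs.
predictions = {
    "ID_xxx_1": "ID_004",
    "ID_xxx_2": "ID_007",
    "ID_xxx_3": "ID_255",
    "ID_xxx_4": "ID_101",
}

def write_result_file(path, predictions):
    """Write one 'probe : predicted ID' line per unlabeled recording."""
    with open(path, "w") as f:
        for probe, predicted_id in predictions.items():
            f.write(f"{probe} : {predicted_id}\n")

# One such file would be uploaded per dataset (file name is your choice).
write_result_file("results_RAN_30min.txt", predictions)
```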
Evaluation Metrics

After a participant uploads the result files, we will calculate and return the performance results. The performance metric used in the competition is the Rank-1 Identification Rate (IR, or Rank-1 IR), defined as the total number of correctly identified unlabeled recordings divided by the total number of unlabeled recordings.
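The metric can be sketched in a few lines (function and variable names are illustrative):

```python
def rank1_ir(predicted_ids, true_ids):
    """Rank-1 Identification Rate: the fraction of unlabeled recordings
    whose predicted ID matches the true ID."""
    assert len(predicted_ids) == len(true_ids)
    correct = sum(p == t for p, t in zip(predicted_ids, true_ids))
    return correct / len(true_ids)
```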

The performance results for each of the four datasets will be presented separately. The final winner of the competition will be decided using the following weighting formula:

IR_final = wD1*IR_D1 + wD2*IR_D2 + wD3*IR_D3 + wD4*IR_D4

where D1 = RAN_30min, D2 = RAN_1year, D3 = TEX_30min, D4 = TEX_1year, and wD1 = 0.3, wD2 = 0.2, wD3 = 0.3, wD4 = 0.2.
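The weighting above can be sketched as follows (the per-dataset IR values passed in are hypothetical):

```python
# Dataset weights as stated in the competition rules.
WEIGHTS = {"RAN_30min": 0.3, "RAN_1year": 0.2, "TEX_30min": 0.3, "TEX_1year": 0.2}

def final_score(ir_per_dataset):
    """IR_final: weighted sum of the per-dataset Rank-1 IRs."""
    return sum(WEIGHTS[d] * ir for d, ir in ir_per_dataset.items())

# Example with made-up per-dataset results:
# final_score({"RAN_30min": 0.9, "RAN_1year": 0.8,
#              "TEX_30min": 0.9, "TEX_1year": 0.8})  # 0.3*0.9 + 0.2*0.8 + ...
```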

* In the case of a 'tie', the final winner will be the participant who first submitted the results that led to the tied performance.