Video Face Recognition Challenge@MCS2019
This challenge aims to improve face recognition in video streams given a pre-trained single-frame CNN-based face descriptor.

Welcome to the Video Face Recognition Challenge@MCS2019!

Automatic face recognition has recently achieved impressive progress, surpassing human performance and scaling up to millions of faces. While recent methods mostly process still images, video provides additional information in terms of varying poses, facial expressions or illumination. Face recognition in video hence presents opportunities for improvement, but also challenges due to, e.g., difficult imaging conditions such as motion blur. In this challenge, participants aim to improve the performance of face recognition in video by aggregating single-frame pre-trained CNN face descriptors.

To facilitate future video face recognition research, we release a dataset collected from YouTube videos with varying resolution, video quality, head rotation, etc. The dataset contains 20,000 tracks of 1,500 unique identities, annotated with 5 facial landmarks.

To take part in the competition, click "Join in" and authorize through GitLab. At the first authorization, you will be prompted to create a team (the "Create team" button). When you create a team, a repository will be created automatically and a first commit with the baseline will be made.

DETAILS

We assume we are given a set of reference images of faces $I_{ref}$. We also assume access to face tracks obtained by running face detection and tracking algorithms on video streams. Each track is represented by a sequence of ten face images $I_{track}^i,\, i=1,\ldots,10$ uniformly sampled along the track. The goal is to recognize people in video by comparing sets of images in face tracks $S_{track}=\{I_{track}^i\}_{i=1}^{10}$ to reference images $I_{ref}$.

To compare pairs of face images, we are providing a face CNN $f$ generating $L_2$-normalized descriptors $d=f(I)$ of size 512. The neural network $f$ has a modified ResNet34 architecture and is pre-trained on the MSCeleb1M face dataset such that the distance between resulting descriptors $||d^i - d^j||_2^2$ is minimized for images $i,j$ of the same person and maximized if images $i,j$ belong to different people.
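Since the descriptors are $L_2$-normalized, minimizing the squared Euclidean distance between two descriptors is equivalent to maximizing their dot product (cosine similarity):

$$||d^i - d^j||_2^2 = ||d^i||_2^2 + ||d^j||_2^2 - 2\langle d^i, d^j\rangle = 2 - 2\langle d^i, d^j\rangle.$$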

Participants should design a function $g(S_{track})\rightarrow d_{track}$ that generates $L_2$-normalized face track descriptors $d_{track}\in\mathbb{R}^{512}$ aggregating information from multiple face images in the track. Moreover, $d_{track}$ should minimize the distance $$D^{i,j} = ||d_{track}^i - d_{ref}^j||_2^2 = ||g(S_{track}^i) - f(I_{ref}^j)||_2^2$$ if the face track $i$ and the reference face image $j$ belong to the same person, and maximize $D^{i,j}$ otherwise.
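One simple realization of such a function $g$ (purely illustrative, not necessarily the provided baseline) is to average the per-frame descriptors and re-normalize the result:

import numpy as np

def aggregate_track(frame_descriptors):
    # frame_descriptors: array of shape (10, 512) with the L2-normalized
    # descriptors f(I_track^i) of the ten frames of one track.
    d_track = np.asarray(frame_descriptors).mean(axis=0)
    # Re-normalize so that d_track is a valid L2-normalized track descriptor.
    return d_track / np.linalg.norm(d_track)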

EVALUATION

The performance will be measured by evaluating distances $D^{i,j}$ for all pairs of tracks $i$ and reference images $j$ in the test set. Given positive and negative pairs $(S_{track},I_{ref})$ for the same and different people respectively, submissions to the VFR challenge will be ranked by maximizing the True Positive Rate at a False Positive Rate of 1e-6 (TPR@FPR=1e-6) according to the ROC curve. In (unlikely) cases of identical TPR, the methods will be further ranked by minimizing the average distance $D^{i,j}$ over positive pairs $(S^i_{track},I^j_{ref})$ in the test set.
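For local validation, TPR at a fixed FPR can be estimated from the pairwise distances and ground-truth labels, for example as in the sketch below (the official evaluation is run by the organizers; here the negated distance is used as a similarity score):

import numpy as np
from sklearn.metrics import roc_curve

def tpr_at_fpr(distances, is_same_person, target_fpr=1e-6):
    # distances: D^{i,j} for all (track, reference) pairs
    # is_same_person: True for positive pairs, False for negative pairs
    # roc_curve expects scores where larger means "more likely positive",
    # so the negated distance is used as the score.
    fpr, tpr, _ = roc_curve(is_same_person, -np.asarray(distances))
    mask = fpr <= target_fpr
    return tpr[mask].max() if mask.any() else 0.0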

Results in the VFR public leaderboard will be evaluated on 50% of the test set. The remaining 50% of the test set is reserved to determine the final results of the challenge. Participants are requested to submit the evaluation script eval.py, which should accept the input files test_df.csv and track_order_df.csv, defining the descriptions and the order of tracks, respectively. To evaluate submissions, the participants' code will be executed on the VFR challenge server. More details are available in the How to submit section.

To develop solutions, participants are allowed to use the provided CNN $f$ as well as any other functions extracting information from input images. To encourage original and resource-friendly solutions and to avoid ensembles of many models, we restrict the evaluation time to 15 min on servers with the following hardware configuration:

OS: Ubuntu 18.04

CPU: i7-8700 (6 cores)

RAM: 16 GB

GPU: 2080Ti

Execution time: 15 min

In case of questions regarding the task description, evaluation measures or other competition details, please consult our FAQ and Discussion sections. You can also ask questions in the chat.

Data

Train data

File name | Description | Size
--- | --- | ---
train_df.csv | Paths to the original and warped face images, the coordinates of the face box, the coordinates of the 5 facial landmark points, and other meta information | 155340 x 21
train_gt_df.csv | Meta information for the GT images used to compute recognition accuracy | 4675 x 18
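A typical first step is to load the training metadata with pandas and inspect the available columns; the exact column names are defined by the CSV header and are not reproduced here:

import pandas as pd

# Load the training metadata (155340 rows x 21 columns).
train_df = pd.read_csv("train_df.csv")

# Inspect the columns (image paths, face box coordinates, landmark
# coordinates and other meta information) before building a pipeline.
print(train_df.shape)
print(train_df.columns.tolist())
print(train_df.head())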
[Leaderboard tables (public and final standings) with columns #, Δ, Team, TPR, Mean dist, Submits and Date.]

Submission

To submit a solution, you will need to add it to a GitLab repository.

Working with Git

If you have not previously worked with Git, we recommend using SmartGit, see instructions here.

SmartGit will create two branches in your repository: master and leaderboard. The leaderboard branch is protected, and you can add changes to it only via a merge request.

Submitting a solution via GitLab

  1. Create a team with at least one member (you)
  2. Make a commit (git commit) to the master branch or to any other branch except the leaderboard and send the commit to the server (git push).
  3. Go to the merge request section in your repository and create a merge request from the master branch to the leaderboard branch (or from another branch).
  4. Send a link to the merge request to your teammates, discuss the comments with them, make additional changes, and click the merge button. Your commit will appear in the CI / CD section of GitLab.
  5. The submission result will appear on the leaderboard.

Important note on submissions: you can submit only 3 times per day.

Baseline example

Your solution should meet the following requirements: the launch script should be named eval.py, and it takes four required arguments - the paths to test_df.csv, track_order_df.csv and test_descriptors.npy, and the path where the aggregated descriptors agg_descriptors.npy should be saved.

If you have your own solution (other than eval.py), you can use a custom run.sh launch script. Note that if your solution uses the GPU, the id of the video card will be 0.

Detailed description

1. File name. Rename the main script (the one responsible for training and predictions) to eval.py.

2. Reading the data. Configure eval.py to read data from the CSV files.

When running the script, the command line should take 4 arguments: the paths to test_df.csv, track_order_df.csv and test_descriptors.npy, and the path where the aggregated descriptors will be saved.

Example:

python3 eval.py test_df.csv test_track_order_df.csv test_descriptors.npy agg_descriptors.npy
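A minimal skeleton of eval.py that follows this interface might look as follows; the layout of test_descriptors.npy and the grouping of rows into tracks are assumptions here and must be adapted to the actual data format:

import sys
import numpy as np
import pandas as pd

def main():
    # The four paths are passed on the command line in the order described above.
    test_df_path, track_order_path, descriptors_path, output_path = sys.argv[1:5]

    test_df = pd.read_csv(test_df_path)             # per-image meta information
    track_order_df = pd.read_csv(track_order_path)  # order of tracks for the output
    descriptors = np.load(descriptors_path)         # pre-computed 512-d frame descriptors

    # ASSUMPTION: descriptors are stored as (num_tracks * 10, 512) with the
    # 10 frames of each track in consecutive rows; adapt the grouping to the
    # real layout defined by test_df.csv and track_order_df.csv.
    per_track = descriptors.reshape(-1, 10, descriptors.shape[1])

    # Simple aggregation: average the frame descriptors and re-normalize.
    agg = per_track.mean(axis=1)
    agg /= np.linalg.norm(agg, axis=1, keepdims=True)

    np.save(output_path, agg)

if __name__ == "__main__":
    main()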

3. Saving results. Prediction results should be saved as an npy file agg_descriptors.npy in the directory of eval.py.

Example:

import numpy as np
# results: array of aggregated track descriptors
np.save("agg_descriptors.npy", results)

Additional information

In the repository you will find the file .gitlab-ci.yml. It is already configured to send solutions to the leaderboard.

Important: do not change the contents of the .gitlab-ci.yml file. If you need to do this, contact support team.

Prizes

The winners will share a cash pool of 500,000 RUB and will receive desktops with a depth camera and a neural-network VPU module from our partner Intel. The prizes will be awarded at the annual Machines Can See conference, held in Moscow on 25 June 2019.

1st place - 250,000 RUB + Intel NUC L10 (CPU: i7) + depth camera Realsense D415 + Neural Compute Stick 2

2nd place - 150,000 RUB + Intel NUC L10 (CPU: i5) + depth camera Realsense D415 + Neural Compute Stick 2

3rd place - 100,000 RUB + Intel NUC L10 (CPU: i5) + depth camera Realsense D415 + Neural Compute Stick 2

Terms and Conditions

General rules

  1. One account per participant. Submitting from multiple accounts is not allowed.
  2. No private code sharing outside teams.
  3. External open-source data is allowed, but the source should be listed on the competition forum.
  4. No sharing of competition data or models outside the competition.

Winners rules

Competition winners or their authorized representatives will be required to attend the Machines Can See 2019 conference to receive prizes.

Prize winners will also be requested to:

  1. Provide the Competition Organizer with the final code used to generate the winning Submission, together with the corresponding documentation. The provided code should make it possible to reproduce the results of the winning Submission and should include a description of the resources required for its execution.
  2. Grant the Competition Organizer a non-exclusive license to the winning solution and ensure that the prize winner has unrestricted rights to grant such a license.
  3. Sign and return all Prize acceptance documents as may be required by Competition Organizer.

In case of questions regarding the task description, evaluation measures or other competition details, please consult our FAQ and Discussion sections.

Frequently asked questions

Organization

Q: When is the competition being held?

A: The competition is open from April 29 until June 22, 23:59 MSK, 2019. Winners will be announced at the Machines Can See conference.

Q: What are the ranking criteria?

A: The methods will be ranked according to TPR at FPR=1e-6. See the Details section.

Q: How to take part in the competition?

A: Choose "Log in" on the competition site. You will be able to create a team once registered at GitLab.

Q: How to register for the Machines Can See conference?

A: Participants of the competition are automatically registered to the Machines Can See conference.

Q: Is it possible to register for the competition after its starting date?

A: Yes, the registration will be open until the last day of the competition.

Q: What should I do to join a team to take part in the contest?

A: Please, contact support team.

Q: I’m not able to participate in the conference. Can I still take part in the competition?

A: At least one member from each top-ranked team should participate and present results of the team at the conference.

Technical questions

Q: How to access the repository?

A: Once a team is created, the repository should appear on the projects tab in GitLab.

Q: How to make a submission?

A: Submission details can be found here.

Q: When will my results appear on the leaderboard?

A: The progress status will appear in the CI / CD section of GitLab. The processing time will depend on the time it takes to train and test your solution.

Q: What should I do if the submission status is indicated as "Done" but my results do not appear on the leaderboard?

A: Check the output in the jobs section for errors. If there are no errors and the script has terminated successfully, the problem could be caused by network or hardware issues. Please restart the submission or send the name of your team and the commit identifier to the support team.

Q: How to install packages for Python (pip packages)?

A: Add the list of packages to the file requirements.txt in the standard way.

Q: What environment will be used?

A: Docker based on nvidia/cuda:9.0-cudnn7-runtime

Q: How to install apt packages?

A: Add apt packages to the file apt-packages.txt, one package per line.

Q: Any advice on getting started?

A: In the repository you will find the eval.py file, which contains the baseline model. The file get_scores.py lets you check your result locally using the main competition metric.
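Before pushing a submission, a quick local sanity check of the saved file can catch format errors (assuming the expected output is one L2-normalized 512-dimensional descriptor per track):

import numpy as np

agg = np.load("agg_descriptors.npy")

# One 512-dimensional descriptor per track is expected.
assert agg.ndim == 2 and agg.shape[1] == 512, agg.shape

# Track descriptors should be (approximately) L2-normalized.
norms = np.linalg.norm(agg, axis=1)
assert np.allclose(norms, 1.0, atol=1e-4), (norms.min(), norms.max())

print("agg_descriptors.npy looks OK:", agg.shape)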

You can also ask questions in the chat.