The Visual Object Tracking VOT2015 Challenge Results
Kristan, M
Matas, J
Leonardis, A
Felsberg, M
Čehovin, L
Fernández, G
Vojíř, T
Häger, G
Nebehay, G
Pflugfelder, R
Gupta, A
Bibi, A
Lukežič, A
Garcia-Martin, A
Saffari, A
Petrosino, A
Montero, AS
Varfolomieiev, A
Baskurt, A
Zhao, B
Ghanem, B
Martinez, B
Lee, B
Han, B
Wang, C
Garcia, C
Zhang, C
Schmid, C
Tao, D
Kim, D
Huang, D
Prokhorov, D
Du, D
Yeung, DY
Ribeiro, E
Khan, FS
Porikli, F
Bunyak, F
Zhu, G
Seetharaman, G
Kieritz, H
Yau, HT
Li, H
Qi, H
Bischof, H
Possegger, H
Lee, H
Nam, H
Bogun, I
Jeong, JC
Cho, JI
Lee, JY
Zhu, J
Shi, J
Li, J
Jia, J
Feng, J
Gao, J
Choi, JY
Kim, JW
Lang, J
Martinez, JM
Choi, J
Xing, J
Xue, K
Palaniappan, K
Lebeda, K
Alahari, K
Gao, K
Yun, K
Wong, KH
Luo, L
Ma, L
Ke, L
Wen, L
Bertinetto, L
Poostchi, M
Maresca, M
Danelljan, M
Wen, M
Zhang, M
Arens, M
Valstar, M
Tang, M
Chang, MC
Khan, MH
Fan, N
Wang, N
Miksik, O
Torr, PHS
Wang, Q
Martin-Nieto, R
Pelapur, R
Bowden, R
Laganière, R
Moujtahid, S
Hare, S
Hadfield, S
Lyu, S
Li, S
- Publication Type: Conference Proceeding
- Citation: Proceedings of the IEEE International Conference on Computer Vision, 2015, 2015-February pp. 564 - 586
- Issue Date: 2015-12-07
Closed Access
This item is closed access and not available.
Full metadata record
Field | Value | Language |
---|---|---|
dc.contributor.author | Kristan, M | en_US |
dc.contributor.author | Matas, J | en_US |
dc.contributor.author | Leonardis, A | en_US |
dc.contributor.author | Felsberg, M | en_US |
dc.contributor.author | Čehovin, L | en_US |
dc.contributor.author | Fernández, G | en_US |
dc.contributor.author | Vojíř, T | en_US |
dc.contributor.author | Häger, G | en_US |
dc.contributor.author | Nebehay, G | en_US |
dc.contributor.author | Pflugfelder, R | en_US |
dc.contributor.author | Gupta, A | en_US |
dc.contributor.author | Bibi, A | en_US |
dc.contributor.author | Lukežič, A | en_US |
dc.contributor.author | Garcia-Martin, A | en_US |
dc.contributor.author | Saffari, A | en_US |
dc.contributor.author | Petrosino, A | en_US |
dc.contributor.author | Montero, AS | en_US |
dc.contributor.author | Varfolomieiev, A | en_US |
dc.contributor.author | Baskurt, A | en_US |
dc.contributor.author | Zhao, B | en_US |
dc.contributor.author | Ghanem, B | en_US |
dc.contributor.author | Martinez, B | en_US |
dc.contributor.author | Lee, B | en_US |
dc.contributor.author | Han, B | en_US |
dc.contributor.author | Wang, C | en_US |
dc.contributor.author | Garcia, C | en_US |
dc.contributor.author | Zhang, C | en_US |
dc.contributor.author | Schmid, C | en_US |
dc.contributor.author | Tao, D https://orcid.org/0000-0001-7225-5449 | en_US |
dc.contributor.author | Kim, D | en_US |
dc.contributor.author | Huang, D | en_US |
dc.contributor.author | Prokhorov, D | en_US |
dc.contributor.author | Du, D | en_US |
dc.contributor.author | Yeung, DY | en_US |
dc.contributor.author | Ribeiro, E | en_US |
dc.contributor.author | Khan, FS | en_US |
dc.contributor.author | Porikli, F | en_US |
dc.contributor.author | Bunyak, F | en_US |
dc.contributor.author | Zhu, G | en_US |
dc.contributor.author | Seetharaman, G | en_US |
dc.contributor.author | Kieritz, H | en_US |
dc.contributor.author | Yau, HT | en_US |
dc.contributor.author | Li, H | en_US |
dc.contributor.author | Qi, H | en_US |
dc.contributor.author | Bischof, H | en_US |
dc.contributor.author | Possegger, H | en_US |
dc.contributor.author | Lee, H | en_US |
dc.contributor.author | Nam, H | en_US |
dc.contributor.author | Bogun, I | en_US |
dc.contributor.author | Jeong, JC | en_US |
dc.contributor.author | Cho, JI | en_US |
dc.contributor.author | Lee, JY | en_US |
dc.contributor.author | Zhu, J | en_US |
dc.contributor.author | Shi, J | en_US |
dc.contributor.author | Li, J | en_US |
dc.contributor.author | Jia, J | en_US |
dc.contributor.author | Feng, J | en_US |
dc.contributor.author | Gao, J | en_US |
dc.contributor.author | Choi, JY | en_US |
dc.contributor.author | Kim, JW | en_US |
dc.contributor.author | Lang, J | en_US |
dc.contributor.author | Martinez, JM | en_US |
dc.contributor.author | Choi, J | en_US |
dc.contributor.author | Xing, J | en_US |
dc.contributor.author | Xue, K | en_US |
dc.contributor.author | Palaniappan, K | en_US |
dc.contributor.author | Lebeda, K | en_US |
dc.contributor.author | Alahari, K | en_US |
dc.contributor.author | Gao, K | en_US |
dc.contributor.author | Yun, K | en_US |
dc.contributor.author | Wong, KH | en_US |
dc.contributor.author | Luo, L | en_US |
dc.contributor.author | Ma, L | en_US |
dc.contributor.author | Ke, L | en_US |
dc.contributor.author | Wen, L | en_US |
dc.contributor.author | Bertinetto, L | en_US |
dc.contributor.author | Poostchi, M | en_US |
dc.contributor.author | Maresca, M | en_US |
dc.contributor.author | Danelljan, M | en_US |
dc.contributor.author | Wen, M | en_US |
dc.contributor.author | Zhang, M | en_US |
dc.contributor.author | Arens, M | en_US |
dc.contributor.author | Valstar, M | en_US |
dc.contributor.author | Tang, M | en_US |
dc.contributor.author | Chang, MC | en_US |
dc.contributor.author | Khan, MH | en_US |
dc.contributor.author | Fan, N | en_US |
dc.contributor.author | Wang, N | en_US |
dc.contributor.author | Miksik, O | en_US |
dc.contributor.author | Torr, PHS | en_US |
dc.contributor.author | Wang, Q | en_US |
dc.contributor.author | Martin-Nieto, R | en_US |
dc.contributor.author | Pelapur, R | en_US |
dc.contributor.author | Bowden, R | en_US |
dc.contributor.author | Laganière, R | en_US |
dc.contributor.author | Moujtahid, S | en_US |
dc.contributor.author | Hare, S | en_US |
dc.contributor.author | Hadfield, S | en_US |
dc.contributor.author | Lyu, S | en_US |
dc.contributor.author | Li, S | en_US |
dc.date.issued | 2015-12-07 | en_US |
dc.identifier.citation | Proceedings of the IEEE International Conference on Computer Vision, 2015, 2015-February pp. 564 - 586 | en_US |
dc.identifier.isbn | 9781467383905 | en_US |
dc.identifier.issn | 1550-5499 | en_US |
dc.identifier.uri | http://hdl.handle.net/10453/120648 | |
dc.description.abstract | © 2015 IEEE. The Visual Object Tracking challenge 2015, VOT2015, aims at comparing short-term single-object visual trackers that do not apply pre-learned models of object appearance. Results of 62 trackers are presented. The number of tested trackers makes VOT 2015 the largest benchmark on short-term tracking to date. For each participating tracker, a short description is provided in the appendix. Features of the VOT2015 challenge that go beyond its VOT2014 predecessor are: (i) a new VOT2015 dataset twice as large as in VOT2014 with full annotation of targets by rotated bounding boxes and per-frame attribute, (ii) extensions of the VOT2014 evaluation methodology by introduction of a new performance measure. The dataset, the evaluation kit as well as the results are publicly available at the challenge website. | en_US |
dc.relation.ispartof | Proceedings of the IEEE International Conference on Computer Vision | en_US |
dc.relation.isbasedon | 10.1109/ICCVW.2015.79 | en_US |
dc.title | The Visual Object Tracking VOT2015 Challenge Results | en_US |
dc.type | Conference Proceeding | |
utslib.citation.volume | 2015-February | en_US |
utslib.for | 0801 Artificial Intelligence and Image Processing | en_US |
pubs.embargo.period | Not known | en_US |
pubs.organisational-group | /University of Technology Sydney | |
pubs.organisational-group | /University of Technology Sydney/Faculty of Engineering and Information Technology | |
pubs.organisational-group | /University of Technology Sydney/Students | |
utslib.copyright.status | closed_access | |
pubs.publication-status | Published | en_US |
pubs.volume | 2015-February | en_US |