Google Landmarks Dataset v2 — A Large-Scale Benchmark for Instance-Level Recognition and Retrieval

By Tobias Weyand, Andre Araujo, Bingyi Cao, Jack Sim

Abstract —

While image retrieval and instance recognition techniques are progressing rapidly, there is a need for challenging datasets that accurately measure their performance while posing novel challenges relevant to practical applications. We introduce the Google Landmarks Dataset v2 (GLDv2), a new benchmark for large-scale, fine-grained instance recognition and image retrieval in the domain of human-made and natural landmarks. GLDv2 is the largest such dataset to date by a large margin, including over 5M images and 200k distinct instance labels. Its test set consists of 118k images with ground-truth annotations for both the retrieval and recognition tasks. The ground-truth construction involved over 800 hours of human annotator work. Our new dataset has several challenging properties, inspired by real-world applications, that previous datasets did not consider: an extremely long-tailed class distribution, a large fraction of out-of-domain test photos, and large intra-class variability. The dataset is sourced from Wikimedia Commons, the world’s largest crowdsourced collection of landmark photos. We provide baseline results for both recognition and retrieval tasks based on state-of-the-art methods, as well as competitive results from a public challenge. We further demonstrate the suitability of the dataset for transfer learning by showing that image embeddings trained on it achieve competitive retrieval performance on independent datasets.
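
For context, the retrieval task asks a system to rank a large index of landmark images by relevance to each query image. Below is a minimal NumPy sketch of the kind of global-descriptor baseline the abstract alludes to: precomputed embeddings are L2-normalized and index images are ranked by cosine similarity. The function name, the use of precomputed embeddings, and the cosine-similarity ranking are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

def rank_index_by_similarity(query_embeddings, index_embeddings, top_k=100):
    """Rank index images for each query by cosine similarity of global embeddings.

    query_embeddings: [num_queries, dim]; index_embeddings: [num_index, dim].
    Returns the top_k index image ids per query, highest similarity first.
    (Illustrative sketch only; not the paper's evaluation code.)
    """
    # L2-normalize so that the dot product equals cosine similarity.
    q = query_embeddings / np.linalg.norm(query_embeddings, axis=1, keepdims=True)
    x = index_embeddings / np.linalg.norm(index_embeddings, axis=1, keepdims=True)
    similarities = q @ x.T                           # [num_queries, num_index]
    return np.argsort(-similarities, axis=1)[:, :top_k]
```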

Talking-Heads Attention

By Noam Shazeer, Zhenzhong Lan, Youlong Cheng, Nan Ding, Le Hou

Abstract —

We introduce “talking-heads attention”, a variation on multi-head attention that includes linear projections across the attention-heads dimension immediately before and after the softmax operation. While inserting only a small number of additional parameters and a moderate amount of additional computation, talking-heads attention leads to better perplexities on masked language modeling tasks, as well as better quality when transfer-learning to language comprehension and question answering tasks.
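
The mechanism is easiest to see in code. Below is a minimal single-example NumPy sketch of talking-heads attention as the abstract describes it: attention logits are formed from h_k "logit" heads, mixed across the heads dimension by a learned matrix before the softmax, and the resulting attention weights are mixed again across heads by a second learned matrix after the softmax, before attending to the values. The parameter names and shapes, and the omission of batching and masking, are simplifications in this sketch rather than the paper's reference implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def talking_heads_attention(q, k, v, params):
    """Single-example talking-heads attention sketch.

    q: [n, d_model]; k, v: [m, d_model].
    params holds the projection matrices (shapes noted inline; names are illustrative).
    """
    # Per-head input projections, as in standard multi-head attention.
    # Wq, Wk: [d_model, h_k, d_k]; Wv: [d_model, h_v, d_v]
    qh = np.einsum('nd,dhk->hnk', q, params['Wq'])   # [h_k, n, d_k]
    kh = np.einsum('md,dhk->hmk', k, params['Wk'])   # [h_k, m, d_k]
    vh = np.einsum('md,dhv->hmv', v, params['Wv'])   # [h_v, m, d_v]

    # Scaled dot-product logits per logit head.
    logits = np.einsum('hnk,hmk->hnm', qh, kh) / np.sqrt(qh.shape[-1])  # [h_k, n, m]

    # Talking-heads, part 1: mix logits across the heads dimension
    # with a learned matrix P_logits of shape [h_k, h], BEFORE the softmax.
    logits = np.einsum('hnm,hH->Hnm', logits, params['P_logits'])       # [h, n, m]

    weights = softmax(logits, axis=-1)               # softmax over the keys dimension

    # Talking-heads, part 2: mix attention weights across heads with
    # a learned matrix P_weights of shape [h, h_v], AFTER the softmax.
    weights = np.einsum('Hnm,Hv->vnm', weights, params['P_weights'])    # [h_v, n, m]

    # Weighted sum of values, then output projection Wo: [h_v, d_v, d_model].
    o = np.einsum('vnm,vmv->vnv', weights, vh) if False else \
        np.einsum('vnm,vmd->vnd', weights, vh)       # [h_v, n, d_v]
    return np.einsum('vnd,vdD->nD', o, params['Wo']) # [n, d_model]
```

If the two mixing matrices are set to the identity (with h_k = h = h_v), this reduces to standard multi-head attention, which is why the extra cost is limited to a small number of additional parameters and a moderate amount of extra computation.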

#transfer-learning #data-science #artificial-intelligence #machine-learning #programming

5 AI/ML Research Papers on Transfer Learning You Must Read