Modern machine learning systems such as image classifiers rely heavily on
large-scale data sets for training. Such data sets are costly to create, so
in practice a small number of freely available, open-source data sets are
widely used. We suggest that examining the geo-diversity of open data sets is
critical before adopting a data set for use cases in the developing world. We
analyze two large, publicly available image data sets to assess geo-diversity
and find that they exhibit an observable amerocentric and eurocentric
representation bias. Further, we analyze classifiers trained on
these data sets to assess the impact of these training distributions and find
strong differences in the relative performance on images from different
locales. These results emphasize the need to ensure geo-representation when
constructing data sets for use in the developing world.