Data Portal @ linkeddatafragments.org

ScholarlyData

Search ScholarlyData by triple pattern

Matches in ScholarlyData for { ?s ?p "Can we automatically generate representative and diverse views of the world's landmarks from community-contributed collections on the web? Community-contributed collections of media on the web are becoming a vast, rich resource for images and video on a long-tailed array of topics. We use a combination of context- and content-based tools to generate representative sets of images for location-driven features and landmarks, a common search task. To do that, we use location and other metadata, as well as tags associated with images, and the images' visual features. We present an approach to extracting tags that represent landmarks. We show how to use unsupervised methods to extract representative views and images for each landmark. This approach can potentially scale to provide better search and representation for every landmark, worldwide. We evaluate the system in the context of web image search using a real-life dataset of 110,000 images from the San Francisco area." }

Showing items 1 to 1 of 1 with 100 items per page.
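The same lookup can be issued programmatically against the dataset's Triple Pattern Fragments interface: leave the subject and predicate unbound and pass the abstract as the object literal. The sketch below is a minimal illustration only; the fragment URL is an assumption (the actual URL and query parameter names are advertised by the server's Hydra search template), and the abstract string is truncated here for brevity.

# Minimal sketch of a triple-pattern lookup over HTTP against an assumed
# ScholarlyData fragments endpoint (hypothetical URL).
import requests

FRAGMENT_URL = "http://data.linkeddatafragments.org/scholarlydata"  # assumed endpoint

ABSTRACT = (
    "Can we automatically generate representative and diverse views of the "
    "world's landmarks from community-contributed collections on the web? ..."
)

# Bind only the object; literals are passed in their quoted form,
# leaving ?s and ?p as free variables.
params = {"object": f'"{ABSTRACT}"'}

response = requests.get(
    FRAGMENT_URL,
    params=params,
    headers={"Accept": "text/turtle"},  # fragments are typically also served as Turtle
)
response.raise_for_status()

# The response contains the matching triples plus Hydra metadata
# (estimated total count and next-page link) used for client-side paging,
# which is where the "Showing items 1 to 1 of 1" figure above comes from.
print(response.text)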