Authors : Patrick Obrien, Kenning Arlitsch, Leila Sterman, Jeff Mixter, Jonathan Wheeler, Susan Borda
A primary impact metric for institutional repositories (IR) is the number of file downloads, which are commonly measured through third-party Web analytics software. Google Analytics, a free service used by most academic libraries, relies on HTML page tagging to log visitor activity on Google’s servers.
However, Web aggregators such as Google Scholar link directly to high-value content (usually PDF files), bypassing the HTML page entirely, so these direct-access events are never registered.
This article presents results from a study of four institutions demonstrating that the majority of IR activity is not counted by page-tagging Web analytics software, and proposes a practical solution that significantly improves the relevance and accuracy of IR performance metrics reported through Google Analytics.
Title : Undercounting File Downloads from Institutional Repositories
DOI : http://dx.doi.org/10.1080/01930826.2016.1216224
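The undercounting problem above stems from page tagging being client-side: a direct PDF request never executes the tracking JavaScript. One commonly described workaround, sketched below, is to log the download server-side by sending a hit to Google's legacy (Universal Analytics) Measurement Protocol when the file is served. This is an illustrative sketch, not the authors' exact method; the tracking ID, client ID, and file path are placeholder values.

```python
from urllib.parse import urlencode

def build_download_hit(tracking_id, client_id, file_path, title):
    """Build a Measurement Protocol request URL recording a direct
    file download as a pageview hit (legacy Universal Analytics).

    All argument values here are caller-supplied placeholders.
    """
    params = {
        "v": "1",            # Measurement Protocol version
        "tid": tracking_id,  # GA property ID, e.g. "UA-12345-1" (placeholder)
        "cid": client_id,    # anonymous client identifier
        "t": "pageview",     # hit type: count the download as a pageview
        "dp": file_path,     # document path of the downloaded file
        "dt": title,         # document title shown in GA reports
    }
    return "https://www.google-analytics.com/collect?" + urlencode(params)

# A repository's download handler would issue an HTTP request to this
# URL after serving the PDF, so the hit is logged even though no HTML
# page (and no tracking JavaScript) was ever rendered.
url = build_download_hit("UA-12345-1", "555", "/downloads/paper.pdf",
                         "Example Paper")
```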
Authors : Sara Mannheimer, Leila Belle Sterman, Susan Borda
This article analyzes twenty cited or downloaded datasets and the repositories that house them, in order to produce insights that can be used by academic libraries to encourage discovery and reuse of research data in institutional repositories.
Using Thomson Reuters’ Data Citation Index and repository download statistics, we identified twenty cited/downloaded datasets. We documented the characteristics of the cited/downloaded datasets and their corresponding repositories in a self-designed rubric.
The rubric includes six major categories: basic information; funding agency and journal information; linking and sharing; factors to encourage reuse; repository characteristics; and data description.
Our small-scale study suggests that cited/downloaded datasets generally comply with basic recommendations for facilitating reuse: data are well documented, formatted for use with a variety of software, and shared in established, open access repositories.
Three significant factors also appear to contribute to dataset discovery: publishing in discipline-specific repositories; indexing in more than one location on the web; and using persistent identifiers.
The cited/downloaded datasets in our analysis came from a few specific disciplines, and tended to be funded by agencies with data publication mandates.
The results of this exploratory research provide insights that can inform academic librarians as they work to encourage discovery and reuse of institutional datasets.
Our analysis also suggests areas in which academic librarians can target open data advocacy in their communities in order to begin to build open data success stories that will fuel future advocacy efforts.
Title : Discovery and Reuse of Open Datasets: An Exploratory Study
DOI : http://dx.doi.org/10.7191/jeslib.2016.1091