I just returned from the 31st meeting of IATUL, which provides a forum for the exchange of ideas relevant to librarianship in technological universities throughout the world. Data was on everyone's mind! It was amazing to see the work being done on DataCite, out of Germany, and on ANDS, an Australian effort. Projects like these make datasets extremely visible. There is obviously still a lot of work to be done by subject bibliographers and collection development specialists to acquire datasets, and to work with faculty to make sure that datasets get deposited into institutional repositories. But I have a lot of faith that, with deep collaboration, large amounts of data will become publicly accessible over the next few years.

The recent NIH and NSF mandates make this work crucial. The Library wants to be at the center of a strategic data management plan! Hard work between DataCite and publishers such as Elsevier ensures that users can link easily from an article to the publicly accessible data behind the research. Even readers who don't have access to the article itself will have access to the abstract and to the data that supports it. A recent research paper in Nature Precedings found that open data attracts a disproportionate share of citations: "48% of trials with publicly available microarray data received 85% of the aggregate citations."
Libraries can incorporate this work into their local catalogs and make data easily findable.
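As a rough illustration of what "findable" might mean in practice, here is a minimal sketch of turning a dataset's DOI metadata into a one-line catalog citation. The record below is entirely made up, written in the general shape of DataCite metadata; the field names and values are assumptions for illustration, not an actual API response.

```python
import json

# A hypothetical dataset record in the general shape of DataCite metadata.
# All names, DOIs, and field choices here are invented for illustration.
record_json = """
{
  "doi": "10.1234/example.dataset.1",
  "creators": [{"name": "Doe, Jane"}, {"name": "Smith, Alex"}],
  "title": "Microarray measurements from open-data trials",
  "publisher": "Example University Repository",
  "publicationYear": 2010,
  "resourceType": "Dataset"
}
"""

def catalog_entry(record: dict) -> str:
    """Format a DataCite-style record as a one-line catalog citation."""
    authors = "; ".join(creator["name"] for creator in record["creators"])
    return (f'{authors} ({record["publicationYear"]}). '
            f'{record["title"]} [{record["resourceType"]}]. '
            f'{record["publisher"]}. https://doi.org/{record["doi"]}')

record = json.loads(record_json)
print(catalog_entry(record))
```

The point of the sketch is that once a dataset has a DOI and structured metadata, a catalog record is a straightforward transformation, and the DOI link carries the user straight to the data.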
We will see lots of efforts around data curation in the next few years. We are in a whole new dataland!