Session 1: Crowdsourcing and Open Source
Can crowdsourcing solutions serve many masters? Can they benefit both the layman and native speakers of minority languages on the one hand, and serious linguistic research on the other? How did an infrastructure that was designed to support linguistics turn out to be a solution for raising awareness of native languages?
Since 2012 the National Library of Finland has been developing the Digitisation Project for Kindred Languages, whose key objective is not only to support a culture of openness and interaction in linguistic research, but also to promote crowdsourcing as a tool for the participation of the language community in research. In the course of the project, over 1,200 monographs and nearly 111,000 pages of newspapers in Finno-Ugric languages will be digitised and made available in the Fenno-Ugrica digital collection. This material was published in the Soviet Union in the 1920s and 1930s, and users have previously had only sporadic access to it.
The publication of open-access, searchable materials from this period is a goldmine for researchers. Historians, social scientists and laymen with an interest in specific local publications can now find text materials pertinent to their studies. The linguistically-oriented population can also find writings to delight them: (1) lexical items specific to a given publication, and (2) orthographically documented specifics of phonetics. In addition to the open-access collection, we developed an open-source OCR editor that enables the editing of machine-encoded text for the benefit of linguistic research. This tool was necessary because these rare and peripheral prints often include archaic characters, which are neglected by modern OCR software developers but belong to the historical context of the kindred languages, and are thus an essential part of their linguistic heritage.
When modelling the OCR editor, it was essential to consider both the needs of researchers and the capabilities of lay citizens, and to have them participate in the planning and execution of the project from the very beginning. By implementing feedback from both groups iteratively, it was possible to transform the requested changes into research tools that not only supported the work of linguists but also encouraged citizen scientists to take up the challenge and work with the crowdsourcing tools for the benefit of research.
This presentation will deal not only with the technical aspects, developments and achievements of the infrastructure, but will also highlight the way in which the user groups, researchers and lay citizens, were engaged in the process as an active and communicative community of users, and how their contributions were put to mutual benefit.
Jussi-Pekka Hakkarainen graduated from the University of Turku (General History) and the University of Helsinki (West and South Slavonic Languages and Cultures; Czech). In recent years, he has been employed by the National Library of Finland, where his assignments have related to e-publishing, the Polonica collection and international co-operation on digitisation projects. He is also preparing a PhD thesis on ‘The Scientific and Political Networks of the Finnish Slavists in 1921-1925’. Currently, he is leading the Digitisation Project for Kindred Languages (2012-2016) at the National Library of Finland. Funded by the Kone Foundation, this project will produce digitised materials in the Uralic languages as well as development tools to support linguistic research and citizen science. The resulting materials will constitute the largest resource for the Uralic languages in the world. Through this project, researchers will gain access to corpora which they have not been able to study before, and to which all users will have open access regardless of their place of residence. The materials will be made available to both researchers and citizens through the National Library’s Fenno-Ugrica collection.
Open access, open source software and open data have been attracting increased attention in the library scene for years. The development of the first open source library system started almost fifteen years ago, but the library community as a whole did not begin to adopt open source alternatives until recently. Meanwhile, other open source applications have gained in popularity on the library scene.
Compared to commercial systems, open source software holds numerous compelling advantages, such as low cost, flexibility and freedom, which make switching from proprietary software a tempting strategy for many libraries. Although the software itself may be free, it still carries associated costs: keeping the software up and running requires both human and technical resources, and that is where the costs come in. Thus it is important to be aware of the costs, commitments and risks that come with this strategy. Open source can be a great asset to any library, but libraries need to be ready to accept the risks associated with it.
Choosing a piece of open source software can be tricky due to the ever-growing number of open source projects. A bad choice can result in money being wasted and valuable time being spent pursuing the wrong directions. How, then, can these oversights be avoided and the right alternative found? What should be taken into consideration when choosing open source software?
This paper is about evaluating open source software, and it presents a group of guidelines, which can be divided into five categories:
- Evaluating features and functionality
- Evaluating technologies and software architecture
- Evaluating software licensing
- Evaluating the community
- Evaluating one’s own organisation and its resources
The paper will discuss each category in detail and introduce useful guidelines for libraries considering an open source strategy. It will also present a case study which focuses on evaluating open source software as a part of the New Library System (NLS) project coordinated by the National Library of Finland (NLF). The aim of the project is to build a modern multi-tenant library service platform, which with the public interface Finna will offer a core service infrastructure for all Finnish libraries. The emphasis of the presentation is on the guidelines and the case study, but the benefits and shortcomings of open source software will also be discussed.
Petteri Kivimäki is an Information Systems Specialist in the National Library of Finland. He has a BSc in software engineering and has worked on customer service in libraries since 2000, and in library information technology since 2004. He has worked on software projects of different sizes and has both used and developed open source software. He specialises in Java/J2EE technologies.
Session 2: Linked Open Data
2.1 Nuno Freire, The European Library, The Netherlands; Michael Mertens, Research Libraries UK (RLUK), UK
Insights and Outcomes from the Experience of The European Library and Research Libraries UK in Providing Linked Open Data
The benefits of Linked Data are nowadays widely accepted within the library community, with many parallel activities on-going to establish standards and best practices. This paradigm of data representation nevertheless brings many new challenges to libraries. The generic nature of the data representation used in Linked Data, while it allows any community to manipulate the data seamlessly, also leaves many open paths to its implementation, and within the library sector there is not yet an established practice of how best to publish library linked data.
The European Library makes available a dataset of Linked Open Data (LOD), derived from the collections aggregated from member libraries, in order to promote and facilitate the reuse of the data by other libraries and communities. This presentation describes the experience of The European Library in the creation of this linked dataset, the close co-operation with the RLUK consortium in this line of work, and the perspectives, of these two organisations, on the benefits of linking and opening library data in large aggregation contexts.
The dataset includes national bibliographies, library catalogues, and research collections covering both digital and physical resources. An additional subset covers the linking of subject heading systems widely used in Europe, which is conducted by several libraries co-operating under the MACS project (Multilingual Access to Subjects). Making the outcomes of MACS available as LOD is likely to be a key factor in promoting their re-use, since ontology and vocabulary building and alignment are key aspects in many areas, including research infrastructures, and are not restricted to bibliographic resources.
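To make the LOD representation concrete, a subject-heading alignment of the kind MACS produces can be expressed as a SKOS mapping triple. The sketch below emits one such triple in N-Triples syntax; both concept URIs are invented placeholders, not actual MACS or library data:

```python
# Emit a SKOS alignment triple in N-Triples syntax.
# The two concept URIs below are hypothetical placeholders.
SKOS = "http://www.w3.org/2004/02/skos/core#"

def skos_mapping(concept_a, concept_b, relation="closeMatch"):
    """Return one N-Triples line linking two subject-heading concepts."""
    return f"<{concept_a}> <{SKOS}{relation}> <{concept_b}> ."

triple = skos_mapping(
    "http://example.org/lcsh/Libraries",        # e.g. an LCSH-style concept
    "http://example.org/rameau/Bibliotheques",  # e.g. a RAMEAU-style concept
)
print(triple)
```

Publishing alignments in this generic form is what lets any Linked Data consumer, inside or outside the library sector, follow the link between the two vocabularies.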
The task of creating LOD is demanding in terms of human and computational resources, and requires a wide range of expertise in information science and semantic technology. In this context, library aggregators provide an organisational environment in which conducting LOD activities becomes less demanding for libraries: such organisations can leverage the existing information and communication technologies that are part of their operations, the centralisation of data, and their expertise in both library data and the semantic web.
The presentation will also cover the practical upshots of using LOD at the Hackathon that RLUK will be conducting in May 2014, as well as point to potential services that could be built on LOD and eventually supplied or facilitated by The European Library.
Nuno Freire is a Chief Data Officer at The European Library. He holds a PhD in Informatics and Computer Engineering from the Instituto Superior Técnico at the Technical University of Lisbon. During his whole career he has been involved in data-oriented projects in the area of digital libraries. His areas of interest include information systems, information retrieval, information extraction, data quality, and knowledge representation, particularly in their application to digital libraries and bibliographic data.
Mike Mertens is Deputy Executive Director and Data Services Manager of RLUK. He has held posts at the University of Birmingham, in Bibliographical Services (NFF-funded and RSLP projects) and Research Support. At that time he also undertook web portal work for Intute (Eurostudies) and the Foreign & Commonwealth Office. He currently acts as an adviser on the Archives Hub Steering Committee, the Copac Steering Committee, the LIBER Steering Committee for Digitisation and Resource Discovery and the JISC/RLUK Discovery Advisory group, and has served on the Digital Preservation Coalition Board. He is still research active, and has recently collaborated on the volume Rolf Gardiner: Folk, Nature and Culture in Interwar Britain (2010), and has also worked with the artist Olivia Plender on the project ‘Life in the Woods’ (2011) and her exhibition ‘More or Less Witches 2/3: Tests Ritualized’ (Paris, 2012), using folk culture for new forms of collective art practice and education.
Session 3: Innovation
3.1 Lorraine Joanne Beard, Nick Campbell, University of Manchester, UK
Innovation and Ice Cream – Leadership, Strategy and Engagement at the Heart of Innovation at the University of Manchester Library
This practice-based fun and interactive workshop will outline the University of Manchester Library’s strategic approach to embedding innovation into the core of Library service provision and will highlight recent examples of exciting projects that have emerged from our drive to become more innovative. These will include our Eureka Library Innovation Challenge event held last summer which attracted national interest, our plans to introduce gamification into mainstream library services and our recent Library Wellbeing campaign. It will focus on how to unlock customer potential and creativity, establish your Library’s innovation agenda and culture and form new partnerships with user groups.
As the lead of the Library’s Innovation agenda and the project manager of Eureka!, we would like to tell you what we have learnt from these projects and share how this can help you in your quest to optimise opportunities for innovation and to maximise student and staff engagement. The Eureka competition has proved a catalyst in encouraging creative thinking across our Library staff and dovetails perfectly with our Library’s strategic aims and objectives. Eureka! engaged the Library with students, staff and other University stakeholders, while at the same time providing us with key marketing opportunities to promote the Library. As a consequence, it heightened the profile of the Library across the rest of the University and, perhaps most importantly, it allowed us to gain ideas for service innovation from a customer perspective, all with a little help from some ice cream!
Want to know more?
- Eureka! YouTube videos
- Guardian newspaper article by Librarian and Director Jan Wilkinson
In June 2013 we presented this workshop at the Summer SCONUL Conference in Dublin, detailing the Eureka! model to library staff from throughout the UK and Ireland, indicating how they might map this student/staff driven idea into their own innovation policy: the feedback from this session was excellent.
Lorraine Beard leads a team which manages and develops the Library’s eLibrary infrastructure, digitisation infrastructure, the Library Management System and the Institutional Repository. Lorraine is a member of both the Library Leadership Team and the IT Leadership Team for the University of Manchester, and in these roles has been involved in the development of the Library and IT Strategies for the University. Recently, Lorraine has been closely involved in improving the user experience at Manchester by implementing a new LMS and developing a research data management service. She has also led the development of the Library’s digitisation strategy and led the team which has developed the University’s institutional repository, Manchester eScholar, which is now one of the largest in the UK. She is also the Library’s lead on innovation, and has led a number of new projects and services emerging from this including the Eureka innovation challenge competition and gamification. She has had various roles in academic libraries over the last eighteen years, including Faculty Librarian and Electronic Resources Librarian. Before pursuing a career in libraries, she worked for several years in biological sciences research, after finishing her degree in biology at the University of Manchester.
Nick Campbell is Academic Engagement Librarian in the University of Manchester Library, managing the Library’s relationship with all six Medical and Human Sciences Research Institutes. A key member of the Library’s innovation group, he has been the lead Project Manager for both Library Innovation Challenge Events. The ‘Eureka’ competition asked students to devise an innovative concept or design to enrich the library experience for all users. The host for the last event was the TV performer Phill Jupitus, and the event in March 2014 attracted both national and international interest. He is an active member of the Library’s Leadership Development Network and is also part of the sub-group which plans events and training for aspirational career-oriented library colleagues. Prior to becoming a librarian, he worked in the finance sector.
This paper presents options for implementing gamification techniques in academic libraries, providing answers to such fundamental questions as why to gamify services in academic libraries and how to do so. An illustrative theoretical solution for the gamification of the DART portal (www.dart-europe.eu) is shown, based on research for a Master’s paper defended at the University of Belgrade in November 2013. Gamification is the practice of introducing game elements into a non-game environment in order to achieve a specific business goal or to modify behaviour. It has been successfully applied in many industries, but librarianship is still lagging behind. The paper provides the basic theoretical background and singles out some of the existing services in academic libraries that are inherently suitable for gamification. Game elements such as game dynamics and game mechanics are presented together with the motivating drivers they produce in users. Conflicting motivators, which are often the reason for unsuccessful gamification, are explained.
A theoretical solution for a two-step process of gamification of the DART portal is presented. Step one deals with the gamification of existing DART services such as searching and downloading theses, and is a good example of how similar services may be gamified relatively easily and with only limited resources. Step two deals with options for the creation of a user community that may have a profound impact on broadening the DART partner base and on the use and visibility of its resources; it is a good example of how wide-ranging and long-term business effects can be accomplished through gamification. The use of game components such as points, badges and leaderboards is explained. The importance of the triangle connecting motivation, action and feedback is underlined, along with the necessity of applying appropriate game elements and tools in an academic environment with its specific existing motivators. The story that provides a framework for user participation in a gamified system is defined. The importance of fun in a successfully gamified system is explained, along with the role of tools that introduce an element of chance into the system. The paper advocates the use of gamification in academic libraries and underlines the need for education for academic librarians implementing gamification in their libraries.
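The game components mentioned above (points, badges, leaderboards) and the motivation–action–feedback triangle can be illustrated with a minimal sketch; the action names, point values and badge thresholds below are hypothetical, not taken from the DART study:

```python
from collections import defaultdict

# Hypothetical point values and badge thresholds for DART-style actions;
# the numbers are purely illustrative.
POINTS = {"search": 1, "download_thesis": 5, "share_link": 3}
BADGES = {"Explorer": 10, "Researcher": 50}

class Gamified:
    """Track points, award badges, and rank users on a leaderboard."""

    def __init__(self):
        self.scores = defaultdict(int)

    def record(self, user, action):
        # Feedback loop: every tracked action immediately adds points.
        self.scores[user] += POINTS.get(action, 0)

    def badges(self, user):
        return [name for name, need in BADGES.items()
                if self.scores[user] >= need]

    def leaderboard(self, top=3):
        return sorted(self.scores.items(), key=lambda kv: -kv[1])[:top]

g = Gamified()
for _ in range(2):
    g.record("ana", "download_thesis")   # 2 x 5 points -> "Explorer" badge
g.record("ben", "search")                # 1 point
print(g.badges("ana"), g.leaderboard())
```

Even a skeleton like this shows why motivators must be chosen carefully: if the point values reward the wrong action (say, searching over downloading), the feedback loop reinforces behaviour that conflicts with the service's actual goal.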
Adam Sofronijevic has an MSc in management, an MSc in LIS and a BSc in IT. In December 2012, he completed his PhD in LIS, Faculty of Philology, University of Belgrade. He also holds the CISCO CCNA industrial certificate and numerous Coursera, Udacity and MITx online certificates in the fields of ICT, management and humanities. His first Master’s thesis was on the library business model, and the second was on Enterprise 2.0, both from the University of Belgrade. He has six years’ experience in research in Serbian libraries and in the implementation of IT solutions in the library environment. He has four years’ experience of managing a department in the Svetozar Marković University Library; and implementation experience in kick starting Enterprise 2.0 tools in the library environment. He regularly publishes in national and international journals, including IF journals, and he presents at international conferences.
Session 4: Open Access
4.1 Laurence Bebbington, University of Aberdeen, UK
Whose Property Is It Anyway? Part 1: How Do Changes to UK Copyright Law Enhance Access to Research and Stimulate Green Open Access to Scholarly Research Outputs?
This paper firstly reviews changes to UK copyright law introduced in April 2014. The impact of the changes on European research and researchers is summarised. How changes to educational, research, and library and archive exceptions will impact on researcher activities in the UK is discussed. Their effects on access to UK print and digital research collections by European and international researchers are also covered. How exceptions new to the UK (e.g. text and data mining) will affect research and create new research opportunities is explained. The paper reviews how UK Government plans for further changes to UK law (on orphan works, extended collective licensing, etc.) will continue to transform the legal context of the UK research landscape, either reinforcing or reducing territorial and other barriers erected by copyright to the conduct and pursuit of individual or collaborative cross-border research. Will the changes foster and promote opportunities to advance the European research agenda (e.g. by mass digitisation projects) or will existing barriers to research merely be reinforced?
The second part of the paper provides key legal background for Chris Banks’s linked paper on Open Access in the UK. Discussion of how to advance OA to research outputs is once again of fundamental concern to research libraries and researchers. As governments and research funding bodies seek to promote access to publicly funded research, particularly by Green OA, Green OA itself is under attack by attempts to restrict the Green route by removing Green archiving options, and other tactics. The legal background to OA but particularly Green OA will be outlined. The continuing role, and implications, of assignments and licences in scholarly publishing are discussed. The legal status of previous drafts of scholarly articles (as opposed to final versions of record) is systematically considered, including discussion of relevant legal precedents. The key role of standard instruments (such as the SPARC and CALTECH Author Addenda, the Edinburgh Licensing Addendum) to license back rights essential for progression and protection of open access to research outputs is highlighted, as is the role of Creative Commons in licensing access to research outputs.
Copyright, contract and licensing provide the framework for lawful access to scholarly research outputs: this paper provides a state-of-the-art review as to how healthy the current position is in promoting open access to research.
Laurence Bebbington has been Deputy Librarian and Head of Library Services at the University of Aberdeen since December 2009. Previously, he has worked at the universities of Glasgow, Strathclyde, Cambridge, Birmingham and Nottingham. He has experience of many subject disciplines including medical sciences, criminology, science, social sciences, business, education, and law. He also has experience of managing all key library support systems and services including acquisitions, cataloguing, systems, serials, circulation and document supply, and subject and academic liaison work. He has published in the UK on legal issues in library and information work, including copyright and other IPRs, freedom of information, data protection, accessibility law, and contracts and licences. He has also presented papers, seminars and workshops in the UK and abroad on all of these legal areas as they affect the information and library profession. Currently he chairs the Scottish Confederation of University and Research Libraries (SCURL) Legal Issues Group; is a member of the Society of College and National University Libraries Copyright Group; a member of the Universities UK/Guild HE Copyright Negotiating Group; and sits on two UK Government Working Groups investigating diligent searching, and pricing and licensing of orphan works under the 2013 Enterprise and Regulatory Reform Act.
4.2 Chris Banks, Imperial College London, UK
Whose Property Is It Anyway? Part 2: The Challenges in Supporting the UK’s Main Research Funder Agendas which Seek to Ensure that the Outputs from Publicly-Funded Research are Published Open Access
The Open Access movement has been underway for well over a decade. In 2012 a UK group led by Dame Janet Finch produced what is now referred to as the Finch Report. The Report argued that the outputs of publicly funded research should be published Open Access and that, in order to achieve a step change, and to ensure immediate access to published findings, Gold Open Access was their preferred route. The UK’s Research Councils (RCUK) quickly stepped in with a mandate that from April 2013 all publications arising from research funded by one of the RCUK funders should be published Open Access, ideally without embargo and with a license which permits maximum use and re-use. At the time of writing, it looks likely that the Higher Education Funding Council will mandate that all outputs submitted for the next Research Excellence Framework (REF) (the means by which research funding is allocated directly to UK Universities) will also need to have been published as Open Access at, or very soon after, the date of publication.
This paper will summarise the various funder requirements, the administrative challenges for libraries and publishers alike, and the opportunities that have arisen enabling university libraries to forge new working relationships with other university departments. It will explore academic and publisher responses to the funder mandates, and in particular, the new publisher models that are emerging, apparently in response to this new more directive approach to open access publishing. It will outline the opportunities, as seen from the library perspective, for new business relationships between university libraries, publishers, subscription agents and for CRIS systems. It will explore the challenges of monitoring compliance with the new funder mandates. It will also outline how some libraries are developing new services to support researchers. In particular, the issues of copyright, licensing and creative commons mentioned in the earlier paper by Laurence Bebbington will be highlighted to show where new services can be developed, which can both support academic license choices and maximise the early access to the peer-reviewed research arising from public funding.
Chris Banks is Director of Library Services at Imperial College London, joining in September 2013. She has previously worked at the University of Aberdeen where she was University Librarian and Director of Library, Special Collections and Museums, and at the British Library, where she spent over twenty years in a variety of curatorial, management and strategic roles. At Aberdeen she was the Library lead on the award-winning £57m Library project and The Sir Duncan Rice Library was opened by Her Majesty The Queen in September 2012. She is a Fellow of the Royal Society of Arts (FRSA) and contributes to the wider library and information profession. She is currently an elected Board Member of Research Libraries UK, as well as acting on the advisory boards of several major publishers.
Session 5: Improved Access to Information
Over the last couple of years, Leiden University Library has carried out a number of projects to improve access to its Special Collections, both in physical and digital ways. This presentation will focus on the infrastructure behind these projects, which was designed to meet two key requirements: better and faster service for the clients; and a more efficient workflow for the employees of the Library. What are the principles underlying this infrastructure?
- All metadata are synchronised automatically between the various systems, with OCLC GGC as the starting point. Aleph and DigiTool, both by Ex Libris, are used for services and as the digital image repository respectively. All records in OCLC GGC will also be available in WorldCat, thus improving their discoverability.
- Ex Libris’ Primo is used for discovery, scan orders and viewing requests. For this purpose, order buttons have been added to all special collections in the catalogue. Clients can place their orders 24/7 from anywhere in the world and pay by credit card, PayPal or bank transfer.
- Clients can order scans of all materials, both catalogued and non-catalogued, digitised or non-digitised.
- All scans, whether made in projects or in the digitisation-on-demand workflow, can be viewed for free. Original TIFF scans and PDFs can be ordered through the reproduction service.
- To organise the digitisation on demand workflow we use the open-source software application Goobi. Scan requests in Primo are sent to Goobi automatically and at the same time connected to the bibliographic metadata imported from the catalogue. The software is also used to make METS files and add structural metadata. Scans of completely scanned materials are exported to DigiTool and delivered to the client automatically.
- Scans made in projects by external vendors are batch-uploaded into DigiTool with a locally developed tool called MEGI (METS ingester and uploader).
- An FTP server is used to deliver the ordered scans to the client quickly and securely.
By connecting the available systems on the one hand and developing local additions on the other, researchers and students are offered easier and faster ways to view materials in the reading room, and to order, pay and receive scans.
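One concrete piece of the workflow above is the METS packaging of scans with structural metadata. As a rough illustration only (the object identifier and file names are hypothetical, and real METS profiles carry far more descriptive and administrative metadata), a minimal METS document with a file section and a physical structMap can be built with Python's standard library:

```python
import xml.etree.ElementTree as ET

# Namespaces used by METS documents.
METS = "http://www.loc.gov/METS/"
XLINK = "http://www.w3.org/1999/xlink"
ET.register_namespace("mets", METS)
ET.register_namespace("xlink", XLINK)

def build_mets(object_id, scan_files):
    """Wrap a list of scan files in a METS fileSec plus a physical structMap."""
    root = ET.Element(f"{{{METS}}}mets", {"OBJID": object_id})
    grp = ET.SubElement(ET.SubElement(root, f"{{{METS}}}fileSec"),
                        f"{{{METS}}}fileGrp", {"USE": "master"})
    book = ET.SubElement(ET.SubElement(root, f"{{{METS}}}structMap",
                                       {"TYPE": "physical"}),
                         f"{{{METS}}}div", {"TYPE": "book"})
    for order, name in enumerate(scan_files, start=1):
        fid = f"FILE{order:04d}"
        f = ET.SubElement(grp, f"{{{METS}}}file", {"ID": fid})
        ET.SubElement(f, f"{{{METS}}}FLocat",
                      {"LOCTYPE": "URL", f"{{{XLINK}}}href": name})
        # Each page div points back at its scan via an fptr element.
        page = ET.SubElement(book, f"{{{METS}}}div",
                             {"TYPE": "page", "ORDER": str(order)})
        ET.SubElement(page, f"{{{METS}}}fptr", {"FILEID": fid})
    return ET.tostring(root, encoding="unicode")

mets_xml = build_mets("demo-object-001", ["scan_0001.tif", "scan_0002.tif"])
print(mets_xml)
```

A package in this form is what lets a repository such as DigiTool ingest a batch of scans together with their page order, rather than as a loose pile of image files.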
Saskia van Bergen works as a Senior Project Manager for the Innovations and Projects Department in Leiden University Library. She is responsible for projects focusing on digital access to Special Collections, and deals with the management of digitisation, cataloguing and digital collections. She has also participated in several national projects, such as Early Dutch Books Online (now Delpher) and the Dutch portal for academic heritage Academischecollecties.nl.
Libraries are all facing the task of ensuring their relevance in today’s technologically advanced society. Strategic planning sessions, staff meetings, etc., provide a venue for asking questions and developing goals for improvement. What is less often considered, however, is seeking input from the actual library users themselves. To stay relevant, libraries have to consider the needs and wants of their patrons, and modify their services to meet those needs. Without seeking input from our actual users, we are ‘shooting in the dark’, cutting ourselves off from the beneficial suggestions and ideas that our users can contribute, and not knowing whether or not we are meeting their needs.
One way to identify the needs of users is through User Experience (UX), a process that is becoming common (according to the literature) at many academic libraries in North America. The User Experience process allows us to examine many aspects of the patrons’ interaction with the library, such as:
- How the physical layout of the library contributes to user satisfaction
- The digital and personnel services offered
- The website and its ‘user-friendliness’
- How patrons engage or interact with these services
UX provides an opportunity for libraries to develop their own questions and to shape the interview so as to understand better what is pertinent to their needs. This paper presents a review of two UX studies conducted at the British Columbia Institute of Technology (BCIT). BCIT is British Columbia’s premier polytechnic and has five campuses located in the Metro Vancouver region.
One of the studies was a walk through of the library with students examining the various service points. The other study examined the library home web page; analysing usability in terms of how easily, efficiently and satisfactorily some of our services are used by students to achieve their goals. Participants will come away from this workshop with a knowledge of:
- Identifying the target audience and selecting student participants
- Preparing library staff (how did you sell it to them?)
- The methodological considerations used
- The results and how we analysed them
- The impact of the results (i.e. what did we change as a result?)
- Evaluation of the UX project
- Recommendations for future UX projects
The review and its findings are a further reflection of the need for user experience and usability studies for BCIT Library, and for extending these studies to faculty and staff.
Anthony James O’Kelly is the User Experience Coordinator and Reference Librarian at the British Columbia Institute of Technology (BCIT), Western Canada’s premier polytechnic, offering programmes in Business, Health and Computer Sciences as well as many more. While at BCIT he has worked in many capacities as Marketing Coordinator and Public Services Coordinator. He is also responsible for helping faculty and students with research in programmes such as Forensics, Communication and Liberal Studies.
Libraries’ mission is to provide every person with equal access to information. A variety of international and national laws mandate the accessibility of physical and digital public services to people of differing abilities. In the context of digital libraries, accessibility requires special attention to the design and implementation of functionalities for users who have perceptual impairments. This paper focuses on the needs of blind people who use screen readers to access online services. The paper presents a case study of making a digital library service accessible and usable for this target group. The case is the public interface of the Finnish Digital Library, also called Finna. The study covered a short period of Finna’s development, including three phases in the cycle of service development iterations. These phases were: 1) the first evaluation of Finna’s accessibility; 2) the implementation of accessibility improvements; and 3) the re-design of accessibility features. These phases aimed at ensuring the accessibility of the service when the digital library was launched to a wider public.
The first phase, evaluation, was conducted on the test version of Finna. The evaluation was based on data that was acquired with various usability evaluation methods, including user tests. The results indicated several accessibility issues in the Finna service. The second phase, implementation, was analysed using the data from the software development logs. The tasks concerning accessibility indicated the improvements that were feasible to implement. The data for analysing the third phase, re-design, was collected from logs, and an interview with the head interaction designer of the Finna service. The results showed a transformation in the way the developers understood the concept of accessibility. Instead of aiming at providing an adequate level of technical access, they now strive for a better service experience for the blind users of Finna.
The findings from this case study highlight the challenges of providing usable accessibility for impaired users of a digital library. It is essential to understand that the formal accessibility requirements and guidelines for public services do not guarantee the same level of usability for all users. This notion will guide the further development of Finna. The paper ends with a discussion about the applicability of the experience of this case study in the development or procurement of other online library services.
Heli J. Kautonen is Head of Services at the National Library of Finland. Her work experience includes coordination of online services at the Finnish Literature Society, and management of extensive software development projects in the field of cultural heritage and education (e.g. the EU Project CUBER). She has an MA in Art Education from Helsinki University of Art and Design, and is continuing her PhD studies at the Aalto University School of Science. Her research interests focus on strategic usability in the field of digital libraries, digital services, and cultural heritage.
Session 6: Digital Preservation
The identification of digital and non-digital resources must be unique, stable over time, certified, sustainable, globally usable and resolvable. Persistent Identifier (PI) systems address these user needs. However, most of the current PI systems offer only a service to resolve one type of PI to a URL; from our point of view this is not enough. PI systems must evolve towards service providers for trust, certification, integrity and provenance, cross-relations and domain applications. Moreover, most of the current PI domains have no relations with others; in some cases they are in strong competition, and this fragmented approach results in a proliferation of services and a lack of interoperability, with clear difficulties for users. This paper presents a proposal for an Interoperability Framework (IF) for PI systems at a service level, as developed within the APARSEN project (www.aparsen.eu).
The IF is built on four main pillars: 1) definition of PI systems or domains; 2) four main assumptions; 3) eight trust criteria; and 4) an ontology model. In this work we use the terms ‘PI system’ and ‘PI domain’ synonymously to indicate the global combination of the user community interested in PI services and, in some cases, providing the content to be identified, the bodies offering PI services, the technology used, the roles and responsibilities architecture, and the policies for different parts of the appropriate long-term preservation plan. Currently, we are considering PIs for four types of entities: digital objects, physical objects, bodies and players. The IF has been evaluated and reviewed by a High Level Expert Group (HLEG), consisting of 44 experts in the domain of PI and digital libraries. The HLEG has improved the IF, including establishing a common terminology, refining the ontology and delivering a wide consensus on the criteria required for a trusted PI system; these constitute a significant outcome of the work undertaken.
We developed a demonstrator with PI systems for digital objects and players to address two main objectives: 1) to test the feasibility of the IF model implementation; 2) to measure user satisfaction and refine services across different PI domains. The architecture is distributed over three SPARQL end-points collecting data from seven content providers and two implemented services. On top of this model we can develop services tailored to user needs combining all the trusted PI for objects and for players.
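As a rough illustration of the cross-domain service idea described above, the sketch below shows a single resolver that tries several trusted PI domains in turn. All domain names, identifiers and URLs are invented for illustration; they are assumptions, not the demonstrator’s actual data, architecture or interfaces.

```python
# Hypothetical sketch of a cross-domain PI resolution service in the spirit
# of the Interoperability Framework. Every domain, identifier and URL below
# is an illustrative assumption, not APARSEN data.

# Each PI domain resolves its own identifiers (for objects or players).
DOMAIN_RESOLVERS = {
    "doi":   {"10.1000/xyz123": "https://example.org/objects/xyz123"},
    "nbn":   {"urn:nbn:fi:example-001": "https://example.org/objects/fi-001"},
    "orcid": {"0000-0002-1825-0097": "https://example.org/players/researcher"},
}

def resolve(pi):
    """Resolve a PI by querying every registered domain in turn.

    Returns a (domain, url) pair, or None if no trusted domain knows the PI.
    """
    for domain, records in DOMAIN_RESOLVERS.items():
        if pi in records:
            return domain, records[pi]
    return None
```

A service layered on top of such a resolver could then, for example, link an object PI to the player PIs of its authors, regardless of which domain issued each identifier.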
Maurizio Lunghi was born in Florence and has a degree in Electronic Engineering, Telecommunications and Telematics. Since 2005 he has been Scientific Director of the not-for-profit Fondazione Rinascimento Digitale (www.rinascimento-digitale.it), promoting Internet technology within the cultural heritage community. He has worked on the life-cycle of digital objects, from digitisation and colour certification, through publication on the Internet under accessibility and copyright criteria, to long-term digital curation and preservation issues. Since 2007 he has focused on PIs and all related technologies and issues. Recently, his interests and activity have centred mainly on criteria and certification for trusted digital repositories, through participation in projects such as APARSEN (www.aparsen.eu), DigCurV (http://www.digcur-education.org/), Digital Preservation in Europe (http://www.digitalpreservationeurope.eu/) and Magazzini Digitali (http://www.depositolegale.it/). He has been a consultant for the Ministries of Culture in the Netherlands and Luxembourg for activities under the EU Presidency (2004-2005), covering international events and workgroups. He worked at the European Commission DG-INFSO for three years (2000-2003) in Luxembourg as a project officer, and has long experience of organising events, e.g. the ‘Cultural Heritage Online’ international conference in 2006, 2009 and 2012 (http://www.rinascimento-digitale.it/Conference2012.phtml), in co-operation with the Library of Congress and other international institutions active in digital preservation, as well as the LIBER Workshop in May 2012, the PREMIS workshop in February 2009 (http://www.rinascimento-digitale.it/eventi.phtml) and the DigCurV project conference in May 2012 (http://www.digcur-education.org/eng/International-Conference). He has been a member of the iPRES programme committee since 2008 and has submitted scientific papers, e.g. at iPRES 2013 (http://ipres2013.ist.utl.pt/workshops.html).
nestor, the German-language competence network for digital preservation, was founded in 2003. The Working Group on Costs, founded in 2011, is one of currently eight working groups. Seven members from five different institutions are actively involved; four of them are also engaged in other cost-related projects such as the EU project 4C, DP4Lib or Radieschen.
Embedded in projects that research preservation costs or investigate existing cost models, the Working Group aims to provide individual and detailed use cases from our daily work, including price tags and the number of staff hours needed for ingest, storage, preservation planning and access to archived data. This task is comparable to having to count all the calories one consumes and to publish them online, openly available to everyone. Not only is it difficult to weigh food and calculate the exact calories, especially with meals consisting of many ingredients, but such openness and honesty makes one vulnerable to all sorts of criticism and scepticism.
The same is true for preservation use cases. Are we spending too much time on fixing invalid PDF files? Should we really confess to how much time is spent on curating CD-ROMs? When writing a new software tool, are the costs for human resources higher than the costs for a good ‘out-of-the-box’ solution? Transparency about daily preservation work has its price. Spending too much time on a task might lead to criticism about efficiency. Being suspiciously fast can raise the question of what one is doing with the rest of the time. Counting calories?
However, we are able to present an interesting set of individual and detailed use cases from our daily work, revealing the time and money spent, for example:
- Automated ingest via Submission Application
- Technology watch
- Preservation planning with PDF files
- Costs and pricing of persistent identifiers (DOIs)
- Access to derived copies and the digital master; costs for support
- Several use cases on legal advice related to digital preservation
The presentation describes the Working Group’s approach and presents the use cases investigated so far.
nestor, URL: www.langzeitarchivierung.de/
4C, URL: http://4cproject.eu/
DP4lib, Digital Preservation for Libraries, URL: http://dp4lib.langzeitarchivierung.de/
Radieschen project, URL: http://www.sub.uni-goettingen.de/projekte-forschung/projektdetails/projekt/radieschen/
Yvonne Friese has been Project Manager for Digital Preservation at the German National Library of Economics (ZBW) since 2011. She studied Library and Information Science and Spanish at the Humboldt University of Berlin from 2007 to 2011. She is especially engaged with preservation issues around invalid PDF files and rare file formats, which are found on CD-ROMs in the stack rooms of the Library. She co-leads the nestor Working Group on Costs and is currently planning to form a new nestor group on format identification and characterisation.
Guidelines for good scientific practice require the safeguarding of data in an accessible and usable form. Funding organisations also increasingly demand data management plans from researchers and, generally, a trend towards facilitating scientifically sound reuse of research data is clearly observable. Researchers are more and more confronted with these requirements and are often not able to fulfil them – or only after undue effort.
The Executive Board at ETH Zurich commissioned ETH-Bibliothek in 2010 to address these issues by developing a suitable service at an institutional level. An important basis for the practical implementation of the Digital Curation project was a broad survey conducted with researchers of all disciplines at ETH Zurich. A technical solution for long-term digital preservation was evaluated and implemented at ETH-Bibliothek. In a pilot with selected research groups from various disciplines, workflows for data deposit were first outlined and then implemented step by step. In addition to the focus on research data, workflows were developed for digital records to be delivered to the ETH Zurich University Archives as well as for other digital objects. These included, for example, full-text holdings from the Institutional Repository and data originating from digitisation projects.
The Digital Curation Office at ETH-Bibliothek now offers services to researchers of ETH Zurich who either wish to preserve their data in the long term or to safeguard it for a limited period in line with the guidelines for good scientific practice. The presentation will show examples of use cases for research data and sum up the questions addressed during the implementation and the challenges that had to be tackled. It will also discuss the routes taken to store the data in a structured form and enrich it with metadata in a long-term archive at the preliminary end of its life cycle. Questions such as what the profile of future data curators should look like will also be discussed.
Dr Arlette Piguet is Head of the Customer Services Department at ETH Library and Collections, the largest library in Switzerland and one of the leading scientific and technical libraries in Europe. In this position she is responsible for all customer-related services of ETH Library, with a focus on digital information provision. Arlette Piguet received her master’s degree in Biology at ETH Zurich in 1987, and a postgraduate qualification as Information Specialist from the Institute for Information and Documentation at the University of Applied Sciences in Potsdam (Germany) in 1998. She holds a PhD from the Institute of Library and Information Science at Humboldt University in Berlin (2010); her doctoral thesis dealt with e-books. She has held several organisational positions in ETH Library, and was responsible for building up and managing the central office of the Consortium of Swiss Academic Libraries from 2000 to 2005. In addition to licensing experience, she is a proven expert in managing and executing projects for designing and developing customer-oriented electronic information products.
Session 7: Future and Emerging Technologies
The global revenue of the e-resources market is estimated at $25 billion and is growing at a rate of 4% to 5% per year. In contrast, a survey conducted by Couperin in 2012 shows that the allocated budgets in French higher education and research institutions are often dramatically reduced, with an average cut of 9%. The COUNTER reports provided by publishers, while useful, have two shortcomings: they are not always available; and they only allow a raw quantification of accesses to subscribed e-resources.
ezPAARSE addresses these shortcomings in two ways: 1) it counts accesses locally, allowing the creation of COUNTER-like reports when these are not available; and 2) by connecting with user databases, it allows these numbers to be combined with locally gathered, reliable and categorised data, making it possible, after post-processing, to build new, high-value, strategic indicators: measurements of usage by user category (students, professors, researchers), by department or unit, etc. It provides libraries with a better knowledge of their public, proves useful when negotiating subscription prices and helps in conducting a better collection policy. To meet that need, the ezPAARSE/AnalogIST project was launched in September 2012. It builds on an existing, tightly integrated solution at INIST-CNRS and turns it into an open, national one. It is a partnership between INIST-CNRS, the Couperin consortium and the University of Lorraine.
ezPAARSE is open-source and cross-platform. It publishes a web service that ingests log files collected on a proxy server (e.g. EZproxy, Apache, Squid) and shows how users access subscribed e-resources. It filters, extracts and enriches the consultation events that were identified, and produces a result file following the COUNTER Code of Practice for e-Resources. ezPAARSE needs platform-specific plugins, called parsers, for each publisher platform.
AnalogIST is the portal where the collaborative work of analysing publisher platforms and creating the corresponding parsers takes place. A live and fully usable demo of ezPAARSE is also available there, to show the progress made at every new iteration. After much positive feedback, we hope to build international momentum, persuade more institutions to install ezPAARSE and integrate it into their workflows, and encourage participation in this collaborative effort on AnalogIST to enhance the software’s capabilities for the benefit of all.
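To make the role of a parser concrete, here is a minimal sketch of the kind of processing involved: a proxy log line is matched, the requested URL is handed to a platform-specific rule, and a consultation event is produced. The log layout, the platform rule and all field names below are illustrative assumptions, not ezPAARSE’s actual formats or API.

```python
import re

# Hypothetical proxy log format: host, ident, login, [date], "GET url HTTP/1.x".
LOG_RE = re.compile(
    r'(?P<host>\S+) \S+ (?P<login>\S+) \[(?P<date>[^\]]+)\] '
    r'"GET (?P<url>\S+) HTTP/1\.[01]"'
)

def parse_platform(url):
    """A 'parser' in miniature: recognise one (made-up) publisher platform."""
    m = re.match(r'https?://journals\.example\.com/article/(?P<id>\d+)\.pdf', url)
    if m:
        return {"platform": "example-journals", "unitid": m.group("id"),
                "rtype": "ARTICLE", "mime": "PDF"}
    return None  # unknown platform: no consultation event

def consultation_event(log_line):
    """Turn one log line into a consultation event, or None."""
    m = LOG_RE.match(log_line)
    if not m:
        return None
    event = parse_platform(m.group("url"))
    if event:
        event["login"] = m.group("login")  # later crossed with user databases
    return event
```

The `login` field is what makes the second step possible: joined against a local user database, it yields usage by category, department or unit.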
Thomas Jouneau is E-resources librarian and electronic subscriptions manager for the Université de Lorraine (UL). He is negotiator for the Couperin consortium and also co-manages the consortial e-book team. As a member of the ezPAARSE team, he oversees the deployment of the software and the pilot project for the UL.
Many research libraries have large parts of their collections in closed stacks and lend out numerous inter-library loans. In most of these libraries, orders are printed out on paper and then managed manually by librarians. For the patron, the pathway is all digital: their order is placed via the web and an email arrives when the material is available for collection. At Stockholm University Library we started to question the logistics of our work with closed stacks: could we make this work-flow easier, seamless, faster and all digital? And could we combine all types of orders into one work-flow? A project was started which has now resulted in an all-new work-flow and a new system.
Today we have a web-based system called Viola. The system collects the different types of orders, from closed stacks and inter-library loans, and merges and sorts them into a single digital list. From that list, the librarian selects the orders he or she will collect that day. The chosen orders are downloaded to a handheld device or smartphone, and contain all the data needed to find each book. The smartphone can read the barcode or RFID tag in the book and save that information. When all the books in the list have been gathered, the librarian transfers the list of orders back to his or her computer and the Viola system. Viola updates the library system, so the book’s status becomes ‘on loan’ or ‘reserved’. Viola also sends an email to the patron stating that the material is available for collection. The librarian can simply put the material on the shelf to await collection.
The system has many benefits:
- One work-flow for several different types of orders
- An all-digital road – fewer manual steps
- Faster – less time collecting books, and patrons get their books sooner
- Five people can do the work that previously took ten
- Development of the workforce’s technical skills
The system has been created by developers working in the Library in association with librarians who know the functionalities they need. It has been a close collaboration, where the librarians have specified the functions needed and the developers have built a system that meets these needs. Working with ‘user stories’ has been key in the collaborative work between librarians and technicians. The system has been tailor made based on workflow requirements and with the end user in mind. The presentation will describe the functionality in detail, show a film of how librarians use Viola and walk through the technical architecture.
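The workflow described above can be sketched in a heavily simplified form as follows. All class and method names are hypothetical, not Viola’s actual architecture or API; the sketch only mirrors the described flow of pooled orders, scanning and patron notification.

```python
from dataclasses import dataclass, field

@dataclass
class Order:
    barcode: str
    patron_email: str
    status: str = "ordered"   # ordered -> picked -> ready

@dataclass
class ViolaLike:
    """Hypothetical model of a Viola-style closed-stacks workflow."""
    orders: list = field(default_factory=list)
    sent_mail: list = field(default_factory=list)

    def picking_list(self):
        """The merged digital list the librarian downloads to a phone."""
        return [o for o in self.orders if o.status == "ordered"]

    def scan(self, barcode):
        """Scanning the barcode/RFID tag confirms the item was collected."""
        for o in self.orders:
            if o.barcode == barcode:
                o.status = "picked"

    def sync(self):
        """Back at the desk: update the library system and email patrons."""
        for o in self.orders:
            if o.status == "picked":
                o.status = "ready"
                self.sent_mail.append(o.patron_email)
```

The point of the design is that one data structure (the pooled order list) drives every step, which is what lets several order types share a single workflow.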
Eva Dahlbäck is Librarian and Assistant Manager of the Department of Customer Services in Stockholm University Library, and has worked on library logistics for ten years. She has a degree in Geography and a Master’s in Information and Library Science. She has a keen interest in how the application of technology facilitates work in a modern academic research library.
7.3 Lina Blovesciuniene, Vytautas Magnus University Library, Research Council of Lithuania, Lithuania; Antanas Streimikis, Lithuanian Academic Libraries Directors’ Association, Kaunas University of Technology, Lithuania
The ‘Lituanistika’ Database as a Research Infrastructure for Lithuanian Studies
The Research Council of Lithuania has been creating an international peer-reviewed research database, ‘Lituanistika’ (LDB), since 2006. The creation of the LDB is one of the national strategic priorities set out in the Law on Education and Research (2009). The LDB helps to accumulate and preserve the best results of research on the history and current status of the state of Lithuania, its society, culture, nation and language. By January 2014, the LDB contained 34,247 bibliographic records, 19,059 of them with links to openly accessible full-text documents. The best librarians from Lithuania’s thirteen universities undertake a global search for texts on Lithuanian studies. They also archive the digital versions of these texts, provided by their publishers, and create links between bibliographic data and full texts. Moreover, they extract citation information from the texts and make it public; it has to be emphasised that the LDB is the only database in Lithuania that provides citation information. Well-known Ex Libris products are used for cataloguing LDB data and providing new services: the Aleph integrated library system, the Primo discovery and delivery system, and the SFX scholarly linking service. Fedora, an open-source repository software, enables long-term access to the data.
Data are compiled as e-objects that encompass both the descriptive data and the metadata on the structure of the e-objects. A full text is added to the e-object if there is a licence agreement with the publisher. The LDB stores texts that have been additionally evaluated by experts before their inclusion. A special sub-system to mine and visualise citation information has been constructed, following the pattern of the Thomson Reuters database, and implemented in the LDB; this was necessary because data on the social sciences and humanities, and specifically research on Lithuanian studies, is not included in that database. The LDB is supplied with new methods of data mining in the virtual library, dissemination of information using open-access protocols, and modern web technologies. The LDB data compiled over the last nine years is popular with researchers and students in Lithuania and abroad. As a result, the LDB is considered to be one of the best research infrastructures for the social sciences and humanities, involving both librarians and information providers.
Lina Blovesciuniene graduated in library science and bibliography from Vilnius University in 1988. Until 2006, she was employed at Kaunas University of Technology Library as Head of the Cataloguing Department, Chair of the Library Council and Systems Analyst. She has been Library Director at Vytautas Magnus University since 2006. From 1995 she has worked with library information systems – adapting, testing, database planning, implementation and maintenance – in the libraries of the Lithuanian Academic Libraries Network, and has organised training seminars for network staff as well. Since 2001 she has been involved in planning the Lithuanian Virtual Library and the implementation of its basic software, and has planned, implemented and maintained the Lithuanian Science and Study Publications and Lithuanian Electronic Theses and Dissertations Information Systems. Since 2005 she has given expert advice on the creation of the Lithuanian Academic e-Library, and has lectured on information literacy for scientists, researchers and students. She has led the Lithuanian Research Council’s ‘Lituanistika’ database project since 2011.
Antanas Streimikis graduated in engineering mathematics from Kaunas University of Technology in 1980. Since then he has worked at Kaunas University of Technology in various positions related to information systems, and since 2012 he has been Head of the Library Information Systems Unit of the Information Technology Department. From 1995 he has worked with library information systems – adapting, testing, database planning, implementation and maintenance – in the libraries of the Lithuanian Academic Libraries Network, and has also organised training seminars for network staff. From 2001 he has participated in planning the Lithuanian Virtual Library and the implementation of its basic software, and has planned, implemented and maintained the Lithuanian Science and Study Publications and Lithuanian Electronic Theses and Dissertations Information Systems. Since 2005 he has been involved as an expert in the creation of the Lithuanian Academic e-Library and the ‘Lituanistika’ database, and from 2011 he has been responsible for the information and communication technology infrastructure of the Lithuanian Research Council’s ‘Lituanistika’ database project.
Session 8: Open Access
The UK’s drive towards Gold Open Access (OA) has generated significant activity in UK universities. The purpose of this paper is to analyse the opportunities for, and threats to, UK university libraries which, along with their parent institutions, are embracing UK Government policy. As such, this case study will provide evidence of a particular approach to Open Access which should be instructive for all European countries.
RCUK (Research Councils UK) has an Open Access policy which allows both Green and Gold OA publishing options. RCUK will fund Gold OA payments for those authors who choose to publish as Gold OA, and has set firm targets for all universities in receipt of OA funding as to the percentage of RCUK-funded research papers which have to appear as Gold OA outputs. This will be enforced with a rigorous reporting regime by universities to the research funder. UCL (University College London) has responded by creating an Institutional Publication fund from its research budget to complement RCUK funding. All these funds are administered by UCL Library Services, which has had to create extensive infrastructure and pan-UCL workflows to manage and administer several million pounds of new funding.
National Policy Developments
HEFCE (Higher Education Funding Council for England) is currently out for consultation on its own Open Access policy, and a formal decision will be published in the Spring of 2014. HEFCE is likely to issue a policy which sees the institutional repository as pivotal to storing and making available full-text research outputs assessed by the national REF (Research Excellence Framework) programme. REF delivers millions of pounds to universities in research income, and libraries will be responsible for making the full-text of REF submissions available online.
OA Monograph Publishing
Open Access is sometimes seen as relevant only to Science, Engineering and Medicine. However, UCL has started its own University Press, a department of the Library, publishing Gold OA monographs across all subject areas, particularly in the Arts, Humanities and Social Sciences. An Open Access journal platform is also available. The Gold-Lite partnership of European OA university-based publishers, run by UCL and OAPEN, will also deliver OA monographs at a European level.
Policy development in the UK has created a range of new roles for libraries in research support. These are re-defining the role of a library in the university.
Paul Ayris has been Director of UCL Library Services since 1997. He is also the UCL Copyright Officer. He is President of LIBER (Association of European Research Libraries) and Chair of the LERU (League of European Research Universities) Community of Chief Information Officers. He chairs the OAI Organizing Committee for the CERN Workshops on ‘Innovations in Scholarly Communication’ and JISC Collections’ Electronic Information Resources Working Group. On 1 August 2013, he became Chief Executive of UCL Press. He has a PhD in Ecclesiastical History and publishes on English Reformation Studies.
OAPEN Foundation operates two platforms, the OAPEN Library (www.oapen.org) and the Directory of Open Access Books (DOAB – www.doabooks.org). Both platforms are dedicated to Open Access, peer-reviewed books. In 2014, OAPEN will launch a Deposit service for OA books. The OAPEN Deposit service aims to support OA policies of research funders and practitioners (universities and their libraries) by providing a central repository dedicated to hosting and disseminating OA peer-reviewed books. The goal is to co-ordinate and facilitate the transition to Open Access book publishing by bringing together supporters of OA books and providing a central infrastructure for services.
The Deposit service provides a range of services in three main areas:
- Establishing and maintaining admission criteria
- Review of license policies and peer review procedures
- Verification and identification of publications
Aggregation and Deposit:
- Aggregation of OA books in the OAPEN Library
- Uploading service (technical review, attaching metadata, support)
- Harvesting service for repositories of participants
- Management information for participants (usage reports, tracking services for research grants)
- Preservation and archival access
Discovery and Dissemination:
- Full text retrieval of publications through the OAPEN Library system
- Metadata conversion and export for third party aggregators, intermediaries and libraries
- Discovery service through the Directory of Open Access Books (DOAB) (www.doabooks.org)
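As an illustration of how a participant’s repository might use the harvesting service, the sketch below builds OAI-PMH `ListRecords` requests, a standard mechanism for metadata harvesting between repositories. Whether OAPEN exposes OAI-PMH, and the base URL and set name used here, are assumptions for illustration only.

```python
from urllib.parse import urlencode

def list_records_url(base_url, metadata_prefix="oai_dc", set_spec=None, token=None):
    """Build an OAI-PMH ListRecords request URL.

    A resumption token, when present, replaces all other arguments
    (this is how OAI-PMH pages through large result sets).
    """
    if token:
        params = {"verb": "ListRecords", "resumptionToken": token}
    else:
        params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
        if set_spec:
            params["set"] = set_spec
    return base_url + "?" + urlencode(params)
```

A harvester would fetch the first URL, read any `resumptionToken` from the response, and loop with the token form until the token is empty.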
The OAPEN Deposit service will launch with a number of Research Councils in Europe; in some countries the aim is to develop a national license for all research funders and universities.
This paper describes how the Deposit service works and how it supports OA policies. Special attention is given to the interaction of the service with the institutional repositories of universities and the added value of a centralised approach to the deposit of OA books for universities and their libraries.
Eelco Ferwerda is Director of OAPEN, a foundation dedicated to OA books. He has been active in the area of Open Access for monographs since 2008, when he started managing OAPEN as an EU co-funded project with six European university presses. Before that he worked as Publisher of Digital Products at Amsterdam University Press. Before joining AUP in 2002, he worked in various new-media subsidiaries at the former Dutch newspaper publisher PCM, and as Manager of Business Development for PCM Interactive Media. He is co-founder of the Association of European University Presses (AEUP, 2010) and of the Directory of Open Access Books (DOAB), which he launched with Lars Björnshauge in 2012. In 2013, he organised a conference on OA monographs in the Humanities and Social Sciences with Caren Milloy of JISC Collections, hosted by the British Library.
In 2006 John Nicoll criticized the practice of charging fees for the use of images of public domain works residing in tax-subsidized institutions. The answer – that there are, in fact, ways for institutional players to re-engineer the publishing ecosystem – represents a chronicle worth telling, with lessons worth learning. Hilary Ballon and Mariet Westermann, also writing about the struggles of publishing in art history, noted that ‘It is a paradox of the digital revolution that it has never been easier to produce and circulate a reproductive image, and never harder to publish one.’ If publishing in general is in crisis because of the seismic re-ordering of a digital world, the field of art history represents the extreme end of the spectrum: rights holders are accustomed to licensing image content for limited-edition print runs. As a result, e-publishing is paralyzed in the field, since e-journals and e-books cannot possibly be issued with a promise that they will disappear after three years. Given this particularly challenging corner of the publishing world, a project initiated by the Metropolitan Museum offers some hope of a collaborative way forward. What sociological re-engineering enabled progress on this problem? And are there other lessons here that might throw at least streaks of light on other process re-engineering provoked by digital innovation in publishing?
In this talk we begin by reviewing how a leading repository of art (The Metropolitan Museum of Art) and a non-profit intermediary (ARTstor) created an alternative pathway to provide primary source content in support of image-intensive publishing. This venture is framed in the context of a publishing system moving toward greater freedom and an aim to bring about ever lower (or no) fees for readers. A huge challenge is the significant cost of managing the technology infrastructure around digital assets and associated data. Internally, a great deal of re-alignment was needed to justify the investment in enabling infrastructure and then to shift museums’ mindset about the ‘value’ of the images.
I will discuss the need for collaborative relationships between content owners and educational users; the technological barriers to lowering image-permissions costs and how they can be overcome; and the service models that support sustainable free services.
James Shulman serves as ARTstor’s President. Working with his colleagues, he developed and implemented plans for creating an organisation that now provides a digital library of over 1.6 million images to over 1,500 colleges, universities, schools, and museums around the world; ARTstor also provides the Shared Shelf cataloging and asset management service, and manages a number of free and open services including Images for Academic Publishing and the Built Works Registry (with the Avery Library and the Getty Research Institute). ARTstor was among the initial content hubs that supported the efforts of museums and libraries that provided content to the Digital Public Library of America. He received his BA and PhD from Yale and writes and speaks about issues associated with the educational use of images and digital technology, innovative non-profits, and high impact philanthropy. He serves on the Board of Smith College and on the Content and Scope Working Group for the Digital Public Library of America.
Session 9: Open Access
Utrecht University Library has developed a new strategy to support Open Access journal publishing. Based on ten years of experience in publishing over twenty Open Access journals with Igitur publishing, we have recently shifted our focus towards developing OA and transition journals in a fixed period of time.
We currently limit the residence time of a journal in our Library to six years. After this period of development and quality management, a journal should either be ready for self-publishing or for transfer to the (commercial) OA market. We develop the journal in predetermined stages by stimulating quality and reach, and work towards impact and sustainability. A major advantage of this approach is that it allows journals to work on (online) strategies gradually and to adapt new financial models. Once a journal has proven to be sustainable in every way, we prepare in association with the editorial board for the upcoming course change and support the transfer to the publishing market.
This paper explains how Utrecht University Library supports OA by developing high-quality journals. It discusses the expertise needed to implement and run the service successfully. It addresses the required financial investment as well as the contractual requirements to guarantee OA for the journal title and license. More importantly, as a mediator between scientists and publishers, our Library is able to bring together the interests of both worlds while acting as advocate and adviser on OA. In our experience so far, this role and expertise are much needed. If the Library supports the best dissemination of scientific results in Open Access, scholars can focus on their core business: research.
Inge Werner is Publishing Consultant for Utrecht University Library (the Netherlands). She has a background in the Humanities, and received her PhD in Renaissance Studies in 2009. Inge is a Board member and Secretary of the Open Access Scholarly Publishers Association (OASPA). Currently, she runs the library’s publishing service, which publishes over 20 Open Access journals. She advises individual scholars on publishing, and editorial boards on starting and developing a journal in Open Access. She also advises on Open Access policy issues within the library. Her main fields of interest are the changing role of the university library, Open Access policy making and scholarly communication in general.
QOAM – Quality of Service Matched Against Price
When scientific and scholarly publishing is no longer seen as copyright exploitation but as a service, as is the case in the OA paradigm, there is a need for a market where the quality of the service can be matched against its price. So, once the Journal Score Card was developed (to be demonstrated during the presentation), it was a natural step to create a place where information about actual journal prices, that is publication fees, was available as well. The idea of the Quality Open Access Market – QOAM – was born. As QOAM will be operating in a nascent market, both its journal quality indicator and its information about pricing are in their infancy and may need further development. But one has to start somewhere.
For Whom Is QOAM Useful?
First and foremost, QOAM aims at authors who want to publish their article in open access in a high-quality journal and for a reasonable price. Then there is the library community. In the subscription period libraries were invaluable to academia when building up high-quality collections. The Big Deals with their ‘take-it-or-leave-it’ approach seemed to make this capacity redundant. QOAM enables libraries to apply this competence anew in the open access world by publishing Journal Score Cards and giving journals feedback on how to improve. Last but not least, QOAM makes the world of academic publishing transparent to policy-makers, journalists, and the public at large.
During the presentation, the Quality Open Access Market (www.qoam.eu) will be demonstrated, and librarians will be informed how they can get involved themselves, and how they can involve the academics (authors and editors) in their institutions.
Saskia C.J. de Vries became an academic publisher with Kluwer in the 1980s after a short period teaching Dutch Language and Literature. In summer 1992, the Board of the University of Amsterdam asked her to start up Amsterdam University Press (www.aup.nl), and she was its first Director. Over the twenty years of her directorship, Amsterdam University Press grew into an international academic publisher which in 2012 had 20 employees and published approximately 200 books and 9 academic journals a year, 60% of them in English. In 2006 she was co-founder of Leiden University Press, which has functioned as part of the University of Leiden since 2009. From 2008 until 2011, Amsterdam University Press was coordinator of the EU project Open Access Publishing in European Networks (www.oapen.org). Thanks to this European project, AUP grew into one of the most innovative university presses in the world, giving high priority to Open Access publishing. As business models in the academic publishing world are rapidly changing, and since there seems to be a vast need within academia to explore new ways of disseminating academic research results that are primarily funded with public money, she started her own business in 2012: Sampan – Academia & Publishing. At present, she is working with the Royal Netherlands Academy of Arts and Sciences, the National Library of the Netherlands, the Centre for Science and Technology Studies (CWTS) at the University of Leiden and Radboud University of Nijmegen.
Nearly ten years ago, Goettingen State and University Library initiated a university-wide policy encouraging researchers to adopt Open Access whenever possible. This is now underpinned by core services that provide support to researchers, including Goettingen University Press (founded in 2003), as well as repositories for theses and peer-reviewed publications. In addition, a central Open Access fund has been established, which covers article processing charges and monitors the uptake of Gold Open Access at the University of Goettingen. These service areas are combined with strategic involvement in national and international initiatives, such as the Confederation of Open Access Repositories (COAR) and OpenAIRE, the European-wide Open Access infrastructure for publications. These institutional activities work in both directions: they are crucial for enhancing local services and vice versa feed experiences and lessons learned into international collaborations.
However, the realities of developing and running these services and activities – i.e. based on user needs while simultaneously bringing about behaviour change – continue to be a challenge. Even the most ambitious repository requires constant enhancement and marketing to maintain the awareness of researchers. This has special resonance given the current drift of open access discussions towards Gold Open Access. Promoting Open Access is an on-going effort: technologies and stakeholders continue to change, and researchers’ perceptions and practices evolve during their careers. Open Access services therefore seem to require ongoing evaluation and alignment.
Close attention to user needs has proven to be a helpful navigation system in an otherwise increasingly complex landscape. Our presentation will illustrate how the Library explores these needs, provides feedback to existing services and rethinks approaches where adequate. Among these methods are beta-tests for software solutions, trainings for multipliers and in-depth qualitative social research that has served as a preparation for a larger quantitative survey which took place early this year.
Margo Bargheer is a trained designer and holds a Master’s degree in Social Anthropology and Media Sciences. She works at Göttingen State and University Library and is head of the Department for Electronic Publishing, which includes the University Press, electronic theses, institutional repositories, projects like OpenAIRE, the COAR office and the publication funds for open access articles. She is an OAPEN board member and currently the spokesperson for the Working Group of German-Speaking University Presses. She teaches electronic publishing and Open Access to students and librarians.
Session 10: New Skills for Librarians
Librarians in universities around the world face continuing changes and challenges in the higher education and research environment, including technological advances, publishing innovations, global recession and policy developments. Some libraries are offering or planning novel research services, often involving coordination with administrative partners, integration with institutional processes, and embedding in scholarly workflows. Others are concerned not to reach beyond their ability to deliver, but risk falling behind and becoming irrelevant in the process. Drawing on ideas from management literature and evidence from library practice, we argue that an overextension strategy, based on the accumulation of invisible assets, is not only advisable, but vital, for research libraries to survive and thrive in the 2020 information landscape.
Published studies have identified technical skills gaps and shortages as constraints on development of library support for research, but have also pointed to core professional competencies that are transferable to new tasks. Practitioner reflections on pioneering efforts in service innovation have recognized additional dimensions of organization behavior that can support successful delivery, including management structures and stakeholder relationships. An intellectual capital perspective enables us to focus holistically on these intangible resources that represent strategic advantage for libraries adopting emergent models of research support. Our project aimed to discover the human, structural and relational assets enabling higher-end service interventions.
A case study design was chosen as a strategy enabling in-depth investigation of complex phenomena in context. Two sites were selected as examples of large and mid-sized libraries in highly-ranked research universities. Semi-structured interviews were used to collect data from a purposive sample of library workers engaged in scholarly communications, open access, bibliometrics, and research data management activities. A review of related literature informed the interview questions. The data was analyzed thematically, using the OECD’s 2008 classification of intellectual assets as an analytical framework. The research confirmed the enduring value of traditional library skillsets when combined with boundary-spanning capacity, based on institutional know-how, and personal networks. The findings have implications for library strategy development and job specification.
Sheila Mary Corrall was appointed Professor and Chair of the Library & Information Science Program at the University of Pittsburgh, USA, in 2012, following eight years as Professor of Librarianship & Information Management at the University of Sheffield, UK, where she was a founding member of the Centre for Information Literacy Research and Head of the Information School for four years. She previously served as director of library and information services at three UK universities, and as head of science, technology, patent, and business information services at The British Library. She teaches courses on Academic Libraries and Research Methods, and her research areas include the application of business management concepts and tools to library and information services; roles, competencies, and education of information professionals; and collection development and information resource management in the digital world. Recent work includes a review of evolving academic library specialties, a study of the contribution of virtual internships to LIS professional education, an international survey of library engagement with bibliometrics and research data management, and an essay on future design of library space from a researcher perspective. She is a member of the UK Arts and Humanities Research Council Peer Review College, she serves on the editorial boards of five international journals, and also on the advisory boards of Credo Reference and Facet Publishing. In 2002 she was elected as the first President of the Chartered Institute of Library and Information Professionals, and in 2003 she was presented with the International Information Industries Lifetime Achievement Award.
This paper will present the results of the ARL/CARL/COAR/LIBER Joint Task Force on Librarians’ Competencies in Support of E-Research and Scholarly Communication.
Rapid changes in technology and associated shifts in research and scholarly communications are profoundly changing the role of libraries in the 21st century. The emergence of e-research, for example, is bringing about new ways of doing science across the globe, compelling libraries to adopt new services, such as assisting with the development of research data management plans and hosting collaborative virtual research environments, in addition to managing institutional repositories and providing support for open access publishing and the digital humanities. These novel services require a range of new skills and expertise within the library community as well as a shift in organizational models for libraries.
In August 2013, the Association of Research Libraries (ARL), the Canadian Association of Research Libraries (CARL), the Confederation of Open Access Repositories (COAR), and the Association of European Research Libraries (LIBER) came together to work jointly on defining professional librarians’ competency needs to support e-research and scholarly communication. The aim of the task force is to outline the competencies needed by librarians in this evolving environment. Since then, the Task Force has been working on identifying emerging specialty roles, performing a literature review and collaboratively preparing a series of service areas and competencies documents for research data management, scholarly communication and publishing, digital curation and preservation, and support for digital scholarship.
The task force will also produce a toolkit that will help to build capacity in libraries for supporting new roles in the area of scholarly communication and e-research. The toolkit will allow library managers to identify skill gaps in their institution, form the basis of job descriptions, enable professionals to carry out self-assessments, and act as a foundation for the development of training programmes for librarians and library professionals. In addition, the toolkit will provide an outline of new organisational models that are evolving in this dynamic environment. This paper will update the research library community on the work of the Task Force, and will seek feedback from the community.
Iryna Kuchma is the EIFL Open Access Programme Manager. Her responsibilities include advocacy of open access to research results and support in developing open access policies, training and support in setting up open access journals and open repositories, organising workshops and other knowledge-sharing and capacity-building events. Previously Iryna Kuchma worked as an Information Programme Manager at the International Renaissance Foundation (part of the Soros Foundation network in Ukraine) and coordinated the Arts and Culture Programme there. Iryna is a member of the Directory of Open Access Journals (DOAJ) Advisory Board, DSpace Community Advisory Team (DCAT), IFLA’s Open Access Taskforce, NDLTD (Networked Digital Library of Theses and Dissertations) Board of Directors, the Open Library of Humanities Internationalisation Committee, and PLOS International Advisory Group. She chairs a Working Group ‘Repository and Repository Networks Support & Training’ in the Confederation of Open Access Repositories (COAR) and a Joint ARL/CARL/COAR/LIBER Task Force on Librarians’ Competencies in Support of E-Research and Scholarly Communication.
In 2009-2010 she was a Steering Committee (and Task Group) member, InterAcademy Panel on International Issues (IAP) Programme on Digital Knowledge Resources and Infrastructure in Developing Countries. She has also served on the Access to Learning Award (ATLA) Committee of the Bill and Melinda Gates Foundation’s Global Library Initiative.
Library associations, including LIBER and ARL, have determined that supporting researchers in the discovery, use, and management of data is an emerging critical role for libraries. In order to serve these needs, librarians need to develop a deeper understanding of the ways researchers interact with data throughout their workflow. The integrated way that researchers work with scholarly publications, data and discovery tools also demands that academic librarians integrate data and information literacy in workshops to educate the next generation of scientists. Collaborating with teaching faculty to develop these sessions can increase our own competencies in using data resources, enhance relationships with researchers and lead to improved understanding. Librarians learn how researchers interact with data to solve problems and generate knowledge, and researchers learn how librarians’ understanding of how information works can contribute to their teaching and scholarship.
The presenter will provide a case study of one such collaboration with a Biochemist that has developed over time. Students in a genetics course learn how to use bioinformatics databases such as OMIM and ExPASy, and how those resources provide answers about genes and diseases. In the following semester, a session within a biochemistry course bridges gaps between the two disciplines and introduces students to protein repositories such as the Protein Data Bank and the European Bioinformatics Institute tools. This innovative approach to data literacy has had benefits for the librarian’s work with other faculty and graduate students. The greatest beneficiaries, however, are the students who gain real life experience with current biomedical resources.
The collaboration has also resulted in joint publications and presentations at biology education conferences. Working with data at this level promotes the role of librarians in teaching students to use data resources in concert with more traditional bibliographic sources. Adapting our knowledge and retuning our skills ensures we remain relevant in a changing information environment. The presentation will also outline program best practices, data competencies, and assessment techniques.
Don MacMillan is Liaison Librarian for Biological Sciences, Mathematics, Physics & Astronomy, and Biomedical Engineering at the University of Calgary’s Taylor Family Digital Library. He holds a BSc and an MLS. He provides program-integrated information literacy instruction and advanced reference and training to students and faculty in those disciplines, as well as in Spatial and Numeric Data Services. He also conducts research on student learning, information literacy and the incorporation of discovery tools, data and technology in information literacy instruction.
Session 11: Research Infrastructures
Data in e-Science projects is frequently created and transformed by different methods, tools and schemas, from multiple provenances, and reused in not always expected scenarios. To help manage this, the concept of the Data Management Plan (DMP) was conceived. However, since the guidelines for a DMP do not cover all concerns, we suggest that this can be done by a complementary Risk Management Plan (RMP). Best practice for an RMP must comprise the definition of a method and techniques that best serve each phase in the particular domain of application. For these purposes, ISO/FDIS 31000 and ISO/IEC 31010 are, respectively, the most relevant. The creation of two documents, the DMP and the RMP, addresses two distinct problems. In order to avoid duplication, we also propose to unify these plans through a coordinated Data Governance concern. Accordingly, just as trained librarians already play important roles with DMPs in many organisations, so librarians trained in Risk Management (RM) could, in the future, play the role of the risk expert, which is fundamental for the development and execution of the RMP.
To assume this role, we propose the following set of skills for librarians:
- Data Management: Know the DM principles, techniques, initiatives, standards and the project’s data life cycle
- Security: A good background in breaches that threaten data is fundamental to assessing and mitigating risks
- Metadata: Know how to produce, collect, manage and secure metadata
- Advocacy, copyright and intellectual property rights: In e-Science, data dissemination is important, so copyright or property infringement brings risks that threaten that goal
- Technical skills: Relevant to determining technical risks and controls related to the technology and infrastructures in use in e-Science projects
- Data value: Know how to assess the value of the data objects worth protecting
- RM skills: Knowledge of the principles, processes and techniques to identify, analyze, evaluate and treat any risk surrounding data
- E-Science focus: Knowledge of the field in question is mandatory
In this paper we illustrate our arguments with examples from the development of an RMP for a real e-Science project, MetaGenFRAME, focused on the domain of Metagenomics.
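The RM process the paper draws on from ISO 31000 – identify, analyze, evaluate, treat – can be pictured as a simple risk register. The sketch below is purely illustrative: the class, the 1–5 scoring scale and the example entries are assumptions for exposition, not part of the MetaGenFRAME RMP.

```python
from dataclasses import dataclass


@dataclass
class Risk:
    """One identified risk to a data object (hypothetical schema)."""
    description: str   # identify: what could go wrong
    likelihood: int    # analyze: 1 (rare) .. 5 (almost certain)
    impact: int        # analyze: 1 (negligible) .. 5 (critical)
    treatment: str     # treat: chosen control or mitigation

    def level(self) -> int:
        # A common simple scoring: likelihood x impact.
        return self.likelihood * self.impact


# Example entries (invented for illustration only).
register = [
    Risk("Metadata loss during format migration", 2, 4,
         "Checksum and validate after migration"),
    Risk("Copyright infringement on dissemination", 3, 5,
         "Clear licences before release"),
]

# Evaluate: rank risks so the highest-scoring ones are treated first.
register.sort(key=Risk.level, reverse=True)
for risk in register:
    print(risk.level(), risk.description)
```

In a real RMP the scoring scales, risk criteria and treatments would be defined per ISO/IEC 31010 techniques for the specific e-Science domain.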
Raquel Bairrão is a Master’s student in Information Systems and Computer Engineering at Tecnico, University of Lisbon, Portugal. She received her BA in Information Systems and Computer Engineering from the same university in 2013. Currently, she is working on her Master’s thesis, ‘A Risk Management Method to Support Decision Taking in Digital Curation’, in which she aims to present a method based on ISO/FDIS 31000 and ISO/IEC 31010, aligned to Risk Management for the Digital Curation scope. She also works as a researcher at INESC-ID Lisbon, Portugal, as part of the team of the 4C Project – Collaboration to Clarify the Costs of Curation. She contributed to the paper ‘Data Management in Metagenomics: A Risk Management Approach’, which will be presented at the International Digital Curation Conference (IDCC 2014). Her principal areas of study are Risk Management, Digital Curation and Digital Preservation.
Supporting good management of research data is a new initiative for most institutions. In addition to developing new policies and investing in infrastructure, the institution needs a locus for outreach and day-to-day support for researchers trying to meet these new requirements by funders, publishers and the institution. Although many roles across the institution are needed to help researchers fully address these new requirements – IT consultants, records management officers, the Research Office, Ethics Committees – librarians are the best placed to organise awareness-raising campaigns and handle everyday queries regarding best practice in research data management for current and planned projects. The University of Edinburgh Information Services, and particularly the Data Library, has been developing services and building capacity for support for Research Data Management (RDM) since 2007, including training of all our academic service librarians and provision of an online training course for researchers, MANTRA. This paper will reflect on that experience and offer lessons learned to others now engaging in RDM support in their institutions.
Robin Charlotte Rice is Data Librarian for the University of Edinburgh, based in EDINA and Data Library, part of Information Services. She received a Master’s in Library and Information Studies from the University of Wisconsin-Madison and worked as a data librarian there before moving to Scotland. She is the service manager of the University’s Data Library and the Edinburgh DataShare repository, and serves on the Research Data Management ‘action group’ for rolling out RDM services to the University. Over the past fifteen years she has led a number of Jisc-funded projects to do with data sharing and curation, learning and teaching with data, building institutional data repositories, identifying institutional data assets, and developing online learning materials in research data management. She was involved in the development of the University of Edinburgh’s Research Data Management Policy, the first of its kind in the UK, and created a programme to train the University’s academic service librarians in research data management. The materials she co-authored for training both researchers and librarians in RDM are available on the MANTRA website with an open licence and have been utilised by other institutions in the UK and elsewhere (http://datalib.edina.ac.uk/mantra).
The Open Access (OA) movement has progressed well beyond whether OA is desirable towards pragmatic actions and decisions about how best to achieve OA to scholarly literature and research data. Support for OA from the Global Research Council, the G8 Science Ministers, the European Commission as well as numerous countries, funding agencies and institutions around the world signal that open access will soon become the default mode for scholarly output. Countries and regions around the world are adopting open access policies and developing repository networks to support these policies. While there are unique requirements in each jurisdictional context and local infrastructures may differ, repository networks must be aligned across the world in order to support the truly global nature of research and scholarly communication.
Repository networks can be aligned on a number of levels: strategically, through laws and policies, and at the technical and service levels. Through a number of strategic and pragmatic activities, the Confederation of Open Access Repositories (COAR) is already working on many aspects of alignment. In March 2014, COAR organised a meeting in Rome, Italy with high-level representatives from regions around the world to discuss the alignment of open access repository networks.
The objectives of the meeting were to:
- Establish formal contacts between repository networks across the globe
- Identify areas in which we can further collaborate and align our activities
- Agree on a mechanism to ensure on-going dialogue
This paper will articulate in detail the benefits of aligning repository networks, present the outcomes of the March 2014 meeting, and outline the next steps for further actions in this area.
Kathleen Shearer is Executive Director of the Confederation of Open Access Repositories (COAR), an international association of repository initiatives launched in October 2009. COAR is located in Gottingen, Germany, with a membership of over 100 institutions worldwide from 35 countries across four continents. Its mission is to enhance the visibility and application of research outputs through a global network of open access digital repositories.
Shearer has her Master’s in Library and Information Studies (MLIS) and has worked in the areas of open access and research data management for over a decade. She is Chair of the international Open Access Licenses and Agreements Task Force, which monitors consortial licensing practices for deposit into repositories. She is also co-chair of the RDA Long Tail of Research Data Interest Group, which is looking at good practices for managing multidisciplinary data sets in repositories. She is also a part-time Research Associate with the Canadian Association of Research Libraries and a member of the Steering Committee for Research Data Canada.
Session 12: Legal Aspects
Researchers need text and data which can be accessed and reused for data mining purposes. Research libraries play a role not only in providing them with material, but also with legal advice on which actions can or cannot be undertaken on the material. They are negotiating licensing agreements with content providers. Research libraries are also advising governments, participating in consultations such as the Licensing for Europe Text and data mining Working Group and the European Commission public consultation on copyright.
The regulation of Text and Data Mining (TDM) is affected by legislation on the creation, usage, access to and reuse of data related to research, copyright, public sector information, data protection, education and sector domains such as the environment. It also involves the licensing agreements imposed by data providers and the licensing options available to research institutions as data producers, ranging from all rights reserved to the public domain, including the various open licenses. The legal framework of law and licenses (regulation by law) is completed by opportunities and restrictions embedded in the technical architecture (regulation by technology) of the platforms hosting the data, which can make it practically impossible or difficult to perform certain actions. The discrepancies between this techno-legal framework and the requirements of researchers’ applications to process data, investigate queries, and perform mining, visualisation or other analytical tasks without restriction indicate points of friction which should be resolved. The most important issues are attribution, non-commercial and share-alike requirements, the lack of a definition of data, the framing of TDM as an exception instead of a right, and technical restrictions.
The methodology associates legal research and argumentation to produce policy recommendations. The geographic focus is Europe, but US and Latin American Open Access legislation is included in the sources, as it should be analysed with a critical perspective. While most literature and projects deal with Open Access to publications, this article targets more specifically Open Access to research data and includes recent developments: the 4.0 Creative Commons licenses available since November 2013, the Horizon 2020 pilot published in December 2013, and the Elsevier TDM policy and the Twitter Data Grant, both released in February 2014.
Mélanie Dulong de Rosnay is a researcher working on the international governance of digital commons and access to knowledge. She is a permanent researcher at the French National Centre for Scientific Research (CNRS) Institute for Communication Sciences, Visiting Fellow at London School of Economics and Political Science Department of Media and Communications, associated researcher at CERSA (CNRS University Paris 2), where she has been Creative Commons France legal lead since 2003. In 2011 she co-founded the Communia international association on the digital public domain, which she currently chairs and has represented at WIPO. She teaches copyright law and participates in research projects on commons-based peer production, public sector information, distributed architectures, open scientific data and public domain digitisation. A graduate in political sciences and a PhD in law, she has been a research fellow at the Berkman Center for Internet & Society of Harvard Law School and Science Commons and then at the Institute for Information Law of the University of Amsterdam. Her publications are available at http://www.iscc.cnrs.fr/spip.php?article1558.
The availability of large digital text collections online is changing the uses of texts and the relevance of library catalogues and bibliographic data. General search engines such as Google are gradually replacing catalogues as the place to start for those who are interested in finding general information about a particular work. Increasingly, as more and more books become digitally available, general search engines are also used by people intending to read a book.
However, the large-scale digitisation of texts also emphasises the need for reliable metadata in order for the digital library not only to function as a trusted repository, extending its bibliographic control into the digital domain, but also to organise textual corpora that permit quantitative analyses with reliable results.
The National Library of Norway plans to digitise its entire collection of books, and is currently halfway through. Thanks to a radical collective licensing agreement called The Bookshelf, the Library can make available both works in the public domain and books under copyright. By 2018, the Library will be able to give users with Norwegian IP addresses access to the complete national literature up until 2001.
This paper will examine how a digital library collection, comprising the digitised texts and their corresponding metadata, is more than a collection of digital versions of printed texts. It invites uses which differ so radically from what one can do with a printed text collection that it becomes something completely different: a new entity in our cultural ontology.
The paper will present the first results of a project at the National Library of Norway: the establishment of an n-gram reader connected to the Library's complete digital collection of printed material. It will show how n-grams are not limited to linguistics and ICT, but are increasingly used by researchers in the humanities and social sciences working in the rapidly expanding fields of digital humanities, content analysis and culturomics.
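To illustrate the kind of computation an n-gram reader performs, the following is a minimal sketch of n-gram counting over tokenised text. The function names and the sample sentence are illustrative only and are not drawn from the National Library's actual service.

```python
# Minimal sketch of n-gram extraction, of the kind an n-gram reader
# computes over a text corpus (typically aggregated per publication year).
from collections import Counter


def ngrams(tokens, n):
    """Yield every run of n consecutive tokens as a tuple."""
    for i in range(len(tokens) - n + 1):
        yield tuple(tokens[i:i + n])


def ngram_counts(text, n=2):
    """Count n-grams in a lower-cased, whitespace-tokenised text."""
    tokens = text.lower().split()
    return Counter(ngrams(tokens, n))


counts = ngram_counts("the national library the national collection", n=2)
print(counts[("the", "national")])  # → 2
```

Running such counts over each year of a digitised collection yields the frequency-over-time curves that culturomics and content-analysis research build on.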
Jon Arild Olsen is director of the Department for Research and Dissemination in the National Library of Norway. He has a PhD in literature from the University in Oslo and has published L’Esprit du roman (Peter Lang 2004), as well as contributions on literature and film to journals such as Poétique and Norsk litteraturvitenskapelig tidsskrift. After working as adviser and head of section for academic affairs in the Faculty of Humanities, University of Oslo, he joined the National Library of Norway in 2009 where he headed the section for film and music before becoming the director of the Department for Research and Scholarship in 2011.
Many libraries in Europe have built large digital collections over the past twenty years. But how do you turn those collections into living archives that are widely accessible and actually used for different academic purposes? How do you handle the copyright issues, design a business model, control who can gain access, and engage the academic community in the process? This case study summarises the experience from two Danish approaches. And do the users really come?
In Denmark, a fortunate mix of legal, technical, economic and academic circumstances has paved the way for two independent and very different portals through which researchers and students can work with large collections of digital radio and television.
The collections: The State and University Library in Aarhus, Denmark, is responsible for collecting, preserving and making accessible radio and television broadcasts. Denmark has had a legal deposit law covering radio and television since 2005, and from the start programmes were captured digitally. Today the collection holds about 1.3 million broadcasts, including some content digitised from tape.
The portals: The collections are available through two online portals, Mediestream.dk and Larm.fm. The first is the State and University Library's own online portal to our digital collections. Apart from radio and television, it holds a collection of 50,000 commercials, and we are about to add a sub-portal with 32 million pages of digitised newspapers. The second, Larm.fm, draws on the same radio content in the same digital repository, but is developed by a consortium consisting of several universities, the Danish Broadcasting Corporation (DR) and the State and University Library. It not only provides access but also offers tools for working with the collections, developed in close co-operation with the researchers.
The legal framework: Danish copyright law allows copyright agencies to negotiate access rights for vast quantities of content, including radio and television broadcasts. This makes it possible to give access to large amounts of copyright-protected content without asking every single copyright owner. It is not free, however, so it is still necessary to find business models to cover the costs. The case study will explore and compare the two approaches and their interaction with academic users before and after the launch of the services.
Tonny Skovgård Jensen has an MA in Literature and a BS in Biology from the University of Aarhus. He spent thirteen years in publishing, ten of them as marketing manager for an educational publisher, and was for four years CEO and partner in an IT company working with e-book distribution and e-learning. He has been chief consultant at the Aarhus State and University Library for four years, working with business development, strategy and digitisation projects, and currently serves as Director of the National Library Division there.