Earlier this week we took part in the JISC Content and Discovery Programme Meeting in Birmingham. This event brought together projects from different JISC programmes, in particular the JISC Discovery programme and the Content programme. The key aims were for projects to learn about each other's work and to share experiences and key issues around best practice in producing resources that are easily discoverable, usable and re-usable by others (e.g. open data). The event was less about formal presentations and more about informal group work.
The participants were asked to get into groups and share their thoughts on the following topics:
- DEMAND: How is your project evaluating/gauging user demand/feedback?
- IMPACT: How will you know if you’ve been successful? (how is success, impact or benefits being measured?)
- SUSTAINABILITY/BIZ CASE: Does your work involve looking at sustainability, or a longer-term business case for the project to your institution? (how?)
- EXPOSURE ON THE WIDER WEB: How is your project making content open to the wider web and end users? (how are you working to increase chances of discoverability of content?)
- OPEN DATA: What licensing issues are you having to tackle (or ignore) for now?
- SHARED OBSTACLES: Complete the following sentence: Our project objectives would be much easier to achieve if….
The afternoon Technology session was led by Owen Stephens from The Open University, and the discussions focused on three major issues: Linked Data, data interfaces and data aggregation.
In the session we talked about:

- a crowdsourcing approach to metadata;
- creating our own APIs as a mechanism for increased web exposure and sustainability;
- how to deal with very large datasets;
- what to use as identifiers for Linked Data;
- how to handle data migration;
- how to deliver data – in human-readable, machine-readable or mixed formats;
- approaches, tools and licensing for data aggregation;
- anti-duplication semantic algorithms and semantic inconsistency.
An interesting discussion sparked around using SILK to calculate semantic similarity for aggregation and anti-duplication purposes when pulling together resources from a number of repositories. Google Refine was mentioned as a tool for dealing with duplicate content shown or linked via multiple distinct URLs.
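To make the anti-duplication idea concrete, here is a minimal Python sketch of similarity-based de-duplication. This is not SILK itself (SILK is a dedicated link-discovery framework with its own declarative rules), just an illustration of the underlying principle using fuzzy string matching; the record structure and field names are hypothetical.

```python
# Sketch of similarity-based de-duplication (illustration only, not SILK):
# records pulled from several repositories may describe the same item
# under different URLs, so we compare normalised titles and keep only
# one record per cluster of near-identical titles.
from difflib import SequenceMatcher


def similarity(a: str, b: str) -> float:
    """Similarity ratio in [0, 1] between two normalised strings."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()


def deduplicate(records: list[dict], threshold: float = 0.9) -> list[dict]:
    """Drop records whose title is near-identical to one already kept."""
    kept: list[dict] = []
    for rec in records:
        if not any(similarity(rec["title"], k["title"]) >= threshold
                   for k in kept):
            kept.append(rec)
    return kept


# Hypothetical records from two repositories, two of which are duplicates
records = [
    {"title": "The Origin of Species", "url": "http://repo-a.example/1"},
    {"title": "the origin of species ", "url": "http://repo-b.example/99"},
    {"title": "On Liberty", "url": "http://repo-a.example/2"},
]
print(len(deduplicate(records)))  # 2 distinct items survive
```

A real aggregation pipeline would compare several fields (creator, date, identifier) and tune the threshold per field, which is essentially what SILK's linkage rules express declaratively.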