Today we’ve been using the Google Custom Search API Explorer to generate some sample search results for images, videos, presentations and Flash movies.
Screenshot of the Google API Explorer web page
This is a very effective tool that allows us to filter our searches by many parameters, including file format and site or domain. Images are handled differently from the other resource types and generate a different JSON data structure, which is more detailed than the non-image search results.
We will now parse data from these sample JSON files to test how further meta- and paradata can be retrieved from relevant fileshare sites via their respective APIs. For example, by taking the
contextLink element from the raw Google result excerpted in the JSON below, we can use the Flickr API to gather much richer data for this image resource.
"title": "Bessemer converter (iron into steel), Allegheny Ludlum Steel[e ...",
"htmlTitle": "Bessemer converter (iron into <b>steel</b>), Allegheny Ludlum <b>Steel</b>[e <b>...</b>",
"snippet": "Allegheny Ludlum Steel[e]",
"htmlSnippet": "Allegheny Ludlum <b>Steel</b>[e]",
Have a look at how this example appears on our Flickr API test page.
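The step described above can be sketched as follows: pull the contextLink out of a Google image-search item and, when it points at a Flickr photo page, recover the photo ID we then feed to the Flickr API. This is a minimal illustration, not our production code; the sample item (including the Flickr URL and photo ID) is invented for the example, though the field names follow the Custom Search JSON structure.

```python
import json
import re

# A trimmed sample item from a Google Custom Search image result.
# Field names follow the Custom Search JSON structure; the values
# (URL, photo ID) are illustrative, not a real API response.
sample_item = json.dumps({
    "title": "Bessemer converter (iron into steel), Allegheny Ludlum Steel[e ...",
    "image": {
        "contextLink": "https://www.flickr.com/photos/usnationalarchives/4727559636/"
    }
})

def flickr_photo_id(item_json):
    """Extract contextLink from an image-search item and, if it
    points at a Flickr photo page, return the numeric photo ID."""
    item = json.loads(item_json)
    context = item["image"]["contextLink"]
    match = re.search(r"flickr\.com/photos/[^/]+/(\d+)", context)
    return match.group(1) if match else None

print(flickr_photo_id(sample_item))  # → 4727559636
```

With the photo ID in hand, a single Flickr photos.getInfo call returns the richer record we display.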
Today we’ve been trying out the Zend Framework to retrieve some basic meta- and paradata for a specified YouTube video ID. Although the framework is somewhat less intuitive than the Flickr API, we eventually managed to isolate the main elements of interest, such as number of views, comments, number of times favourited and ratings.
We’ve created a page that allows you to retrieve some basic metadata and paradata for the given YouTube video ID. Have a play with it here.
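To show the shape of what we extract, here is a hedged sketch in Python rather than the Zend (PHP) code we actually use. The nested sample entry below only approximates a GData-style video entry; the field names and values are illustrative assumptions, not the exact Zend_Gdata_YouTube accessors.

```python
# Illustrative stand-in for one video entry returned by the YouTube
# API. The structure loosely mirrors a GData video entry; values are
# invented for the example.
sample_entry = {
    "id": "dQw4w9WgXcQ",
    "statistics": {"viewCount": 15321, "favoriteCount": 204},
    "rating": {"average": 4.6, "numRaters": 87},
    "comments": {"count": 42},
}

def video_paradata(entry):
    """Flatten the usage figures we care about into one record."""
    return {
        "video_id": entry["id"],
        "views": entry["statistics"]["viewCount"],
        "favorites": entry["statistics"]["favoriteCount"],
        "rating": entry["rating"]["average"],
        "raters": entry["rating"]["numRaters"],
        "comments": entry["comments"]["count"],
    }

record = video_paradata(sample_entry)
print(record["views"], record["favorites"])  # → 15321 204
```

A flat record like this is what our test page renders and what we would later submit as paradata.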
Today we’ve been playing with the Flickr API to retrieve some basic meta- and paradata for a specified Flickr photo id. We’ve been using phpFlickr 3.1, which seems to work well. In due course, these data will be displayed in our user interface, and some of the data will also be submitted to the Learning Registry.
Have a play with it here.
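For the Flickr side, the parsing step can be sketched in the same way. We use phpFlickr in practice; the Python below is only an illustration of flattening a photos.getInfo-style response into the record we display and submit. The nested structure mirrors the general shape of the Flickr API's JSON, but the values are invented for the example.

```python
# Illustrative photos.getInfo-style response. The nesting loosely
# follows the Flickr API's JSON shape; all values are invented.
sample_info = {
    "photo": {
        "id": "4727559636",
        "title": {"_content": "Bessemer converter ..."},
        "owner": {"username": "usnationalarchives"},
        "dates": {"taken": "1941-06-01 00:00:00"},
        "views": "1289",
        "tags": {"tag": [{"raw": "steel"}, {"raw": "Bessemer"}]},
    }
}

def photo_record(info):
    """Flatten the metadata and usage data we care about."""
    p = info["photo"]
    return {
        "id": p["id"],
        "title": p["title"]["_content"],
        "owner": p["owner"]["username"],
        "taken": p["dates"]["taken"],
        "views": int(p["views"]),
        "tags": [t["raw"] for t in p["tags"]["tag"]],
    }

print(photo_record(sample_info)["tags"])  # → ['steel', 'Bessemer']
```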
Having now held a couple of sessions with the student group, we have come up with the first set of wireframe designs for the ENGrich user interface. Only three basic pages are currently envisaged: a start page, a results listing (or grid) and a results detail page, which will have slideout portions to allow paradata input and sign-in.
The start page follows Google’s in its simplicity: just a text field for search terms, a search button and the option to limit the search to a single resource type.
We’re considering displaying the search results either as a listing, in which each thumbnail is accompanied by some limited meta- and usage data, or as a grid. We’ll put both designs to the user group and decide whether to adopt one or both of them.
Results as a listing
Results as a grid
The exact choice of icons displayed beneath each thumbnail is subject to ongoing debate. The left-hand column can be used to refine the results based on certain learning object metadata elements.
The full details page will potentially gather together metadata and usage data from different sources. The interface will also allow users to classify resources (probably to the JACS 3 standard) and to add some other metadata, depending on the resource type. For those who have already used the resource, a slideout (an expanding portion of the page) will allow the addition of some data describing how and when the resource was used.