Wednesday, July 04, 2007

Usability testing: Bernard and Belinda’s route to information – Davina Borman & Frankie Wilson, Brunel University Library


Frankie and Davina have been giving their library’s website an overhaul and they have made great progress improving both the content and the compliance with accessibility guidelines. The content of this presentation was a great deal more practically focussed than some of the more theoretical ones that I attended.

They took us through the whole process, but the most interesting aspect was the one referred to in the title – their approach to usability testing. They started by creating some personas based on the different user groups and then asked librarians to ‘get into character’ and perform a few tasks on the new website. Although it was very time-consuming, it turned up several aspects of the website that could be improved. Next, they trialled the new site using volunteer undergrads from the University. Although this group identified a few more issues, none of them contradicted those identified as part of the persona process. I’d say that’s a vote in favour of the persona exercise.
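
Purely to make that comparison step concrete, here is a toy sketch of how the two rounds of findings could be set against each other – the issues and persona descriptions are my own inventions, not anything from the presentation:

```python
# Hypothetical illustration: compare issues found via the persona walkthroughs
# with issues found via student testing. All of the issues below are invented.

persona_issues = {
    "Bernard": {"jargon on the homepage", "opening hours hard to find"},
    "Belinda": {"jargon on the homepage", "databases buried three clicks deep"},
}

student_issues = {
    "jargon on the homepage",
    "opening hours hard to find",
    "search box below the fold",
}

found_by_personas = set().union(*persona_issues.values())

confirmed_by_students = found_by_personas & student_issues   # overlap between the two rounds
only_found_by_students = student_issues - found_by_personas  # new, but not contradictory, issues

print("Confirmed by students:", sorted(confirmed_by_students))
print("Only found by students:", sorted(only_found_by_students))
```

The interesting check is the second set: new issues are fine, but contradictions would have undermined the persona exercise.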

I can see the application of this persona concept really coming into its own when testing a website for which the audience is so large that it would be difficult to engage the end user. In this particular project, the audience is a finite and known group: the students. On top of that, it is a group for which they have the means of contacting every individual member. With such a closed group, I would have thought it would make more sense to engage a representative selection of the student body in the process right at the beginning to ensure that the first pass was as close to what students wanted as possible. I would then have used that same group and a second representative one to test the site.

Frankie and Davina are the first to admit that they didn’t really know what they were doing and that it was a lot of enthusiasm, reading and a steep learning curve for them, but I think that their approach was a little naïve. Having said that, the methodology that they used appears sound and the results of the student testing did, to a large extent, prove the effectiveness of the persona approach. As the audience for our website is so large both in number and geographical spread and is largely unknown, I will be suggesting to the project team that we could use this persona methodology when we review our website in the next year.


WSIS: a proxy for government control of the Internet or an opportunity for cooperation? A view from DBERR on Internet governance after WSIS

Martin Boyle introduced us to WSIS – the World Summit on the Information Society. Having never heard of it (Am I alone in this? Sounds like the kind of thing I should know about...!), I was pretty grateful to him for starting there. I’m not going to go into great detail here but basically, WSIS covers a range of things including:
  • allocation of IP addresses
  • introduction of new generic top-level domains
  • sovereignty of the country code top-level domains
  • control of the root
  • root servers
Each of these issues is closely linked either to revenue generation (we pay to register a new generic top-level domain) or control (sovereignty of the country code top-level domains) or both.

The most interesting thing in this presentation, from my perspective, was some insight into the politics that operate behind the technologies that I use in my day-to-day work. It goes something like this: in many countries, the government is still the principal telecoms operator – no longer the case in the UK, but I gather it remains so elsewhere. The Internet crosses political boundaries, so a summit of this nature is clearly an opportunity for international cooperation. At the same time, given the revenue and control elements and the vested interest that many governments have in protecting telecoms income in their country, it is also an opportunity to abuse that position and apply a degree of government control.

Like the session on copyright that I attended, this one was more theoretical than practical, so although I don’t have much to take back and apply in the workplace, I have gained a better understanding of the forces that influence Internet technology.


The ethical nature of copyright – Graham Cornish, Copyright Circle

Graham gave us a sound introduction to copyright. Not being very up on my copyright, I found it really useful, though if you were quite knowledgeable about copyright, this might have seemed a little elementary.

Here is a quick run-through: It is accepted in societies that if someone creates something, it is theirs to keep, sell or give away. If they choose to sell it, they should be compensated for it. This is an ethical arrangement.

Where it becomes a little less clear is with the shift from a physical resource base to a digital one. With a physical resource (e.g. a book), once you’ve purchased it, it is yours to do with as you please – you can keep it, sell it, give it away, etc. In a digital world, though, once you have purchased something, it is possible to transfer it to someone without having to give it up yourself. A mathematical representation of this scenario is:

Physical resource
1-1=0 (gave the book to a friend to read)
0+1=1 (the friend now has the book)
0 (your copy) + 1 (your friend’s copy) = 1 (total number of copies)

Digital resource
1-1=1 (shared the file with a friend)
0+1=1 (the friend now has a file she didn’t have before)
1 (your copy) + 1 (your friend’s copy) = 2 (total number of copies)
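
To put the same arithmetic into code – a toy sketch of my own, not anything Graham showed – handing over a physical book removes it from your shelf, while ‘sharing’ a digital file simply copies it:

```python
# Toy sketch of the copy arithmetic above (my own illustration, not from the talk).

# Physical resource: giving the book away removes it from my shelf.
my_shelf = ["book"]
friends_shelf = []
friends_shelf.append(my_shelf.pop())           # 1 - 1 = 0 for me; 0 + 1 = 1 for my friend
print(len(my_shelf) + len(friends_shelf))      # total copies: 1

# Digital resource: "sharing" the file copies it; I still keep mine.
my_files = ["file"]
friends_files = []
friends_files.append(my_files[0])              # 1 - 1 = 1 for me; 0 + 1 = 1 for my friend
print(len(my_files) + len(friends_files))      # total copies: 2
```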

With the digital resource, the creator has now effectively been compensated for one copy (yours) but not for the second (your friend’s). One way of ensuring that the creator is compensated for both copies is through licensing. Licensing enables you to pay for the right to use something that you don’t own, and that contract will have limitations which prevent you from sharing it with a friend. The issue is no longer how we compensate the creator for the two uses but how many times we should compensate the creator for the reuse of their work. The balance is shifting. At what point is the investment of time, materials, skill and effort on the part of the creator fairly paid for? The third user? The 100th user? And if we can agree this limit, does that mean that the fourth or 101st user doesn’t need to pay? What will the other three or 100 users who are paying think of that arrangement?

So, what is the ethical nature of copyright? It’s ensuring that the creator of something is fairly compensated for their investment and effort while also ensuring that the user is not unfairly charged for use. I think this is a very interesting question: how do we do this? Licensing is one enabler, but is it the best way of achieving the ideal ethical balance?

I enjoyed Graham’s presentation and learnt a bit about copyright at the same time. Having said that, because the focus of this presentation was of a more theoretical nature, I’m not too sure how I am going to apply this new knowledge at work...more of a learning session for me.


Building your portfolio – Representatives from the Career Development Group

This session was a little like the Chartership and Beyond session that I attended in Lewisham a few months ago. It reinforced the importance of quality over quantity when assembling your portfolio and emphasised the need for evaluation in the written statement (What did you gain? What did you enjoy? What will you do differently? What benefit was there for you and for your employer?).

In addition, there were some handy tips:
  • use the application form as a checklist to ensure that all elements are ready and included
  • use the CV as much as you can as it doesn’t have a word restriction
  • be constructively critical in the written statement – it needs interpretation and analysis, must show awareness of wider community, and demonstrate CPD
  • get a proof-reader
  • bind it securely (e.g. comb binding)
This was a useful session as I find the whole gauntlet that we are running a little confusing at times – it served to reassure me that I am aware of all of the aspects of this submission.


I want some porn! – Pat Beech, RNIB

Pat’s presentation provided the audience with some pretty astonishing facts and figures concerning the market for alternative formats and the shocking lack of supply. The one that stood out most for me was that between 1999 and 2003, only 4.4% of titles published were also published in an alternative format. If that weren’t bad enough, that figure is across all possible alternative formats, so if you wanted something in giant print, the proportion of titles available was even smaller than 4.4%.

She went on to encourage buyers of materials to consult their users, but I really started to take note when she began to discuss the Internet: I don’t purchase materials, but I do maintain a Web presence. Now, our site is pretty poor from an accessibility perspective. I have managed to get our e-newsletter to comply with the WCAG 1.0 double-A criteria, but the website has more stakeholders and so doesn’t conform even to single-A. And it would seem that we aren’t alone: 81% of sites are not fully accessible and 97% do not even meet minimum requirements. In the information age, where the Internet plays such a central role both at work and at home, it isn’t surprising that 40% of people who become partially sighted or blind have to give up on their hobbies.
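
As an aside, some of the most basic WCAG 1.0 checkpoints, such as text alternatives for images, lend themselves to automated checking. Here is a rough sketch of the kind of check I have in mind, using only the Python standard library – the sample HTML is invented for illustration:

```python
# Rough sketch: flag <img> tags that have no alt attribute (WCAG 1.0 checkpoint 1.1).
# Uses only the standard library; the sample HTML below is invented for illustration.
from html.parser import HTMLParser


class MissingAltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = []  # src values of images with no alt text

    def handle_starttag(self, tag, attrs):
        attributes = dict(attrs)
        if tag == "img" and "alt" not in attributes:
            self.missing.append(attributes.get("src", "<unknown src>"))


checker = MissingAltChecker()
checker.feed('<p><img src="logo.gif"><img src="map.png" alt="Campus map"></p>')
print("Images missing alt text:", checker.missing)   # ['logo.gif']
```

It obviously doesn’t replace proper testing with users, but it does catch the most mechanical failures.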

It was an interesting presentation with a good message but it shared a session with John Pateman’s and unfortunately, based on the questions asked of the two presenters afterwards, it looks like the audience was thinking more about John’s speech. I hope that I’m wrong about that.


Libraries and the War on Terror: censorship and diversity – John Pateman, Lincolnshire County Council

John changed his subtitle after it had already been published but I failed to make a note of it at the time; it changed from a focus on censorship and diversity to one on human rights. This change was important as one of his points was the erosion of human rights as the UK government inches closer to a police state.

He went on to mention the Patriot Act in the US, which allows the FBI to obtain browsing and borrowing records. This in itself is a little worrying, as users of the library have come to expect an element of privacy (and clearly it is illegal for librarians to refuse to comply with an investigation), but the fact that it is also illegal for the librarian to inform the user of the request is more worrying to me. It means that the librarian becomes part of the investigation, and this isn’t something with which I would be entirely comfortable. Recently, the University and College Union refused to comply with a request from the UK government to monitor and report unusual behaviour amongst their students. Another of John’s points was the potential good that all of the funding that goes towards the Iraq war could do if it were spent domestically.

One member of the audience pointed out that John had come very close to denying the existence of terrorism (something I also felt at one point during his speech). This gave John the opportunity to make it clear that he did not condone terrorism and that his position was that we need to address the causes of terrorism as a matter of priority, not the terrorism itself.

While I share many of John’s concerns, it was unfortunate that he used the majority of his time to talk about his political opinions and criticise the UK and US governments’ actions rather than focus on how the War on Terror has affected libraries and library services. The one point that he did keep mentioning in connection with libraries was the reduced funding that they received as a result of the wars in Iraq and Afghanistan. This point, though, is a misrepresentation of fact – the truth of the matter is, as one member of the audience pointed out, that funding for libraries has been decreasing since long before either of these wars commenced. While I can see that the war is yet another obstacle to better library funding, it seems naïve to conclude that it is the cause. More interesting and relevant were the few aspects of the speech that focussed on changes to legislation in the name of the War on Terror and how they affect libraries.

At the end of John’s speech (and it was a speech as opposed to a presentation), a polarity of opinion emerged in the audience regarding the use of this forum for promoting his views. Unfortunately, John called for a straw poll after one member of the audience objected to his use of the session for sharing his political views rather than focussing on the impact that the War on Terror has had on libraries. While I agreed with many of John’s points, I sided with this individual – it was not what we had come to the session to hear. My feelings about the use of the session to present a political position aside, calling for this poll was, in my opinion, petty, belittling and disrespectful to the individual concerned; I lost all respect for John at this point. He clearly felt as though he and his ideas were under attack and he was unable to handle it well.


Tuesday, July 03, 2007

Practical uses for Web 2.0 in a library environment – Phil Bradley, Information specialist and Internet consultant

Having attempted a definition of Web 2.0 (more of a concept than anything else) and described it in relation to the existing Web, Phil took us on a canter through the different pieces of software and services that fit this change. These bits of functionality are Web 2.0 because they sit within a platform that involves the consumer in a creator capacity as well, and because the output is not dependent on any particular machine and can be reused and relinked any number of times.

Phil spent a bit of time talking about blogs, recommending that every library have one and that it ought to be treated as a website in its own right rather than a diary or journal. He also touched on RSS, aggregators, podcasts (with a great example of a library that has made its orientation available as a downloadable podcast), bookmarks, communities, instant messaging, and mashups.
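
To make the RSS piece a little more concrete, here is a minimal sketch – my own, not something from Phil’s talk – of pulling item titles out of an RSS 2.0 feed using only the Python standard library; the feed URL is a placeholder:

```python
# Minimal sketch of reading an RSS 2.0 feed (my own example, not from the talk).
# The feed URL is a placeholder; swap in a real feed to try it out.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://example.org/library-news/rss.xml"  # placeholder URL

with urllib.request.urlopen(FEED_URL) as response:
    tree = ET.parse(response)

# RSS 2.0 puts items under channel/item, each with a title and a link.
for item in tree.findall("./channel/item"):
    title = item.findtext("title", default="(no title)")
    link = item.findtext("link", default="")
    print(f"{title} - {link}")
```

An aggregator is essentially this loop run over many feeds on a schedule, with the results merged and displayed in one place.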

Phil was a very engaging speaker and the canter through Web 2.0 technology and options contained something for everyone. Of particular interest to me were the customised start pages (Pageflakes - http://www.pageflakes.com/, Netvibes - http://www.netvibes.com/) – probably more from a personal perspective, though I can see applications in more traditional library settings – and the tailored search functionality (Eurekster - http://www.eurekster.com/about, Rollyo - http://www.rollyo.com/), which appeals both personally, as I have dozens of carefully organised bookmarks, and professionally.

Unfortunately, he didn’t talk very much about the potential pitfalls of employing Web 2.0 in a library environment; there were three areas that I would like to have heard more about:
  • issues of ownership / copyright in an environment where the host of the site is not necessarily the sole creator of content,
  • related to that, the legal liability for content that is generated by a disparate group of creators, and
  • the pitfalls of collective intelligence. Phil touched on this last one in his introduction: if we adhered to collective intelligence, we’d all still believe the world to be flat – groupthink can be damaging in the long run.
Phil’s session was an introduction to Web 2.0 technologies that are available rather than an overview of Web 2.0 as a concept. To be fair to him though, doing the latter any justice would probably require more time than the session allowed.

His presentation is available at http://www.slideshare.net/Philbradley/umbrella2007


Monday, July 02, 2007

Thesaurus vs taxonomy vs subject headings

A few weeks ago, I attended a course on how to build a thesaurus. Having thought about it a fair bit since the course, I’ve come to the conclusion that what we probably want to do here, at least in the short term, is not to create a thesaurus but to put together a list of subject terms. Here’s my thinking…

A thesaurus is a comprehensive listing of all the possible terms that someone might use to describe content within our subject (not an official definition, but it’ll do for my purposes here). This means that whatever term someone might use, the thesaurus should lead them to the same one as the cataloguer; the problem is the time required to create one. One of the things that I picked up from my course is that building a thesaurus from scratch is a very large task, and having discovered that there isn’t anything quite right out there already, I think I’d be looking at quite a bit of work. Also, although a thesaurus is more flexible, I would need to use more terms to describe the subject: where a single subject heading will capture it, I might need several terms from the thesaurus.

A taxonomy has the same description limitation as the thesaurus (potentially many terms required) but is much quicker to produce. Of course, with a taxonomy a user needs to guess the right term, whereas a thesaurus will direct them to it, provided they’ve made a reasonably good guess in the first place.

Subject headings take a little longer to produce than a taxonomy but are still quicker than a thesaurus, and they can describe the subject a little more effectively because they provide context that an isolated word does not. Subject headings have their own pitfalls, of course, particularly when it comes to consistency of use over time.
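
To make the distinctions a bit more concrete, here is a rough sketch of the three approaches as data structures – the terms are invented examples, not drawn from our actual subject area:

```python
# Rough sketch of the three approaches as data structures (the terms are invented).

# Taxonomy: a bare hierarchy of preferred terms.
taxonomy = {
    "Transport": {
        "Road transport": {"Cycling": {}, "Buses": {}},
        "Rail transport": {},
    },
}

# Thesaurus: each preferred term carries lead-in terms (UF), broader terms (BT) and
# related terms (RT), so a searcher who tries "Bikes" is pointed at "Cycling".
thesaurus = {
    "Cycling": {"UF": ["Bikes", "Bicycles"], "BT": ["Road transport"], "RT": ["Cycle lanes"]},
    "Bikes": {"USE": "Cycling"},
}

# Subject headings: the context is pre-coordinated into the heading string itself.
subject_headings = [
    "Cycling -- Safety -- Great Britain",
    "Road transport -- Planning",
]


def resolve(term: str) -> str:
    """Follow a USE reference to the preferred term, if the thesaurus has one."""
    return thesaurus.get(term, {}).get("USE", term)


print(resolve("Bikes"))   # -> Cycling
```

The USE/UF references are what make the thesaurus forgiving of a user’s first guess, and they are also a large part of what makes it slow to build and maintain.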

So here is how I sort of see things looking:

                      Taxonomy   Thesaurus   Subject headings
Time to create        Fast       Slow        Medium
Versatility           High       High        Low
Ease of use           Low        High        Medium
Ease of maintenance   High       Low         Low
Structure             Low        High        Medium


The time to create is an initial outlay of resource rather than an ongoing one, so from a long-term perspective the fact that this might be Slow isn’t too critical. What makes a thesaurus less attractive as an option is the effort required to maintain it (though of course there is software available to assist). The appeal to me of the subject headings is their time to create and their relative ease of use. I would start the subject heading creation process by building a taxonomy and use that to develop the headings. After creating and introducing the subject headings, I would then use that same taxonomy (and the finished subject headings) to develop a thesaurus and aim to use that in the longer term.

What do you think? Are there flaws in my thinking here? Are there other aspects that I should consider? This hasn’t exactly been a scientific process so there is every chance that I have missed something…

So, was the course a waste of time/money? Not at all. I couldn’t have arrived at a conclusion about the best way forward for my organisation without having attended it, even though the conclusion was not to build a thesaurus. Also, if/when we do get to the point where we want to introduce a thesaurus, I will have an idea of where to start.