Our success measures for the project were qualitative, focussing on the results of usability testing pre- and post-development, and on the range of usability measures used in the project.
1. A mobile interface is developed for LOCATE with increased usability over the desktop version when used on a mobile device. This will be measured by comparing baseline testing with summative testing following the rapid development phase.
As mentioned in our previous post, we were unable to develop a production version of LOCATE as part of this project; however, we were able to generate mock-ups and wireframes that tested developments identified from the baseline testing. Summative feedback, and feedback received during development, indicated that the relative usability of the planned mobile version had improved over the desktop model.
It was clear that customer expectations on a mobile device differed from those at a desktop. Tablet devices were more ‘forgiving’ in translating the desktop version of LOCATE, but it was on handheld devices such as iPhones and Android phones that the desktop interface clearly stumbled, owing to the much smaller screen sizes.
From testing, it was obvious that significant work would be required to pare down the use of screen real estate in order to deliver the information customers require in an easily readable, interactive display. Large commercial companies, such as eBay and Amazon, have developed OS-specific applications to deal with the tension between usability and screen size. We can look to commercial database deployments such as these for ideas on how to create our own service.
A broader question, which this project does not seek to answer, is whether customers actually want to view Library catalogue data on their mobile device. Stakeholder feedback during testing suggests that they can indeed see a use for it. Even so, we should ensure that we are being customer-led, developing a service they will ‘need’ to enhance their study and research, and not technology-led, investing resources in a service either because we can or because we feel we should in order to remain relevant.
2. The case study utilises a range of usability testing methods, focussing on practical usage in a rapid development environment. We will report on the appropriateness and relative value of the approaches taken in achieving the end result as development and testing progress.
Throughout the project we have utilised a number of recognised usability methodologies. These included:
• Cognitive walkthrough
• Focus groups
• Paper prototyping
• Guerrilla testing
• Contextual analysis
The blog entries we have created show how we used each of these methods and the results that were obtained. Although we did not use the skills of a recognised usability expert, we were fortunate to be able to draw on the expertise of colleagues in our e-learning team, who have practical knowledge and awareness of this area and were invaluable in supporting our project as it progressed. Access to a usability expert would clearly also have been valuable, had resources been available.
Looking back at the methods we used in this project, we feel they were both appropriate and appropriately employed. However, it is clear that the availability of a resource such as the one being developed by the Usability Support Project (UsabilityUK) in Strand A would have been valuable to us, both at the project definition stage and throughout development and testing.