The goal of this WP is to identify:
- What evaluation measures make sense for search tools aimed at children
- How to gather these measures appropriately, i.e. tools that allow non-experts to collect and analyse evaluation data using them.
It will complement WP1 to WP4 by identifying appropriate evaluation measures and by designing an evaluation framework for Information Services for children. Evaluation methods are needed that respond to the particular features and requirements of search tools for children (non-linear search, task control, collaboration), together with measures that help us estimate system quality with respect to these features.
We need to ascertain which measures of system usability (engagement, task success, uptake) correlate with observed behaviour and with reporting (children’s own reports of system use and reports by key adults such as teachers and librarians).
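As an illustration of this correlation question, the sketch below relates a self-report measure (smileyometer-style ratings) to an observed measure (task success) across sessions. All data, names, and the choice of Pearson correlation are illustrative assumptions, not project data or a prescribed method.

```python
# Hypothetical sketch: correlating a self-report usability measure with an
# observed behavioural measure. Data values are invented for illustration.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Per-child smileyometer rating (1-5) and observed task success rate (0-1).
ratings = [5, 4, 2, 3, 5, 1]
success = [0.9, 0.8, 0.4, 0.5, 1.0, 0.2]

r = pearson(ratings, success)
```

In practice one would prefer a rank-based coefficient (e.g. Spearman) for ordinal smileyometer data; the point here is only the shape of the analysis.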
Existing evaluation approaches for child-centred design (such as smileyometers) will be taken as a starting point. We will conduct interviews and observational studies of children carrying out information searches, and we will further develop methods for gathering feedback from children on system quality.
WP5 will develop generic evaluation methodologies suited for use by developers of search environments designed for children. Evaluation of the kind of services to be developed in PuppyIR will combine user-oriented and system-oriented evaluation. Four tasks are distinguished:
- Development of test collections with which to evaluate information services for children
- Development of a suite of user-centred evaluation measures and a toolkit to support their use in evaluations
- Development of non-intrusive analysis tools based on query log analysis
- Research into the effectiveness of these tools within evaluations of demonstrator systems
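The non-intrusive query-log analysis listed above could start from simple per-session statistics. A minimal sketch follows; the tab-separated log format (session id, timestamp, query string) and the reformulation indicator are assumptions for illustration, not the project's actual log schema.

```python
# Hypothetical sketch: deriving a behavioural indicator from query logs
# without intruding on the child's search session. Log format is assumed.
from collections import defaultdict

log_lines = [
    "s1\t2009-01-01T10:00:00\tdinosaurs",
    "s1\t2009-01-01T10:01:10\tdinosaur pictures",
    "s2\t2009-01-01T10:02:00\tplanets",
]

# Group queries by session.
queries_per_session = defaultdict(list)
for line in log_lines:
    session, timestamp, query = line.split("\t")
    queries_per_session[session].append(query)

# A simple indicator: the fraction of sessions in which the child
# reformulated (issued more than one query).
reformulated = sum(1 for qs in queries_per_session.values() if len(qs) > 1)
reformulation_rate = reformulated / len(queries_per_session)
```

Such log-derived indicators could then be compared against the user-centred measures gathered in the other tasks.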
Testing of the information services and their underlying components will be carried out as part of the iterative development of these services in WP3.