Experts Testsuite Toolset

Paul Libbrecht edited this page Jan 16, 2015 · 2 revisions

This note sketches a proposed workflow and the software extensions needed for the Open Discovery Space project to collect input from domain experts in creating and evaluating search test-suites.

Basics

For our purpose, search test-suites are made of the following ingredients:

  • a set of documents (generally all known to the expert)
  • a set of queries
  • a set of assessments about the relevance of each document appearing in the top of the results list of each query (relevant/not-relevant)
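Concretely, these three ingredients amount to plain data. A minimal sketch in Python follows; the URLs and query strings are made up for illustration and are not part of any ODS schema:

```python
# A search test-suite: documents, queries, and relevance assessments.
# All identifiers below are illustrative only.

documents = [
    "http://example.org/resource/photosynthesis-lesson",
    "http://example.org/resource/cell-biology-quiz",
]

queries = ["photosynthesis", "cell structure"]

# (query, document) -> relevant / not relevant
assessments = {
    ("photosynthesis", documents[0]): True,
    ("photosynthesis", documents[1]): False,
}
```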

As Open Discovery Space is a broad EU project, we aim to gather test-suites in each language and each school domain. Based on each of these test-suites, we expect to enhance the search engine's configuration, its set of processors, and its inputs: the Drupal objects, the harvested records, and the original repositories. Following these enhancements, the performance indicators obtained by re-running the test-suites should go up.
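One common performance indicator for such suites is precision at k: the fraction of a query's top-k results that the expert marked relevant. A minimal sketch, assuming the relevant/not-relevant assessments described above (the function name is ours, not from any ODS codebase):

```python
def precision_at_k(results, relevant_docs, k=10):
    """Fraction of the top-k results that the expert judged relevant."""
    top = results[:k]
    if not top:
        return 0.0
    hits = sum(1 for doc in top if doc in relevant_docs)
    return hits / len(top)
```

Re-running a test-suite before and after an enhancement and comparing such numbers turns "performance indicators should go up" into a measurable check.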

Workflow

We envision the following workflow once an expert has been recruited:

  • the expert identifies the documents of interest (as URLs to resources probably, or using a button)

  • the expert transmits this to the test-suite coordinator

  • the coordinator creates the necessary configuration so that the test-suite can now be "run"

  • the expert identifies the set of queries on this collection (as URLs)

  • the expert now "runs" the test-suite: (s)he performs the queries and ticks boxes of relevant documents

  • the coordinator gathers the results for the reports, along with overall qualitative reports

  • the coordinator brings the test-suite to the development environment and reindexes as necessary

  • developers enhance the search system (indexing, querying, configuration, …) so as to offer solutions to the issues observed

  • a release to production happens; this potentially makes a re-indexing and/or a re-harvesting necessary, in which case the test-suite's resources are re-indexed first

  • the expert runs the test-suite again and reports (hopefully) enhancements

Software Adjustments

The workflow above requires adjustments to the software to support the experts and possibly the coordinator. I see the following components, each marked as optional or necessary:

  • (optional) the expert, if part of a group, could be enabled to create "his test-suite" and to choose resources as (s)he browses by just clicking a link. That action could trigger the necessary re-indexing.
  • (necessary) the expert needs a UI to run his queries on his test-suite, and maybe to invite others to do so. That UI is simply a duplicate of the search which talks to the same SOLR handler, with an added filter (an fq parameter). Ideally that UI lets him/her tick checkboxes for the relevant results and submit this assessment. The assessment only needs to be saved to a CSV file or similar.
  • (optional) search results' assessments could be previewed too (e.g. by the same check-boxes) for a better understanding, e.g., by the developers