
How to add a new evaluation model

Aldo edited this page Mar 8, 2018 · 11 revisions


  1. Adding a new evaluation model
    1. Generate the evaluation model
    2. Customize the module class
    3. Enable and configure the new evaluation model
  2. Customization options
    1. User Interface
    2. Charts and graphs

Adding a new evaluation model

Generate the evaluation model

The first step to add a new evaluation model is to execute the following task:

bundle exec rake generate:evmethod["modelName","multiple","automatic","moduleName"]

The task accepts the following parameters:

  • modelName — The name of the evaluation model. It can be any alphanumeric string. Mandatory. If the name has an irregular plural (e.g. it ends with an 's'), it is also necessary to create an inflection rule (see below).
  • multiple — If "true", the model will accept several evaluations of the same Learning Object from a single user. Otherwise, a user will be able to evaluate a Learning Object only once. The default value is "false".
  • automatic — Set this param to "true" to indicate that the model does not require human intervention and can perform the quality evaluations automatically. The default value is "false".
  • moduleName — Optional. Specifies the name of the internal Rails class that will handle the evaluations of the model. By default it is derived from the modelName param.

Thus, if we want to add a new evaluation model called "SUS" that only allows one evaluation per user and is not automatic, we should add an inflection rule and execute the following:

bundle exec rake generate:evmethod["SUS","false","false"]

or simply

bundle exec rake generate:evmethod["SUS"]

The output of the task should be something like this:

Generating new evaluation model
The model was created in /home/user/workspace/LOEP/app/models/evaluations/sus.rb
The controller was created in /home/user/workspace/LOEP/app/controllers/evaluations/suses_controller.rb
The evaluation model was succesfully generated
{"allow_multiple_evaluations":false,"automatic":false,"id":24,"module":"Evaluations::Sus","name":"SUS"}

If the name of the evaluation model we want to add has an irregular plural (e.g. it ends with an 's'), we need to add a new inflection rule in the config/initializers/inflections.rb file. The plural must always be different from the singular.
For instance, to add the "SUS" evaluation model we can add the following rule:

ActiveSupport::Inflector.inflections do |inflect|
  [...]
  inflect.irregular 'sus', 'suses'
end

After executing the generate:evmethod task, two files are automatically generated:

  • The evaluation module in the app/models/evaluations folder.
  • The evaluation controller in the app/controllers/evaluations folder.

For instance, after executing bundle exec rake generate:evmethod["SUS"] the following files are automatically generated:

app/models/evaluations/sus.rb
app/controllers/evaluations/suses_controller.rb

Next, we need to customize the evaluation module file; the controller file does not need to be changed.
The default content of the evaluation module file (app/models/evaluations/sus.rb in the example of this guide) should look like this:

# encoding: utf-8

class Evaluations::Sus < Evaluation
  # this is for Evaluations with evMethod=SUS (type=SusEvaluation)

  def init
    self.evmethod_id ||= Evmethod.find_by_name("SUS").id
    super
  end

  def self.getItems
    [
      {
        :name => "Item1",
        :type=> "integer"
      },{
        :name => "Item2",
        :type=> "integer"
      }
    ]
  end

  def self.getScale
    return [1,5]
  end

end

These are the mandatory methods that the module needs in order to work on the LOEP platform:

  • init — Initialization method. Mandatory. Does not need to be changed.
  • self.getItems — The method where all the items of the evaluation model should be defined. See the 'How to define items for the evaluation model' section for details.
  • self.getScale — Defines the scale in which the numeric items of the model should be rated. For instance, if the model requires a 5-point Likert scale, the getScale method should return the [1,5] array.

There are more methods that can be specified in this module for customizing the evaluation model. Check the 'Customization options' section for details.

So, at this point we should define at least the items of the evaluation model and their scale by customizing the self.getItems and self.getScale methods.
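As an illustration, a SUS-style model could define ten five-point items. This is only a sketch: the item names below are placeholders rather than the official SUS statements, and the methods are shown in a plain standalone class so they can be read in isolation — in LOEP they would live in the Evaluations::Sus class generated by the task.

```ruby
# Hypothetical sketch: ten generic five-point items for a SUS-like model.
# In LOEP these methods belong in Evaluations::Sus < Evaluation; a plain
# class is used here so the sketch is self-contained.
class SusModuleSketch
  def self.getItems
    (1..10).map do |i|
      {
        :name => "Item#{i}",  # placeholder names, not the official SUS wording
        :type => "integer"
      }
    end
  end

  def self.getScale
    [1, 5]  # 5-point Likert scale
  end
end
```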

To provide the names and descriptions of the items with internationalization support we can define the values in the corresponding config/locales/*.yml files and use them in the module class:

def self.getItems
  items = []
  9.times do |i|
    items << {
      :name => I18n.t("evmethods.myevmethod.item" + (i+1).to_s + ".name") + ".",
      :description => I18n.t("evmethods.myevmethod.item" + (i+1).to_s + ".description"),
      :type => "integer"
    }
  end
  items
end

The self.getItems method of the evaluation module should return an array with all the items of the evaluation model.
Each item is defined as an object with the following fields:

  • name — Name of the item. Mandatory. Any alphanumeric string is valid.
  • type — Type of the item. Mandatory. The following types are supported: "integer", "decimal", "string" and "text". Depending on the type, the items are displayed, stored and handled in a different way.
  • description — A description of the item. Any alphanumeric string is valid.
  • metric — For automatic evaluation models only. Defines the metric in charge of calculating the value/score of the item.
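For illustration, the following sketch defines items of different types. The item names and descriptions are invented for this example; only the field names come from the table above (the metric field is omitted here because it applies only to automatic models). The method is wrapped in a plain class so the snippet is self-contained.

```ruby
# Hypothetical item definitions showing the supported fields and types.
# Item names and descriptions are invented for illustration.
class ItemExamples
  def self.getItems
    [
      { :name => "Usability", :type => "integer",
        :description => "Overall usability of the Learning Object" },
      { :name => "Loading time", :type => "decimal" },
      { :name => "Navigation comments", :type => "text" }
    ]
  end
end
```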

Finally, we need to enable the new evaluation model in the config/application_config.yml file. Check the Setting up a LOEP instance: The application_config.yml file page for more details about this file.

If the name of the model is "SUS", we should add this name to the "evmethods" key in the following way:

#Copy this file to application_config.yml
#Configure here all the variables of your LOEP instance.

development:
  domain: "localhost:8080"
  [...]
  evmethods: ["LORI v1.5","LOEM","WBLT-S","WBLT-T",...,"SUS"]
  [...]

If we want the new evaluation model to accept external evaluations, we also need to include it in the array of the allow_external_evaluations key, following the instructions described in the Setting up a LOEP instance: The application_config.yml file page.
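For instance, assuming the allow_external_evaluations key takes an array of model names in the same way the evmethods key does (check the referenced page for the authoritative format), the entry could look like this:

```yaml
development:
  [...]
  evmethods: ["LORI v1.5","LOEM","WBLT-S","WBLT-T",...,"SUS"]
  allow_external_evaluations: ["SUS"]
  [...]
```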

Customization options

The previous section explained how to add a new evaluation model to LOEP. This section describes the customization options that the LOEP framework provides for an added evaluation model.

User Interface

Options for customizing the web forms generated automatically

The self.getFormOptions method in the module class allows defining some extra options for the UI (User Interface):

  def self.getFormOptions
    {
      :scaleLegend => {
        :min => I18n.t("forms.evmethod.low"), 
        :max => I18n.t("forms.evmethod.high")
      },
      :explicitSkipAnswer => {
        :title => I18n.t("forms.evmethod.na")
      }
    }
  end

The available options are:

  • scaleLegend — The text to be displayed at the ends of the scale for items of type integer on the web forms. Default values are "Low" and "High". Likert scales can use the I18n.t("forms.evmethod.low_likert") and I18n.t("forms.evmethod.high_likert") values.
  • explicitSkipAnswer — Disabled by default. If defined, includes an option to explicitly skip an item or leave it blank.
  • contexts — Enabled by default. If false, disables the context block in the web forms.
  • comments — Enabled by default. If false, disables the comments block in the web forms.
  • global_score — Enabled by default. If false, disables the global score block in the web forms.
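As an example, a getFormOptions sketch that uses a Likert legend and hides the comments and global score blocks might look like the following. This assumes, based on the option table above, that contexts, comments and global_score are boolean keys of the returned hash; literal strings replace the I18n lookups so the sketch runs outside Rails.

```ruby
# Hypothetical sketch: Likert scale legend plus disabled comments and
# global score blocks. Literal strings stand in for the I18n lookups.
class FormOptionsSketch
  def self.getFormOptions
    {
      :scaleLegend => {
        :min => "Strongly disagree",  # I18n.t("forms.evmethod.low_likert") in LOEP
        :max => "Strongly agree"      # I18n.t("forms.evmethod.high_likert") in LOEP
      },
      :comments => false,      # hide the comments block
      :global_score => false   # hide the global score block
    }
  end
end
```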

Use custom web forms

If an evaluation model requires a particular web form that cannot be generated automatically by the LOEP framework, or needs additional user interface features, LOEP allows including a custom web form.
To do that, just create the web form in the file app/views/evaluations/#{modelName}/survey.html.erb, where #{modelName} is the name of the evaluation model, pluralized and in lowercase.

LOEP also provides some features to facilitate client-side validation in the web forms, as well as other functionality such as Ajax handling. You can check the app/views/evaluations/_form.html.erb file for details about these features.

Charts and graphs

LOEP provides several types of charts and graphs for summarizing the results of the evaluations of the different evaluation models. These same graphs are also used when comparing the evaluations of different Learning Objects and to represent the aggregated statistics.

By default, a radar chart will be generated showing the average rating for each numeric item. The items will be obtained from the self.getItems method of the module class. If the model has fewer than three items, a bar chart will automatically be used instead of the radar chart. The default engine for generating the charts and graphs is RGraph (http://www.rgraph.net).

We can customize the graph through the self.representationData method of the module class.
The following is the method provided by the LORI evaluation model (in the Evaluations::Lori class):

  def self.representationData(lo)
    super(lo,Metric.find_by_type("Metrics::LORIAM"))
  end

In this case the module just calls super (i.e. the representationData method of the parent class), indicating the metric that should be used to calculate the global score in the radar chart. If no metric is specified, LOEP will use any metric available for the evaluation model.

We can also override this method and define a completely new chart. For instance, the following method is used in the SUS model to represent the Global SUS score using a bar chart:

  def self.representationData(lo)
    evmethod = Evmethod.find_by_module(self.name)
    metric = Metric.find_by_name("Global SUS")
    return if metric.nil?

    loSUSscore = lo.scores.find_by_metric_id(metric.id)
    return if loSUSscore.nil?

    representationData = Hash.new
    representationData["name"] = lo.name
    representationData["averageScore"] = loSUSscore.value.to_f.round(2)
    representationData["iScores"] = [representationData["averageScore"]]
    representationData["labels"] = [metric.name]
    representationData["engine"] = "Rgraph"
    representationData
  end

Note that the "Global SUS" metric must exist in order to use this representation.

If we want to disable all graphs for an evaluation model, we should define the self.representationData method in its module class and return nil:

  def self.representationData(lo,metric=nil)
    nil
  end

Finally, we can also implement our own engine for rendering the charts and graphs. To do that, we should add a new file to the app/views/evmethods/representations/ folder with the name _#{engineName}_representation.html.erb, where engineName is the name of the engine.
As a guideline for implementing a new engine, the code of the RGraph engine is available at app/views/evmethods/representations/_rgraph_representation.html.erb.