
Self-Reflection Editor: Configure Feedback

Feedback: Configuring your self-reflection tutor feedback can be difficult. Test your self-reflection several times to determine whether the feedback the tutor provides is helpful and relevant. Keep the following in mind when configuring your self-reflection feedback:

  • How many "turns" do you want to give the user to fully answer the question? Each "turn" equates to one sentence entered by the user. For AutoTutor Lite to detect the end of a turn, the user must end each input with a period.
  • A set of feedback triggers is needed for each turn your user will take. One way of thinking about this: imagine a classroom full of students, all trying to answer this self-reflection question. Some students will provide very detailed and complete answers on the first turn; give them feedback letting them know they fully answered the question and can move forward. Other students will provide little to no relevant information on the first turn; give them feedback in the form of hints that guide them toward the correct answer over their next turns.
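Since each period-terminated sentence counts as one turn, turn detection can be sketched as follows. This is an illustration of the rule described above, not AutoTutor Lite's actual implementation; the function name is invented for this example.

```python
def count_turns(user_input: str) -> int:
    """Count completed turns: one per sentence ending in a period.

    Hypothetical sketch of the turn rule described above -- AutoTutor Lite
    only recognizes a turn once the input is terminated with a period.
    """
    sentences = user_input.split(".")
    # The fragment after the last period is an unfinished turn; drop it.
    return sum(1 for s in sentences[:-1] if s.strip())
```

For example, an input with no trailing period yields no completed turn, which is why users must be reminded to end each input with a period.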

To insert a feedback trigger, click on the "add rule" button located at the bottom of the "Configure Feedback" screen.

The combination of these rules functions as a micro-model of student knowledge. 


RN: Relevant New -- Relevant & New indicates new, relevant information the student provided on each turn. In the above example the student provided no relevant and new information on turn 3.
IN: Irrelevant New -- Irrelevant & New indicates new information the student provided on a turn that is irrelevant to the target answer (the semantic answer).
RO: Relevant Old -- Relevant & Old indicates relevant information the student is repeating. For example, on turn two the student provided relevant information, but part of the answer had already been stated.
CO: Total Coverage -- Total Coverage indicates the total percentage of the semantic answer the student has covered.
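AutoTutor Lite derives these four values from semantic matching against the semantic answer. As a rough, hypothetical illustration only, the sketch below approximates them with simple word overlap; the function name and scoring scheme are invented for this example and are not AutoTutor Lite's actual algorithm.

```python
def score_turn(turn_words, target_words, covered):
    """Return (RN, IN, RO, CO) for one turn -- simplified word-overlap sketch.

    turn_words   -- set of words in the student's current turn
    target_words -- set of words in the target (semantic) answer; assumed non-empty
    covered      -- set of target words matched on earlier turns (updated in place)
    """
    relevant = turn_words & target_words
    rn = len(relevant - covered) / len(target_words)        # Relevant & New
    ro = len(relevant & covered) / len(target_words)        # Relevant & Old (repeated)
    irr = len(turn_words - target_words) / max(len(turn_words), 1)  # Irrelevant & New
    covered |= relevant
    co = len(covered) / len(target_words)                   # Total Coverage so far
    return rn, irr, ro, co
```

Under this toy scheme, a student who repeats words already credited on an earlier turn raises RO but not CO, matching the RO description above.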

Let's walk through a few examples of how to create feedback triggers for turn 1.
  1. Click on the "CO" button next to "Add Rule for:" 
  2. Click on the number under "Turn" and select "1" to set this feedback to trigger on turn 1.
  3. Select the "trigger value" by clicking on the number next to the "relation" column. For total coverage (CO), this value corresponds to the % of coverage of the semantic answer. In the above example, the student provided about .5 CO on turn 1, .75 CO on turn 2, and .75 CO on turn 3. Let's set this value to .3 for this trigger.
  4. Select the relation to the value you just set. Let's select "near" for this trigger.
  5. Now we can edit the actual feedback that will be provided. This trigger will fire if the user's first-turn input covered about 30% of the semantic answer. Such a student is on the right track and may just need to elaborate on the previous statement. So let's set the feedback to say "Good. You're on the right track. Please try to elaborate on your answer."
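The rule built in the steps above can be sketched as a small data structure plus a matcher. This is an assumption-laden illustration: the dictionary keys, the extra "above"/"below" relations, and the numeric tolerance used for "near" (0.15 here) are all invented for this example, since the actual threshold AutoTutor Lite uses for "near" is not documented on this page.

```python
NEAR_TOLERANCE = 0.15  # assumed tolerance for the "near" relation

def rule_fires(rule, turn, scores):
    """Check one feedback rule against the current turn's scores.

    rule   -- dict with keys: var, turn, relation, value, feedback (hypothetical schema)
    turn   -- current turn number (1-based)
    scores -- dict mapping "RN"/"IN"/"RO"/"CO" to this turn's values
    """
    if rule["turn"] != turn:
        return False
    observed = scores[rule["var"]]
    if rule["relation"] == "near":
        return abs(observed - rule["value"]) <= NEAR_TOLERANCE
    if rule["relation"] == "above":   # assumed additional relation
        return observed > rule["value"]
    if rule["relation"] == "below":   # assumed additional relation
        return observed < rule["value"]
    return False

# The trigger configured in steps 1-5 above:
rule = {"var": "CO", "turn": 1, "relation": "near", "value": 0.3,
        "feedback": "Good. You're on the right track. "
                    "Please try to elaborate on your answer."}
```

With this sketch, a first-turn CO of 0.35 is "near" 0.3 and the feedback fires, while the same score on turn 2 would not match this rule.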