This is reflected in the user's standing desire to get the right answer by selecting the correct component (roots, stems, or leaves) for the current environment.
Each simulation event meets the following constraints: (a) the user, or the tutoring sequence, selects a component (e.g., [stem, root, leaves]); (b) correct-value is bound to the correct value for the chosen component (e.g., type of stem); (c) selected-value is bound to the user-selected value for the chosen component.
Goal (bi-valenced): have the chosen component value match the correct component value, so that each event resolves as either a success or a failure.
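To make the frame concrete, here is a minimal sketch in Python; the class and field names (SimulationEvent, component, correct_value, selected_value) are illustrative stand-ins for the AR's actual frame slots, which we do not reproduce here.

\begin{verbatim}
from dataclasses import dataclass

@dataclass
class SimulationEvent:
    """Illustrative frame for one Design-a-Plant simulation event."""
    component: str        # (a) chosen by the user or the tutoring sequence
    correct_value: str    # (b) the correct value for the chosen component
    selected_value: str   # (c) the value the user actually selected

    def goal_achieved(self) -> bool:
        # Bi-valenced goal: achieved when the chosen component value
        # matches the correct component value, blocked otherwise.
        return self.selected_value == self.correct_value

# Example: the user picks a stem type for the current environment.
event = SimulationEvent(component="stem",
                        correct_value="short",
                        selected_value="long")
print(event.goal_achieved())  # False -- the goal is blocked (failure)
\end{verbatim}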
The developers of Design-a-Plant believe there to be a very strong ``immediate gratification'' element in user interaction with the system. In general, there are good mechanisms for providing what is likely to be entertaining (based on the long history of computer games and entertainment systems of varied types), and heuristics for measuring entertainment levels post hoc (e.g., if ``it sells,'' it is probably because it is entertaining). What we do NOT have is a way for computer agents to assess this dynamically as the interaction is taking place.
Affective user modeling can address this in two ways: (1) It may prove true that interesting interactive agents are more entertaining than static or impoverished agents, or than systems that operate without interactive agents at all. Agents with rich affective lives of their own can be extremely interesting to interact with. (2) Through observation we have some clues as to what is entertaining. Having an agent that makes inferences about user state by tracking situations believed to be entertaining may help the overall timing of the system. For example, we might believe that events we have tagged as funny, visual, having audio appeal, or exhibiting cartoon effects may be entertaining. A system modeling user state may then deploy its limited arsenal of entertaining actions at more appropriate times (e.g., to cheer up a student perceived to be distressed).
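As a hedged illustration of point (2), the sketch below tags events with entertainment features and gates the agent's scarce entertaining actions on its belief about the user's mood. The tag set, the scoring heuristic, and the mood labels are all hypothetical; they are not drawn from the actual system.

\begin{verbatim}
# Tags we believe correlate with entertainment (hypothetical labels).
ENTERTAINING_TAGS = {"funny", "visual", "audio-appeal", "cartoon-effects"}

def entertainment_score(event_tags):
    """Crude heuristic: count believed-entertaining features of an event."""
    return len(set(event_tags) & ENTERTAINING_TAGS)

def should_deploy_entertainment(believed_mood, actions_remaining):
    # Spend one of the agent's scarce entertaining actions only when the
    # user model suggests the payoff is highest, e.g., for a student
    # perceived to be distressed.
    return actions_remaining > 0 and believed_mood == "distressed"

print(entertainment_score({"funny", "visual", "long-winded"}))        # 2
print(should_deploy_entertainment("distressed", actions_remaining=3)) # True
\end{verbatim}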
Selected notes: Simulation events in the AR are a frame representation of the salient points of situations that arise in the course of interaction with the user. In the tutoring systems these would take a theoretically equivalent form, regardless of the actual implementation. Agents maintain internal representations of what they believe to be true of the appraisal mechanisms (e.g., the dispositions) of their users, and use these to interpret the supposed effect of simulation events on the user. For example, if an agent believes that user Sarah has a strong desire to succeed on Task A, but that she does not care much about Task B, then the agent might feel pity for Sarah if she fails on Task A, or happy-for her if she succeeds, but would have no fortunes-of-others response to Sarah's relative success with Task B.
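The fortunes-of-others appraisal just described might be sketched as follows. The numeric desire intensities, the care threshold, and the function name are invented for illustration; the AR's actual appraisal machinery is richer than this.

\begin{verbatim}
# The agent's model of Sarah: believed desire to succeed on each task.
sarah_model = {"Task A": 0.9, "Task B": 0.1}   # intensities are invented
CARE_THRESHOLD = 0.5

def fortunes_of_others(user_model, task, succeeded):
    """Return the agent's fortunes-of-others emotion, or None."""
    if user_model.get(task, 0.0) < CARE_THRESHOLD:
        return None                    # the user is believed not to care
    return "happy-for" if succeeded else "pity"

print(fortunes_of_others(sarah_model, "Task A", False))  # pity
print(fortunes_of_others(sarah_model, "Task A", True))   # happy-for
print(fortunes_of_others(sarah_model, "Task B", False))  # None
\end{verbatim}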
As part of the internal models that agents keep of other agents, including the user, they may update mood variables dynamically; these tend to affect the thresholds at which emotions arise. So, if an agent believed Sarah to be feeling particularly anxious, he might, after all, feel pity over Sarah's failure on Task B, because an anxious mood lowers the threshold for activation of the distress emotion, so that failure even on a relatively unimportant (to her) task such as Task B might distress her. Similarly, if the agent believed Sarah to be feeling particularly invincible (e.g., after a string of grand successes), he might not believe Sarah to be distressed about failure on the important (to her) Task A, and hence might not feel pity for her.
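A minimal way to model such mood-adjusted thresholds, continuing the illustrative numbers above: a believed mood shifts the activation threshold up or down, so the same failure can evoke pity in one mood and nothing in another. All constants here are hypothetical.

\begin{verbatim}
# Mood shifts the activation threshold (all constants are invented).
MOOD_SHIFT = {"anxious": -0.45, "neutral": 0.0, "invincible": +0.45}
BASE_THRESHOLD = 0.5
sarah_model = {"Task A": 0.9, "Task B": 0.1}

def feels_pity(user_model, task, believed_mood):
    """Pity arises when believed task importance clears the
    mood-adjusted threshold for the user's distress."""
    threshold = BASE_THRESHOLD + MOOD_SHIFT.get(believed_mood, 0.0)
    return user_model.get(task, 0.0) >= threshold

print(feels_pity(sarah_model, "Task B", "anxious"))     # True: threshold 0.05
print(feels_pity(sarah_model, "Task A", "invincible"))  # False: threshold 0.95
\end{verbatim}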