
Performance Assessment in Virtual Environments

Since virtual environments are a product of new digital technologies, they can be exploited for technology-based learning given the right learning context. Unfortunately, the virtualness of the environment is both its strength and its Achilles’ heel, because performance assessment within a virtual environment is very difficult. This explains why most currently available virtual training products and serious games lack an integrated assessment component. Many researchers have therefore turned to Media Comparison Studies (MCS) to try to measure performance improvement, a rather ineffective approach. This lack of appropriate assessment components to ascertain the effectiveness of virtual environment and serious games training has, no doubt, led many organizations to delay adopting the technology for training and learning. Uncertainty about Return on Investment is not a good way to convince Chief Learning Officers to loosen their purse strings.

A Media Comparison Study is an experimental study with a flawed design (Lockee et al., 2001). Studies of this type compare one medium against another to ‘see’ which is more effective (e.g., traditional classroom vs. game-based learning, textbooks vs. the Internet). Unfortunately, after 50 years of such research, Richard Clark (1983, p. 450) deduced that this line of inquiry is meaningless (akin to comparing apples against oranges) and tends to reveal statistically “non-significant differences”; his advice was that further Media Comparison research should be discontinued. Not surprisingly, many technology companies chose to ignore this advice and continue to use media comparison studies to assert the “effectiveness” of their products.

Need for a Better Assessment Method

Most classroom assessment methods to date have been created with physical (face-to-face) instruction in mind. Current online assessment methods are mostly reading/text-based and are not commensurate with the action-based training that virtual environments provide. By far the most prevalent method of assessing learning with serious games is the Pretest-Posttest method (Bellotti et al., 2013); the need to assess training performance with other types of virtual environments has yet to be established. Loh (2012) criticized this kind of assessment as a Black Box method, because no one truly knows what effects the technology has on the learners. Moreover, the data collection occurs outside the training environment and is, therefore, inauthentic testing.

A better method of assessment would be direct assessment. However, since learning is internal, it is not (ethically) possible to put a probe into trainees’ minds to measure their learning as they interact with the virtual environment. We need an external measure from which to infer this internal learning process. Interviews and other qualitative analyses, while rich in data, rely heavily on self-reported data. The approach is also not scalable and is, therefore, limited in its generalizability and in its power to prescribe policy changes.


Information Trails: A Better Method Designed for VE

Information Trails has been designed, from the ground up, as an assessment framework for virtual environment training and instruction. It differs from current assessment methods chiefly in its use of in situ data collection (or telemetry): tracing user-generated data directly from within the virtual environment while the learning is in progress. Because the data are generated on the fly by users as they interact with the training and learning elements in situ, there are no data transcription errors (common in qualitative analysis), and the data can be used immediately for empirical analysis (saving time). The biggest advantage of the Information Trails framework over other assessment methods for virtual environment training is that it can be used, as is, for both ad hoc (real-time, formative) and post hoc (after-action, summative) performance assessment.
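To make the idea of in situ data collection concrete, here is a minimal sketch of event-level telemetry logging. The `TelemetryLogger` class, the event names, and the field layout are hypothetical illustrations, not the actual Information Trails implementation.

```python
# Minimal sketch of in situ telemetry logging. All event names, fields,
# and filenames are hypothetical; a real virtual environment would emit
# its own events through its own instrumentation layer.
import json
import time


class TelemetryLogger:
    """Records timestamped user actions as they occur inside the environment."""

    def __init__(self, session_id, sink):
        self.session_id = session_id
        self.sink = sink  # any writable file-like object (file, socket, queue)

    def log(self, actor, action, **details):
        event = {
            "session": self.session_id,
            "time": time.time(),  # captured at the moment of the action
            "actor": actor,
            "action": action,
            "details": details,
        }
        # One JSON object per line: machine-readable, no manual transcription.
        self.sink.write(json.dumps(event) + "\n")


# Example: events emitted as a learner interacts with training elements.
with open("session_0042.jsonl", "w") as sink:
    logger = TelemetryLogger("session_0042", sink)
    logger.log("trainee_17", "enter_room", room="triage_bay")
    logger.log("trainee_17", "select_tool", tool="tourniquet")
    logger.log("trainee_17", "apply_tool", target="casualty_3", outcome="success")
```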

In a nutshell, the Information Trails assessment framework traces user-generated data in situ, within the virtual environment, and uses those data to create analytics for performance assessment. The resulting analytics can be rendered graphically with the Performance Tracing Report Assistant (PeTRA) for visualization and reporting. The reports may be further tailored to the needs of different stakeholders (e.g., administrators, trainers, trainees). Analytics created using data mining or statistical learning methods may include measures of the learners’ decision-making processes, strategic planning skills, persistence in learning, performance against an expert’s baseline, shu-ha-ri stages (‘Keeping with the Path’ or ‘Breaking from the Path’), and many others.
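As a toy illustration of the analytics step, the sketch below aggregates a telemetry log (in the line-per-event format from the sketch above) into simple per-session measures. The real analytics named above (decision-making processes, expert baselines, shu-ha-ri stages) would, of course, be considerably richer.

```python
# Sketch of turning a raw event log into simple per-session analytics.
# The metrics here (action counts, session duration) are illustrative
# placeholders, not the framework's published measures.
import json
from collections import Counter


def summarize(path):
    actions = Counter()
    first = last = None
    with open(path) as f:
        for line in f:
            event = json.loads(line)
            actions[event["action"]] += 1
            t = event["time"]
            first = t if first is None else min(first, t)
            last = t if last is None else max(last, t)
    return {
        "total_actions": sum(actions.values()),
        "actions": dict(actions),
        "duration_s": 0.0 if first is None else last - first,
    }


print(summarize("session_0042.jsonl"))  # hypothetical log from the sketch above
```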

Performance Assessment with and within Virtual Environments

Tracing Learners’ Course of Actions

In virtual environment training, we measure how individuals solve sequential problems to achieve learning goals. If a person enters the training domain as a Novice and exits as Proficient, then what we are measuring is her path of learning (i.e., course of action) as she builds competency. Because a course of action can be traced using Information Trails, once we know the Path taken or chosen by a learner, it can serve directly as evidence of her decision-making processes and strategic planning skills.
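The sketch below illustrates one simple way a traced Path could be scored against an expert’s path. The action labels are invented examples, and the similarity ratio from Python’s standard library is a generic stand-in for illustration only, not the published metrics from the articles listed below.

```python
# Illustrative comparison of a learner's course of action against an
# expert baseline. SequenceMatcher.ratio() is a stand-in similarity
# measure, not the published Expertise Index or Maximum Similarity Index.
from difflib import SequenceMatcher

expert_path = ["enter", "assess", "select_tool", "apply_tool", "report"]
novice_path = ["enter", "select_tool", "apply_tool", "assess",
               "select_tool", "apply_tool", "report"]

# ratio = 2 * matches / (len(a) + len(b)); higher means closer to the expert.
similarity = SequenceMatcher(None, expert_path, novice_path).ratio()
print(f"Similarity to expert baseline: {similarity:.2f}")
```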

The Performance Tracing Report Assistant (PeTRA) allows us to visualize courses of action graphically for communication. Further, we have developed an algorithm to convert learners’ courses of action into an individualized Expertise Index, a standardized index for comparing learners’ performance against an established expert’s baseline. The following published research articles describe how we use this approach to assess performance within virtual environments.

References:

  • Loh, C. S., Sheng, Y., & Ifenthaler, D. (Eds.). (2015). Serious Games Analytics: Methodologies for performance measurement, assessment, and improvement. Switzerland: Springer International Publishing. DOI: 10.1007/978-3-319-05834-4
  • Loh, C. S., & Sheng, Y. (2015). Measuring expert performance for Serious Games Analytics: From data to insights. In C. S. Loh, Y. Sheng, & D. Ifenthaler (Eds.), Serious Games Analytics: Methodologies for performance measurement, assessment, and improvement (pp. 101-134). Switzerland: Springer International Publishing. DOI: 10.1007/978-3-319-05834-4_5
  • Loh, C. S., Sheng, Y., & Ifenthaler, D. (2015). Serious Games Analytics: Theoretical framework. In C. S. Loh, Y. Sheng, & D. Ifenthaler (Eds.), Serious Games Analytics: Methodologies for performance measurement, assessment, and improvement (pp. 3-30). Switzerland: Springer International Publishing. DOI: 10.1007/978-3-319-05834-4_1
  • Loh, C. S., Sheng, Y., & Li, I-H. (2015). Predicting expert-novice performance as Serious Games Analytics with objective-oriented and navigational action sequences. Computers in Human Behavior, 49: 147-155. DOI: 10.1016/j.chb.2015.02.053
  • Loh, C. S., & Sheng, Y. (2015). Measuring the (dis-)similarity between expert and novice behaviors as Serious Games Analytics. Education and Information Technologies, 20(1): 5-19. DOI: 10.1007/s10639-013-9263-y
  • Loh, C. S., & Li, I-H. (2015). Predicting the competency improvement for Serious Games Analytics: Action-sequences, game grids, PLS-DA and JMP. Proceedings of the Discovery Summit 2015. San Diego, CA. DOI: 10.13140/RG.2.1.3997.4889
  • Loh, C. S., & Sheng, Y. (2014). Maximum Similarity Index (MSI): A metric to differentiate the performance of novices vs. multiple-experts in serious games. Computers in Human Behavior, 39: 322-330. DOI: 10.1016/j.chb.2014.07.022
  • Loh, C. S., & Sheng, Y. (2013). Performance metrics for serious games: Will the (real) expert please step forward? Proceedings of the 18th International Conference on Computer Games: AI, Animation, Mobile, Interactive Multimedia, Educational & Serious Games (CGAMES 2013). Louisville, KY. DOI: 10.1109/CGames.2013.6632633