Since the Virtual Environment is a product of new digital technologies, it can be exploited for technology-based learning given the right learning context. Unfortunately, the virtualness of the environment is both its strength and its Achilles’ heel, because performance assessment within a virtual environment is very difficult. This explains why most currently available virtual training products and serious games do not have an integrated assessment component. Many researchers have therefore resorted to Media Comparison Studies (MCS) to try to measure performance improvement – a rather ineffective approach. This lack of an appropriate assessment component to ascertain the effectiveness of virtual environment and serious games training has, no doubt, led many organizations to delay their adoption of the technology for training/learning. Uncertainty in Return on Investment is not a good way to convince Chief Learning Officers to loosen their purse strings.
Not surprisingly, many technology companies choose to ignore this advice and continue to use media comparison studies to assert the “effectiveness” of their products.
Need for a Better Assessment Method
Most classroom assessment methods to date were created with physical (face-to-face) instruction in mind. Current online assessment methods are mostly reading/text-based and are not commensurate with the action-based training that virtual environments provide. By far the most prevalent method of assessing learning with serious games is the Pretest-Posttest method (Bellotti, et al, 2013; link); the need to assess training performance with other types of virtual environments has yet to be established. Loh (2012, link) criticized this kind of assessment as a “black box” method because no one truly knows what effects the technology has on the learners. Moreover, the data collection occurs outside the training environment, making it an inauthentic form of testing.
A better method of assessment would be direct assessment. However, since learning is internal, it is not (ethically) possible to put a probe into trainees’ minds to measure their learning as they interact with the virtual environment. We need an external measuring method to assess and infer this internal learning process. Interviews and other qualitative analyses, while rich in data, rely heavily on self-reported data. The approach is also not scalable and is therefore limited in its generalizability and in its power to prescribe policy change.
Information Trails: A Better Method Designed for VE
Information Trails has been designed, from the ground up, as an assessment framework for virtual environment training and instruction. Its key difference from current assessment methods is that it uses in situ data collection (or telemetry) – tracing user-generated data directly from within the virtual environment – while learning is in progress. Because the data are generated on the fly by users as they interact with the training and learning elements in situ within the virtual environment, there are no data transcription errors (common in qualitative analysis), and the data can immediately be used for empirical analysis (saving time). The biggest advantage of the Information Trails framework over other assessment methods for virtual environment training is that it can be used, as is, for both ad hoc (real-time, formative) and post hoc (after-action, summative) performance assessment.
In a nutshell, the Information Trails assessment framework traces user-generated data in situ within the virtual environment and uses the data to create analytics for performance assessment. The resulting analytics can be output graphically using the Performance Tracing Report Assistant (PeTRA) for visualization and reporting. The reports may be further tailored to fit the needs of different stakeholders (e.g., administrators, trainers, trainees). Analytics created using data mining or statistical learning methods may include measures of the learners’ decision-making processes, strategic planning skills, persistence in learning, performance against an expert’s baseline, ShuHaRi stages (‘Keeping with the Path’ or ‘Breaking from the Path’), and many others.
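To make the idea of in situ data collection concrete, here is a minimal, hypothetical sketch of an event logger embedded in a virtual environment. The class, method, and field names are illustrative assumptions, not the actual Information Trails implementation:

```python
import json
import time

class TelemetryLogger:
    """Hypothetical in situ event logger for a virtual environment.

    Records user-generated events (the 'information trail') as they
    occur, so the data need no manual transcription and can feed
    analytics immediately.
    """

    def __init__(self, learner_id):
        self.learner_id = learner_id
        self.events = []

    def log(self, action, **context):
        # Each event captures who did what, with what, and when.
        self.events.append({
            "learner": self.learner_id,
            "action": action,
            "context": context,
            "timestamp": time.time(),
        })

    def export(self):
        # Serialized trail, ready for ad hoc or post hoc analysis.
        return json.dumps(self.events)

# Example: a trainee interacting with training elements in the VE.
log = TelemetryLogger("trainee-042")
log.log("enter_zone", zone="triage_room")
log.log("pick_item", item="tourniquet")
log.log("apply_item", item="tourniquet", target="patient_3")
print(len(log.events))  # → 3
```

Because every event is timestamped and machine-readable at the moment it happens, the same trail can feed a real-time (ad hoc) dashboard or an after-action (post hoc) report without any re-keying of data.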
Tracing Learners’ Course of Actions
In virtual environment training, we measure how individuals solve sequential problems to achieve the learning goals. If a person enters the training domain as a Novice and exits as Proficient, then what we are measuring is her path of learning (i.e., course of action) as she builds competency. Because a course of action can be traced using Information Trails, once we know the path taken by a learner, it can serve directly as evidence of her decision-making process and strategic planning skill.
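A course of action is, at its simplest, the ordered sequence of choices a learner makes. As a minimal sketch (the event fields are assumptions for illustration, not the published Information Trails schema), a raw event trail can be reduced to such a sequence:

```python
# Reduce a raw in situ event trail to a course of action: the ordered
# sequence of actions the learner took while solving the problem.
# (Illustrative sketch; field names are hypothetical.)

trail = [
    {"t": 3.2,  "action": "enter_zone", "zone": "lobby"},
    {"t": 7.9,  "action": "talk_npc",   "npc": "guard"},
    {"t": 12.4, "action": "enter_zone", "zone": "server_room"},
    {"t": 20.1, "action": "use_item",   "item": "keycard"},
]

def course_of_action(events):
    # Sort by timestamp to be safe, then keep the action labels only.
    return [e["action"] for e in sorted(events, key=lambda e: e["t"])]

path = course_of_action(trail)
print(path)  # → ['enter_zone', 'talk_npc', 'enter_zone', 'use_item']
```

Once flattened into a sequence like this, a learner's path can be compared against other learners' paths, or against an expert's, using standard sequence-analysis techniques.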
The Performance Tracing Report Assistant (PeTRA) allows us to visualize courses of action graphically for communication. Further, we have developed an algorithm to convert learners’ courses of action into an individualized Expertise Index – a standardized index for comparing learners’ performance against an established expert’s baseline. The following published research articles describe how we use this approach to assess performance within virtual environments.
- Loh, C. S., Sheng, Y., & Ifenthaler, D. (Eds.). (Jun 2015). Serious Games Analytics: Methodologies for performance measurement, assessment, and improvement. Switzerland: Springer International Publishing. DOI: 10.1007/978-3-319-05834-4 [ Springer | Amazon | Book site ]
- Loh, C. S., & Sheng, Y. (2015). Measuring expert-performance for Serious Games Analytics: From data to insights. In C. S. Loh, Y. Sheng, & D. Ifenthaler (Eds). Serious Games Analytics: Methodologies for Performance Measurement, Assessment, and Improvement. Switzerland: Springer International Publishing. (pp.101-134) [Chapter 5] DOI: 10.1007/978-3-319-05834-4_5
- Loh, C. S., Sheng, Y., & Ifenthaler, D. (2015). Serious Games Analytics: Theoretical framework. In C. S. Loh, Y. Sheng, & D. Ifenthaler (Eds). Serious Games Analytics: Methodologies for Performance Measurement, Assessment, and Improvement. Switzerland: Springer International Publishing. (pp.3-30) [Chapter 1] DOI: 10.1007/978-3-319-05834-4_1
- Loh, C. S., Sheng, Y., & Li, I-H. (2015). Predicting expert-novice performance as Serious Games Analytics with objective-oriented and navigational action sequences. Computers in Human Behavior. 49: 147-155. DOI: 10.1016/j.chb.2015.02.053
- Loh, C. S., & Sheng, Y. (2015). Measuring the (dis-)similarity between expert and novice behaviors as Serious Games Analytics. Education and Information Technologies. 20(1): 5-19. DOI: 10.1007/s10639-013-9263-y
- Loh, C. S., & Li, I-H. (Sep 2015). Predicting the Competency Improvement for Serious Games Analytics: Action-sequences, Game Grids, PLS DA and JMP. Proceedings of the Discovery Summit 2015. San Diego, CA. [LINK] [PDF] DOI: 10.13140/RG.2.1.3997.4889
- Loh, C. S., & Sheng, Y. (2014). Maximum Similarity Index (MSI): A metric to differentiate the performance of novices vs. multiple-experts in serious games. Computers in Human Behavior. 39: 322-330. DOI: 10.1016/j.chb.2014.07.022
- Loh, C. S. & Sheng, Y. (July 2013). Performance metrics for serious games: Will the (real) expert please step forward? Proceedings of the 18th International Conference on Computer Games: AI, Animation, Mobile, Interactive Multimedia, Educational & Serious Games (CGAMES 2013). Louisville, KY. [PDF] DOI: 10.1109/CGames.2013.6632633
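In the spirit of the similarity-based metrics in the articles above, the sketch below scores a learner's course of action against an expert baseline. It uses Python's generic difflib sequence-similarity ratio as a stand-in measure; the published metrics (e.g., the Maximum Similarity Index) are defined differently, and the sequences here are invented for illustration:

```python
from difflib import SequenceMatcher

def expertise_index(learner_path, expert_path):
    """Crude similarity-based index in [0, 1]: 1.0 means the learner's
    course of action matches the expert baseline exactly.

    Uses difflib's ratio as a stand-in; the published algorithms
    (e.g., Maximum Similarity Index) are computed differently.
    """
    return SequenceMatcher(None, learner_path, expert_path).ratio()

# Hypothetical action sequences from a triage-training scenario.
expert = ["assess", "secure_airway", "stop_bleeding", "evacuate"]
novice = ["assess", "stop_bleeding", "assess", "evacuate"]

print(expertise_index(expert, expert))        # → 1.0
print(expertise_index(novice, expert) < 1.0)  # → True
```

A standardized index like this lets stakeholders rank or track learners on a single scale, rather than inspecting each raw trail by hand.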