Authors: David Topps, Corey Wirun, Mahdi Hosseini, Ismail El Hardhoum, Michelle Cullen
We have run a number of small experiments and test scenarios that explore the capabilities of OLab in supporting Team-Based Learning (TBL). We summarize here some of the main issues.
- All team-based marking schemes look at the whole team
- None assess team member contributions
- Global rating encourages the Cheerleading Coasters
- Known problem with all forms of groupwork
We have not been able to find, in our studies on TBL, any assessment schema that looks at individual contributions to the team, or that assesses the team leader as well as the whole team. There have been a few studies that have attempted to assess the contributions of individuals in discussion forums, but these have mostly been based on discourse analysis and similar techniques, which are appropriate for research but less practical for educational assessment.
Such a dependence on global rating schemes does not go unnoticed by team members. It is a sad and painful memory for many of us, from high school onwards, that most small group projects have one or two active members who do most of the work, cheered on by coasters who contribute little apart from buying the coffee. And yet all receive the same global score. This is a known problem for all forms of groupwork.
Clinical practice is a team sport. Gone are the days of Marcus Welby MD, the lone practitioner against the world. But apart from cardiac resuscitation “code” exercises, there is little time spent in medical schools on teamwork, and even less on the assessment of TBL. This is especially important in the growing arena of inter-professional education (IPE).
The Gamification Gap
Many educational researchers have spoken of the advantages of gamification, and how collaborative role-playing games could be used to teach and assess teamwork. Back in 2006, at a workshop hosted by Stanford’s SUMMIT (Stanford University Medical Media and Information Technologies), we explored how massively multiplayer online role-playing games (MMORPGs) were influencing collaborative learning.
In one presentation, we watched the culmination of a 40+ hour continuous assault on an enormous game monster by teams of dozens of collaborating players. This huge and intricately choreographed assault followed hundreds of hours of preparation and research, repeated skill practice, and detailed logistics. They even had timed and scheduled bathroom breaks so that players could rotate off-game briefly without compromising the combined integrity of the assault party.
The level of detail, preparation and planning required was astounding. And we are sure that the intensity has only increased in recent years. Team members were expected to contribute extraordinary levels of time, effort, and energy towards this enterprise. It would be rare to see such intense collaboration and contribution in any workplace endeavor.
In another impressive workshop demonstration by Forterra, we witnessed the use of a collaborative role-playing game to simulate a mass casualty exercise. They had taken a Third-Person Shooter and modified it. Players and non-player characters (NPCs) would interact with each other. The real-time voice communication system built into the game was impressive for its time.
Forterra had modified some of the objects in the game to be more appropriate to a mass-casualty event. A machine gun was turned into a defibrillator; a bazooka was turned into a stretcher; etc. They demonstrated how the player teams, along with their NPCs, would interact, communicate and collaborate in saving the victims of a bombing.
As an amusing aside, we witnessed a 14-year-old para-attendee of the workshop (school holidays, no babysitter) – who shall remain nameless to protect the innocent, but with decent gaming skills of his own – proceed to reverse-engineer the interface and turn the objects back into their original weapons. We all chuckled as this very polished demo was rather disrupted: firemen and ambulance crews were strafed by a lone machine-gunner, who then lobbed a mortar shell into the confusion.
Amusing disruptors aside, it was indeed impressive to witness how such games could generate high degrees of engagement and learning in a team-based scenario. But the value proposition and practicality were completely missing: none of the groups attending could afford the time and energy, let alone the budget, required to host such a simulation. The licences alone were three orders of magnitude beyond the reach of the typical education researcher.
Another gap manifests itself as soon as you mention gamification: expectations. Students expect the same level of fidelity, physics-engine realism, and authoring sophistication seen in these very polished commercial games, missing the point that those games’ budgets are four orders of magnitude beyond what is available to our researchers.
What does OLab have to offer?
Branching scenarios have been central to OLab’s design model from the outset in 2003. Born as OpenLabyrinth, the software has always been able to support complex, branching design patterns. Initially thought of as cute and fun, this has become central to the analysis of decision-making.
Consequence-based learning: good learning designs have moved beyond testing fact recall (an increasingly irrelevant exercise in the context of today’s ubiquitous information-providing devices). We must move away from best-of-five answers, with a perfectly correct response buried among distractors. We must instead present reasonable and realistic decision options, where some are more reasonable than others, followed by a set of consequences that arise from those decisions. OLab easily supports such learning designs.
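The contrast with best-of-five questions can be sketched in a few lines. This is an illustrative model only, not OLab’s internal data structure: each option carries a reasonableness weight rather than a single right/wrong flag, and each leads to a consequence node (all names here are hypothetical).

```python
# A minimal sketch of consequence-based decision options: instead of one
# "correct" answer among distractors, each option carries a reasonableness
# weight and a consequence node that shapes what happens next.

from dataclasses import dataclass

@dataclass
class Option:
    text: str
    reasonableness: float   # 0.0 (poor choice) .. 1.0 (most reasonable)
    consequence: str        # id of the scenario node this choice leads to

options = [
    Option("Order a CT head immediately", 0.6, "node_ct_queue"),
    Option("Take a focused history first", 1.0, "node_history"),
    Option("Discharge with analgesia", 0.1, "node_bounce_back"),
]

def choose(options, index):
    """Return the consequence node and a partial score for the chosen option."""
    opt = options[index]
    return opt.consequence, opt.reasonableness
```

The point of the design is that every option is defensible to some degree; the learner experiences the downstream consequence rather than an immediate right/wrong verdict.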
We have written about the Directed Acyclic Graph (DAG) model and its power in representing complex decision pathways. It is notable that DAG-based software is now finding application in a number of areas, and that DAG-based analytics open up how we can more intricately explore such decision-making.
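To make the DAG model concrete, here is a small sketch in Python using the standard library’s `graphlib`. The node names are invented for illustration; the acyclicity check and the pathway count show the kind of analytics a DAG representation opens up.

```python
# A minimal sketch of a branching scenario represented as a DAG:
# an adjacency list of decision nodes. graphlib raises CycleError
# if the map is not acyclic, and a simple recursion counts the
# distinct decision pathways from an entry node to the exits.

from graphlib import TopologicalSorter

scenario = {
    "intake":      ["history", "exam"],
    "history":     ["investigate"],
    "exam":        ["investigate", "treat"],
    "investigate": ["treat"],
    "treat":       [],
}

# Raises graphlib.CycleError if the scenario map contains a cycle
order = list(TopologicalSorter(scenario).static_order())

def pathways(graph, node):
    """Count distinct routes from `node` to any terminal node."""
    if not graph[node]:
        return 1
    return sum(pathways(graph, nxt) for nxt in graph[node])
```

With this toy map, `pathways(scenario, "intake")` reports three distinct routes through the scenario; on a real map, the same traversal logic underpins analytics such as comparing a learner’s chosen path against the space of possible paths.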
Activity metrics: navigating the complex pathways of a well-designed OLab DAG-based scenario creates a steady stream of detailed metrics that are open to a wide range of analysis. OLab captures every click to the millisecond; every node visited, no matter how briefly; every step retraced; and every question response, along with how long it took to consider. It supports a variety of selected-response and constructed-response question types, and counters can generate scores across a range of parameters, e.g. cost of investigations, time waiting for a result, likelihood of effect.
All of these internal metrics are captured to a SQL database, which opens up more detailed analytic possibilities using common, standard tools. The metrics can also be captured to a Learning Records Store (LRS) using xAPI statements. This allows for federation of metrics storage and analysis, as well as simultaneous capture from multiple tools and platforms, such as the LMS, edge devices, mannequins, and other simulation software.
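As a rough illustration of what an xAPI record of one node visit looks like, here is a hedged sketch. The actor, verb, and object identifiers below are illustrative, not OLab’s actual configuration; a real post to an LRS would also need credentials and the `X-Experience-API-Version` header.

```python
# A hedged sketch of packaging a node-visit metric as an xAPI statement.
# URLs and identifiers are hypothetical; "experienced" is a standard
# ADL verb, and xAPI durations use ISO 8601 format (e.g. PT4.250S).

import json
from datetime import datetime, timezone

def make_statement(actor_email, node_id, duration_ms):
    """Build an xAPI statement for a single node visit."""
    return {
        "actor": {"mbox": f"mailto:{actor_email}"},
        "verb": {"id": "http://adlnet.gov/expapi/verbs/experienced"},
        "object": {"id": f"https://example.org/olab/node/{node_id}"},
        "result": {"duration": f"PT{duration_ms / 1000:.3f}S"},
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

stmt = make_statement("learner@example.org", "triage-01", 4250)
payload = json.dumps(stmt)  # ready to POST to the LRS /statements endpoint
```

Because each statement is self-describing JSON, statements from the LMS, mannequins, and OLab can land in the same LRS and be queried together.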
Each OLab4 scenario map can have multiple entry and exit points to the DAG, allowing case authors the maximum flexibility in their learning designs. This avoids the creation of simple, boring page-turners, which are all too common in the design of virtual patients: the HEIDR model – history, exam, investigation, diagnosis, Rx.
Role-based Node Access: this is a new and unique feature of OLab4. Depending on the role being played in the map (student, teacher, nurse, social worker, paramedic), certain nodes and pathways can be marked as off-limits, e.g. teacher tips or role-specific activities. In OLab3 this was possible only by using complex rules and variables; whole sections of a map can now be turned on or off easily.
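The underlying idea can be sketched as a simple per-node allow-list. This is an illustrative model, not OLab4’s actual implementation: each node declares which roles may see it, and the map is filtered per role before play.

```python
# A minimal sketch of role-based node access: each node lists the roles
# allowed to visit it, and a filter yields the map visible to one role.
# Node names and role sets are hypothetical.

node_roles = {
    "intro":        {"student", "teacher", "nurse", "paramedic"},
    "teacher_tips": {"teacher"},
    "triage":       {"nurse", "paramedic"},
    "debrief":      {"student", "teacher", "nurse", "paramedic"},
}

def visible_nodes(node_roles, role):
    """Return the set of nodes a given role may visit."""
    return {node for node, roles in node_roles.items() if role in roles}
```

With this toy map, a student sees only the intro and debrief nodes, while a teacher also sees the teacher tips; declaring access per node is what replaces the tangle of rules and variables needed in OLab3.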
Real-time Chat: scenario authors can now integrate TTalk text-based chat into their scenarios. Chat has been feasible in OLab for about ten years, but new learning designs and functions now make TTalk much more capable.
Scoped Objects, such as server-level counters, allow for scenario and learning designs where maps and groups of learners can interact with each other. The activities and decisions of one player or team can affect the case portrayal for another team, either in real time or asynchronously. Teams can be made to compete or collaborate over scarce resources, such as units of plasma in a mass casualty scenario.
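The plasma example above can be sketched as a shared counter. This is a hedged illustration of the concept, not OLab’s implementation (OLab persists scoped objects server-side); a lock-protected counter stands in for the server-level scope, so one team’s draw on the resource constrains the other team.

```python
# A hedged sketch of a server-level scoped counter: two teams draw on the
# same scarce resource (units of plasma), so one team's decisions change
# what the other team can do. The class and its API are illustrative.

import threading

class ScopedCounter:
    """A shared, thread-safe counter visible to all maps and teams."""
    def __init__(self, initial):
        self._value = initial
        self._lock = threading.Lock()

    def take(self, n):
        """Atomically claim up to n units; return how many were granted."""
        with self._lock:
            granted = min(n, self._value)
            self._value -= granted
            return granted

    @property
    def value(self):
        with self._lock:
            return self._value

plasma = ScopedCounter(10)
team_a = plasma.take(6)   # team A claims 6 of 10 units
team_b = plasma.take(6)   # only 4 units remain for team B
```

The atomic take-what-remains step is the essential design point: whether teams compete or collaborate, both see the consequences of a shared, depleting resource.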
We also tend to think of teams as fixed groups working in real time. But in today’s IPE environment, we can have virtual teams (who come together out of necessity) and longitudinal teams (whose members may migrate from one group to another). OLab can support all of these contexts and learning structures.