There are several features of OLab which take the concept of branching narratives quite a bit further.
Concept Mapping tool
OLab makes it much easier to create complex maps, with multiple branches, many different styles of branching, and iterative designs. The concept mapping tool, which has been central to OLab since version 2, allows the author to get quite creative in their designs.
The Designer also allows a lot of flexibility in layout. The map's canvas appearance is not tightly linked to the logical navigation of the pathways: the author can place nodes anywhere on the page, and the logical pathway is determined by the Links' directions and connections, not by their placement on the page.
This layout flexibility is not just a style issue. It allows the author to map out their decision pathways as true concept maps. We have looked at other tools that promise branching scenarios, such as Moodle and H5P widgets. We find that all of these only support a top-down branching tree, whereas in OLab, your concept map can be as simple or as complex as you like. Check out some of the examples we have created here: https://olab.ca/branching-case-examples/
In OLab3, we have various forms of conditional branching. This provides flexibility, at the cost of some added complexity. It also provides a lot more power than the branching in Moodle or H5P, which depends on simple scores.
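The idea of score-independent conditional branching can be sketched as follows. This is an illustrative model only, not OLab's actual implementation: node names and state flags here are hypothetical, and each link carries a condition over the user's accumulated state rather than a simple score threshold.

```python
# Hypothetical user state accumulated while traversing a case.
state = {"asked_allergy_history": True, "ordered_xray": False}

# Each link pairs a destination node with a condition on that state.
links = [
    ("review_allergies", lambda s: not s["asked_allergy_history"]),
    ("order_imaging",    lambda s: not s["ordered_xray"]),
    ("treatment_plan",   lambda s: s["asked_allergy_history"]),
]

# Only links whose condition holds are offered to the user.
available = [node for node, cond in links if cond(state)]
print(available)  # → ['order_imaging', 'treatment_plan']
```

Because conditions can inspect any part of the traversal state, branching can depend on what the user has or has not done, not just on a quiz score.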
Dandelion choice order
We did find one emergent property that arose from our use of Dandelions in our concept mapping tool. As well as providing a lot of flexibility and control in how users navigate between choices, we can also track the order in which the dandelion nodes are explored.
This allows us to discriminate between first-choice and most-popular-choice when assessing the decision paths taken by our users. This is described further in the 4R reporting of OLab3.
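The distinction between first-choice and most-popular-choice can be illustrated with a small sketch. The visit logs and node names below are hypothetical, not OLab data; the point is that the same logs answer two different questions.

```python
from collections import Counter

# Hypothetical visit logs: for each user, the order in which they
# explored the choice nodes around a dandelion hub.
visit_logs = [
    ["history", "exam"],
    ["history", "exam"],
    ["history", "exam", "labs"],
    ["exam", "labs"],
]

# First-choice: which node did each user visit first?
first_choice = Counter(log[0] for log in visit_logs if log)

# Most-popular-choice: which node did the most users visit at all?
most_popular = Counter(node for log in visit_logs for node in set(log))

print(first_choice.most_common(1))  # → [('history', 3)]
print(most_popular.most_common(1))  # → [('exam', 4)]
```

Here "history" is the most common first choice, but "exam" is visited by more users overall, so the two measures can rank the same choices quite differently.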
There are also some interesting things you can do with randomization in OLab. Combining link order, conditional links, Script Objects, response order, and randomized responses gives our authors an enormously flexible model for designing their case narratives. We have not found another platform that provides this degree of flexibility.
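As one small example of what randomized response order means in practice, the sketch below shuffles a set of answer options so each learner sees them in a different order. The option labels are hypothetical, and the fixed seed is only there to make the sketch reproducible.

```python
import random

# Hypothetical answer options for a question node.
responses = ["Option A", "Option B", "Option C", "Option D"]

# Shuffle the presentation order without mutating the original list.
rng = random.Random(42)  # seeded only so this sketch is reproducible
shuffled = rng.sample(responses, k=len(responses))

print(shuffled)  # same options, randomized order
```

Presenting the same options in a randomized order helps separate what the learner actually knows from simple position effects (such as always picking the first option).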
The ability to track what your users are doing within your scenarios is much finer-grained in OLab than it is in Moodle or H5P. Most learning tools, if they track learner performance at all, distill it down to a single grade on course or module completion. While there might be scores for individual questions within a Quiz, the only tracking available outside the module is the grade mark.
This applies to both SCORM and IMS-LTI, the predominant forms of data interchange between learning tools. While this might be sufficient for many purposes, it does not allow a finer degree of assessment. The analogy we have often used is that of the high school math test: we can tell whether little Johnny got the right answer, but we don't know how he got there.
In our PiHPES Project, we have been working on ways to track activity metrics across platforms at a much finer level of assessment. Using xAPI statements sent to a common LRS from a variety of learning tools, modules and widgets, we can fine-tune the granularity of our assessments.
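An xAPI statement is a small actor-verb-object record. The sketch below builds a minimal statement of the kind a learning tool might send to an LRS when a user visits a node; the learner, node name, and activity URL are illustrative placeholders, not OLab's actual identifiers, and the verb URI is one of the standard ADL verbs.

```python
import json

# Minimal xAPI statement: actor (who), verb (did what), object (to what).
# All names and URLs below are illustrative, not real OLab identifiers.
statement = {
    "actor": {
        "objectType": "Agent",
        "name": "Example Learner",
        "mbox": "mailto:learner@example.org",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/experienced",
        "display": {"en-US": "experienced"},
    },
    "object": {
        "objectType": "Activity",
        "id": "https://example.org/maps/42/nodes/7",
        "definition": {"name": {"en-US": "Take a history"}},
    },
}

# An LRS receives statements like this as JSON over its REST endpoint.
print(json.dumps(statement, indent=2))
```

Because every node visit, choice, and response can emit its own statement, the LRS accumulates a step-by-step trace of how the learner got to their answer, rather than a single completion grade.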