Dynamic Scoped Objects

Previously, we gave you a teaser about a whole new level of functionality in OLab4: Scoped Objects. These have turned out to be powerful and will allow much improved reusability of modules and objects, which will speed up case authoring.

Being able to control scope means that, much as variables in common programming languages can be local, public or global, our objects can be shared at a variety of levels: node-level, map-level, server-level and global-level (a common set shared by all OLab4 servers).
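To make the idea concrete, here is a minimal sketch of how scope resolution might work. The names and data structure here are invented for illustration; they are not the actual OLab4 implementation.

```python
# Hypothetical sketch of scope resolution for Scoped Objects.
# Structure and names are illustrative, not actual OLab4 code.

SCOPE_ORDER = ["node", "map", "server", "global"]  # most specific first

def resolve_object(name, scopes):
    """Return the first matching object, searching from the most
    specific scope (node) out to the most general (global)."""
    for level in SCOPE_ORDER:
        objects = scopes.get(level, {})
        if name in objects:
            return objects[name], level
    raise KeyError(f"No scoped object named {name!r}")

# Example: a Counter defined at map level shadows a global one.
scopes = {
    "map":    {"Score": 10},
    "global": {"Score": 0, "Theme": "default"},
}
print(resolve_object("Score", scopes))  # → (10, 'map')
print(resolve_object("Theme", scopes))  # → ('default', 'global')
```

The key design point is the search order: the most specific definition wins, so a map-level Counter can override a global default without touching it.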

Until recently, the state of some of the objects, like Counters, was only updated when the user changed Nodes. This is somewhat similar to the situation in OLab3. While this concept is nice and simple, it does have lots of limitations.

The new Single Page Architecture (SPA) in OLab4 means that some Counters can be manipulated within the page. It will be easier to illustrate what this means with some example cases when OLab4 is ready.

Now, our development team has come up with the concept of Dynamic Scoped Objects. These can be changed without having to change Nodes, and server load can be controlled so that complex cases do not bog things down.

For server-level Counters, which will be used in team-based learning, this also opens up some interesting possibilities: Counters can be shared across multiple maps on the same server. This provides a way of having maps interact with each other (and therefore with the team members playing their respective roles on those maps). We wrote about this before.

The difference now with Dynamic Scoped Objects is that we can control how often these objects are updated. So, even if one team has not changed Nodes, they will still see the Counter change when a competing team does something to alter it.
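One simple way to balance freshness against server load is a refresh interval: the Counter re-reads its value from the server at most once per interval, however often players poll it. The class below is a sketch under that assumption, not OLab4 code.

```python
import time

# Illustrative sketch (not OLab4 code): a shared Counter that
# re-fetches its value at most once per refresh interval, so that
# frequent polling by many players does not bog the server down.

class DynamicCounter:
    def __init__(self, fetch, refresh_seconds=5.0):
        self.fetch = fetch              # callable that hits the server
        self.refresh = refresh_seconds  # minimum time between fetches
        self._value = None
        self._last = 0.0

    def value(self, now=None):
        now = time.monotonic() if now is None else now
        if self._value is None or now - self._last >= self.refresh:
            self._value = self.fetch()  # any team's change is seen here
            self._last = now
        return self._value

shared = {"count": 0}
counter = DynamicCounter(lambda: shared["count"], refresh_seconds=5.0)
print(counter.value(now=0.0))   # → 0
shared["count"] = 3             # a competing team alters the Counter
print(counter.value(now=1.0))   # within the interval: still 0 (cached)
print(counter.value(now=6.0))   # interval elapsed: refreshed to 3
```

Tuning `refresh_seconds` per Counter is what lets authors decide how "live" a shared value needs to be for a given case.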

Again, this will be easier to illustrate with some example cases and scenarios when we have this running in the near future. Feel free to ask questions about this or to make suggestions on cool ways in which to use such shared Counters.

 

OLab4 development resumes

After a pause of several months, we are delighted that development of OLab4 has resumed. We now have funding from a few sources, enough to make a sustained push to get things out the door… and soon.

While we were paused, we took the opportunity to examine where we had gotten to in the development roadmap. It had become clear that sticking with the Entrada framework had too many limitations.

It had also become clear that remaining with the PHP and JavaScript base was now too limiting. That stack had been chosen because it was the most common way to develop a browser-based application in the academic world. But things have moved on, and Microsoft’s .Net environment now has much more to offer.

Since OpenLabyrinth was originally written in Microsoft ASP, it kinda seemed like going back to old ground… and old problems. But .Net is now fully open-source and works across a variety of operating systems. Academic computing is now largely cloud-based, as is so much application software. Using .Net allows us to be either server-based or cloud-based, and affords a whole new degree of scalability if we need it in the future.

Our team has been exploring the difficulties of migrating our codebase across to .Net — it has been great to find that these difficulties are way less than expected. It has also made further development more flexible and has opened up a bunch of new options for us, which I will write about shortly.

There is a big enough change to the functionality of this new codebase that we are releasing it as version 4.5… and when will this happen? Soon, young Padawan, soon.

Turk Talk as a service

Breaking news!

Just had lunch with my senior technical advisor about progress on OLab4 development. And, boy, do I have news for you.

He has been exploring a whole new approach to Turk Talk, in-app messaging, and flow control for OLab4. This opens up a whole raft of possibilities for what educators will be able to do with the platform. Many of these will be particularly helpful for small group and team-based learning.

Some of the new functions that will be possible:

  • improved chat messaging between teachers and learners
  • more effective nudges and reminders during case play
  • on-the-fly room allocation
  • rules-based responses and actions
  • scheduled launch of certain objects and modules within a case
  • more flexible display of learner progress within the case

Some of these functions, such as popup reminders, were possible in OLab3, but it was a bit of a hodge-podge to get things working. By making the Turk Talk functionality work as a separate service, rather than being tightly integrated as it was within OLab3, we can make things much more flexible and future-proof.

Team-based Learning

We talked before about how OLab4 is better able to handle team-based learning. By using activity metrics, we can measure how effective each team member’s contributions are. All the other team rating scales we have looked at only evaluate the function of the team as a whole; this goes further.

Now, by using Turk Talk as a messaging and control service, we can introduce some neat new capabilities. The Turker or Director can now redirect team members into new areas of the case, if they are struggling (or if they are excelling and could benefit from some bonus points through extra achievements).

Hockey Lines

In blended scenarios working with other resources, such as hi-fidelity mannequins, we often find that a bottleneck is created around these high-cost resources. When running a resuscitation code, there are typically 3-5 active team members working on the mannequin, with the remaining 8-12 watching from the sidelines. This is why we came up with the concept of Hockey Lines, which some of you may be familiar with.


In Hockey Lines, the idea is that you can rapidly switch in secondary team members from the “bench”. This improves Crew Resource Management and active communication loops enormously. But up till now, all this has only been feasible with live in-person sim rooms. Now, with this new approach to Turk-Talk-as-a-Service, we will be able to support a similar Hockey Lines approach within our OLab4 scenarios.

Service-based affordances

There are other advantages to the Turk-Talk-as-a-Service approach. It will allow us to add in more sophisticated logic and text parsing to our scenarios. It will enable us to create time-based interactions such as reminders if you are taking too long, scheduled start of a Turk Talk session, or even something like the Tamagotchi cases that we tried to implement years ago.

All of these crazy ideas have been in our melting pot for a while. But previously, it was hard to create the needed functionality, and it depended on custom coding because Turk Talk was so tightly embedded within the OLab3 code base. The Turk-Talk-as-a-Service approach dramatically opens up what can be done on the programming side. But the average case author will not need to get bogged down with such programming; they will simply pick the kinds of triggers and alerts that they want to use in their Scenarios.
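A rules-based response of this kind might look something like the sketch below: each rule pairs a text pattern with an action, and incoming chat messages are matched against the list. The rule format and action names here are invented for illustration; they are not the Turk Talk API.

```python
import re

# Hypothetical sketch of rules-based triggers for a Turk-Talk-style
# messaging service. The rule format and actions are invented.

RULES = [
    {"pattern": r"\bhelp\b", "action": "nudge",
     "message": "A tutor will join shortly."},
    {"pattern": r"\bdone\b", "action": "unlock",
     "message": "Bonus section unlocked."},
]

def apply_rules(chat_line):
    """Return the (action, message) pairs triggered by one chat message."""
    fired = []
    for rule in RULES:
        if re.search(rule["pattern"], chat_line, re.IGNORECASE):
            fired.append((rule["action"], rule["message"]))
    return fired

print(apply_rules("I need HELP with this case"))
# → [('nudge', 'A tutor will join shortly.')]
print(apply_rules("nothing to see here"))
# → []
```

In a service-based design, a case author would only fill in a table like `RULES`; the matching and dispatch would live inside the service.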

It will even allow us to explore the integration of chatbot services into some of our scenarios. Our previous explorations had shown us that the amount of work involved in integrating a chatbot service exceeded the time and resources available to the average case author: great for medium to large businesses, too much for us. But now, taking the Turk-Talk-as-a-Service approach, we can explore the feasibility of creating more generic chatbot response trees that can be reused across different scenarios. Such a leveraging effect may transform chatbot services from a curiosity into something practical.

Lastly, we plan to create an API for the Turk-Talk-as-a-Service. This means that other applications and services can more readily interact with Turk Talk and will allow us to extend its capabilities beyond OLab, much as we did with our CURIOS video mashup service.

Role, play? It depends what IAM

Sorry for the pun. The joke would be better if it did not need to be explained: IAM = Identity-Access Management, the system that determines who you are and what you are allowed to do in an online platform. Those annoying passwords, etc.

Despite the title, this piece is not much about role play itself. We just published an article last month, OLab and Virtual Roles, about role play in its less technical sense and how we have been experimenting with virtual roles, role-switching during a scenario and stuff like that. The full formal version of the article can be found here: Topps, David; Cullen, Michelle; Wirun, Corey, 2021, “OLab and Virtual Roles”, https://doi.org/10.5683/SP2/Q4EGTI, Scholars Portal Dataverse.

No, this post is more about the IAM side of things and the technical aspects that we have been working on. Sadly, this reminds me of another terrible joke. An army corporal is pissed off about the new posting he just received and decides to complain to the Sergeant Major. He quickly looks up the number in the fort’s phone directory (yes, the army is not so technical and still believes in paper for such things… those fickle computers would be wiped out by an EM pulse if we had a war, don’t you know!), rapidly dials and brusquely gives the recipient an earful of barrack room language on what he thinks of his latest posting.

“Do you know who you’re talking to?” thunders the irascible Major, spluttering through his gin and tonic.

“No. Do you know who you’re talking to?” replies the corporal. When the Major grunts a negative, he says, “Well, thank gawd fer that. I ain’t telling ya,” and hangs up right quick.

Ok, enough torture. Back to the main point. Over the years and through many projects, we have found how important it is to combine a variety of tools and applications in our learning designs. Even though OLab and OpenLabyrinth are very powerful tools, thinking that one tool can do it all makes every problem appear to be a nail. You should find the best tool for each purpose and link them together.

In OLab3, we tried to make it do too much. In OLab4, we are moving more towards a services-based approach, which will allow us to integrate OLab more smoothly with other tools like Moodle, WordPress and the whole gamut of educational software applications.

We did make some of our previous functions, like video curation, into separately callable services e.g. CURIOS. But some other functions like Turk Talk were tightly built into the core of OLab3, which creates all sorts of problems. In OLab4, this will be simplified as another callable service, which opens up a lot of possibilities.

Even the IAM aspects were rather crudely managed in OLab3. There are lots of better ways out there of managing who can do what in our platform so why try to replicate the functions of Moodle etc? One of the ways we have tried to tackle this in the past was with IMS-LTI (Learning Tools Interoperability).

LTI is an excellent system, secure and not difficult to code into software. However, for program directors and scenario authors, it poses a problem when defining roles: the specification of what a certain level of agent/actor can do within a system. In OLab3, the roles (or security levels) were crudely limited to four: learner, author, reviewer and super-user. LTI has a much wider variety of roles.

But there is a problem when getting two systems to play nicely with each other using LTI. Across the many systems out there that do implement LTI, there is very little agreement on the number and definition of these roles. About the only constant is ‘student’, who is at the bottom of the heap and can’t do much.
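In practice, each platform ends up maintaining a mapping table from LTI role names to its own levels. The sketch below maps a few standard LTI vocabulary terms onto OLab3's four levels; the particular assignments are our assumption for illustration, since (as noted) no two systems agree on them.

```python
# Illustrative mapping from standard LTI role vocabulary terms to
# OLab3's four security levels (learner, author, reviewer, super-user).
# The assignments are an assumption, not part of any specification.

LTI_TO_OLAB = {
    "Learner":           "learner",
    "Student":           "learner",
    "Instructor":        "author",
    "ContentDeveloper":  "author",
    "TeachingAssistant": "reviewer",
    "Administrator":     "super-user",
}

def map_role(lti_role):
    # Fall back to the least-privileged role when the term is unknown:
    # about the only constant across LTI implementations is the student.
    return LTI_TO_OLAB.get(lti_role, "learner")

print(map_role("Instructor"))  # → author
print(map_role("Mentor"))      # → learner (unknown role, safe default)
```

The safe default matters: when two systems disagree on a role, dropping to the least-privileged level fails closed rather than open.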

There is also a functional problem: what if the learning designer wants to test how her new module works from the perspective of the learner? She certainly does not want to take the clunky old approach forced by OLab3, where she has to log out, log back in with a student account, play the module until it breaks, then log out and log in again as an author to fix things: a very crude, inefficient debugging cycle.

So, just to confuse you a little more, we are working on virtual roles: not the same as the Virtual Roles article mentioned at the top of this post, but virtualized IAM roles, which will allow us to change the IAM role of that user on-the-fly. This could also be done programmatically, which opens up some interesting possibilities.

We are also exploring the use of Conditional Rendering of Scoped Objects. This means that, depending on the context and the IAM role of the user, certain objects, like teacher directions in a text panel, will only be shown to teachers, not to students. We had this in a crude form in the Annotation pane of OLab3 Nodes. The new system will be much more flexible.
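The underlying idea can be sketched in a few lines: each object carries an optional visibility list, and the renderer filters by the viewer's current IAM role. The field names here are invented for illustration, not the OLab4 schema.

```python
# Sketch of Conditional Rendering of Scoped Objects (invented structure):
# objects without a "visible_to" list are shown to everyone; the rest
# are filtered by the viewer's current IAM role.

def render_visible(objects, role):
    """Return the text of only the objects this role may see."""
    return [o["text"] for o in objects
            if "visible_to" not in o or role in o["visible_to"]]

panel = [
    {"text": "Patient presents with neck pain."},
    {"text": "Teacher note: probe for red-flag symptoms.",
     "visible_to": ["author", "reviewer"]},
]
print(render_visible(panel, "learner"))
# → ['Patient presents with neck pain.']
print(render_visible(panel, "author"))
# → both lines, including the teacher note
```

Combined with the virtual IAM roles above, this also gives the designer an instant learner's-eye view: switch role, re-render, no logout required.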

DYED: Donate Your Educational Data

This is a concept that we came up with a while ago. Instead of donating your body to science, why not donate your data?

We just posted a collaborative discussion document on this topic on Google Docs. You can comment directly, using this link — please join us in this innovative approach.

When creating virtual scenarios, we often struggle to find good case data. The metadata systems are not set up to provide easy searching by case conditions or presentations. In this document, we describe some of the challenges and also some potential solutions.

This was particularly poignant some years ago when we were trying to provide some innovative teaching illustrations about cervical spine injuries. Check out this YouTube video.

We look forward to your comments, using the Google Docs link above.

CURIOS video mashup service fixed

Finally, we managed to fix an annoying wee glitch in our setup that was preventing the use of the CURIOS video mashup tool. You can access it via our demo server at https://demo.openlabyrinth.ca/ – simply go to the Tools menu, then Video Mashup.

For more info on how to use the CURIOS service, check out the user guide. You can use CURIOS on most YouTube videos, not just your own.

You will need an authoring account on our demo OLab3 server to create your own mashup snippets. But once the snippets are created, anyone can use them.

We have an app for that

We have put together a neat little app that you can use to find and play OLab3 and OLab4 scenarios. It will work on either iOS or Android.

You can access and install it using this link:

https://olab4.glideapp.io/

You still need to use a password combo to access the OLab4 server but you can choose to store that in your browser or keychain. And it can be a little slow to load sometimes.

Edit: As often happens with the Freemium model, Glide.io have now changed their pricing structure. The free version is now somewhat limited. The costs and limitations of even the Private versions are not justifiable for our purposes, at this point.