OLab and Chatbots

A common downside that authors encounter with many virtual patient and virtual scenario platforms is that the choices and options tend to be predetermined. Indeed, we recently came across a paper whose authors initially considered it advantageous to keep the responses simple and limited. [1] As we have seen elsewhere, they did come to find that being able to offer more free-form input would be an advantage.

But therein lies the problem. Natural language processing (NLP), as we have commented before, is quite hard to do well and was, until recently, prohibitively complex and expensive. Our Turk Talk approach, where we use a hybrid model of humans and computers to address the challenges of NLP, has been very successful.

However, there are now some other approaches to NLP. Some of the cognitive computing platforms, like IBM Watson, now provide enormous power and flexibility in this area, with some very impressive results. If you are serious about exploring NLP, we encourage you to go check out the cognitive computing services from IBM, Google, Microsoft and Amazon. They are making huge strides.

At a slightly less complex level, we are also now seeing a plethora of chatbot services. You will likely have encountered these when calling your phone company or other large service. They present you with a menu of choices and you can tell them where to go. Literally and figuratively. These have improved a lot in recent years and handle conversational input reasonably well, without any of the speaker-dependent training that you used to see with voice recognition programs like Dragon.
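To make this concrete, here is a minimal sketch of what sits behind a phone-menu chatbot of this kind: free-form input is mapped onto a small, fixed set of intents. This is purely illustrative and not any vendor's actual API; the intent names and keyword lists are our own inventions.

```python
# Illustrative only: a toy keyword-matching router, the way a phone
# service might send "I want to pay my bill" to its billing department.
# The intents and keywords below are made up for this example.

INTENTS = {
    "billing": {"bill", "pay", "payment", "invoice"},
    "support": {"broken", "help", "problem", "fix"},
    "sales":   {"buy", "upgrade", "plan", "new"},
}

def route(utterance: str) -> str:
    """Return the intent whose keywords best overlap the utterance."""
    words = set(utterance.lower().split())
    scores = {intent: len(words & keywords)
              for intent, keywords in INTENTS.items()}
    best = max(scores, key=scores.get)
    # Fall back to the main menu when nothing matches at all.
    return best if scores[best] > 0 else "main_menu"
```

Real chatbot engines replace the crude keyword overlap with trained intent classifiers, but the overall shape, a handful of predetermined destinations behind a free-form front end, is much the same.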

Chatbot engines are also becoming easier to program. For a start, you do not need to be a programmer or understand anything about phonemes and the like. Indeed, there is a new vocation: instead of being a web site designer, you can become a Conversation Designer. This is much easier than it used to be and there are simple online courses (for $1000) where you can learn this new skill.

We have explored chatbots and we do think that they will have a future role to play with OLab. We are looking at integrating a chatbot as an externally callable service from the new OLab4 architecture, and we anticipate that they will get better, easier, cheaper and more flexible. Indeed, we note that Shadow Health has already taken this approach with their virtual simulations.
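As a rough sketch of what "externally callable service" might mean here, the chatbot can be treated as a pluggable responder that a node delegates to. To be clear, none of these names come from the actual OLab4 codebase; this is a hypothetical illustration of the pattern, under the assumption that the chatbot sits behind a simple request/reply interface.

```python
# Hypothetical sketch: an OLab-style node hands the learner's free-form
# input to an external responder and gets a reply back. All names here
# are illustrative, not part of OLab4 itself.

from dataclasses import dataclass
from typing import Callable

@dataclass
class ChatTurn:
    node_id: str        # the map node the learner is currently on
    utterance: str      # the learner's free-form input
    reply: str = ""     # filled in by the external service

# The service is just a function, so a vendor chatbot, a Watson-style
# NLP service, or a human Turk Talker could all sit behind it.
Responder = Callable[[str], str]

def handle_turn(turn: ChatTurn, responder: Responder) -> ChatTurn:
    turn.reply = responder(turn.utterance)
    return turn

# A trivial stub standing in for the remote service:
def echo_service(utterance: str) -> str:
    return f"You said: {utterance}"
```

The appeal of this shape is that the authoring side of the case does not need to know which backend is answering; swapping a chatbot in (or out) becomes a configuration choice rather than a rewrite.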

We do have a few reservations about the offerings of Shadow Health. When you first look at their virtual simulations (more commonly called virtual patients), they are cutely interactive. Their avatars move as they speak and you can speak to them directly. First impressions are very good and they score highly with students for being “more realistic”. They are, of course, quite expensive, as you would expect with a new technology. More importantly, we have found that, despite the impressions given by the demo cases, they are still quite limited and simple. They tend to be tuned towards very basic history-taking techniques and oriented towards very early learners. But compared to where things were a few years ago, there has been a lot of progress.

If you are just looking for an easy way to buy in some content, with some cute interaction features, these will save you a lot of work. However, we wonder to what extent these will catch on in the broader sphere of health professional education. Virtual patients and virtual scenarios have been widely studied and appreciated for their capabilities in the learning and assessment of decision-making and problem-solving. For this kind of deeper educational analysis and evaluation, you need more than a cute interface.

It is also important to consider how you will integrate such tools and platforms into your curriculum and learning designs. We will write more on this soon because there are many other factors to consider but, in the meantime, we have noted some primary pragmatic reasons for being cautious about leaping into chatbots. For example, while they have improved greatly, it still typically takes a team of 4-6 people and 2-3 months to write and debug a single case. This kind of effort definitely pays off for a phone company, which handles thousands of calls per hour and has only a limited number of options and departments to speak to. But the average clinical teacher has about two weeks to create a case and is doing it on their own. This is feasible with Turk Talk, as we have demonstrated in our studies [2], but is not yet the case with chatbots.


1. Jacklin, S., Maskrey, N., & Chapman, S. (2020). A shared decision-making virtual patient in medical education: A mixed-methods evaluation (Preprint). JMIR Medical Education, 7(2). https://doi.org/10.2196/22745

2. Cullen, M., & Topps, D. (2019). OSCE vs TTalk cost analysis. Calgary. https://doi.org/10.5683/SP2/RJXRWC