FACETs in OLab

Factored or Faceted Agents for Cognitive Educational Tasks (FACETs)

What is a FACET?

For some time, the OLab team has been exploring the use of cognitive computing platforms such as IBM Watson to extend OLab's functions as an educational research platform. Recent developments in cognitive computing have opened up many more possibilities.

In our schema, a FACET builds on the concept of Faceted or Factored Cognition. There is a very nice explanation of Factored Cognition here: the use of multiple concurrent agents to help you accomplish more in the way of cognitive tasks. Our team brings the perspective that, as well as using multiple factored agents, we should explore how to integrate the functions of these agents across a range of tasks, as facets of that job.

There is a parallel here with faceted search and this paper gives a very nice (if rather complex) breakdown of using facets and gems (collections of facets) in hypernormalization.

How might FACETs be useful?

Rather than getting caught up in the scope creep that we saw in the OLab 3 monolith, we are instead researching how to extend OLab's capabilities using FACETs that we can call at will. To the programmer, this might simply sound like micro-services… but FACETs are more like macro-services… on steroids!

Rather than expecting a micro-service to produce a single result from a single simple task, a FACET breaks a cognitive task down into its elements and then employs a variety of cognitive computing engines in the cloud to do much of the heavy lifting for you.
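As a rough illustration of this fan-out idea, here is a minimal sketch in Python. The engine functions, their names, and the task decomposition are hypothetical stand-ins, not part of OLab's actual API; in a real FACET each would be a call to a cloud cognitive service.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical engines: each handles one facet of a larger cognitive task.
def extract_entities(text):
    # Crude placeholder for an entity-extraction service.
    return {"entities": [w for w in text.split() if w.istitle()]}

def classify_intent(text):
    # Crude placeholder for an intent-classification service.
    return {"intent": "question" if text.strip().endswith("?") else "statement"}

def score_sentiment(text):
    # Crude placeholder for a sentiment-analysis service.
    positive = {"good", "great", "helpful"}
    words = set(text.lower().split())
    return {"sentiment": "positive" if words & positive else "neutral"}

def run_facet(text):
    """Fan the task out to each engine concurrently, then merge the
    partial results into one answer."""
    engines = [extract_entities, classify_intent, score_sentiment]
    merged = {}
    with ThreadPoolExecutor() as pool:
        for partial in pool.map(lambda engine: engine(text), engines):
            merged.update(partial)
    return merged

result = run_facet("Is Gabby a helpful tutor?")
```

The point of the sketch is the shape, not the toy engines: the FACET owns the decomposition and the merge, while each engine stays independently swappable.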

For example, in natural language understanding, we have spent some time exploring a variety of approaches in our DFlow toolset. We have presented some simple examples in our OLab scenarios, such as Gabby.

https://mvp.olab.ca/renderLabyrinth/go/2539/52160 — use the code ‘tdm’ to unlock the case.

Opening up our thinking with the FACET concept allows us to use a variety of conversational agents in parallel, each tuned to best effect. We can also integrate the solid work we have developed since 2013 in the TTalk service, a hybrid combination of human and computer interaction that can handle very subtle phrasing and nuance in communication.

Back in OLab3, a favorite feature for scenario authors was our simple Flash-based avatars. For what was a remarkably simple and limited concept, it was fascinating to see what our authors were able to achieve. Because Adobe's Flash format was discontinued, we decided not to implement avatars in OLab4. However, our recent explorations of cognitive agents have opened up the use of some much more capable avatars. Check out our FACETed avatars here.

We are also exploring how we can make our CURIOS video mashup service much more capable using FACETs. The CURIOS service is simple yet very capable; however, it has seen little use because many authors shy away from its utilitarian and finicky interface. FACETs will allow us to extend what can be done with educational video mashups.

In the past, we used the IBM Watson platform to provide Sentiment Analysis services for analyzing some of the discussions in our TTalk cases. This was very productive and gave us great guidance in fine-tuning the performance of our TTalk moderators. See our paper on this. With the FACETs approach, however, we will be able to engage a variety of cognitive computing services for Sentiment Analysis, using each service to its best advantage while keeping costs low.
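One way to sketch the "each service to its best advantage, while keeping costs low" idea is a routing rule that picks the cheapest provider able to meet a task's accuracy needs. The provider names, per-call costs, and capability scores below are illustrative assumptions for the sketch, not real services or pricing.

```python
# Illustrative provider registry: the names, costs, and capability
# scores are made-up values for the sketch, not real pricing.
PROVIDERS = [
    {"name": "basic-nlp",   "cost": 0.001, "capability": 0.70},
    {"name": "mid-tier",    "cost": 0.004, "capability": 0.85},
    {"name": "watson-like", "cost": 0.010, "capability": 0.95},
]

def pick_provider(required_capability):
    """Return the cheapest provider whose capability score meets the
    requirement for this task."""
    eligible = [p for p in PROVIDERS if p["capability"] >= required_capability]
    if not eligible:
        raise ValueError("no provider meets the requirement")
    return min(eligible, key=lambda p: p["cost"])

# A quick triage pass over every message can use a cheap engine;
# messages flagged for closer review go to a more capable one.
triage_engine = pick_provider(0.65)
deep_dive_engine = pick_provider(0.90)
```

The same routing idea generalizes beyond sentiment analysis: any FACET that wraps several cloud engines can trade cost against capability per request rather than committing to one vendor.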