Victory’s spinoff metacog was just featured in a blog post by Databricks, a company founded by the team that created Apache Spark, a powerful open-source data processing engine. See the Databricks blog post below.
metacog has been hard at work releasing new capabilities for its learning analytics platform while enhancing existing ones. If you offer subscription-based products, you know that your customers expect continuous improvement. With metacog, we partner with you to deliver new deep learning analytics capabilities that you can easily integrate into your products to generate new data-driven business models and revenue streams.
Why data analytics for adaptive and competency-based learning is so challenging
You may have seen many companies offering data analytics applied to learning products. If you look closely, most of the time what is offered is “administrative-level” data and simple scoring data:
- Time-on-task data – How long did learners use the interactive?
- “Attendance” data – Did learners participate?
- SCORM-compliant scores reported to a learning management system (LMS) – How well are learners doing?
- Simple score reports – How many right, how many wrong?
It turns out that in order to improve anything, you have to be able to measure it, but so far in education we have been measuring the wrong thing – the final answer.
This explains why scoring is the key issue. In the past, most open-ended assessments had to be human-scored, which greatly reduced the frequency with which teachers and professors assigned them. Yet it is open-ended tasks that best assess a candidate's ability to perform well in today's job market.
Why metacog is different
metacog generates event streams that faithfully record all the actions a learner takes as they interact with your content. This includes your open-ended assessment items. The data from hundreds or thousands of sessions are analyzed to determine patterns in how learners approach solutions. The data enable machine scoring, and also give invaluable insights to educators and product developers, because they see not only the users’ answers, but how they arrived at the answers.
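To make the idea of an event stream concrete, here is a minimal sketch of what one session's stream might look like and how a pattern (iterating before submitting) could be counted across it. The field names and actions are illustrative assumptions, not metacog's actual schema:

```python
# Hypothetical learner event stream for one session: each event records
# an action and when it happened (field names are illustrative only).
events = [
    {"t": 0.0,  "action": "open_item",     "detail": "circuit-sim"},
    {"t": 4.2,  "action": "add_component", "detail": "resistor"},
    {"t": 9.8,  "action": "run_simulation"},
    {"t": 12.1, "action": "adjust_value",  "detail": "resistor=220"},
    {"t": 15.5, "action": "run_simulation"},
    {"t": 20.0, "action": "submit_answer", "detail": "correct"},
]

# Pattern mining across sessions might count action sequences, e.g. how
# often a learner adjusts a value and then immediately re-runs the
# simulation before submitting an answer.
iterations = sum(
    1
    for a, b in zip(events, events[1:])
    if a["action"] == "adjust_value" and b["action"] == "run_simulation"
)
print(iterations)  # → 1 adjust-then-rerun cycle in this session
```

Aggregated over thousands of sessions, counts like this are the kind of "how they arrived at the answer" signal that simple right/wrong scores cannot capture.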
These data-driven insights are the crucial first step toward developing innovative products that can improve learning outcomes. But they are effective only if they can be generated on the fly.
How do they do it?
To keep up the pace of these innovations, metacog built a research-to-production pipeline. The pipeline lets metacog test and validate research innovations quickly, so they can be acted on immediately via product updates.
One way to update products faster is via workflow enhancements that streamline debugging and User Acceptance Testing (UAT).
To achieve this, metacog partnered with Databricks to overcome the complex issues of development testing.
Databricks blog post
This Databricks blog post explains how metacog uses Databricks to deliver near-continuous updates. We provide a glossary below to explain a few of the technical terms in plain English.
| Term | Definition |
| --- | --- |
| ETL (Extract, Transform, Load) | Raw data is stored in one or more databases. In order to analyze the data, the data are extracted, or pulled, from the database. Often the data need to be transformed to be consistent, for example always using "FL" for Florida. Then the transformed data are loaded into a new database (a data warehouse) that can be accessed via data queries. |
| IDE (Integrated Development Environment) | A software application that allows developers to create and edit code, preview the results in a simulated user environment, and then debug the code. Often powerful automation tools are included to facilitate rapid prototyping in an Agile workflow. |
| Machine Learning Algorithms | In the field of artificial intelligence (AI), a computer can "learn" to do a task without being programmed how to do it step by step. Essentially, a machine cannot think. But it can be instructed how to generate a set of "artificial" data, and then measure how well that matches a set of real data. In a machine learning algorithm, the machine tries many different ways of generating the artificial data, and then it "learns" which method(s) work best under different conditions. |
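The ETL definition above can be sketched in a few lines. This is a toy example, with made-up rows and a made-up state-code mapping, showing the extract → transform → load flow in the glossary's own "FL for Florida" terms:

```python
# Extract: rows pulled from a source database (illustrative data).
raw_rows = [
    {"learner": "a1", "state": "Florida"},
    {"learner": "b2", "state": "FL"},
    {"learner": "c3", "state": "florida"},
]

# Transform: normalize the state field so every row uses "FL".
STATE_CODES = {"florida": "FL", "fl": "FL"}

def transform(row):
    row = dict(row)  # leave the source row untouched
    row["state"] = STATE_CODES.get(row["state"].lower(), row["state"])
    return row

# Load: write the cleaned rows into the "warehouse" (here, a list).
warehouse = [transform(r) for r in raw_rows]
print(warehouse)  # every row now reports state "FL"
```

A real pipeline would read from and write to actual databases, but the three stages are the same.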
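The machine-learning entry describes a loop: generate artificial data, measure how well it matches the real data, and keep whichever method matched best. A toy version of that loop, with invented numbers, might look like this:

```python
# "Real" data: observations that roughly follow y = 2x for x = 1..5.
real_data = [2.1, 3.9, 6.2, 8.0, 9.9]

def generate(slope):
    # Produce a set of "artificial" data from one candidate parameter.
    return [slope * x for x in range(1, 6)]

def mismatch(artificial, real):
    # Measure how poorly the artificial data matches the real data.
    return sum((a - r) ** 2 for a, r in zip(artificial, real))

# Try many different ways of generating the artificial data, and
# "learn" which one matches the real data best.
candidates = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
best = min(candidates, key=lambda s: mismatch(generate(s), real_data))
print(best)  # → 2.0, the slope whose artificial data fits best
```

Real machine learning algorithms search far larger parameter spaces far more cleverly, but the generate-measure-keep-the-best loop is the same idea.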