Metacog’s learning data platform is the most powerful and advanced technology for improving learning outcomes at scale.

Guided Deep Learning and the Future of Assessment

Victory’s spinoff metacog has been busy adding new features and functionality. When companies look to incorporate metacog into their digital products, they want to know two things:

  1. How does metacog work?
  2. What can metacog help me do now that I couldn’t do before?

The answers to both questions lie in our unique approach to guided deep learning: machine learning steered by an understanding of real student outcomes.

Deep Learning

In education, deep learning is different from deeper learning, which is a pedagogical approach to instruction. In the world of Big Data, deep learning is an artificial intelligence (AI) approach built on neural networks: layers of simple computational units (“neurons”) that work in parallel, loosely mimicking how the human brain processes information.

Deep learning can be very effective, but it has a drawback: neural networks are so complex that it is often impossible to explain how they arrive at a particular decision.

Guided Deep Learning

At metacog, our guided deep learning process begins with a clear definition of what constitutes a good result. The computer program then goes on to do the heavy lifting. We can trust the reasoning behind the program’s decisions because we supply the reasons!

Without proper guidance, deep learning on its own can be problematic. Take self-driving cars, for example. They can use neural networks to observe and mimic human drivers, but unless the software can also effectively evaluate proper behavior, the resulting decisions can be erratic, and even dangerous. We don’t want self-driving cars that emulate a driver who is distracted by a text message!

metacog’s guided deep learning approach, on the other hand, specifies how to measure the appropriate outcomes. Like a self-driving car that models only the best driving behavior, our system understands the goals users want to achieve, and knows when those goals have been met successfully. How is that achieved?

Rubrics to the Rescue

The measurable goals are defined by rubrics. Just as a rubric guides a teacher in scoring an open-ended performance task, a rubric guides metacog. We set up a simple system in which humans create a rubric that defines what behavior earns a good score and what earns a bad one.

Once the rubric is defined, the deep learning program is guided in how to apply it through training sessions. In each training session, an educator scores the performance task while watching a playback of a student’s work. Given enough training sessions, the deep learning program can emulate the scoring with good accuracy and impeccable consistency, just as a self-driving car would emulate a perfect driver.
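As a rough sketch of this idea, here is a toy nearest-neighbor scorer. The session features, rubric scale, and scoring method below are invented for illustration only; metacog’s actual system is far more sophisticated:

```python
# Toy sketch of rubric-guided scoring (hypothetical; not metacog's actual code).
# Each training session pairs features extracted from a recorded student
# performance with the rubric score an educator assigned during playback.
# Features per session: (attempts, hints_used, time_on_task_seconds).
training_sessions = [
    ((1, 0, 120), 2),  # efficient, unaided work -> proficient
    ((6, 3, 900), 0),  # many attempts, heavy hint use -> beginning
    ((2, 1, 300), 2),
    ((5, 2, 700), 1),
]

def score_session(features):
    """Return the rubric score of the most similar training session."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, best_score = min(training_sessions,
                        key=lambda item: distance(item[0], features))
    return best_score

# A new session can now be scored instantly, with no human in the loop.
print(score_session((2, 0, 150)))  # -> 2 (closest to the first session)
```

With enough educator-scored sessions (and a real learning algorithm in place of the nearest-neighbor lookup), the same pattern lets a machine apply a rubric consistently at scale.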

The benefits of machine scoring are enormous. A student can get immediate feedback while doing a performance task, instead of waiting for the teacher to grade it afterward. And a teacher can be alerted immediately when a student is struggling, and then take appropriate steps for remediation. So metacog does not replace the teacher; instead, it puts a highly intelligent teacher’s aide at the side of every student.

If this topic piqued your interest, here are a few links for a deeper dive.

Further Reading

metacog Building Its Deep Learning Analytics Platform with Databricks

Victory’s spinoff metacog was just featured in a blog post by Databricks, a company founded by the team that created Apache Spark, a powerful open-source data processing engine. See the Databricks blog post below.

metacog has been hard at work adding new capabilities to its learning analytics platform while enhancing existing ones. If you offer subscription-based products, you know that your customers expect continuous improvement. With metacog, we partner with you to deliver new deep learning analytics capabilities that you can easily integrate into your products to generate new data-driven business models and revenue streams.

Why data analytics for adaptive and competency-based learning is so challenging

You may have seen many companies offering data analytics applied to learning products. If you look closely, most of the time what is offered is “administrative-level” data and simple scoring data:

  • Time-on-task data – How long did learners use the interactive?
  • “Attendance” data – Did learners participate?
  • SCORM-compliant scores reported to a learning management system (LMS) – How well are learners doing?
  • Simple score reports – How many right, how many wrong?

It turns out that in order to improve anything, you have to be able to measure it, but so far in education we have been measuring the wrong thing: the final answer, rather than the process that produced it.

This explains why scoring is the key issue. In the past, most open-ended assessments had to be human-scored, which greatly reduced how often teachers and professors assigned them. Yet it is open-ended tasks that best assess a candidate’s ability to perform well in today’s job market.

Why metacog is different

Continue reading

Design: The Secret Behind Effective Digital Learning Experiences, Part 2

In our first blog on this topic, we began to reveal the secret behind the magnetic allure of games, simulations, and other online performance-based digital learning experiences. We showed you a well-aligned, well-designed simulation-based science performance task.

In this blog, we’ll show you how the design of a digital performance task directly influences the richness of the data we can gather on student learning. This time, we’ll use a social studies task we developed:

Behind the Scenes: Designing the Boston Massacre Performance Task

We designed this performance task to give students opportunities to visually, graphically, and dynamically analyze text, make inferences based on evidence, and synthesize their understanding.

Why? Let’s examine the standards. The C3 framework, CCSS standards, and other state and national efforts to align learning expectations with 21st-century workforce demands emphasize critical analysis and evidence from text.

Continue reading

What is metacog?

Watch this video for some behind-the-scenes thinking about metacog, our ground-breaking data analytics work.

What is metacog?

metacog is a set of APIs (application programming interfaces), or more simply put, software tools that give information to other software applications.

What does metacog do?

metacog is the engine in your car. It’s the paint on your paintbrush. It’s the tomato in your pasta sauce. By itself, metacog is a piece of code. But together with an online learning experience, we can see what learners do, analyze how they think, and improve the ways they learn.

Continue reading

metacog Partners with PhET Interactive Simulations

A lot of education companies are putting data analytics to work, because the first step in improving student outcomes is to see where students are.

Not All Data Analytics Are Created Equal

But you can only get so much from information collected outside the activities. That kind of data is like taking student attendance — was the student present? Did the student stay for the whole class? How many activities did the student complete, and what were the scores?

These status reports are useful, but to truly get insight, you need to see what students are doing inside the digital activities. These are the kinds of richer questions we can now ask:

  • Did the student struggle?
  • Did the student persist and improve?
  • Did the student achieve a high score through insight into the core principles and practices?
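As a minimal illustration of the difference, consider what in-activity data can answer that attendance-style data cannot. The event shape and thresholds below are invented for illustration and are not metacog’s actual analytics:

```python
# Hypothetical in-activity data: one (attempt_number, score) pair per attempt.
attempts = [(1, 20), (2, 35), (3, 55), (4, 80)]

# Attendance-style reporting would only show the final score (80).
# Process data lets us ask richer questions:
struggled = attempts[0][1] < 50                # weak first attempt
persisted = len(attempts) >= 3                 # kept trying
improved = attempts[-1][1] > attempts[0][1]    # scores trended upward

print(struggled, persisted, improved)  # -> True True True
```

The same final score of 80 could come from an effortless first try or from determined improvement; only the process data distinguishes the two.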

metacog Partnering with PhET Interactive Simulations

In the STEM disciplines, metacog is paving the way with a joint venture with PhET Interactive Simulations, the premier developer of science simulations. The timing couldn’t be better, as many publishers are struggling to develop programs that match the spirit and intent of the NGSS (Next Generation Science Standards).

See the press release for more details, and feel free to share with your colleagues.

metacog Releases Automated Real-time Rubric-based Scoring API

Victory’s spinoff metacog just released its advanced scoring analytics API at ISTE 2015 in Philadelphia. See the press release here, and feel free to share it with your colleagues.

Why Assessment Has Changed

Assessments have been evolving rapidly, mostly due to these factors:

  • The new standards (Common Core and Next Generation Science) focus on practices that require higher-order thinking and decision-making skills.
  • Jobs are changing, and employers need evidence that prospective employees have the necessary twenty-first-century skills.
  • Technology makes it possible.

The assessment landscape is of course more complex than this; stay tuned for more detail in future blog posts.

The Problem

One writer recently complained to us about NGSS:

They wrote the NGSS as if they had one goal in mind—don’t allow multiple choice questions. And now I have an assignment to write 100 MC questions for NGSS!

Writing good assessments is an art, but even the best assessments won’t be used unless they can be readily scored.

That’s why there are still so many multiple choice questions in high-stakes assessments: grading the more open-ended assessments by hand is simply too expensive.

While technology has made great strides in interactive digital assessment, the most robust assessments still have to be hand-scored.

Until now.

The Answer—How metacog Makes Automated Scoring Possible

Continue reading

Why Instrument an Educational Simulation?

In earlier blogs, I showed how we are using metacog to instrument some of our prototype TEIs (technology enhanced items). The idea is simple: as thousands or millions of learners use a DLO (digital learning object), their actions can be streamed anonymously to the metacog™ servers. This happens in the background without interfering with the DLO performance. Then data analytics can reveal how learners used the DLO.
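The instrumentation pattern might be sketched like this. The class, event fields, and values here are invented for illustration; metacog’s real client API differs:

```python
import json
import time

class EventLogger:
    """Buffers anonymous learner actions for background upload."""

    def __init__(self, session_id):
        self.session_id = session_id  # anonymous identifier, no PII
        self.buffer = []

    def log(self, event_type, data):
        """Record one learner action without interrupting the DLO."""
        self.buffer.append({
            "session": self.session_id,
            "type": event_type,
            "data": data,
            "timestamp": time.time(),
        })

    def flush(self):
        """Serialize the batch; a real client would POST it to a server."""
        payload = json.dumps(self.buffer)
        self.buffer = []
        return payload

# Instrumenting a simulation: each interaction becomes an event.
logger = EventLogger(session_id="anon-42")
logger.log("slider_moved", {"control": "wavelength", "value": 520})
logger.log("laser_toggled", {"on": True})
batch = logger.flush()
```

Across thousands or millions of learners, batches like this are what the analytics layer mines to reveal how a DLO is actually being used.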

A simulation is another type of DLO that we can instrument. Simulations allow learners to explore in a safe, controlled environment. They have been around a long time, but the Common Core and Next Generation Science Standards’ emphasis on meaningful, authentic practices has revitalized them.

PhET, based at the University of Colorado Boulder, is well known for developing truly open-ended, robust simulations. We worked with PhET to instrument their existing simulation on Beer’s Law, an inquiry-based simulation in which students discover how light is absorbed differently when lasers of different colors pass through different solutions. Watch this video to see how it works.

Continue reading

The Next Wave of Competency Portfolios

Victory’s spinoff, metacog, is up and running. As with any breakthrough technology, it may take some time for people to realize its full potential.

One promising area has been getting a lot of buzz lately: competency portfolios. Originally, portfolios were the domain of artists or architects, and competency portfolios retain a visual emphasis. You want to see what someone can do. Show me! However, a competency portfolio is typically summarized by a report, and you can’t always see the work behind the badges or microcredentials. This is especially true for open-ended assessments that in the past haven’t been machine-scorable.

That can all change with metacog. Watch our video on competency portfolios to learn more.

Continue reading