Category Archives: Assessment

Guided Deep Learning and the Future of Assessment

Victory’s spinoff metacog has been busy adding new features and functionalities. When companies look to incorporate metacog into their digital products, they want to know two things:

  1. How does metacog work?
  2. What can metacog help me do now that I couldn’t do before?

The answers to both questions lie in our unique approach to guided deep learning: machine learning steered by an understanding of real student outcomes.

Deep Learning

In education, deep learning is different from deeper learning, which is a pedagogical approach to instruction. In the world of Big Data, deep learning is an artificial intelligence (AI) approach that builds neural networks: layers of interconnected “neurons,” each performing a simple computation. This structure loosely mimics the parallel processing of the human brain.

Deep learning can be very effective, but it has a drawback: neural networks are so complex that we often cannot explain how they arrive at a particular decision.
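
To make the “layers of neurons” idea concrete, here is a minimal sketch in Python using NumPy. The layer sizes, weights, and input are illustrative assumptions, not metacog’s architecture; the point is that the output emerges from many weighted connections acting at once, which is also why it is so hard to explain after the fact.

```python
# A minimal sketch (illustrative only, not metacog's code): a tiny two-layer
# neural network showing "layers of neurons" computing in parallel.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical input: 4 numbers describing one student interaction.
x = rng.random(4)

# In a real system these weights are learned from data; here they are random.
W1, b1 = rng.random((8, 4)), rng.random(8)   # hidden layer: 8 "neurons"
W2, b2 = rng.random((1, 8)), rng.random(1)   # output layer: 1 value

hidden = np.maximum(0, W1 @ x + b1)          # each neuron: weighted sum + nonlinearity
output = W2 @ hidden + b2

# The answer pops out, but the "reasoning" is spread across dozens of weights.
print(output)
```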

Guided Deep Learning

At metacog, our guided deep learning process begins with a clear definition of what constitutes a good result. The computer program then goes on to do the heavy lifting. We can trust the reasoning behind the program’s decisions because we supply the reasons!

Without proper guidance, deep learning on its own can be problematic. Take self-driving cars, for example. They can use neural networks to observe and mimic human drivers, but unless the software can also effectively evaluate proper behavior, the resulting decisions can be erratic, and even dangerous. We don’t want self-driving cars that emulate a driver who is distracted by a text message!

metacog’s guided deep learning approach, on the other hand, specifies how to measure the appropriate outcomes. Like a self-driving car that models only the best driving behavior, our system understands the goals users want to achieve, and knows when those goals have been met successfully. How is that achieved?

Rubrics to the Rescue

The measurable goals are defined by rubrics. Just as rubrics guide a teacher in scoring an open-ended performance task, they guide metacog. We set up a simple system in which humans create a rubric that defines which behaviors earn a good score and which earn a poor one.

Once the rubric is defined, the deep learning program is guided in how to apply it through training sessions. In each training session, an educator scores the performance task while watching a playback of a student’s performance. Given enough training sessions, the deep learning program can emulate the scoring with good accuracy and impeccable consistency, just like a self-driving car emulating a perfect driver.
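
As a rough illustration of what this looks like under the hood, here is a hedged sketch in Python using scikit-learn: a standard classifier is trained on features summarizing recorded student performances, with the educator’s rubric scores as the labels. The feature names, rubric levels, and numbers are hypothetical; this is not metacog’s actual pipeline.

```python
# Hypothetical sketch: learning to apply a rubric from educator-scored
# training sessions, using an off-the-shelf classifier.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Each row summarizes one recorded student performance (features are made up):
# [time_on_task_sec, hints_used, evidence_items_cited, revisions_made]
sessions = [
    [420, 0, 3, 1],
    [610, 2, 1, 4],
    [300, 1, 2, 0],
    [900, 4, 0, 6],
    [500, 0, 4, 2],
    [720, 3, 1, 5],
]
# Rubric score an educator assigned while watching each playback (0 = low, 2 = high).
rubric_scores = [2, 1, 1, 0, 2, 0]

X_train, X_test, y_train, y_test = train_test_split(
    sessions, rubric_scores, test_size=0.33, random_state=0
)

model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)      # the "training sessions" guide the model

# Once trained, the model can apply the rubric to new performances instantly
# and consistently.
print(model.predict(X_test))
```

In practice the features would come from the fine-grained interaction data the system records, and far more than a handful of scored sessions would be needed, but the shape of the process is the same: humans define and demonstrate the rubric, and the model learns to apply it.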

The benefits of machine scoring are enormous. A student can get immediate feedback while working on the performance task, instead of waiting for the teacher to grade it afterward. And a teacher can be alerted immediately when a student is struggling, and then take appropriate steps for remediation. So metacog does not replace the teacher. Instead, it puts a highly intelligent teacher’s aide at the side of every student.

If this topic piqued your interest, here are a few links for a deeper dive.

Further Reading

https://blogs.nvidia.com/blog/2016/07/29/whats-difference-artificial-intelligence-machine-learning-deep-learning-ai/

https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/

https://www.edsurge.com/news/2016-10-06-why-education-needs-augmented-not-artificial-intelligence

5 Keys to Assessment Literacy

At Victory, we have been developing many kinds of assessments. Whether it is high-stakes summative testing, a performance-based task, or formative student self-assessment, assessment has a huge impact on classroom instruction. This means assessment literacy is a critical tool for teachers as they develop curriculum and apply classroom strategies.

What Is Assessment Literacy?

What does assessment literacy mean? It may help to consider other types of literacy. Science literacy, for example, means being prepared to understand and discuss science issues, especially as they impact social and political decisions. Visual literacy involves understanding how people use visual information, and choosing a visual approach that supports your goals. Digital literacy is the ability to use technology tools, and choose which tool is most appropriate for a given purpose. In the same way, assessment literacy is the ability to understand the purpose of each type of assessment and then use this knowledge to make better assessment decisions.

From our experience, these are 5 keys to assessment literacy:

5 Keys to Assessment Literacy
  1. Understanding different kinds of assessments and their purposes
  2. Recognizing the best assessment to use for a given situation
  3. Knowing how to prepare students for each kind of assessment
  4. Knowing how to implement each kind of assessment
  5. Getting comfortable with interpreting assessment data

Continue reading

5 Keys to Digital Literacy

Recently, we premiered our digital lesson on the Boston Massacre at the ISTE and ILA conferences. The lesson was a big hit. It inspired many discussions with technology coordinators and educators on what makes a lesson good for digital literacy. The table below summarizes what we learned, and the video that follows gives concrete examples of how the 5 keys to digital literacy are executed in the Boston Massacre lesson.

5 Keys to Digital Literacy
  1. Make sure the lesson has a beginning, a middle, and an end.
  2. Each interactive should build on the previous one so that students gain practice and automaticity in skills and strategies.
  3. Processes for working through a digital lesson need to be consistent.
  4. Cross-curricular activities encourage students to employ skills and strategies from other disciplines in new ways.
  5. Make sure students are working with data, analyzing it, and applying 21st-century skills.

Are We There Yet?

Continue reading

A Revolutionary Interactive Lesson

In our last blog on performance tasks, we revealed our instructional design approach to creating a social studies performance task, The Boston Massacre.

In this blog, we’ll explain why we expanded the performance task to become an interactive lesson, with embedded performance tasks. So, this is really an evolution, not a revolution.

Here is a sneak preview of the lesson:


What Was Missing in the Original Task?

Our original Boston Massacre performance task was unique in several regards:

  • It developed critical thinking through the analysis and comparison of key characters.
  • Students evaluated multiple causes and effects to rank the importance of earlier events that led up to the key event.
  • Students needed to do a close reading to find evidence to support their arguments.

But a performance task is rarely used in isolation. It is either part of a summative assessment or used formatively in a lesson. In a lesson, a stand-alone performance task requires the teacher to do a lot of work before it can be effective. The teacher has to decide:
  • whether it supports the lesson objectives,
  • when to use it in a lesson,
  • how to provide support if students struggle, and
  • how to use the scores.

Even the world’s best performance task won’t help students if teachers won’t use it!

Collaborating on a Solution

Continue reading

Design: The Secret Behind Effective Digital Learning Experiences, Part 2

In our first blog on this topic, we began to reveal the secret behind the magnetic allure of games, simulations, and other online performance-based digital learning experiences. We showed you a well-aligned, well-designed simulation-based science performance task.

In this blog, we’ll show you how the design of a digital performance task directly influences the richness of the data we can gather on student learning. This time, we’ll use a social studies task we developed:


Behind the Scenes: Designing the Boston Massacre Performance Task

We designed this performance task to give students opportunities to visually, graphically, and dynamically analyze text, make inferences based on evidence, and synthesize their understanding.

Why? Let’s examine the standards. The C3 Framework, the Common Core State Standards (CCSS), and other state and national efforts to align learning expectations with 21st-century workforce demands all emphasize critical analysis and the use of evidence from text.

[Image: standards addressed by the Boston Massacre performance task]

Continue reading

What’s Next? Part 3: Assessment

This is the third in our series of posts about the changing landscape in education.

In the video below, Victory’s staff discuss what’s next in assessment. We’ve added a few more thoughts below. Feel free to use the comments to join in the conversation.

A Variation on “The Chicken or the Egg?”

Which comes first, change in curriculum or change in assessment? Often people see standards as a driver of change, and certainly both assessment and curriculum are affected by standards reform. But as with so many things, you don’t really know what assessment is (or can be) until you see it in action.

Continue reading

metacog Partners with PhET Interactive Simulations

A lot of education companies are putting data analytics to work, because the first step in improving student outcomes is to see where students are.

Not All Data Analytics Are Created Equal

But you can only get so much from information collected outside the activities. That kind of data is like taking student attendance — was the student present? Did the student stay for the whole class? How many activities did the student complete, and what were the scores?

These status reports are useful, but to truly get insight, you need to see what students are doing inside the digital activities. These are the kinds of richer questions we can now ask:

  • Did the student struggle?
  • Did the student persist and improve?
  • Did the student achieve a high score through insight into the core principles and practices?
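
To make the contrast concrete, here is a hedged sketch of the two levels of data in Python. The field names and event types are illustrative assumptions, not metacog’s actual schema; the point is the difference between an attendance-style summary and an event-level record of what happened inside the activity.

```python
# Illustrative only: attendance-level data vs. event-level data.

# "Attendance"-level record: was the activity completed, and what was the score?
summary_record = {
    "student_id": "s-102",
    "activity_id": "boston-massacre-task",
    "completed": True,
    "score": 78,
}

# Event-level stream: what the student actually did inside the activity.
event_stream = [
    {"t": 0.0,   "type": "activity_started"},
    {"t": 42.5,  "type": "source_opened",  "detail": "eyewitness_account_1"},
    {"t": 130.2, "type": "answer_revised", "detail": "cause_ranking"},
    {"t": 301.7, "type": "hint_requested"},
    {"t": 512.9, "type": "answer_submitted"},
]

# Simple questions the event stream can answer that the summary cannot.
revisions = sum(1 for e in event_stream if e["type"] == "answer_revised")
hints = sum(1 for e in event_stream if e["type"] == "hint_requested")
print(f"revisions: {revisions}, hints requested: {hints}")
```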

metacog Partnering with PhET Interactive Simulations

In the STEM disciplines, metacog is paving the way through a partnership with PhET Interactive Simulations, the premier developer of science simulations. The timing couldn’t be better, as many publishers are struggling to develop programs that match the spirit and intent of the NGSS (Next Generation Science Standards).

See the press release for more details, and feel free to share with your colleagues.

metacog Releases Automated Real-time Rubric-based Scoring API

Victory’s spinoff metacog just released its advanced scoring analytics API at ISTE 2015 in Philadelphia. See the press release here, and feel free to share it with your colleagues.

Why Assessment Has Changed

Assessments have been evolving rapidly, mostly due to these factors:

  • The new standards (Common Core and Next Generation Science) focus on practices that require higher-order thinking and decision-making skills.
  • Jobs are changing, and employers need evidence that prospective employees have the necessary twenty-first-century skills.
  • Technology makes these new kinds of assessment possible.

The assessment landscape is of course more complex than this; stay tuned for more detail in future blog posts.

The Problem

One writer recently complained to us about NGSS:

They wrote the NGSS as if they had one goal in mind—don’t allow multiple choice questions. And now I have an assignment to write 100 MC questions for NGSS!

Writing good assessments is an art, but even the best assessments won’t be used unless they can be readily scored.

That’s why there are still so many multiple-choice questions in high-stakes assessments: it is simply too expensive to grade more open-ended assessments by hand.

While technology has made great strides in interactive digital assessment, the most robust assessments still have to be hand-scored.

Until now.

The Answer: How metacog Makes Automated Scoring Possible

Continue reading