Since the creation of the No Child Left Behind Act (NCLB) in 2002, accountability and assessment in U.S. public education have been based on annual standardized state tests. These tests have been used to determine how effectively states, districts, schools, and teachers are helping students learn.
Public school students in the United States are given more standardized tests, and are tested more frequently, than students in any other country. The growth of testing has fueled the world of assessment and turned it into a billion-dollar industry.
The growth in testing has particularly affected English Language Learners (ELLs) in the U.S. who, in addition to the annual standardized subject-matter tests, are assessed every year on their English proficiency. Under NCLB, states not only had to identify English learners but also had to create English proficiency standards, along with assessments that reflected those standards. Every year, ELLs must take state tests to determine whether they are making progress in learning English and attaining English-language proficiency.
In many recent projects, we have taken on the challenge of developing three-dimensional learning tasks and lessons. We often start with a close reading of the NGSS (Next Generation Science Standards). The instructional designers then meet with subject matter experts to design a task with learning outcomes that measure specific performance expectations. In the example below, the task was designed to meet these three-dimensional learning goals.
We have already blogged about using the PhET Skate Park simulation to develop a performance task. We decided to take another stab at it, as a proof of concept for a three-dimensional learning task. This task is a bit more challenging than our first one.
Please watch the video and then try the performance task. We’d love to hear your feedback!
Victory’s spinoff metacog has been busy adding new features and functionalities. When companies look to incorporate metacog into their digital products, they want to know two things:
How does metacog work?
What can metacog help me do now that I couldn’t do before?
The answers to both questions lie in our approach to guided deep learning: machine learning steered by an understanding of real student outcomes.
In education, deep learning is different from deeper learning, which is a pedagogical approach to instruction. In the world of Big Data, deep learning is an artificial intelligence (AI) approach that builds neural networks, in which each "neuron" is a simple computational unit. This structure mimics the parallel processing of the human brain.
Deep learning can be very effective, but it has a drawback: neural networks are so complex that even their designers cannot fully explain how they arrive at a given decision.
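To make the idea of a neural "unit" concrete, here is a minimal, hypothetical sketch (not metacog's actual implementation): each unit computes a weighted sum of its inputs and applies a simple nonlinearity, and many such units operate side by side to form a layer. The weights and inputs below are purely illustrative.

```python
def neuron(inputs, weights, bias):
    """One 'neuron': weighted sum of inputs, passed through a ReLU."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return max(0.0, total)  # ReLU activation: negative sums become 0

# Two neurons processing the same inputs "in parallel" form a tiny layer.
layer_out = [
    neuron([1.0, 2.0], [0.5, -0.25], 0.1),   # -> 0.1
    neuron([1.0, 2.0], [-1.0, 0.75], 0.0),   # -> 0.5
]
print(layer_out)
```

Real deep-learning systems stack thousands of these units into many layers, which is exactly why their decisions become hard to trace.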
The educational market is in flux. States are pushing back against both the Common Core State Standards (CCSS) and the assessments linked to them. States and publishers are waiting to see:
Will funding be directed to charter schools?
How many more states will drop out of the CCSS?
Will states want summative, formative, or competency-based tests?
How will products align to changing state standards?
What products should states, districts, and publishers develop to meet current market needs?
Many states are moving to create their own standards. How will these new standards affect the educational market? What steps must states and publishers take?
All the uncertainty in the market calls for gap analyses. A gap analysis identifies how current products are aligned to new standards, which standards still correlate, and what’s missing—gaps where new standards are not well covered.
Publishers need to ensure that their products and assessments readily address the changing needs of states and districts.
States and districts need to know how their new standards align to older standards. Since most states adopted the CCSS, new standards are usually analyzed and compared to the CCSS.
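At its simplest, a gap analysis compares the set of standards a product already addresses with the set of new standards it must cover. Here is a minimal sketch of that comparison; the standard codes and correlations below are purely illustrative, not real alignments.

```python
# Standards a (hypothetical) product is already aligned to.
product_alignments = {"MATH.3.OA.1", "MATH.3.OA.2", "MATH.3.NBT.1"}

# A (hypothetical) new set of state standards to cover.
new_state_standards = {"MATH.3.OA.1", "MATH.3.NBT.2"}

# Standards that still correlate: covered by both sets.
still_correlated = product_alignments & new_state_standards

# Gaps: new standards the product does not yet cover.
gaps = new_state_standards - product_alignments

print(sorted(still_correlated))  # existing coverage to keep
print(sorted(gaps))              # where new content is needed
```

In practice an analysis like this also weighs partial correlations and depth of coverage, but the core question is the same: which new standards map to existing content, and which are left uncovered?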
World Languages & Education #1 … an ongoing series
As we discussed in recent posts, the assessment market is in flux. But this is nothing new. The passage of No Child Left Behind (NCLB) in 2002 disrupted the market, and for some companies this turned out to be a boon, as spending on state-level assessments nearly tripled over the next six years. As you can see from this graph, state-level assessment spending has decreased since 2008, while classroom assessment spending has continued to grow.
Just as the change in 2002 represented an opportunity for many companies, the shifts we see now may also have a silver lining. And one area in particular, Spanish assessments, may see continued growth, especially in the classroom market. Why? Regardless of other shifts that may occur, students whose first language is Spanish comprise by far the largest population of English Language Learners (ELLs) in the United States, at 71%, according to the Migration Policy Institute.
At Victory, we have been developing many kinds of assessments. Whether it is high-stakes summative testing, a performance-based task, or formative student self-assessment, assessment has a huge impact on classroom instruction. This means assessment literacy is a critical tool for teachers as they develop curriculum and apply classroom strategies.
What Is Assessment Literacy?
What does assessment literacy mean? It may help to consider other types of literacy. Science literacy, for example, means being prepared to understand and discuss science issues, especially as they impact social and political decisions. Visual literacy involves understanding how people use visual information, and choosing a visual approach that supports your goals. Digital literacy is the ability to use technology tools, and choose which tool is most appropriate for a given purpose. In the same way, assessment literacy is the ability to understand the purpose of each type of assessment and then use this knowledge to make better assessment decisions.
In our experience, there are 5 keys to assessment literacy:
5 Keys to Assessment Literacy
Understanding different kinds of assessments and their purposes
Recognizing the best assessment to use for a given situation
Knowing how to prepare students for each kind of assessment
Knowing how to implement each kind of assessment
Getting comfortable with interpreting assessment data
In this blog post, we’ll explain why we expanded the performance task into an interactive lesson with embedded performance tasks. This is really an evolution, not a revolution.
Here is a sneak preview of the lesson:
What Was Missing in the Original Task?
Our original Boston Massacre performance task was unique in several regards:
It developed critical thinking through the analysis and comparison of key characters.
Students evaluated multiple causes and effects to rank the importance of earlier events that led up to the key event.
Students needed to do a close reading to find evidence to support their arguments.
But a performance task is rarely used in isolation. It is either part of a summative assessment or used formatively in a lesson. In a lesson, a stand-alone performance task requires significant teacher preparation before it can be effective. The teacher has to decide:
whether it supports the lesson objectives,
when to use it in a lesson,
how to provide support if students struggle, and
how to use the scores.
Even the world’s best performance task won’t help students if teachers won’t use it!
In our recent blog post, Instructional Design 101, we provided an overview of several popular instructional design models. One of these, the original ADDIE model, was a linear approach with some iterative features. It evolved to be more cyclical, and spawned many other models. In similar fashion, our linear workflows at Victory have evolved to keep up with rapid changes in our industry.
Watch this video for a quick look at Victory’s vendor and partnership processes. Many projects do not require a partnership process; we originally used it to develop digital products, but it has many benefits for complex print products as well.
The video also references backward design, which we first blogged about in Talking to the Test: The Learning Continuum. In backward design, the initial development focuses on assessments because they determine what evidence we will accept as proof of mastery of the associated learning objectives. Again, not every project warrants a backward-design approach. It makes the most sense for subjects with open-ended user experiences that are hard to assess and hard to teach. We have found that if most of the assessment is traditional, then a traditional development process generally will also be sufficient.