On December 10, 2015, President Obama signed into law the Every Student Succeeds Act (ESSA). ESSA reauthorizes the Elementary and Secondary Education Act (ESEA), the country’s oldest national education law dedicated to providing equal opportunity to all students. ESSA scales back much of the federal government’s role in public education on everything from testing and teacher quality to low-performing schools. Under ESSA, states do have greater flexibility, but they still are required to submit ESSA plans to the Department of Education (DOE) for approval.
The deadline for submitting ESSA plans to the DOE was last year. By October 2018, all 50 states, plus the District of Columbia and Puerto Rico, had their ESSA plans approved. States are now in the process of implementing them. How are plans that require changes to curriculum, assessment, and accountability policies playing out at the local level? What are the implications for publishers?
Since the enactment of the No Child Left Behind Act (NCLB) in 2002, accountability and assessment in U.S. public education have been based on annual standardized state tests. These tests have been used to determine the effectiveness of states, districts, schools, and teachers in helping students learn.
Public school students in the United States are given more standardized tests, and are tested more frequently, than students in any other country. The growth of testing has fueled the world of assessment and turned it into a billion-dollar industry.
The number of tests has affected English Language Learners (ELLs) in the U.S. who, in addition to the annual standardized subject matter tests, are assessed every year on their English proficiency. Under NCLB, states not only had to identify English learners but also had to create English proficiency standards along with assessments that reflected these standards. Every year ELLs have to take state tests to determine if they are making progress in learning English and in attaining English-language proficiency.
Victory’s spinoff metacog has been busy adding new features and functionalities. When companies look to incorporate metacog into their digital products, they want to know two things:
- How does metacog work?
- What can metacog help me do now that I couldn’t do before?
The answers to both questions lie in our unique approach to guided deep learning: machine learning steered by an understanding of real student outcomes.
In education, deep learning is different from deeper learning, which is a pedagogical approach to instruction. In the world of Big Data, deep learning is an artificial intelligence (AI) approach that creates neural networks in which each “neuron” is a computer processor. This structure mimics how the human brain works in parallel processing.
Deep learning can be very effective, but it has a drawback: neural networks are so complex that we can’t know how they arrive at certain decisions.
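To make the idea concrete, here is a minimal sketch of what "stacking neurons into layers" means. The weights below are hand-picked toy values (real networks learn millions of them, which is exactly why their decisions are hard to trace back to the inputs); none of this is metacog's actual implementation.

```python
# Minimal multi-layer ("deep") network forward pass, pure Python.
# All weights here are invented toy values for illustration only.

def relu(x):
    # A common nonlinearity: pass positive values, zero out negatives.
    return max(0.0, x)

def layer(inputs, weights):
    # Each "neuron" (one row of weights) computes a weighted sum
    # of every input, then applies the nonlinearity.
    return [relu(sum(w * x for w, x in zip(row, inputs))) for row in weights]

def forward(inputs, network):
    for weights in network:  # stacking layers is what makes it "deep"
        inputs = layer(inputs, weights)
    return inputs

# Two stacked layers: 2 inputs -> 3 hidden neurons -> 1 output score.
network = [
    [[0.5, -0.2], [0.1, 0.9], [-0.3, 0.4]],  # hidden layer
    [[1.0, 0.5, -0.2]],                      # output layer
]
score = forward([1.0, 2.0], network)
```

Even in this tiny example, the output score depends on every weight through intermediate values that have no obvious meaning, which hints at the opacity problem described above.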
The educational market is in flux. States are pushing back against both the Common Core State Standards (CCSS) and assessments linked to the CCSS. States and publishers are waiting to see:
- Will funding be directed to charter schools?
- How many more states will drop out of the CCSS?
- Will states want summative, formative, or competency-based tests?
- How will products align to changing state standards?
- What products should states, districts, and publishers develop to meet current market needs?
Many states are moving to create their own standards. How will these new standards affect the educational market? What steps must states and publishers take?
All the uncertainty in the market calls for gap analyses. A gap analysis identifies how current products are aligned to new standards, which standards still correlate, and what’s missing—gaps where new standards are not well covered.
Publishers need to ensure that their products and assessments readily address the changing needs of states and districts.
States and districts need to know how their new standards align to older standards. Since most states adopted CCSS, new standards are usually analyzed against CCSS.
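At its core, the comparison described above is a coverage check: which new standards a product already addresses, which are gaps, and which old alignments no longer point anywhere. A rough sketch, assuming each product is reduced to a mapping from standard codes to the lessons that cover them (all codes and lesson names below are invented):

```python
# Hypothetical standards gap analysis. Standard codes and lesson
# names are illustrative only, not real alignment data.

def gap_analysis(new_standards, coverage):
    """Compare a new standards framework against existing product coverage.

    new_standards: set of standard codes in the new framework
    coverage: dict mapping standard codes to lists of product lessons
    """
    covered = {code for code in new_standards if coverage.get(code)}
    gaps = new_standards - covered            # standards no lesson addresses
    orphaned = set(coverage) - new_standards  # alignments to dropped standards
    return {"covered": covered, "gaps": gaps, "orphaned": orphaned}

# Example: a product aligned to CCSS codes, analyzed against a new state framework.
ccss_coverage = {
    "CCSS.MATH.1.OA.1": ["Lesson 3"],
    "CCSS.MATH.1.OA.2": ["Lesson 4"],
    "CCSS.MATH.1.NBT.1": [],  # listed in the correlation, but no lesson covers it
}
new_framework = {"CCSS.MATH.1.OA.1", "CCSS.MATH.1.NBT.1", "STATE.MATH.1.G.1"}
report = gap_analysis(new_framework, ccss_coverage)
```

A real gap analysis also weighs partial alignment and depth of coverage, which a simple set comparison like this cannot capture, but the output categories are the same ones publishers act on.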
What actions are taken during a gap analysis?
World Languages & Education #1 … an ongoing series
As we discussed in recent posts, the assessment market is in flux. But this is nothing new. The passage of No Child Left Behind (NCLB) in 2002 disrupted the market, and for some companies this turned out to be a boon, as spending on state-level assessments nearly tripled in the next 6 years. As you can see from this graph, state-level assessment spending has decreased since 2008, while classroom assessment spending has continued to grow.
Source: based on Simba data reported by Education Week.
Just as the change in 2002 represented an opportunity for many companies, the shifts we see now may also have a silver lining. And for one area in particular, Spanish assessments, there may be continued growth, especially in the classroom market. Why? Regardless of other shifts that may occur, students whose first language is Spanish make up by far the largest group of English Language Learners (ELLs) in the United States, at 71%, according to the Migration Policy Institute.
Assessment in Education #1 … an ongoing series on assessment
The assessment market is a billion-dollar business. However, the market is in flux and no one can predict what will happen. Here are 9 key indicators to watch in 2017:
1. Uncertainty over the new administration’s educational policies
On the campaign trail, the president said that CCSS had to go and implied that states should control education policy. These two quotes give some indication of what might happen:
“I want local education. I want the parents, and I want all of the teachers, and I want everybody to get together around a school and to make education great.”
“Common core is out!”
2. New secretary of education supports charter schools/vouchers
The Problem: Open-ended Tasks Are Underutilized
We have been blogging about performance tasks for several years. One thing we have heard many times from publishers and educators alike is:
“I’d use performance tasks more often if I didn’t have to score them.”
At Victory, we have been developing many kinds of assessments. Whether the assessment is high-stakes summative testing, a performance-based task, or formative student self-assessment, assessment has a huge impact on classroom instruction. This means assessment literacy is a critical tool for teachers as they develop curriculum and apply classroom strategies.
What Is Assessment Literacy?
What does assessment literacy mean? It may help to consider other types of literacy. Science literacy, for example, means being prepared to understand and discuss science issues, especially as they impact social and political decisions. Visual literacy involves understanding how people use visual information, and choosing a visual approach that supports your goals. Digital literacy is the ability to use technology tools, and choose which tool is most appropriate for a given purpose. In the same way, assessment literacy is the ability to understand the purpose of each type of assessment and then use this knowledge to make better assessment decisions.
From our experience, these are 5 keys to assessment literacy:
5 Keys to Assessment Literacy
- Understanding different kinds of assessments and their purposes
- Recognizing the best assessment to use for a given situation
- Knowing how to prepare students for each kind of assessment
- Knowing how to implement each kind of assessment
- Getting comfortable with interpreting assessment data
In our last blog on performance tasks, we revealed our instructional design approach to creating a social studies performance task, The Boston Massacre.
In this blog, we’ll explain why we expanded the performance task to become an interactive lesson, with embedded performance tasks. So, this is really an evolution, not a revolution.
Here is a sneak preview of the lesson:
What Was Missing in the Original Task?
Our original Boston Massacre performance task was unique in several regards:
- It developed critical thinking through the analysis and comparison of key characters.
- Students evaluated multiple causes and effects to rank the importance of earlier events that led up to the key event.
- Students needed to do a close reading to find evidence to support their arguments.
But, a performance task is rarely used in isolation. It is either part of a summative assessment or used formatively in a lesson. For a lesson, providing a stand-alone performance task requires the teacher to do a lot of work before it can be effective. The teacher has to decide:
- whether it supports the lesson objectives,
- when to use it in a lesson,
- how to provide support if students struggle, and
- how to use the scores.
Even the world’s best performance task won’t help students if teachers won’t use it!
Collaborating on a Solution
When I was executive editor of Weekly Reader, I was often struck by how challenging it was to put together a weekly magazine for the lowest grades. Now, we faced similar challenges in developing a technology-enhanced item (TEI) for first graders. They may be digital natives, but they are still 6 years old.
If you have been following our blog, you have seen our first two TEI prototypes. Our primary-grades team engaged in extensive discussions as they developed a technology-enhanced item for Grade 1. Please watch this 4-minute video and use the Comments to give us feedback.