Not having heard something is not as good as having heard it; having heard it is not as good as having seen it; having seen it is not as good as knowing it; knowing it is not as good as putting it into practice.—Xun Kuang, c. 312-230 BCE
Choosing Instruction Modes
Facing a classroom of students at different points on the learning curve is not an easy task. We know students learn better by doing, but they may not all be ready at the same time. That is a key reason to plan how you will manage classroom instruction.
An effective way to start planning is to ask yourself:
When should I teach the whole class?
Should I move some students into group work?
When are students ready to work independently?
The choices you make directly affect students’ learning and the structure and pacing of lessons. The presentation below gives an overview of when whole-class, small-group, and independent modes work best in classroom instruction. Just click “Start Prezi.”
Spanish: The Greatest Impact on Education Outcomes
To achieve a brighter future in the United States, our students will need to be accomplished in math and science, adept in technology, and fluent and literate in English.
There are, however, hurdles to jump. One is language. In 2014, the largest numbers of new immigrants came from India and China. But there is a tremendous diversity in languages spoken, as shown by this graph from the Census Bureau. In each of these 15 cities, at least 140 languages are spoken.
This may make the educational publisher’s task seem daunting. But it is still true that the largest group of non-English speakers in the U.S. is Hispanic. According to 2015 Census Bureau estimates, 41 million native Spanish speakers live in the United States, and another 11 million Spanish speakers are bilingual. By focusing on Spanish, at least initially, a publisher can have the most significant impact on education outcomes.
The Need for Accurate Translation
Research confirms that students learn math, science, and social studies more deeply when taught in their native language. Accurate translations of texts are essential for helping students stay on track as they transition to full English proficiency.
No problem: educational content can be run through a translation program, right?
When I first taught kindergarten, I read aloud The Very Hungry Caterpillar. The kids in my class loved it and we spent hours discussing caterpillars, eating habits, stories, and Eric Carle. Luckily for me, the third-grade teacher down the hall, probably tired of hearing me read the book aloud again and again, offered a wonderful list of books to read aloud. The list included fiction and nonfiction. To this day, I’m grateful (as I’m sure my students are, too!) to this teacher. Over time, I’ve learned from first-hand experience how important a read-aloud is.
Research supports the importance of read-alouds for developing fluency, background knowledge, and language acquisition. Allen (2000) reminds us that those same benefits occur when we use read-alouds beyond the primary years. Read-alouds are one classroom practice that students never outgrow.
As a working author, I’ve spent time researching and thinking about what makes a good read-aloud and how to ensure everyone has the best read-aloud experience. Typically when writing, I read aloud what I have written; reading aloud has often served as an excellent editor, and it’s a good habit for kids to adopt when they write. Here are five key suggestions for an exciting and meaningful read-aloud.
When we develop digital solutions at Victory, we want the end user to experience visuals as intuitively as possible. Because space is always at a premium, visuals and text are equally important. The visuals need to immediately convey information and tell an extended story. When used well, they not only save space on the page (a picture is worth a thousand words) but also inspire confidence in the reader (or should we say “viewer”) by subtly conveying that the overall message will be easy to understand.
Visual literacy is experiencing a resurgence. It is defined many ways in different disciplines, but a good general definition is:
visual literacy: the set of skills a person uses, when viewing or producing images, to interpret them, discover fuller meaning, and make emotional connections.
From our research, there are five important things to consider about visual literacy:
5 Keys to Visual Literacy
Observing elements in complex images and determining how they relate
Developing questions to ask about the images
Understanding how different visual approaches convey different meanings
Identifying the emotional impact of different techniques on the viewer
Interpreting an author’s intent based on the choices made to deliver the message
At Victory, we have been developing many kinds of assessments. Whether it is high-stakes summative testing, a performance-based task, or formative student self-assessment, assessment has a huge impact on classroom instruction. This means assessment literacy is a critical tool for teachers as they develop curriculum and apply classroom strategies.
What Is Assessment Literacy?
What does assessment literacy mean? It may help to consider other types of literacy. Science literacy, for example, means being prepared to understand and discuss science issues, especially as they impact social and political decisions. Visual literacy involves understanding how people use visual information, and choosing a visual approach that supports your goals. Digital literacy is the ability to use technology tools, and choose which tool is most appropriate for a given purpose. In the same way, assessment literacy is the ability to understand the purpose of each type of assessment and then use this knowledge to make better assessment decisions.
From our experience, these are 5 keys to assessment literacy:
5 Keys to Assessment Literacy
Understanding different kinds of assessments and their purposes
Recognizing the best assessment to use for a given situation
Knowing how to prepare students for each kind of assessment
Knowing how to implement each kind of assessment
Getting comfortable with interpreting assessment data
Recently, we premiered our digital lesson on the Boston Massacre at the ISTE and ILA conferences. The lesson was a big hit. It inspired many discussions with technology coordinators and educators on what makes a lesson good for digital literacy. The table below summarizes what we learned, and the video that follows gives concrete examples of how the 5 keys to digital literacy are executed in the Boston Massacre lesson.
5 Keys to Digital Literacy
Give the lesson a beginning, a middle, and an end.
Build each interactive on the previous one so that students gain practice and automaticity in skills and strategies.
Keep the processes for working through a digital lesson consistent.
Include cross-curricular activities that encourage students to employ skills and strategies from other disciplines in new ways.
Have students gather data, analyze it, and apply 21st-century skills.
In this blog post, we’ll explain why we expanded the performance task into an interactive lesson with embedded performance tasks. This is really an evolution, not a revolution.
Here is a sneak preview of the lesson:
What Was Missing in the Original Task?
Our original Boston Massacre performance task stood out in several regards:
It developed critical thinking through the analysis and comparison of key characters.
Students evaluated multiple causes and effects to rank the importance of earlier events that led up to the key event.
Students needed to do a close reading to find evidence to support their arguments.
But, a performance task is rarely used in isolation. It is either part of a summative assessment or used formatively in a lesson. For a lesson, providing a stand-alone performance task requires the teacher to do a lot of work before it can be effective. The teacher has to decide:
whether it supports the lesson objectives,
when to use it in a lesson,
how to provide support if students struggle, and
how to use the scores.
Even the world’s best performance task won’t help students if teachers won’t use it!
In our recent blog post, Instructional Design 101, we provided an overview of several popular instructional design models. One of these, the original ADDIE model, was a linear approach with some iterative features. It evolved to be more cyclical, and spawned many other models. In similar fashion, our linear workflows at Victory have evolved to keep up with rapid changes in our industry.
Watch this video for a quick look at Victory’s vendor and partnership processes. Not every project requires the partnership process; we originally developed it for digital products, but it offers many benefits for complex print products as well.
The video also references backward design, which we first blogged about in Talking to the Test: The Learning Continuum. In backward design, the initial development focuses on assessments because they determine what evidence we will accept as proof of mastery of the associated learning objectives. Again, not every project warrants a backward-design approach. It makes the most sense for subjects with open-ended user experiences that are hard to assess and hard to teach. We have found that if most of the assessment is traditional, then a traditional development process generally will also be sufficient.
Victory’s spinoff metacog was just featured in a blog post by Databricks, a company founded by the team that created Apache Spark, a powerful open-source data processing engine. See the Databricks blog post below.
metacog has been hard at work adding new capabilities to its learning analytics platform while enhancing existing ones. If you offer subscription-based products, you know that your customers expect continuous improvement. With metacog, we partner with you to deliver new capabilities in deep learning analytics that you can easily integrate into your products to generate new data-driven business models and revenue streams.
Why data analytics for adaptive and competency-based learning is so challenging
You may have seen many companies offering data analytics applied to learning products. If you look closely, most of the time what is offered is “administrative-level” data and simple scoring data:
Time-on-task data – How long did learners use the interactive?
“Attendance” data – Did learners participate?
SCORM-compliant scores reported to a learning management system (LMS) – How well are learners doing?
Simple score reports – How many right, how many wrong?
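Part of the point is how little computation these “administrative-level” metrics require. As a minimal sketch (the event format, field names, and functions below are illustrative assumptions, not metacog’s actual API), here is how time-on-task, attendance, and a simple score report might be derived from raw event logs:

```python
from datetime import datetime

# Hypothetical event log: (learner_id, event_type, timestamp) records such as
# a learning platform might emit. All names here are illustrative only.
events = [
    ("ana", "start",  "2016-03-01T10:00:00"),
    ("ana", "answer", "2016-03-01T10:04:00"),
    ("ana", "finish", "2016-03-01T10:07:30"),
    ("ben", "start",  "2016-03-01T10:01:00"),
    ("ben", "finish", "2016-03-01T10:02:10"),
]

def parse(ts):
    return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%S")

def time_on_task(events):
    """Time-on-task: seconds between each learner's first and last event."""
    sessions = {}
    for learner, _, ts in events:
        t = parse(ts)
        first, last = sessions.get(learner, (t, t))
        sessions[learner] = (min(first, t), max(last, t))
    return {learner: (last - first).total_seconds()
            for learner, (first, last) in sessions.items()}

def attended(events, roster):
    """'Attendance': did each learner on the roster generate any events?"""
    seen = {learner for learner, _, _ in events}
    return {learner: learner in seen for learner in roster}

def score_report(answers):
    """Simple score report: how many right, how many wrong."""
    right = sum(answers)
    return {"right": right, "wrong": len(answers) - right}

print(time_on_task(events))                    # seconds per learner
print(attended(events, ["ana", "ben", "cal"]))
print(score_report([True, False, True]))
```

None of this touches the learner’s reasoning process; it only summarizes when events happened and whether final answers were right, which is exactly the limitation the next paragraph describes.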
It turns out that in order to improve anything, you have to be able to measure it, but so far in education we have been measuring the wrong thing – the final answer.
This explains why scoring is the key issue. In the past, most open-ended assessments had to be human-scored, which greatly reduced how often teachers and professors assigned them. Yet it is open-ended tasks that best assess a candidate’s ability to perform well in today’s job market.