Archive for Learning Science

Thursday, September 25th, 2014

Materials in motion: Exploring the use of animation in learning

By: Aaron J. Dewald, Originally posted at the Center for Innovation in Legal Education blog

A year or so ago, a discussion started on Teknoids about what defines a video. The implication was that something needed to move to constitute a video. Do narrated slide shows from Keynote, PowerPoint, and the like constitute videos?

Legal education is going through some interesting changes at the moment. Much of it is related to distance education: online, blended, etc. Those who are starting to dabble in it are exploring a whole host of options for delivering this type of education: talking head, narrated presentations, hand drawing on a whiteboard, etc. Coming from a learning science background, I’ve read an array of research that talks about this very thing. It asks questions like: What kinds of digital materials should we be using? What is the impact of a “video” on learning and comprehension? Can including animation (things that move) improve learning outcomes for students?

Let’s set an operational definition of the word “video.” In the learning science literature, you typically see references to multimedia, visualization, or visual representation instead of the word video. For purposes of this blog post, I’ll refer to it as a visualization. Further, visualizations can be static or dynamic. This allows us to refer to the content, and not the vehicle that contains the data. I mean, technically, anything in a .mov or .mp4 format is a video… because it’s a video format, right?

Anyway.

Looking to the Theory

Let’s start off with a definition of the word “visualization.” As I’m sure you can imagine, a visualization for these purposes is something someone could see – something visible. In the literature, visualizations are often separated into two types: static and dynamic. A static visualization is one that does not have a temporal component. Nothing changes over time… it’s just… there. Like a photo or a drawing. A dynamic visualization is one that does include a temporal aspect. This is a key distinction when it comes to making decisions about the type of visualization you’d use in your class.

Let’s chat for a brief moment about static and dynamic visualizations. The materials used in instructional environments need to be carefully chosen to accomplish certain learning goals. The proper materials at the proper times can build learners up to the level of knowledge they need. In the literature, this is called scaffolding. Now, with the nature of the Internet, there are so. many. materials. to. choose. from. The problem is that you really need to think long and hard about what you pick to use in your class, presentation, or other environment where learning occurs. The appropriateness and effectiveness of static or dynamic visualizations depends a lot on what you’re teaching and who you’re teaching it to. Novices demand simplicity and concreteness (Goldstone & Son, 2005). Experts do better with theoretical constructs. Novices can get hung up on seductive details that are irrelevant to what is being learned (Clark & Mayer, 2011). Experts do a better job identifying what is important and focusing on that.

When I first started my PhD program, I thought that dynamic was always the better choice. I mean, TV is dynamic. YouTube has tons and tons of dynamic content. This idea that dynamic is always better almost seems supported by the literature. Hoffler and Leutner (2007) did a meta-analysis of studies that directly compared static and dynamic visualizations. As an aside, a meta-analysis is one where you don’t necessarily conduct your own experiment – rather, you aggregate many experiments and analyze them as a whole. They examined 71 studies that directly compared static and dynamic visualizations and reported a significant advantage of dynamic over static. Specifically, they found this effect when the visualization contained procedural information. Sweet… so blog over, use dynamic… right?

Not quite. Other research wasn’t so certain about this. Specifically, Tversky, Morrison, and Betrancourt (2002) suggested that dynamic visualizations aren’t all they’re cracked up to be. They argued that dynamic visualizations are only useful when the concept being portrayed has a necessary and natural temporal aspect. Basically, if the movement is necessary to depict the “thing” being learned, then it should move. When Hoffler and Leutner did their meta-analysis, they examined an awful lot of studies on the effect of dynamic visualizations in teaching learners procedural information: tying knots, assembling machine guns, physics, hand-bandaging, the circulatory system, dribbling a soccer ball, etc.

Tversky et al. offer two principles that can help decide whether dynamic visualizations should be used: the Congruence Principle and the Apprehension Principle. The congruence principle is rather simple: the visualization should match up with what it’s representing. This is aimed specifically at those who try to be clever and teach something in a way that’s completely unrelated. The apprehension principle is equally understandable: keep it simple.

Additionally, they argue that learning from dynamic visualizations can actually be bad for some learners. Sometimes the animations are too complex, contain irrelevant movement, or move too quickly for students to learn from. Basically, they aren’t always designed very well. These design flaws can impact the cognitive processes learners use. For example, information in a dynamic visualization is transitory. Things move on screen, things move off screen. Pictures and other images are here, then gone. Narration is spoken… then never heard again. This transitory information is difficult for some learners to attend to. Further, asking learners to integrate all of it with other transitory information, as well as with what they already know (prior knowledge), can impose an excessive cognitive burden and divide their attention over the length of the visualization.

Let’s extend this a step further. Sometimes designers or teachers will create learning materials that are dynamic for no other reason than “because they can.” Prezi is a good example of this. Done right, Prezi works well for weaving a non-linear story. Most of the time, it’s good for making people sick and fragmenting their train of thought. Lowe (1999) studied a related effect. Lowe tested learners’ ability to learn from complex weather animations. They were asked to study an animated weather map that taught certain atmospheric conditions. Their post-test examined their ability to recreate the phenomena they saw in the study materials. It turned out the learners focused on the wrong things: they spent more time looking at the perceptually obvious details rather than the conceptually relevant ones. The flashy things distracted the learners from learning. Do you want your viewers to focus on how cool the movements are, or on the information in your presentation?

The learners in Lowe’s study all had one thing in common: they were novices, learners with a limited base of knowledge in the domain. This brings up a valid point. When you choose materials to use in class, be sure you understand the prior knowledge of your learners. Many 1Ls come to law school with very, very limited prior knowledge in any legal domain. Because of this, we should be using simple graphics and images. We should be using relevant (and also simple) animations only if necessary. Just because something looks simple doesn’t mean it’s “childish” or “non-professional.” Remember, research has been done on this, so if someone calls you out, point to the literature! 🙂

What have we learned so far?

  1. Learners don’t always learn best from movement. Static might be just fine, so long as there isn’t relevant temporal information involved in the topic.
  2. Dynamic visualizations can be detrimental to some learners, especially if the visualization moves too fast or asks the learner to integrate too much. Comprehension is a process that takes time.
  3. Know your learner. Novices might struggle a little more with dynamic visualizations than more expert students.

Implications for learners and suggestions for legal education

What does this mean for our students? In today’s environment, with its push toward blended learning, this should be a key consideration.

Many faculty and instructional designers want to get really fancy with their blended learning materials. They realize that forty minutes of a faculty member lecturing to the camera isn’t a good way to teach online. In reaction, some go all out, finding ways to use Flash or After Effects to create elaborate animations or cartoons. Others might feel “inadequate” because all they’re doing is narrating PowerPoint. What we’ve just explored should put those concerns to rest. Anecdotal reports suggest your materials don’t need to be flashy. They can be casual. They can be simple. They don’t need to entertain. As long as you keep the principles we talked about in mind, you’ll do fine!

To wrap it up, law isn’t necessarily filled with lessons where time is a necessary and natural component of the topic. Much of law is interpretation and application on a foundation of relevant facts, rules, statutes, and more. Nothing there necessarily lends itself to being displayed dynamically. This works out in favor of law schools that are seeing budgets slashed and technology departments shrink. There’s no need for elaborate equipment or software to create the visualizations used in class, online, or elsewhere. It does suggest the need for people who understand the multimedia literature, at least to some extent, and I feel law schools will need instructional design or educational psychology help in the near future… so be one of the first and hire an instructional designer.

Thursday, October 10th, 2013

Improving Presentations (or videos, or other multimedia) with Learning Science

Note: This blog post was derived from a presentation I gave at the New York Law School. I was invited by Doni Gewirtzman and Kris Franklin to speak about the impact of learning science on the creation of presentations. I realized there are many nuances to the use of presentations. Some lecture with them, some don’t. Some use only a handful of slides, some use a deck of sixty. These are very general and basic principles that I think cover the widest range of situations. However, there are absolutely many ways in which this can be interpreted and implemented. For example, if we’re walking through language from a contract, then yes… it’s absolutely ok to have a mess of text on the screen. We can do simple things like text fading and using boldface fonts to control attention.

That said, I hope you find this post useful. Many of the concepts are presented at a very basic level, but as we move forward with creating materials for a blended or flipped classroom, considering how we design these presentations and multimedia will be necessary to improve the efficiency of teaching and learning.

Finally, I apologize for the length of this post.  There’s a lot of information to communicate. I’ve provided three multimedia videos that help illustrate some of the points, as well as break up the information into three separate chunks. The content in the videos is the same as the content in the text, so you get to choose which information intake method is most effective for you! 

Plan of Action
This post will cover three primary topics.

First, we’re going to talk a quick bit about what we ultimately want for our learners. I’ll introduce two key terms from the learning science literature, then offer explanations for what they really mean.

Second, we’re going to talk about the impact of these concepts on the presentations we use and the multimedia we expose students to. I’m going to speak specifically in the context of effective presentations, but this is really where the rubber hits the road for other types of learning, like blended or online courses.

Third and finally, based on the theories and principles we’ll have discussed, I’ll offer three practical examples of how this information can improve your presentations and then ultimately benefit your students in their comprehension of learning materials.

A Quick Introduction to Learning Science

1. What is Learning Science?

As mentioned, I’m a student of the learning sciences. At its heart, learning science is:

“An interdisciplinary field that works to further scientific understanding of learning as well as to engage in the design and implementation of learning innovations, and improvement of instructional methodologies.” Wikipedia

Based on this definition, we can see that much of what is going on around us in education is starting to ground itself in a scientific understanding of how people learn, and then to react to those findings by improving the ways we currently instruct AND innovating new ways to promote learning. Blended learning and online learning are examples of these innovations.

2. What can it tell us?
Learning science can tell us what we should do for our students, and back it with science. Obviously, we want our students to learn as much as they can. We want them to engage with information, to process it, to understand it, to reuse it, to apply it. Learning science can advise us on how best to accomplish these learning objectives. The first idea tells us what levels of understanding our students have – how much they know and how they know it – and helps us adjust the way we teach so everyone has an opportunity to learn the same amount. It’s the idea of Depth of Comprehension.

3. Depth of Comprehension
In the grand scheme of things, depth of comprehension is a relatively new topic. In 1988, a professor by the name of Walter Kintsch (who just so happens to be my advisor’s advisor) offered an explanation of learning called the Construction-Integration model. For this post’s purposes, what Construction-Integration is isn’t as important as what it actually means for us. The CI model tells us that there are three levels of comprehension that learners can possess: surface, textbase, and situation model. Let’s take a quick stroll through these.

A surface level of comprehension is one in which the information is learned verbatim. Think of memorizing a poem. If it were in a language you didn’t know, you could probably “learn” it based on sounds and intonation… but you wouldn’t have a clue what it means. Generally speaking, this rarely happens in learning, so we don’t talk much about it in my field.

The textbase level of comprehension is much more prevalent in learning. For ease, we’ll call this the shallow level of comprehension. A shallow level of comprehension occurs when a learner “learns” just enough to paraphrase the main ideas back to you, or just enough to pick out items from a multiple-choice list. It’s primarily what happens when students cram for exams, hoping to hold onto the gist just long enough to take the exam and do relatively well on it.

As you might imagine, this level of comprehension isn’t the best level of comprehension. Why?

A shallow level of comprehension can result in inert, unconnected knowledge. It’s inflexible and unusable in novel situations… situations that many learners encounter in legal education. Further, because it has such a transient existence in a learner’s mind, it’s often out just as quickly as it gets in, leaving it outside of the long-term memory of the learner.

What we want our students to obtain is a situation model level of comprehension. We’ll call this one deep comprehension. A deep level of comprehension is constructed when the learners truly absorb what is being taught. They integrate the information with their prior knowledge. They connect what they’re learning to what they already know. They have a well-developed structure of knowledge, which is a hallmark of deep comprehension. In turn, they’re able to apply this new knowledge to novel situations. It’s flexible, so they can apply it to new problems they encounter, inside and outside of the domain in which it was originally learned. And because it’s integrated with their prior knowledge, it’s much more stable and permanent than when it’s a shallow level.

Unfortunately, as you might imagine, deep comprehension is also the most difficult level to achieve in the classroom. There are some things we can do, however, that give students an opportunity to develop the deep level of comprehension we hope for.

Two Theories that Inform Multimedia Design

1. Introduction to Multimedia Theory
Fortunately, there is one way in which we can immediately impact a learner’s ability to develop a deep level of comprehension – through our presentations and other multimedia. The way we structure our presentations can have a profound impact on the level of comprehension our learners achieve. To better understand this, we’ll turn to two key ideas in multimedia design: the dual-coding theory and the cognitive theory of multimedia learning.

2. The Dual-Coding Theory
The dual-coding theory was developed by Allan Paivio in 1971. At its essence, the dual-coding theory explains that when you provide learners with two ways to learn or encode information, they can understand it more deeply than if you only give them one.

Paivio’s reasoning is that different kinds of information are processed differently, along distinct channels in the human mind.

Verbal information goes in through the ears and is processed as verbal information; visual information is taken in through the eyes and processed as visual information. The brain can then “smush” these together, allowing them to build on each other and provide a better encoding of the to-be-learned information.

So, put into practice, it looks like this. I can give you the word “dog.”

I can give you a picture of a “dog.”

But, as the dual coding theory goes, if I provide you with a picture of a dog as well as the word “dog,” the fact you’re seeing both allows you to encode this information more thoroughly than receiving one alone.

When the verbal and visual contents are complementary and overlap, there can be multiple retrieval cues, which enhance recall and in some cases a deeper comprehension of the information.

One thing to understand, though, is that learners do not have an infinite amount of bandwidth. Humans can only process a finite amount of information in a channel at a time, and they make sense of the learning materials by actively creating mental representations. If we overflow their channels, we run the risk of limited comprehension by our students. The more load we put on them, the more likely it is that shallow levels of comprehension will be attained.

3. Cognitive Theory of Multimedia Learning

A professor from UC Santa Barbara by the name of Richard Mayer went one step further and offered the Cognitive Theory of Multimedia Learning, proposed in 2001. Its basic idea builds on what Professor Paivio found with the dual-coding theory: words and pictures are superior to words alone – with a few caveats.

The Cognitive Theory of Multimedia Learning consists of 10-12 individual principles. These principles have held up fairly well over the last decade, though many of them have been tweaked as more and more people turn to learning with multimedia, such as online classes, hybrid and flipped learning, and so on. I’ve chosen three of the most relevant to talk about: the multimedia principle, the coherence principle, and the redundancy principle.

If you’re really interested in the other principles, which are no less important, you should buy his book Multimedia Learning (Cambridge University Press, 2001). There’s likely an updated version – I know he’s done more work refining the principles now that the blended learning revolution is upon us.

a. The Multimedia Principle
The multimedia principle bridges us from the dual-coding theory to Mayer’s principles.

Basically, it states that learners learn better from words and pictures than from words alone or pictures alone. Recall that this is something Paivio also advocated. Mayer studied how students engage with different types of learning materials and found that students’ comprehension was more shallow when they learned with text alone – they weren’t engaging in the deeper processes that allow them to connect new information with what they already know.

So, you might imagine this comes into play when we have PowerPoint slides that are nothing but text… as far as the eye can see. Our learners aren’t really learning much from that. There’s so much text on the screen, the best they can do is try to “get the gist” of the slide and engage in shallow learning tactics to absorb it all.

b. The Redundancy Principle

The multimedia principle leads directly into the next principle we’ll talk about.

The redundancy principle states that learners learn better from pictures and narration than from pictures, narration, and on-screen text – especially when that on-screen text is the same as what’s being spoken to them. The redundancy principle holds that learners don’t learn well when they hear and see the same verbal message during a presentation. Many people will use their PowerPoint presentation as a script and read from the slides, offering very little new information that isn’t already on there.

This brings up an interesting point that involves the dual-coding theory. Remember how we said that images are processed through the visual channel and sounds are processed through the verbal channel? Well, our channels are not infinite and have a pretty limited capacity. When visual stimuli like pictures are presented, the visual channel picks those up, no problem. When spoken narration is heard, those are picked up by the verbal channel. Text, however, is weird.

Text, you’d think, is picked up by the visual channel, but it’s not. Text actually imposes on the verbal channel. You can test this out by reading something while having someone talk to you. It’s very hard, if not impossible, to take in both streams of information. So, if you’re reading your slides and your learners are reading your slides, you’re consuming a lot of their brain power – brain power they could use to form deeper connections with what you’re talking about, formulate questions, or make inferences. If we overload one of the channels, we run the risk of shallower processing.

The redundancy principle basically asks you to minimize the amount of text you put on slides. Reading is instinctive, and people can’t help it: if you have a mess of text on a slide, learners are going to read it, and if you’re talking at the same time, you’re adding to their load. Learners can’t process both at once. If you’re going to say it, don’t put it on the slide. Just use a word or a keyword, then speak the rest.

c. The Coherence Principle
Finally, we’re going to chat about the last principle, the coherence principle. It states that learners learn best when extraneous pictures, words, sounds and other such things are excluded rather than included. Essentially, less is more.

Things that move, sparkle, or shake can confuse novice learners. They may be distracted by the movement. They may try to figure out what it means and spend some of their processing power on something that isn’t relevant to the topic. This also goes for images that have nothing to do with the topic. When you select images, they should be relevant to the topic to help them understand or to illustrate a point.

So, that fancy animation you found online? Ditch it. The sparkly transitions between slides? You don’t need them. Random clip art or pictures that “look cool”? Leave them out. Anything that doesn’t directly contribute to learning, understanding, and comprehension of the information should be left out. Use movement to draw attention to salient and important details, not to wow the audience.

d. Depth of Learning
The multimedia principles were studied for the depth of learning they provide. Through Mayer’s experiments, he found that adherence to these and the other principles can help provide learners with a deep level of comprehension by making connections more concrete and allowing students to focus on what’s important and see how things relate to one another. It works by freeing up some of their cognitive capacity, allowing them to think more deeply about the topic, and by connecting words to images to show examples of topics or concepts.

While there’s no perfect solution to designing multimedia to always result in deeper comprehension, every little bit can help.

Practical Implementations

All the theory in the world isn’t worth much to you if it can’t be applied to your day-to-day operations. This is where the fun part of my job comes in. I’m going to talk about three ways in which the research we’ve just described can be applied to your presentations.

1. Less Text, More Visuals
This is the one you saw coming. Both the dual-coding theory and the multimedia principles advocate for less text on the screen. This is sometimes hard to do. Many professors I work with use their presentations as a script… in fact, many people do this. While it might make things easier for you, it also distracts the learner. Large passages of text compel a learner to read.

When they’re reading, and listening, and watching, they’re overloaded with information, and some of it will be lost. Since their verbal channel is overloaded by reading and listening, there isn’t much bandwidth left for deeper comprehension; they’re too busy trying to read and write and listen. Typically their comprehension forms at a shallow level.

I suggest using keywords or short phrases as anchors for your talk, then articulating what they mean out loud, and using fewer bullet points per slide. Rather than putting five or six bullet points on the screen and repeating them aloud, use three. The less text you put on the screen, the easier it’s going to be on your learners. When selecting images, find ones that help support your message.

2. Simple Graphics are Okay, Preferred in many Situations
When you’re assembling your presentation, don’t think that you need the fanciest, most detailed or complex images for your presentation.

Research has shown that overly complex imagery can distract learners from what is important. As an example, a lot of research has been done on teaching things like the heart and circulatory system. When researchers used a “real”-looking heart as opposed to a simplistic-looking one, students didn’t perform as well. This is especially important when you’re dealing with novices, and it brings the coherence principle to the forefront. Novices can get stuck on the little details, like the veins and wrinkles of the heart.

As you think about which images to use for your presentations, they don’t need to be, and shouldn’t necessarily be, super detailed images. Silhouettes and stick figures work well. If you do need to show a complex picture, don’t put a bunch of them on the screen at the same time. Choose one that best illustrates your idea and stick with it.

3. Fancy Transitions and Animations Can Distract, not Enhance
I often get asked about transitions. Should I use them to “pep up” my presentation? Which should I use? Which are best? My answer is almost always: none. These are known as seductive details.

When you’re creating a presentation, you want to draw attention to two things. The first is you. You are the information source; you have information that you need to transfer to your students. The second is your presentation. Your presentation supports what you say. It helps learners connect the dots, helps them infer and understand. You don’t want your learners to be distracted by the presentation.

Fancy transitions, movements, and animations will only help your learners understand if the movement helps them understand the content. So if you’re teaching combustion, then showing a moving engine will help learners deeply comprehend the concepts. If the animations or graphics aren’t necessary, don’t include them.

Now, sometimes I do use something people may view as animation. Sometimes I’ll slide in text, pop up images, or something similar. This is done to control the amount of information a learner sees at any given time. If I have five pictures or multiple bullets, I don’t want the learner to have to consume all of them at once and fill up their processing channels. I will pop up what they need to see at any given point in time. Or I’ll use it to draw attention to a point or concept.

Conclusion

As you can see, there are a lot of ways to sprinkle a little learning science into your presentations. Numerous questions branch out from this: How do I select the graphics themselves? What activities should I be doing in class? Is there a “medium” level between textbase and situation model? All of these are good questions, and they could be answered by diving into the research a little more and applying it to your specific and unique case. Either way, education has a lot to learn from learning science, and the quicker we understand, adapt, and implement those ideas, the better off we’ll be.

Tuesday, December 18th, 2012

Blending the First-Year Legal Classroom

Introduction to our Contracts Pilot
With all the talk about blended learning, flipped classrooms, and the like, we wanted to visualize what that might look like in the legal classroom. But first, let’s take a peek at a situation that could be a catalyst for blending a legal classroom.

The first-year courses are widely taught using the Socratic method. It is a very useful and effective way to teach content to learners, and it works really well in smaller groups, where all students have an opportunity to “test” their knowledge on a topic in class. I like to think of this dialogue as a mini formative assessment, where the teacher can probe the knowledge of the student and foster an environment in which that student can scaffold their knowledge, build it up to a level the professor deems satisfactory, and consider all sides of a particular issue.

In today’s world, class sizes of 50+ can often dampen the effect of the Socratic method. As Schwartz said in his 2001 paper titled Teaching Law by Design, students are asked to learn vicariously when they aren’t “on call”. Vicarious learning happens when the students in the class imagine themselves in the shoes of the student engaged in Socratic dialogue with the professor.

Many students in class do not take advantage of this “watch and imitate” environment (Schwartz, 2001). In doing so, they miss out on an opportunity to engage with the content and imagine how they might answer the questions posed by the instructor. Further, with computers, iPhones, and iPads entering the classroom in droves, it’s likely the student is anywhere but in the shoes of their peer.

In-class time becomes an issue. It is nearly impossible to hit all fifty students each day in class, but if the faculty member had more time, more students would have an opportunity to be quizzed and tested on their understanding of the topic. With so much content to cover over the course of the semester, faculty find it difficult to “find more time” in class to create more opportunities for dialogue and assessment – or even better – to create activities for the students to put their knowledge into play.

This is where the concept of blended learning can come into play. By taking some of the more rote or routine information that is usually conveyed by lecture or PowerPoint, we can save some time in class that can be used to engage more students, be filled with active learning opportunities, or anything else.

Implementing a “Blended” Legal Classroom
We aimed high. We wanted to try this out in a first-year class and chose Contracts, because two professors (as opposed to four) were teaching the class. They had larger sections (50+ students), and it was easier to implement this pedagogical change with two professors. In all, our first-year student body is approximately 101 students.

Instead of trying to cover all possible areas of content, we chose one specific area that typically lends itself to lecture – the Restatements. Jointly, the two professors reviewed their syllabus and identified roughly 40 videos that could convey the information in the Restatement that’s typically delivered by presentation/lecture in class. The senior faculty members were granted course release time to develop these videos.

The faculty members partnered with an instructional designer (me) who would facilitate development of the multimedia. The process went like this: the faculty members would write a script articulating the information to be conveyed, which typically took 3-4 hours per script. Upon completion and approval, the script was given to the instructional designer, who would review it (as a novice learner) and come up with a rough sketch for the content. Any revisions were made at this point; if the instructional designer had a difficult time understanding the information, the faculty member would work to clarify the script.

This was actually a serendipitous occurrence… expert teachers often have difficulty viewing their content from the perspective of a novice. Since the majority of the learners are novices, my preview of the script was a good test of comprehensibility. If I couldn’t figure out what they were saying, either I wasn’t trying hard enough, or it truly was worded unclearly.

After final, final approval, the script was read and recorded by the instructional designer. The audio was married with the multimedia and produced as a video, which was then uploaded to YouTube. The process from start to finish could take up to 15 hours per video. By comparison, we aimed to keep each video under 10 minutes of total viewable length.

The original aim was to have the modules complete one week before they would be relevant in class. Though we originally stuck to this timeline, the time burden of creating the videos caught up to us, and we wound up delivering them a day or two before they were to be covered. Fortunately, this only occurred during the last two weeks of the semester.

We wanted the students to watch the videos prior to class. Instead of spending 30 minutes lecturing about the Restatements and then discussing them, the students came to class prepared for the discussion. This reduced the time needed in class and also facilitated deeper discussion. The time saved was used throughout the semester for more in-class group work. In-class time was structured on the assumption that students had watched the videos.

At the conclusion of the course, we administered a survey that covered four primary areas of interest: questions about the videos themselves, questions about their use in class, questions about student satisfaction and motivation, and general study habit questions.

Wanna check ’em out? You can see the playlist of Contracts videos on YouTube.

How were they made?
All of the videos were made using Keynote – Apple’s presentation product. Surprisingly (or not), Keynote is a powerful tool for making animations like this. I don’t like to add a lot of extraneous movement to the slides… like the movement you’ll find in crazy transitions. Keynote’s ease of use and availability make it an ideal tool for making multimedia presentations. These presentations could also be used live, in class, to achieve the same effect.

I plan on applying for a CALI slot to walk people through the steps of making their own, from script generation to export of the final movie. It would be geared toward a non-technical audience – there are plenty of programs like iMovie, Vegas, and Final Cut that power users could use to achieve the same outcome.

Survey Results
Of the 101 students who took the class, 69 responded to the survey, split virtually evenly with 34 females and 35 males replying. Here are some very interesting results that came out of the survey:

Regarding video questions

  • Roughly 97% of respondents agreed or strongly agreed that the modules made the Restatement content easy to understand.
  • 10% of respondents agreed or strongly agreed that the length (8:30 on average) was too long. 40% were neutral. This supported our hypothesis that most students would be fine with a length under 10 minutes. A few students noted in their qualitative feedback that some of the videos were too long.
  • Students were mostly neutral (37%) or agreed (36.2%) when we asked whether they wanted a way to ask clarifying questions after watching the module. We asked this in anticipation of adding a message board or discussion forum. This conflicts a little bit with a more direct question later.

Module use in class

  • Students typically watched the modules before class time (49%). Unfortunately, due to unforeseen scheduling (one professor was ahead of the other), the modules were sometimes released very close to class time, if not after it.
  • The previous point was supported by the fact that nearly 85% of the students reported wanting more time with the modules prior to class.
  • Students also reported using the modules as a review after class (70%)
  • Not surprisingly, 42% of the students agreed or strongly agreed that they would rather watch the videos than read about the restatements. 29% were neutral.
  • 50% of the students agreed or strongly agreed that the videos allowed them to pay better attention in class. 31% were neutral. We were very satisfied with this response, because it speaks to the idea that moving the non-interactive content outside of the classroom can facilitate a better learning experience in the classroom.
  • Nearly 60% of the students wish they had a way to assess their knowledge after watching the videos. This question was asked in anticipation of administering the videos with a formative assessment to allow students some idea of their comprehension.
  • Interestingly, over half of the students reported that they wouldn’t have used an online discussion board to talk about the content in the videos.
  • Several questions asked the students if they used the videos as a substitute for outlines or note-taking in class; overwhelmingly, the students replied “No.”
  • Finally, students said they would choose a class that implemented videos over one that does not (85%).

Qualitative Feedback
There were a few common threads through all of these:

  • Contrary to what multimedia theory says, the students wanted me to read the text of the Restatements aloud. They hated the silent time I gave them to read to themselves. Confused? There’s a multimedia principle called the redundancy principle. Basically, it says that if you have a bunch of text on screen and you read it to the viewer, they spend more cognitive energy reconciling what you’re reading out loud with what’s printed on screen. The unfortunate side effect is that they aren’t reading to comprehend, they’re reading to reconcile. This was probably the most surprising to me… and I’m willing to admit that I was wrong. Just proof that what is proved in a “lab” may not be the best thing in real life. If you’re interested in reading more about it, you can pick up the book on Amazon. I think anyone who uses technology to create learning environments, especially multimedia ones like videos and animations, should understand the principles in this book.
  • As stated in the survey, many wanted the videos far ahead of time. This was strongly emphasized in the feedback. Having already made the videos and with a better understanding of their use, this shouldn’t be an issue for future iterations, but it is something to keep in mind if we want to do new courses in the future. We definitely need more lead time.
  • A funny one: Students were tired of “widgets”. A few feedback statements and some verbal feedback (given to me in Torts class) told me they wanted real examples and not theoretical “widgets” as part of the examples. There must be something too theoretical about a widget… something lacking in their prior knowledge. Next time, we’ll use something like iPhones or paintbrushes. Maybe we can make some money with product placement!  Just kidding…
  • The students really, really liked the videos and found them extremely helpful. They noticed that towards the end of the semester we were a little rushed to get them all out… but I thought we stayed on a pretty good release schedule considering the amount of time that went into them.
  • Captioning or script availability – this is a feature on YouTube and might just need to be mentioned in class.

YouTube
Putting these videos on YouTube has been one of the best decisions we’ve made so far. Of course, Utah has the most visits and minutes watched, but even more amazing is the use outside of our school – especially because there was no advertising for it anywhere. Many people found these just by searching around YouTube.

We ended up with 37 videos for a run time of 5:35:28 (335 minutes). Average video length was around 8′ 25″.

Here’s a look at the relevant YouTube statistics for our videos (8/15/2012 – 12/7/2012):

  • 8,373 views; 38,810 minutes watched
  • Top three videos based on views: Promissory Estoppel (618), Unjust Enrichment (598), Statute of Frauds 1 (502)
  • Top three videos based on minutes watched: Unjust Enrichment (3,287), Irrevocable Offers (2,723), Consideration (2,446)
  • Top five countries based on views: U.S. (7,720), U.K. (173), Canada (75), Hong Kong (38), Jamaica (38)
  • Top five countries based on minutes: US (37,446), UK (430), Trinidad (205), Canada (186), H.K. (103)
  • Top four states based on minutes: Utah (21,538), California (3,859), Florida (1,906), North Carolina (1,183)

I’m absolutely thrilled that other states had so many views and minutes watched. People in those states obviously found these videos, used them, and potentially shared them. I don’t anticipate the numbers climbing much higher from here on out, but it will be interesting to check again after finals are done across the nation.

This does show that certain pieces of blended learning can be repurposed into review sets.  Since this is fundamental knowledge, if it’s designed right, anyone can use it to get the shallow, surface level comprehension of the topic.

Thank you, thank you, thank you!
Again, I want to thank Profs. Debora Threedy and Terry Kogan for the tremendous amount of work they put into this project over the course of the semester. After reading the survey and feedback and checking out the YouTube statistics, it’s clear the students – here, nationally, and globally – benefited from their hard work and effort. I really do think we’re onto something and look forward to the next iteration of the project.

Thursday, December 13th, 2012

The Rise, Fall, and Re-creation of the Counter-terrorism Simulation (Part 2)

Part 2: The Re-creation

This isn’t a joke!  They are learning!  Aren’t they?
Try this. Watch this three-minute video about the Simulation. Make a note every time someone mentions learning, learning objectives, or outcomes for the students. Make another note when someone says something about a feature or a technology we used. Who wins?

Were the students learning? Further, if they were learning something, how would we prove it?

I talked with one of the faculty members here at the College of Law (CoL) and asked her what was going on with the Simulation. She said that although alumni were familiar with the simulation, it got a lukewarm response every year – like a soap opera the students were playing in. I mean, we were streaming it online and had documentaries made about it, but there was nothing that outlined what the students were learning through the Simulation. What were the objectives? What were the outcomes? What was the rubric?

What are the students learning? No one could answer that definitively. Sure, we could make things up. We could say they’re learning decision making in a high-pressure environment. We could say we’re operating under the situated cognition theoretical framework. We could say they’re learning valuable decision-making and communication skills. Fine. They do that through the course of life in law school, don’t they?

Man this bugged me. We invested so much time, thought, energy, and ideas into the Simulation. We wanted this to be successful for EVERYONE involved – all stakeholders from the students to the donors.

We had to stop and reflect as a group. We needed to push the reset button. We had to identify problem areas and address them before we made the simulation “bigger and better.”

What should come first: realism through technology or learning objectives?
We needed to change the model. We had equated high technology usage and realism with good learning outcomes. We never stopped to ask ourselves whether the technology facilitated a good learning environment for the students, or was just something extra to add to the pomp. What learning purpose does all this technology serve? We had the right idea on some things… the reporters were a good mechanism for feedback to the students as they “learn” in the simulation, but good learning environments don’t happen by chance. They happen through good planning. By setting out your learning objectives in advance, you can be assured that the technology you implement has a direct effect.

The class structure was wrong
The class was virtually all theory and readings. Students went through the chapters in the book, talked Socratic style, and learned about and discussed terrorism in various capacities. Then, two weeks before the end of the semester, the Simulation was dropped on them.

What’s wrong with that?

No opportunity to practice. It’s like reading a book about Vince Lombardi and then playing a football game as your final test. Students spent 99% of their class time learning, taking notes, and listening, and no time practicing and performing the skills important to doing well in the simulation. We invested all this time and energy into using technology to create a realistic scenario, but we never assessed whether this technological environment was conducive to learning based on what they were taught. Shouldn’t the students have an opportunity to practice the skills we were grading in the Simulation? How much of the Simulation was them trying to “survive,” and how much was an actual test of whether they were doing it right?

What kind of changes did we make?
Alright, so instead of thinking of ways to make it more real or bigger and better for next year, we put together an analysis of what we thought we should have. Instead of finding ways to project outwards, we decided to do some self-reflection.

  • More quantifiable outcomes: The Simulation is a highly qualitative event. There’s so much going on, it’s hard to objectively quantify student outcomes during the event. We’d like to facilitate an environment in which the students can be quantifiably rated on their performances – something like a performance score.
  • More practice with relevant skills and constructs: Because the only exposure to the Simulation environment occurs during the Simulation at the end of the semester, there isn’t time allocated to the students to identify and practice the skills necessary to facilitate a successful Simulation. We want to give the students more time to develop the skills directly relevant to the Simulation.
  • More formative feedback on student progress: Students learn best with appropriate feedback.  By providing formative structure for feedback, students can further develop their skills in areas they are deficient. This will provide the students an opportunity to continue to work on their skills as relevant to the Simulation, and carry these skills with them into the work place.
  • An overall assessment of student skill performance: By providing the students with an aggregation of the quantifiable scores along with the constructive qualitative feedback, students will essentially have a formative assessment report that provides insight into their strengths and weaknesses, and they can take the necessary steps to work on their performance in the main Simulation.

These changes came about:

  • Breakdown of skills and constructs: We’ve identified four skill areas necessary for successful performance in the Simulation. The four primary skill areas are: decision making, teamwork, information gathering and analysis, and advocacy and articulation.
  • Mini-simulations to test/reinforce relevant skills & constructs: For the four identified skill areas, we’ve created four mini-simulations that target development of these skills. Each mini-simulation is approximately one hour in length and is developed in parallel with the coursework. This will allow the students the ability to work on these specific skills prior to the main Simulation – in a Simulation context.
  • Formative feedback given to students pre-Simulation: Performance rubrics have been created for each identified skill. With the rubric, we can provide two different types of useful feedback for the students as they work through the mini-simulation. First, we can provide them with quantifiable information (a score) on their performance as it relates to the rubric. Second, qualitative feedback is provided for each criterion of the rubric.
  • Assessment reports were created and given (feedback): After each mini-simulation, the student is given a printed report that aggregates the quantitative and qualitative feedback provided by the raters of that mini-simulation. This clearly outlines the student’s performance and allows the student to identify and improve on weaknesses. It is also the basis for individual meetings the students schedule with the professor.
  • Main simulation changed to become more efficacious: In order to focus on the quality of the learning experience, we’ve made some changes to the main simulation. Instead of one giant nine-hour simulation, we are separating the students into three groups. Each group participates in a four-hour main Simulation. This will level the importance of each role within the simulation and provide a better opportunity for the students to be rated on their performance. Each of the three groups runs through the same simulation scenario, so in addition to within-student comparisons, the raters can also provide between-group comparisons of performance.

These questions and answers brought about the creation of a simulation design course. Instead of relying on a group of students doing this in their spare time, or even as a research project, we wanted to provide students an environment in which they can learn how to write a good simulation. One in which the students can learn the skills and not just perform them. One where each activity is deliberate and chosen to reinforce something we feel they should learn. To refer back to our football example, if we feel blocking is a good skill for our students to learn, we should not only talk about it, but have them actually practice doing it. That way students can receive feedback on their performance, hone their skill, and have an opportunity to implement what they learned in an overall activity. Not only are the students in Amos’ class learning about Counterterrorism, the students in the design class are learning how to train effectively. Everyone wins!

We rely heavily on technology to facilitate this course. We use Canvas as a Learning Management System to manage the course schedule and readings.  We bring subject matter experts in through Skype (or even Polycom if they’re advanced) to give lectures on the skills – to assist the students in creating their learning environments.  The students use Google Docs to collaborate on script writing for both the main simulation and mini simulation.

Technology’s new role
Technology is awesome. It can facilitate learning opportunities and learning environments that didn’t exist prior. Technology can bring content experts from all over the world to your classroom.  Technology can turn brief writing into a collaborative experience for students. Technology can even be the learning environment for students.

Technology is not a substitute for good pedagogical planning. Technology cannot take a broken class and make it better just because we’re using clickers. Technology needs good planning. Technology needs good insight. Technology needs to be collaborative. Technologists need to understand what the professor is trying to do with their class. Professors need to be receptive to new technologies that can make once tedious or impossible tasks easy. After all, that’s what technology is there for, right?

The role of technology in the Counterterrorism simulation is now tied to a learning objective. Some examples?

  • Streams aren’t recorded or pushed out to the public merely for promotional purposes; rather, the simulation writers now take time to watch the recording and give feedback to the students. They rely on the archived stream after the fact to create this feedback; the external stream is just a convenient byproduct of that need.
  • Technology was created to facilitate feedback and rating of students. The iPad app isn’t just a fancy promotional piece; rather, it’s used to streamline the aggregation of scores and feedback, so the simulation writers can get it done efficiently and effectively and get the feedback to the students in a timely manner.
  • Technology helps us communicate those results to the students and the community, informing other students, faculty, and alumni. Using websites (authenticated, of course) or even printed reports, we can get information to students quickly, so they can reflect on their performance and prepare questions for skill review. We can also tell sponsors, donors, and alumni how the students are performing in the simulation. Instead of anecdotal stories, we actually have some hard evidence of student outcomes.
  • Websites like our fake CNN site are now tied to a skill: information management.  We can write the simulation around what the students do (or don’t do) when information is coming at them a mile a minute. Having this allows us to run the mini-simulations in a much more efficient manner.

We’ve also developed a sort of primer that other faculty can use when creating their own experiential learning exercises. This outlines the different stages of planning and also offers ideas on how technology can be used to develop ideas at each stage. This helps us create a sort of “menu” for technology and situations in which it might be best used to facilitate learning in their simulation.

I know this is long, and sort of about technology – sort of not. Either way, it’s a learning process. Hopefully, in telling this story, we can offer it as a thought experiment for you. Hopefully the path we’ve traveled will help you when you try to integrate technology into your school’s activities. These are the sorts of things we’re hoping to accomplish with our Center for Innovation in Legal Education.

Next week is a little more technical.  I have a blog post telling you a little bit about a blended classroom environment we created for a first-year Contracts course.  We wrapped it up with a survey and I have some interesting thoughts and ideas to share with you.  Look for that one on December 18th!

As always, if you have any questions, thoughts, advice, or comments, leave them in the section below or drop me a line, I’d love to hear from you. You can always follow me on Twitter as well.

Tuesday, December 11th, 2012

The Rise, Fall, and Re-creation of the Counter-terrorism Simulation (Part 1)

Sometimes, we get lost in the excitement of technologies. When you’re a hammer, all you see is nails, right? It’s like that for us. Every problem or situation we see can be “improved” with technology. Last year at CALI, I talked a little about this… the “shiny object syndrome” we often develop… looking for places technology can be used.

My story over the next few postings will be just about that.  We had carte blanche over a newly created program at the College of Law (CoL) called, “The Counter-terrorism Simulation.”  Like kids in a candy store, we saw this as an opportunity to show the awesomeness that is technology, and make the other faculty come knocking at our door.  Unfortunately, the story wouldn’t play out like that.  I’m going to tell you how it did.

Part 1: The Rise and Fall

So, what is the Counterterrorism Simulation, anyway?
The Counterterrorism simulation is an annual exercise put on by Professor Amos Guiora as part of his Perspectives on Counterterrorism class. Amos came to the CoL in the Spring of 2006 and immediately connected with the IT group to help facilitate this simulation exercise.

Of course, we jumped at the chance.  See, it was our job to help facilitate the realism of the simulation, so the students could get an approximation of what life would be like in a real situation. Plus, there's a major learning theory out there – situated cognition – which holds that students can learn simply by taking part in an authentic activity.

It worked like this: about 20 students took part in a full-day Counterterrorism simulation (8-9 hours).  The class itself was run like many law school courses. It was centered on a book Amos has written about counterterrorism, and each day was a lecture/discussion/dialogue about whatever chapter was currently being read. At the end of the semester, the students were put into the Simulation and asked to "play out" a scenario put before them: dirty bombs, international border disputes, torture, or whatever the hot topic was at the time. The students took on the roles of Cabinet members.  Someone was President, someone was Vice President, someone was Secretary of State, and so on. They made decisions based largely on what they had learned in the chapters they had read.

Amos had experience running simulations during his tenure in the Israeli Defense Forces and found them a good way to train soldiers for situations they might encounter in the field. He saw real value in that kind of experiential learning – so much so that when he started teaching at Case Western, he created a simulation to train students for the decision-making environment they'll find themselves in after they leave law school. When he came on board at the University of Utah, he brought the Simulation with him.

First year was slow, second year exploded…
As you can imagine, the first year was a little dance between Amos and IT. The technology we introduced wasn't terribly advanced or well thought out; we just wanted to impress him with our application of technology.

The Simulation separated the students into three or four separate groups. Each group was "somewhere else" in the world: France, DC, NY, etc.  We facilitated this by putting the student groups into different rooms, so some of the technology we implemented was there to contribute to the illusion of distance and to facilitate communication. We set up phone lines, video conferencing, pseudo-email accounts, etcetera.

We pre-recorded news clips, burned them onto DVDs, and delivered them to the students at preplanned times throughout the scenario. These clips helped progress the storyline of the Simulation. They also simulated television news, bringing information to the students the way they might receive it in real life – through video.

The first Simulation was slow and simple, but we already had ideas for improving it the next year.

The second year saw tremendous improvement.  First, Amos recruited a student volunteer to help write out a new simulation scenario. This person consulted with IT to discuss how we could better “tell the story” and “make it more real” for the students.  It was this person’s (singular) job to draft a scenario that would last 8-9 hours.

We injected a little more technology. We did away with the DVDs and created a mock CNN website, complete with embedded video news clips. The website was modeled after the real CNN site to contribute to the realism.  We contracted with the University's Media Solutions team to help us make our news recordings look more realistic, complete with the kind of on-screen graphics CNN might use during a real crisis.

The biggest increase in “technological innovation for realism” was a bit of a mistake. Some students made an off-hand comment to Amos about talking to the CEO of Home Depot. He passed my phone number (my personal phone number, mind you) to the students and said, “The person on the other end can help you.”  Sure enough, they called me and I acted out the role of the CEO. Soon after that another call came asking for the Governor of Maryland. Then the Police Chief of New York.  At the end of the day, I had over 300 missed calls and 70 voicemails.

This phenomenon showed us that if we wanted to increase the realism (which was our goal, right?), we needed "shadow players": people whose job it was to play these roles and contribute to the activity of the simulation. We had to facilitate web conferencing, cell phones, personal blogs, email addresses, Google Docs, and more.  Technology had to step in to create this illusion of reality for the students. If there was a problem and the storyline called for it, we threw technology at it, then beat our chests about how awesome technology was for the Simulation.

Everything can be better with technology!
We started to play a pretty heavy role in the Simulation's design and planning. If the simulation script writer needed something, they often contacted IT to help facilitate whatever it was. We were the go-to engine for virtually every aspect of the Simulation.

Eventually, we started to stream the Simulation out to the masses. We wanted to show the world everything that was going on in the Simulation rooms.  If we told the students that thousands of people could potentially watch their performance, we'd be increasing the realism – creating that "high pressure" environment. Not only that, but technology would help us turn the Simulation into a big event – almost a giant social media event for the CoL.

We were also able to recruit a local community college's journalism department. To fill the mock CNN website with news, we had journalism students interview participants and write stories over the course of the simulation. They used their digital cameras and recorders to capture the action, then transformed the footage into news articles and video clips and posted them online.

So, what are some things we did by year three of the simulation?

  • Mock CNN website (WordPress) to deliver "news" articles and video streams to participants (see the sketch just after this list for how posts like these might be scheduled).
  • Rooms all outfitted with video conferencing hardware to facilitate communication between “countries and organizations”.
  • Live video streams of all rooms at all times.
  • Interactive dashboard where external viewers could “peek” into the simulation without intrusion and chat about what they saw.
  • Local and remote shadow players, complete with phones, email, blogs, etc., who could interact with the students.
  • Student journalists to report on the activities.
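
On that first point: our original setup simply used WordPress's built-in post scheduling, but if you were wiring up a mock news site like this today, the WordPress REST API is one way to queue up "breaking news" at preplanned points in the scenario. The sketch below is only an illustration; the site URL, username, application password, and headline are all placeholders, not our production setup.

    # Illustrative sketch only: placeholder URL and credentials.
    # Schedules a "breaking news" post on a WordPress site so it publishes
    # itself at a preplanned moment in the simulation timeline.
    import requests

    SITE = "https://mock-news.example.edu"         # hypothetical mock-CNN-style site
    AUTH = ("sim-editor", "application-password")  # WordPress application password

    def schedule_post(title, content, publish_at):
        """Create a post with status 'future' so WordPress publishes it at publish_at."""
        resp = requests.post(
            f"{SITE}/wp-json/wp/v2/posts",
            auth=AUTH,
            json={
                "title": title,
                "content": content,
                "status": "future",   # tells WordPress to hold the post until...
                "date": publish_at,   # ...this site-local timestamp
            },
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()["id"]

    # Example: a scripted news item set to appear partway through the simulation day.
    post_id = schedule_post(
        "Explosion reported near the harbor",
        "<p>Details are still coming in…</p>",
        "2012-12-06T10:15:00",
    )
    print(f"Scheduled post {post_id}")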

Are we doing it right?
Eventually, we started to grow a little too big for our own britches.  We enlisted full-time help from the Media Solutions team to help run the web streams and capture the events. We looked at adding outside groups (hospital, political science, communications, external professional organizations) to help add additional fuel to the “realism stew” we were creating. One volunteer simulation writer became a committee of students. We even had a documentary made about the simulation that won some ABA awards.

This is all good, right?  A win for IT’s involvement in CoL affairs. Legitimacy! Faculty loved us, trusted us, and wanted to collaborate with us – right?!  Our Alumni looked forward to it every year – right?! The fact that we integrated technology with the educational environment made it a huge success that everyone wanted to be a part of – right?!

Well, not quite.  Interest in the program wasn't quite what we thought it would be. There was even word going around that it was a distracting circus. It was around this time that a question popped up – "What are the students actually learning?"

That single question made us wonder, what ARE the students learning? And further, how do we know?

Part 2 on Thursday….

Wednesday, December 5th, 2012

Introductions are in order

Hey everyone, my name is Aaron Dewald, and I’ll be the poster for Dec/Jan.

I thought I’d take a minute to introduce myself and let you know what I’ll be writing about over the next month or so.

About Me
I work at the S.J. Quinney College of Law at the University of Utah.  I’m the Associate Director for our brand new Center for Innovation in Legal Education. We’re trying to find ways to introduce a little learning science into the classroom. Combine that with some technology and hopefully it’s a recipe for success for faculty and students.

Anyway, I’ve attended the CALI conference over the past few years or so… and I try to give back by presenting at each one I attend. I’m a lurker on Teknoids (should probably respond more often), but I enjoy reading the dialogue back and forth on the forum.

The blog schedule 
I have a few things I'd like to share with you over the next 4-5 weeks. Specifically, I'm going to write about:

  • Running simulations.  We run a Counter-terrorism simulation each year here. I'd like to share with you a story about technology and how it drove us to reconsider how we ran the simulation.  I'll probably do this in two or four parts. It's a fun little story, and I'm likely going to present on it with our IT Director, Mark Beekhuizen, at CALI next year.
  • Blended classrooms and first-year courses. We're just wrapping up a project in which our two Contracts professors created multimedia modules based on the Restatement of Contracts. These were given to students a few days before class to get them up to speed on the Restatement, and the time saved in the classroom was then used for dialogue about the modules. The multimedia modules were uploaded to YouTube if you'd like to see them. I'll write about the results of our survey, some interesting YouTube statistics, and the implementation, and report on the overall findings.  If there's an appetite, I'll propose a CALI session where we can learn how to make modules like these for your own use.
  • Learning science. I'm a PhD student in the learning sciences, so sometimes cool research comes through that's worth sharing with others. Most of it will have a technology spin, so it won't be the academically dry stuff that's out there. I'd like to introduce a few things and write about their implications for technology in the legal classroom.

I'll try to keep everything short and to the point.
I’m excited to write for you, share my knowledge with you, and learn from you. Drop me a line, if you’d like!