Archive for Experiential Learning

Thursday, December 13th, 2012

The Rise, Fall, and Re-creation of the Counter-terrorism Simulation (Part 2)

Part 2: The Re-creation

This isn’t a joke!  They are learning!  Aren’t they?
Try this. Watch this three-minute video about the Simulation.  Make a note every time someone mentions learning, learning objective, or outcomes for the students. Make another note when someone says something about a feature or a technology we used.  Who wins?

“Were the students learning?” Further, if they were learning something, how would we prove it?

I talked with one of the faculty members here at the College of Law (CoL) and asked her what was going on with the Simulation. She said that although alumni were familiar with the Simulation, it got a lukewarm response every year – like a soap opera the students were acting in. I mean, we were streaming it online and had documentaries made about it, but there was nothing that outlined what the students were learning through the Simulation. What were the objectives? What were the outcomes? What was the rubric?

What are the students learning?  No one could answer that definitively. Sure, we could make things up. We could say they are learning decision making in a high-pressure environment. We could say we’re operating under the situated cognition theoretical framework. We could say they are learning valuable decision-making and communication skills. Fine. They do that through the course of life in law school, don’t they?

Man this bugged me. We invested so much time, thought, energy, and ideas into the Simulation. We wanted this to be successful for EVERYONE involved – all stakeholders from the students to the donors.

We had to stop and reflect as a group. We needed to push the reset button. We had to identify problem areas and address them before we made the simulation “bigger and better.”

What should come first: realism through technology or learning objectives?
We needed to change the model. We equated high technology usage and realism with good learning outcomes. We never stopped to ask ourselves whether the technology facilitated a good learning environment for the students, or was just something extra to add to the pomp. What learning purpose does all this technology have?  We had the right idea on some things… the reporters were a good mechanism for feedback to the students as they “learn” in the simulation, but good learning environments don’t happen by chance.  They happen through good planning.  By setting out your learning objectives in advance, you can be assured that the technology you implement has a direct effect on learning.

The class structure was wrong
The class was virtually all theory and readings. The students went through the chapters in the book, talked Socratic style, and learned and discussed terrorism in various capacities. Then, two weeks before the end of the semester, the Simulation was dropped on them.

What’s wrong with that?

No opportunity to practice.  It’s like reading a book about Vince Lombardi and then playing a football game for the final test. Students spent 99% of their time in class learning, taking notes, and listening, and no time practicing and performing the skills important to doing well in the Simulation.  We invested all this time and energy into using technology to create a realistic scenario, but we didn’t even assess whether this technological environment was conducive to learning based on what they were taught. Shouldn’t the students have an opportunity to practice the skills we were grading in the Simulation? How much of the Simulation was them trying to “survive,” and how much was an actual test of whether or not they were doing it right?

What kind of changes did we make?
Alright, so instead of thinking of ways we could make it more real or bigger & better for next year, we put out an analysis of what we thought we should have.  Instead of finding ways to project outwards, we decided to do some self-reflection.

  • More quantifiable outcomes: The Simulation is a highly qualitative event. There’s so much going on, it’s hard to objectively quantify student outcomes during the event. We’d like to facilitate an environment in which the students can be quantifiably rated on their performances – something like a performance score.
  • More practice with relevant skills and constructs: Because the students’ only exposure to the Simulation environment occurs during the Simulation at the end of the semester, there isn’t time allocated for them to identify and practice the skills necessary to succeed. We want to give the students more time to develop the skills directly relevant to the Simulation.
  • More formative feedback on student progress: Students learn best with appropriate feedback.  By providing a formative structure for feedback, students can further develop their skills in areas where they are deficient. This will give the students an opportunity to continue to work on the skills relevant to the Simulation, and carry those skills with them into the workplace.
  • An overall assessment of student skill performance: By providing the students with an aggregation of the quantifiable scores along with the constructive qualitative feedback, students will essentially have a formative assessment report that provides insight into their strengths and weaknesses and lets them take the necessary steps to work on their performance in the main Simulation.

These changes came about:

  • Breakdown of skills and constructs: We’ve identified four skill areas necessary for successful performance in the Simulation. The four primary skill areas are: decision making, teamwork, information gathering and analysis, and advocacy and articulation.
  • Mini-simulations to test/reinforce relevant skills & constructs: For the four identified skill areas, we’ve created four mini-simulations, each targeting development of one of these skills. Each mini-simulation is approximately one hour in length and is developed in parallel with the coursework. This allows the students to work on these specific skills prior to the main Simulation – in a Simulation context.
  • Formative feedback given to students pre-Simulation: Performance rubrics have been created for each identified skill. With the rubric, we can provide two different types of useful feedback for the students as they work through the mini-simulation. First, we can provide them with quantifiable information (a score) on their performance as it relates to the rubric. Second, qualitative feedback is provided for each criterion of the rubric.
  • Assessment reports were created and given (feedback): After each mini-simulation, the student is given a printed report that aggregates the quantitative and qualitative feedback provided by the raters of that mini-simulation. This clearly outlines the student’s performance and allows the student to identify and improve on weaknesses. It is also the basis for individual meetings the students schedule with the professor.
  • Main simulation changed to become more efficacious: In order to focus on the quality of the learning experience, we’ve made some changes to the main Simulation. Instead of one giant nine-hour simulation, we are separating the students into three groups. Each group participates in a four-hour main Simulation. This levels the importance of each role within the simulation and provides a better opportunity for the students to be rated on their performance. Each of the three groups runs through the same simulation scenario, so in addition to within-student comparisons, the raters can also provide between-group comparisons of performance.
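The rubric-based feedback pipeline described above could be aggregated along these lines. This is a minimal Python sketch, not our actual system: the four skill areas are from the post, but the 1-5 scale, the data layout, and names like `build_report` are assumptions for illustration only.

```python
# Hypothetical sketch: aggregating rater feedback from one mini-simulation
# into a per-student assessment report. Scale and structure are assumed.
from statistics import mean

SKILLS = [
    "decision making",
    "teamwork",
    "information gathering and analysis",
    "advocacy and articulation",
]

def build_report(student, ratings):
    """ratings maps a skill area to a list of (score, comment) pairs
    collected from the raters of one mini-simulation."""
    lines = [f"Assessment report for {student}"]
    skill_averages = []
    for skill in SKILLS:
        entries = ratings.get(skill, [])
        if not entries:
            continue  # skill not rated in this mini-simulation
        avg = mean(score for score, _ in entries)  # quantitative side
        skill_averages.append(avg)
        lines.append(f"{skill}: {avg:.1f}/5")
        for _, comment in entries:                 # qualitative side
            lines.append(f"  - {comment}")
    lines.append(f"Overall: {mean(skill_averages):.1f}/5")
    return "\n".join(lines)

report = build_report("Student A", {
    "decision making": [(4, "Weighed options quickly"), (3, "Slow to commit")],
    "teamwork": [(5, "Kept the group on task")],
})
print(report)
```

The same aggregation would also support the between-group comparisons mentioned above, since every group runs the identical scenario against the identical rubric.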

These questions and answers brought about the creation of a simulation design course. Instead of relying on a group of students doing this in their spare time, or even as a research project, we wanted to provide students an environment in which they can learn how to write a good simulation.  One in which the students can learn the skills and not just perform them. One where each activity is deliberate and chosen to reinforce something we feel they should learn. To refer back to our football example, if we feel blocking is a good skill for our students to learn, we should not only talk about it, but have them practice actually doing it.  That way students can receive feedback on their performance, hone their skills, and have an opportunity to implement what they learned in an overall activity. Not only are the students in Amos’ class learning about counterterrorism, the students in the design class are learning how to train effectively. Everyone wins!

We rely heavily on technology to facilitate this course. We use Canvas as a Learning Management System to manage the course schedule and readings.  We bring subject matter experts in through Skype (or even Polycom if they’re advanced) to give lectures on the skills – to assist the students in creating their learning environments.  The students use Google Docs to collaborate on script writing for both the main simulation and mini simulation.

Technology’s new role
Technology is awesome. It can facilitate learning opportunities and learning environments that didn’t exist before. Technology can bring content experts from all over the world to your classroom.  Technology can turn brief writing into a collaborative experience for students. Technology can even be the learning environment for students.

Technology is not a substitute for good pedagogical planning. Technology cannot take a broken class and make it better just because we’re using clickers. Technology needs good planning.  Technology needs good insight. Technology needs to be collaborative. Technologists need to understand what the professor is trying to do with their class. Professors need to be receptive to new technologies that can make once tedious or impossible tasks easy. After all, that’s what technology is there for, right?

The role of technology in the Counterterrorism simulation is now tied to a learning objective. Some examples?

  • Streams aren’t recorded or pushed out to the public for promotional purposes anymore. Rather, the simulation writers now watch the recordings and use them to give feedback to the students. They rely on the archive of the stream after the fact to create this feedback; the external stream is just a convenient byproduct of this need.
  • Technology was created to facilitate feedback and rating of students.  The iPad app isn’t just a fancy promotional piece; rather, it’s something used to streamline the aggregation of scores and feedback, so the simulation writers can get it done efficiently and effectively, and get the feedback to the students in a timely manner.
  • Technology helps us communicate those results to the students and the community – other students, faculty, and alumni. Using websites (authenticated, of course) or even printed reports, we can get information to students quickly, so they can reflect on their performance and prepare questions for skill review. We can also tell sponsors, donors, and alumni how the students are performing in the simulation.  Instead of anecdotal stories, we actually have some hard evidence of student outcomes.
  • Websites like our fake CNN site are now tied to a skill: information management.  We can write the simulation around what the students do (or don’t do) when information is coming at them a mile a minute. Having this allows us to run the mini-simulations in a much more efficient manner.

We’ve also developed a sort of primer that other faculty can use when creating their own experiential learning exercises. This outlines the different stages of planning and also offers ideas on how technology can be used to develop ideas at each stage. This helps us create a sort of “menu” for technology and situations in which it might be best used to facilitate learning in their simulation.

I know this is long and sort of technology – sort of not. Either way, it’s a learning process. Hopefully in telling this story, we can offer it as a thought experiment for you. Hopefully the path that we’ve moved along will help you when you try to integrate technology into your school’s activities. These are the sort of things we’re hoping to accomplish with our Center for Innovation in Legal Education.

Next week is a little more technical.  I have a blog post telling you a little bit about a blended classroom environment we created for a first-year Contracts course.  We wrapped it up with a survey and I have some interesting thoughts and ideas to share with you.  Look for that one on December 18th!

As always, if you have any questions, thoughts, advice, or comments, leave them in the section below or drop me a line, I’d love to hear from you. You can always follow me on Twitter as well.

Tuesday, December 11th, 2012

The Rise, Fall, and Re-creation of the Counter-terrorism Simulation (Part 1)

Sometimes, we get lost in the excitement of technologies. When you’re a hammer, all you see is nails, right?  It’s like that for us.  Every problem or situation we see can be “improved” with technology. Last year at CALI, I talked a little about this… the “shiny object syndrome” we often develop… looking for places technology can be used.

My story over the next few postings will be just about that.  We had carte blanche over a newly created program at the College of Law (CoL) called, “The Counter-terrorism Simulation.”  Like kids in a candy store, we saw this as an opportunity to show the awesomeness that is technology, and make the other faculty come knocking at our door.  Unfortunately, the story wouldn’t play out like that.  I’m going to tell you how it did.

Part 1: The Rise and Fall

So, what is the Counterterrorism Simulation, anyway?
The Counterterrorism simulation is an annual exercise put on by Professor Amos Guiora as part of his Perspectives on Counterterrorism class. Amos came to the CoL in the Spring of 2006 and immediately connected with the IT group to help facilitate this simulation exercise.

Of course, we jumped at the chance.  See, it was our job to help facilitate the realism of the simulation, so the students could have an approximation of what life would be like in a real-life situation. Plus, there’s a major learning theory out there called situated cognition, which holds that students can learn just by taking part in an activity.

It worked like this: about 20 students total take part in a full-day Counterterrorism simulation (8-9 hours).  The class was run like many law school courses are run.  The course was centered around a book Amos has written about Counterterrorism.  Each day was a lecture/discussion/dialogue about whatever chapter was currently being read. At the end of the semester the students were put in this Simulation and asked to “play out” a scenario put before them: dirty bombs, international border disputes, torture, or whatever the hot topic was at the time. The students took on the roles of the Cabinet.  Someone was President, someone was Vice President, Secretary of State, and so on. They made decisions based largely on what they had learned in the chapters they had read.

Amos had experience running simulations during his tenure in the Israel Defense Forces, and found them a good way to train soldiers for situations they might encounter in the field. He found this kind of experiential learning very valuable. He liked it so much, he brought it to Case Western (where he started teaching) and created a simulation to train students for the decision-making environment they’ll find themselves in after they leave law school. When he came on board at the University of Utah, he brought the Simulation with him.

First year was slow, second year exploded…
As you can imagine, the first year was a little dance between Amos and IT. The technology we introduced wasn’t terribly advanced or well thought out. We just wanted to impress him with our application of technology.

The Simulation separated the students into three or four separate groups. Each group was “somewhere else” in the world: France, DC, NY, etc.  We facilitated this by putting the student groups into different rooms. So, some technology implemented was to contribute to the illusion of this distance and facilitate communications. We set up phone lines, video conferencing, pseudo-email accounts, etcetera.

We pre-recorded news clips and burned them onto DVDs to be delivered to the students at preplanned times throughout the scenario. These clips helped progress the storyline of the Simulation. They also let us simulate television news, bringing information to the students the way they might receive it in real life – through video.

The first Simulation was slow and simple, but we already had ideas for improving it the next year.

The second year saw tremendous improvement.  First, Amos recruited a student volunteer to help write out a new simulation scenario. This person consulted with IT to discuss how we could better “tell the story” and “make it more real” for the students.  It was this person’s (singular) job to draft a scenario that would last 8-9 hours.

We injected a little more technology. We did away with the DVDs and created a mock CNN website, complete with embedded video news clips. The website was modeled after the real CNN site to contribute to the realism.  We contracted with the University’s Media Solutions team to help us make our news recordings look more realistic, complete with graphics that CNN might use on-screen during a real time of crisis.

The biggest increase in “technological innovation for realism” was a bit of a mistake. Some students made an off-hand comment to Amos about talking to the CEO of Home Depot. He passed my phone number (my personal phone number, mind you) to the students and said, “The person on the other end can help you.”  Sure enough, they called me and I acted out the role of the CEO. Soon after that another call came asking for the Governor of Maryland. Then the Police Chief of New York.  At the end of the day, I had over 300 missed calls and 70 voicemails.

This phenomenon showed us that if we wanted to increase the realism (which was our goal, right?), we needed “shadow players” – people whose job it was to play these roles and contribute to the activity of the simulation. We had to facilitate web conferencing, cell phones, personal blogs, email addresses, Google Docs, and more.  Technology had to step in to create this illusion of reality for the students. If there was a problem and the storyline called for it, we threw technology at it, then beat our chests about how awesome technology was for the Simulation.

Everything can be better with technology!
We started to command a pretty heavy role with the Simulation design and planning. If the simulation script writer needed something, they often contacted IT to help facilitate whatever it was they needed. We were the go-to engine for virtually every aspect of the Simulation.

Eventually, we started to stream the Simulation out to the masses. We wanted to show the world everything that was going on in the Simulation rooms.  If we told the students that thousands of people could potentially watch their performance, we’d be increasing the reality – creating the “high pressure” environment. Not only that, but technology would help us turn the Simulation into a big event – almost a giant social media event for the CoL.

We were also able to recruit a local community college’s journalism department. To fill the mock CNN website with news, we had journalism students conduct interviews and write stories over the course of the simulation. They used their digital cameras and recorders to capture the conversations, then transformed them into news articles and video clips and posted them online.

So, what are some things we did by year three of the simulation?

  • Mock CNN website (WordPress) to deliver “news” articles and video streams to participants.
  • Rooms all outfitted with video conferencing hardware to facilitate communication between “countries and organizations”.
  • Live video streams of all rooms at all times.
  • Interactive dashboard where external viewers could “peek” into the simulation without intrusion and chat about what they saw.
  • Local and remote shadow players, complete with phones, email, blogs, etc. that could interact with the students
  • Student journalists to report on the activities

Are we doing it right?
Eventually, we started to grow a little too big for our own britches.  We enlisted full-time help from the Media Solutions team to help run the web streams and capture the events. We looked at adding outside groups (hospital, political science, communications, external professional organizations) to help add additional fuel to the “realism stew” we were creating. One volunteer simulation writer became a committee of students. We even had a documentary made about the simulation that won some ABA awards.

This is all good, right?  A win for IT’s involvement in CoL affairs. Legitimacy! Faculty loved us, trusted us, and wanted to collaborate with us – right?!  Our Alumni looked forward to it every year – right?! The fact that we integrated technology with the educational environment made it a huge success that everyone wanted to be a part of – right?!

Well, not quite.  Interest in the program wasn’t quite what we thought it would be. There was even word going around that it was a distracting circus. It’s around this time that this question popped up: “What are the students actually learning?”

That single question made us wonder, what ARE the students learning? And further, how do we know?

Part 2 on Thursday….

Wednesday, December 5th, 2012

Introductions are in order

Hey everyone, my name is Aaron Dewald, and I’ll be the poster for Dec/Jan.

I thought I’d take a minute to introduce myself and let you know what I’ll be writing about over the next month or so.

About Me
I work at the S.J. Quinney College of Law at the University of Utah.  I’m the Associate Director for our brand new Center for Innovation in Legal Education. We’re trying to find ways to introduce a little learning science into the classroom. Combine that with some technology and hopefully it’s a recipe for success for faculty and students.

Anyway, I’ve attended the CALI conference over the past few years or so… and I try to give back by presenting at each one I attend. I’m a lurker on Teknoids (should probably respond more often), but I enjoy reading the dialogue back and forth on the forum.

The blog schedule 
I have a few things that I’d like to share with you over the next 4-5 weeks. Specifically, I’m going to write about:

  • Running simulations.  We run a Counter-terrorism simulation each year here. I’d like to share with you a story about technology and how it drove us to reconsider how we ran the simulation.  I’ll probably do this in two or four parts. It’s a fun little story. Likely going to present about it with our IT Director Mark Beekhuizen at CALI next year.
  • Blended classrooms and first-year courses. We’re just wrapping up a project in which our two Contracts professors created multimedia modules based on the Restatement of Contracts. These were given to students a few days before class to get them up to speed on the restatements. The time saved in the classroom was then used for dialogue about the modules. The multimedia modules were uploaded to YouTube if you’d like to see them. I’ll write about the results of our survey, some interesting YouTube statistics, and implementation, as well as report on the overall findings.  If there’s an appetite, I’ll propose a CALI session where we can learn how to make the modules for your own use.
  • Learning science. I’m a PhD student in learning sciences, so sometimes there’s cool research that comes through that’s worth sharing with others. Most of it will have a technology spin, so it won’t be the academically dry stuff that’s out there. I’d like to introduce a few things and write about their implications for technology in the legal classroom.
I’ll try to keep everything short and to the point.
I’m excited to write for you, share my knowledge with you, and learn from you. Drop me a line, if you’d like!