Wednesday, July 14, 2010

The UT System's Success (to me) With Virtual Worlds

Location: UT Dallas School of Management

In a conversation that began in the comments section of a post at New World Notes, Hamlet Au put this question to me:

"I'm not aware of any education application *besides* the Canadian Border Simulation which have showed concrete, verifiable success metrics for using SL pedagogically. Are you? I'd love to know about them."

I knew that other case studies of good projects exist, often documented in the pages of The Journal of Virtual Worlds Research. I mentioned that in an e-mail to Au and cited some of the following information about how universities conduct assessment.

The University of Texas system's project provides, to me at least, a good indicator of how virtual worlds can succeed in higher education. My reasoning is twofold: first, all participating campuses stepped up to pay their own way for a second year. Second, the project had an assessment plan that meets standards for such work in higher education.

Yet as I'll come to show, even a project that made participants happy might not be a success, depending on who defines the term.

How Institutions of Higher Education Assess Programs and Projects

Success, to the accrediting agency to which my university belongs, involves a few steps. Generally, self-assessment means setting one or more goals, then devising learning outcomes for each goal and outcome measures that can be quantified. The department or program must have the assessment instruments and plan reviewed by outsiders, because there's always the danger of setting goals and measures that mean very little.

The University of Texas Pilot Project

This system-wide project, funded for its first year by a grant, encompassed 15 campuses. Each got 3 islands in SL. All but one school used their SL land for faculty-developed projects. UT provides a page about SL for faculty here and a set of assessment rubrics here.

If you want to see their campuses (it's summer now, folks, so don't complain about emptiness), start at the signboard pictured above in Second Life. The boots and oil derricks are nice Texan touches, too. I hope to get to look at a few projects in depth during the school year. UT Dallas provides a Web portal that gives links to projects and student pictures.

The head of the project, Mario Guerra (SL: Luigi Miles), spoke to the Virtual Worlds Education Roundtable on June 1. Mario took over after the sudden death of the founder, Dr. Leslie Jarmon. The entire transcript of Mario's and AJ Brooks' talk (transcribed from voice to text) is a slog, as all transcripts are, but it's here if you wish to have a look.

At the end of the year, all 14 participating campuses chose to renew at least one of their three islands. At that point, they were paying their own way. That's good news to the participants, but it's not "success" in assessment terms, unless the goal was faculty involvement and the outcome measure was "at least one faculty project per campus in Second Life."

So here is what the assessment page, linked above, notes:

How do you know if using Second Life …

  • enhanced students’ learning and skill development?
  • engaged and motivated students?
  • accommodated students’ learning preferences?

A project that Mario singled out paired a Design instructor at UT Austin with a Marine Science instructor located three hours away. The Marine Science instructor used SL to replace a buggy video-link technology. At the end of the collaboration, the "result was an underwater classroom and a research vessel that could test water, get data."

This simulation has "cool factor" all over it and would be far less expensive than finding an actual research vessel.

But did it enhance learning?

Wait for It: The Glory of Academic Life is Our Slowness

It will be some time before peer-reviewed academic articles tell us whether the projects lowered costs (a goal Mario mentioned at our meeting) and enhanced learning. Moreover, the collaborations enabled in the virtual world were not supposed to be preexisting ones. The one cited for the Marine Science classroom was completely new.

Most faculty and college administrators I know would count this project a success story if student evaluations from classes and programs that used the SL islands were positive. Time will tell whether the other collaborations were as rich and interesting for students' learning.

I do not have Mario's numbers on how much any project saved the UT system, compared to teaching the materials in other ways, if indeed they could be taught at all without a virtual world. This may not satisfy Hamlet's concern for "concrete" metrics, but one must assume that, as at my university, the project at UT would not continue a second year without having passed muster with an Office of Institutional Assessment and a Foundations & Grants Office. Both are likely to be involved in the fiscal management of a project like UT's.

None of these sorts of offices work quickly. The result of this tedium is salutary: unlike the private sector, where a smooth pitch and a nice idea can attract venture capital, a university Dean or an outside granting agency is unlikely to bow to a group of enthusiasts with a cool idea but little planning, foresight, or proof of success.

That's the beauty, and the frustration, of collegiate self-governance. We have a system of checks and balances that can be infuriating to outsiders. But it does work.

Unless a state government mandated it, there would be no reason to release financial records that were reviewed by neutral parties within an institution. They are painfully neutral in my experience and that of colleagues at other schools. Great ideas and programs often cannot be funded, even if you are good friends with everyone involved.

And though course evaluations are not covered by FERPA, the law governing student records, I suspect that an institution might block the release of evaluations beyond what a school chooses to release anonymously.

The Annual Assessment Trap: So What Have You Done Lately?

If you want more funding, success must continue. That's one rationale behind annual measurement, the key to modern assessment. Success need not be "more projects each year," so continuation of at least some projects at each campus would be one key metric that assessment experts would want to see.

That does make the UT story unique in a time of constrained budgets. It surprised me to hear how many UT campuses were opting in for another year in SL, at a time when educators are gradually setting up parallel projects in OpenSim (or simply migrating there), where they face lower costs and fewer restrictions than in SL. I have no figures on the numbers for that exodus, but it's like the Richmond VA humidity, a funk that just hangs in the air when we get together as a group and hear tales of individuals and institutions setting up their own OS servers.

For UT, Mario noted that the ongoing success in SL was due to UT's laying a "foundation." He described this as varying from campus to campus, "maybe 3-4 demos, training for faculty and graduate student instructors: Benefits, what you can do." It also took leadership at the highest level to make this project work. He added that the UT "Vice chancellor was excited [regarding] SL opportunities" and moved things forward.

The voluntary renewals this year seem a good enough metric to indicate "success," though I do not have complete data on what projects occurred at each campus. I'd like to see how it goes in year three. If UT's involvement grows in the number of projects and faculty involved, if not islands, that would satisfy my school's assessment agency as "success."

That would also address the most vexing question Au put to me in a follow-up e-mail:

"My question is whether this proves SL is working with their students, or just that it's working with people (select students and teachers) who are already evangelists for Second Life. I suspect the latter."

This is what I suspect, too. When I put this question to the assembled at last night's Virtual Worlds Education Roundtable, I was just not satisfied with the answers I got. In fact, that merits a different blog post.

I fear that too many of us remain cheerleaders. But if the UT system brings in more designers, faculty, and students, then we'd have a project that grows beyond an evangelical base.

Coda: The Disconnect Between the Eggheads and the Suits

Linden Lab, of course, would measure success as the number of islands paying tier, an irrelevant metric for educators. Hence the disconnect in SL between the company and the eggheads.

It's great if our work makes Linden Lab money, but I couldn't care less, as long as my nonprofit university pays its tier and our House of Usher build does not go "poof."

2 comments:

Peter Miller said...

I think Au probably sees SLeducators as a homogeneous bunch whereas that isn't true at all. There are, for example, edtech researchers (who depend on a degree of novelty for their next grant application so have itchy feet), edtech support staff & librarians, early-adopter faculty (like us), mainstream faculty and contractors.

My guess is that you won't get mainstream faculty involvement without engaging the edtech support staff fully. I attended part of the UoT Undergrad Conference and was very impressed by the progress they had made and the entirely objective, zero-hype way they discussed what had gone right and wrong. Several faculty paid tribute to the support they had had and made it clear that it had been crucial. It's a pity that no transcript or recording was made.

iliveisl said...

yeah baby! hook 'em horns!

wanna know why God makes sunsets a burnt orange colour?

because he loves the Texas Longhorns! woohoo!!!

guess where i went to grad school irl? =D