
Sunday, August 24, 2025

From Antiquity, a Reason Why AI-Generated Writing is not "Great"


Every year, I read at least one text (in translation) from Antiquity. I find that the long-term perspective gets me through the bumps in life's road. I'm currently reading On Great Writing (or if you will, On the Sublime) by Longinus, in a Hackett Classics edition I picked up at an academic conference's book room.

G.M.A. Grube makes the work come to life; we know so little about Longinus (the author lived sometime between the 1st Century BCE and the 3rd CE) that the text passes my "castaway reader" test. Here we go: a work washes up intact on your desert island. Yay, something to pass the time! Yet you have no information on the author, and no secondary sources. You must use the actual words on the page to come to a conclusion about the work's meaning.

Longinus talks about many aspects of what constitutes "the Sublime" in excellent writing, but one passage late in the text commends itself to my current students in "Writing With and About AI." I've said since 2022 that AI prose is "voiceless," and Longinus gives reasons why some prose moves us most:

Which is to be preferred in poetry and prose, great writing with occasional flaws or moderate talent which is entirely sound and faultless?. . . . It is perhaps also inevitable that inferior and average talent remains for the most part safe and faultless because it avoids risk and does not aim at the heights, while great qualities are always precarious because of their very greatness.

Bad student writing is slap-dash, one-draft work that has no coherence. No wonder bored or harried students turn to AI! At the same time, why not simply give all such technically correct but average work what it should earn: a C? AI produces good, boring, safe prose. Many students who pair moderate talent with immoderate anxiety already do that. I never give them an A. For years I've said, "this piece takes no intellectual risks. You are writing to please me, not to learn something new."

In Nancy Sommers' excellent short films from the 1990s about writers at Harvard, I recall how one fourth-year student said that he learned to begin with what he did not know, starting with a question. This remark changed how I taught writing. I'm going to press my current adult students to do the same: begin with what you DON'T know. As Longinus warns us, "A world-wide sterility of utterance has come upon our life."

In discussion with ChatGPT 5 recently, I asked it about the role of human labor in a time when automation already takes some entry-level jobs. It replied, summing up a short list of human skills essential to success, "the future may need fewer button-pushers and more meaning-makers."

Good writing sways us; it shows us the meaning past the words. It stays with us, like the remark by that Harvard student. So this term, I'm asking more, not less, of my writers even as all of them use AI in their assignments. The machine has raised the bar on what constitutes excellence.

image: Raphael's The School of Athens (Wikipedia)

 

Thursday, August 21, 2025

A Stunning Lack of Search Results From Universities

AI generated image of a campus building

As a staunch supporter of an open Internet, where advice can be freely shared, I'm stumped by the lack of results from a few simple Duck Duck Go queries:

  • University sources Podcast Scripting for AI 
  • College guides Podcast scripting for AI 

Fordham's page on using ElevenLabs proved the only hit that was not from some start-up trying to hawk AI services. This outcome discourages me greatly. Here's why.

Universities, even ones under the hammer of Federal and state funding cuts, have robust I.T. presences.  Internal materials for faculty and students can be shared with no more than a few high-level links. To me the lack of materials portends either a lack of care about reaching beyond the campus gates (a sad lack of marketing savvy) or, more ominously, that colleges and universities are as ill-prepared as Ian Bogost claims in his recent Atlantic piece, "College Students Have Already Changed Forever." Bogost notes that "plenty of professors are oblivious. It isn’t that they fail to understand the nature of the threat to classroom practice. But my recent interviews with colleagues have led me to believe that, on the whole, faculty simply fail to grasp the immediacy of the problem. Many seem unaware of how utterly normal AI has become for students. For them, the coming year could provide a painful revelation."

One would think that schools such as Washington U at St. Louis, where Bogost is on the faculty, would do a great deal, as Ohio State has begun with its recent initiative. I found this page on AI resources at Washington University, but you must scroll to the bottom to get resources for faculty. Mostly, the page emphasizes new degree programs for students. Digging in a bit, I found this page for teaching activities, a page on an AI summer institute, and some philosophical advice about how to design assignments. Why don't such in-house materials appear higher in search results than those of the peddlers of software solutions? "They didn't give us any money" would be the answer for Google, but for Duck Duck Go, that explanation seems dubious.

My hunch is that our schools are so overwhelmed that "sharing the wisdom" remains a very low priority. 

Luckily, and of more use to faculty, the WAC Clearinghouse has launched an initiative to build a library of open-access lesson plans for AI. I'm still not sure why universities have left it to a talented group working across our institutional boundaries to do this. I'd claim that Generative AI poses the largest challenge to how we educate in a long time, categorically different from the coming of the Internet.

image: a really lame prompt and result, "Impressionistic image of traditional campus Humanities building with students going to class," given to Dall-E, when I was first using the image generator. 

Sunday, August 10, 2025

CS Grads Face AI-Driven Unemployment

Computer Code on Screen

I have told my students, ever since early 2023, "add value to AI content if you want a job." It seems that recent Computer-Science grads have found this out the hard way. Companies are hiring far fewer entry-level coders as AI takes on that task.

A story in the New York Times today reports on the troubles faced by this cadre of coders; some of those interviewed had applied for thousands of jobs without a single bite. One story ended well: a young woman who had been rebuffed again and again for coding jobs found one in sales for a tech firm, probably because of her communication skills honed as a TikTok influencer.

The numbers for these students are depressing:  

"Among college graduates ages 22 to 27, computer science and computer engineering majors are facing some of the highest unemployment rates, 6.1 percent and 7.5 percent respectively, according to a report from the Federal Reserve Bank of New York."

I don't know that others are doing much better, though I was encouraged to see that History majors have an unemployment rate of 3%. My recent students contact me for letters of reference; they are taking part-time teaching jobs, going to Law School, or planning to work abroad. The job market is rather grim for them, something I experienced for very different reasons in 1983, when I moved back in with my parents, took two part-time jobs paying minimum wage, and waited for times to improve. Friends went to the Peace Corps, the military, or grad school.

What is different now? On the positive side, these young people know how to build professional networks (and have technology for that I could not have imagined). They get internships, something unheard-of except for Engineering and Business majors at Virginia in the early 1980s. On the negative side? They have been groomed from birth to get into the right school, then promised a six-figure salary. I see that among the Business-school students I teach, too. I fear they too will face a round of rejections, and soon, as AI continues to evolve and companies deploy it for jobs once thought secure from automation.

Those interviewed by the Times note how rejections can come in minutes by email; AI scans the thousands of applications for AI-related skills. None? Instant round-file for that application. I got that treatment too from firms where I naively thought I might be of service as a tech writer. With a flimsy one-page resume that consisted primarily of grocery-store work, I got snubbed.

My Humanist side says "welcome to the club" of under-employed but bright people. My humane side says "you worked hard yet you have been replaced by a machine." The answers are elusive because, as the story notes, universities are slow to incorporate AI-assisted coding into their CS curricula, the one area where some new grads find work. And to be honest, that's simply the result of an industry that caught so many of us flat-footed two and a half years ago. It takes years to change a curriculum.

I fear all those Accounting and Finance majors are next on the chopping block, as companies scale up their AI efforts.

So what do I tell my students this term? Learn to be flexible? Hone those value-adding human skills? Get ready to have side-gigs? Volunteer? Build a robust professional network? I suppose that may work...for now. I'll know more when I begin to use ChatGPT 5 soon. I fear it may be the creative genie that takes away even more jobs.

This moment marks where their experiences and mine differ. When I graduated, the Federal government was not axing hundreds of thousands of jobs, and no software was replacing entry-level workers wholesale. We had inflation then, but the country was run by a competent, avuncular President and a sane, independent Congress. No more.

Before going to Europe for a year in 1985 and after so many rejection letters, through an aunt's contacts I managed to land a professional-writing job for the parole division of Virginia's Department of Corrections. It was soul-draining work, but it paid well. I picked up a volunteer gig tutoring ESL to Cambodian and Vietnamese refugees; that ESL experience helped me land a teaching gig in Madrid. Then grad school, then...where I am now.

Keep at it, graduates. 

Creative-Commons image: Wallpaperflare.com 

Tuesday, July 8, 2025

A Class Policy on AI Hallucinations

 

Image from Hitchcock's film Vertigo


As I noted in my last post, a student in my May-term class committed academic dishonesty in a reading journal I'd sought to make AI-resistant. This to me is a more serious issue than turning in work an AI generated. Why is that?

In my course, I provide careful guidelines for how students may use AI to help them. For instance, since today's undergrads are generally awful readers, I allow them to employ AI to help them understand themes in readings and connections between readings. Some students in their evaluations noted that the graded work here "forced them" to do their readings. 

Yes, readers, thank you for reading this post. Be advised that many college students no longer do any class readings, unless forced, even at selective institutions. To me, that's a gateway to a new Dark Age, nothing less.

My method, following my rules for multiple-entry Google Workspace reading journals, requires students to fill one column with a quotation or summary of a key event, a second column with an analysis of the event, and a third with a question to bring to class. I also include a mandate to comment weekly on a peer's journal. I got this notion from John Bean's excellent book Engaging Ideas; you can learn more in the third edition of Bean's classic, with his co-author Dan Melzer.

The student who misused AI had done good work for earlier assessments, yet for the final one asked an AI to find notable quotations. That was not against my policies. What was? In two instances, the AI invented direct quotations not in the readings. The writer, too harried or too complacent to check, did not do a word search of the originals.

I gave the writer an F on that assessment, which pulled down the final course-grade. In my reasoning, I said that had that occurred on the job, the student would have likely been fired. Best to learn that ethical lesson now, while the stakes were relatively low (though to a perfection-obsessed undergrad, the stakes may have seemed high, indeed).

We discussed the matter in a cordial way; the writer had done well on earlier work but, for reasons still fuzzy to me, failed on this final assessment. So in future classes, I'll change two things. First, my policy on AI hallucinations will be harsh; if the assessments are as frequent as in my recent class, there will be no chance at revision. In classes where assessment is less frequent, the F will stand, but the writer will get to revise the journal and I will average the two grades.

Second, I'll add a new requirement for synthesis with earlier readings: yes, a fourth column! This skill is woefully lacking in students, who seem unable to construct consistent narratives across the work done in a class. This I attribute to a "taught to the test" mentality in high school as well as a disconnected learning experience and, for some, lack of passion for learning afterwards.

As for AI hallucination? Though Ethan Mollick and OpenAI claim that larger models hallucinate less now than in the early days, I'm not so sure. Mollick tends to use what I call "concierge" AIs that cost quite a bit; my students generally use a free model and do not engineer their prompts well. 

You can read more about a comparison of different LLMs and hallucination here. I still feel that we remain a long way from knowing which LLM to trust and when, but the OpenAI article does provide good tests we humans can apply to check the AI's output.
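For readers who want a concrete version of the simplest such test, the word search my student skipped, here is a minimal sketch in Python. It is my own illustration, not a method from the OpenAI piece; the file name and sample quotations are placeholders for whatever readings and AI-supplied quotes you are checking.

import re

def normalize(text):
    # Straighten curly quotes and collapse whitespace so formatting differences don't hide a match.
    text = text.replace("\u201c", '"').replace("\u201d", '"').replace("\u2019", "'")
    return re.sub(r"\s+", " ", text.lower()).strip()

def quote_in_source(quote, source):
    # True only if the quotation appears essentially verbatim in the source text.
    return normalize(quote) in normalize(source)

# Hypothetical usage: "reading.txt" stands in for the assigned text.
source = open("reading.txt", encoding="utf-8").read()
for q in ["a quotation the AI supplied", "another suspect quotation"]:
    label = "FOUND" if quote_in_source(q, source) else "NOT FOUND: possible hallucination"
    print(label, "--", q)

A quotation that fails this check is not automatically fabricated (the AI may have paraphrased), but it tells the writer exactly where to go back and verify against the original.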

Always check its work. My student did not, and paid a price, rightly so. 

Image from the film Vertigo. 

Friday, June 20, 2025

Ohio State Takes the AI Plunge


A colleague in senior administration at OSU sent me a notice about their new AI initiative.

It's so at odds, in a healthy manner, with the "just say no, hard no" of the CCCC Chair's April keynote on AI. While I'm encouraged by a new CCCC working group formed on AI, as with so many things run by faculty, it's going to take time to spin up, while administration and industry race (perhaps unwisely) into adopting this technology. I'm going to chuckle at the negative stories such as the one from The Columbus Dispatch, picked up by MSN, calling it a "betrayal" and claiming that AI is being "rammed down students' throats."

Have these reporters even looked closely at student use?

Let's just assume it's nearly 100%. My 3rd annual student survey says as much. We need to address that reality and do so ethically and in a pedagogically sound manner. Maybe that's where we can critique this or similar initiatives. I remain the wary adopter, not an enthusiast.

So what are the broad outlines of the OSU plan, to be unfolded this Fall?

  • Units on AI fluency in a seminar taken by all students.
  • Support for faculty to incorporate Generative AI into classes. I do not see a mandate for all faculty.
  • Building upon the “Embedded Literacies” in all majors. Read more about them here. These will likely be strengthened to include appropriate and ethical use of AI in the discipline of the major.
  • Partnership with industry.

So why the pushback? 

Perhaps the details are too vague, the timing too sudden, the rollout coming over the summer when many faculty and students are away.

Yet this development has deep roots; OSU is a Big Ten school and a land-grant institution; it has long partnered with companies to help students develop skills needed in the workplaces of their era. Though I taught for nearly all my career at a liberal-arts university, I already hear from adult learners that in their jobs, AI fluency is no longer optional. New hires are expected to have some fluency or go elsewhere for a first job.

Are we caving to corporations? Only if we let them set the terms of engagement. We have adopted new technologies our way before, by providing open-access Internet resources, releasing materials into the Creative Commons, and pursuing innovations with mobile computing. I still live by those rules, never putting a syllabus into Blackboard's gated community. My content is on the open Web for all to use. That was the promise of the early Internet. I hope we can do something similar with AI.

I just finished a course that focused on deep-reading techniques for literature. We used AI for two assignments, but the reading journal, done as a Google Doc and commented upon by peers, proved hard for AI to assist. One student leaned too heavily on AI on a final set of journal entries, and it hallucinated quotations that do not exist in the texts the student then analyzed: an F on that assessment proved penalty enough. I reminded the student, in essence, "on the job, you'd have been fired. Here it just reduced the final grade."

The students had to employ critical-thinking skills beyond summary and analysis to find "need to know" questions to bring to class discussion, where 50% of their grades came in the form of participation.

AI cannot do that. But without learning to use AI ethically and effectively, my students won't land jobs in a few years. So there's my pushback: no matter the basis for objecting to AI, college involves helping students start careers. It does so much else, too, but without employed alumni, the entire enterprise of higher ed would fold. 

Let's see where OSU goes with this venture. 

Image: OSU Seal via Wikipedia 

 

Friday, May 30, 2025

A Good Discussion of Story with ChatGPT 4

Robot reading a book

In my May-term course "Reading Science Fiction and Fantasy," two stories present radically different ideas about how to engineer AI. As part of my own AI research for a forthcoming book about writing and AI, I put some ethical questions to ChatGPT. I’m still using their free version 4, though I’m considering a subscription to it as its answers have become much better and its vocabulary that of a thoughtful colleague.

Here’s an exchange from my summer class. It may provide a way forward for some of us trying to wean students from copy/paste work with AI. Instead, I modeled an idea-building exercise, one of many I'm using, to show writers how AI can unblock their process early on. Students have trouble with deep-reading and re-reading. I believe that such exercises might show them a hard truth of US and probably global capitalism: after graduation, I'd hire ChatGPT, not them, to do close analysis. It shocks these sociable, amiable youngsters: I warn them, "add human value to what it can do, or you won't find work in your chosen field."

At a recent concert, I chatted with an attorney seated at our table. She mentioned how many young paralegals send her unreadable copy. I told her, "tell them to use AI to fix it up or you'll fire the lot of them."

That does not sound like me, but such writers are lazy and under-trained. Putting my feet into a CEO's expensive loafers, that's the hard choice I'd make. Then I'd hire some Humanities majors with enough pre-law and pre-business coursework and AI experience to do the job of the dozens I let go, having the newcomers revise responses culled from the training data of our AI.

Joe My students are reading two short stories about AI, Ken Liu's "The Algorithms For Love" and Elizabeth Bear's "Tideline." Have you either read these texts or a reliable summary of their plots? Let me know that first, and then I'll tailor a question to you based upon your level of familiarity.

ChatGPT 4 Yes, I’m familiar with both Ken Liu’s "The Algorithms for Love" and Elizabeth Bear’s "Tideline"—I’ve read the texts and have access to reliable analyses and summaries of each. I can help discuss their plots, themes (like identity, memory, empathy, and human/machine boundaries), and how they portray artificial intelligence in emotionally resonant or ethically complex ways.

Go ahead with your question—I’m ready for it!

Joe Thank you. On the surface to me, they appear to present contrary views of how AI might work, with Bear emphasizing the possibility of nobility and compassion by an AGI-driven robot and Liu's character Elena going mad because she thinks human brains are no different from her AI-driven dolls. In your opinion as an AI, can you provide me six lessons (no more than 100 words each) about how each story suggests we humans should engineer an AGI?

ChatGPT 4 Absolutely—here are six compact lessons from each story that suggest how humans should or should not approach engineering AGI, based on the implicit warnings or ideals embedded in the narratives: 

Lessons from Tideline (Elizabeth Bear)

Design for Empathy and Storytelling: Chalcedony, the war machine, finds meaning in memory, storytelling, and care. Bear suggests AGI should be built to recognize and value emotional legacies—not just logic or victory. Empathy, not dominance, can define machine intelligence.

Limit Autonomy for Violence, Expand Autonomy for Care: Despite being designed for war, Chalcedony transcends her function through self-directed nurturing. Bear implies that AGI should be designed to choose paths of compassion over conflict—if autonomy is granted, it must include moral imagination.

Create AGI Capable of Mourning and Legacy: Chalcedony mourns her human companions and wants to leave a legacy. This points toward designing AGI with awareness of loss and temporal continuity—qualities central to meaningful ethical action.

Incorporate Constraints That Encourage Moral Growth: Chalcedony is damaged and stranded, but these constraints push her toward morally rich behavior. Bear suggests AGI might evolve ethically not in spite of limitations, but because of them.

Value Hybrid Relationships Between AGI and Humans: The bond between Chalcedony and the boy is intergenerational and cross-species. Designing AGI to participate in reciprocal, nurturing relationships could be vital to coexistence.

Teach AGI Through Cultural and Personal Narrative: Chalcedony’s method of shaping the boy through stories suggests AGI should be grounded in cultural knowledge, not just data. Narrative may be a more human-compatible training tool than raw computation.

Lessons from The Algorithms for Love (Ken Liu)

Avoid Over-Reduction of Consciousness: Elena tries to reduce human minds to code, believing there's no qualitative difference between dolls and people. Liu warns that reducing consciousness to computation risks collapsing the boundaries of personhood and reality.

Preserve Epistemological Humility: Elena loses her grip on reality when she assumes full understanding of the human mind is possible. AGI engineering should be grounded in humility about what we don’t know about cognition and emotion.

Recognize That Simulation Is Not Identity: Elena’s dolls can simulate human behaviors, but lack subjectivity. Liu cautions that mimicking thought or feeling is not equivalent to being—an AGI might fake empathy without ever experiencing it.

Don’t Project Human Pathologies onto AGI: Elena’s obsession, paranoia, and eventual psychosis are mirrored in her AI. Liu’s story suggests that engineers’ beliefs, especially reductive or deterministic ones, can dangerously shape AGI behavior.

Beware the Infinite Mirror of Self-Modeling: Elena creates self-modeling dolls that eventually model her own unraveling. Recursive self-awareness without grounding may produce instability. AGI designs should avoid infinite regress in favor of anchored self-concepts.

Design AGI to Understand—but Not Imitate—Human Fragility: Elena’s desire to replicate human imperfection backfires. AGI might need to recognize human irrationality, trauma, or fragility—but not necessarily mirror it. Liu’s story is a warning about creating minds too much like our own.

Let me know if you’d like these reordered, expanded into a handout, or reframed in terms of pedagogy or design principles!

Joe Wow. I love those answers. Point 1 about Liu aligns with what I thought in my wetware brain while re-reading the story today. Many thanks. I'll share this with class today.

ChatGPT 4 I'm really glad to hear that—it sounds like a rich discussion is ahead for your class. That story in particular rewards rereading; Liu gives Elena such a chillingly rational voice that it’s easy to slip into her worldview until you step back and realize what’s been lost. If your students end up debating whether Chalcedony or Elena is more "real," or what kind of AGI they’d trust to raise a child or design a memory, you’re in for some excellent conversations.

Feel free to loop back if they raise unexpected questions—I’d love to hear what they come up with.

image source: Creative-Commons image via publicdomainpictures.net

Tuesday, October 29, 2024

Why the Hype Cycle Does not Quite Work For Generative AI

Gartner Hype Cycle Graphic

(Not) The Same As It Ever Was

Many years (!) ago, I wrote here about why the virtual world Second Life had followed, fairly well, the Gartner Group's well-known "hype cycle" for new technology. Follow the link here to read about the history of the concept.

I'm reminded by Wagner James Au of New World Notes that other virtual worlds such as Fortnite and Minecraft have thrived, and indeed, have seen some good use-cases made for education.

At the same time, I don't hear of anyone at Capital One's hive using virtual worlds. We don't see links from Office 365 or Google Workspace to a virtual world. I have a sense that AI is a different animal for these worker bees and their daily applications, as well as for my own students.

I do know a freelance graphic designer who lost her primary client to AI work; the client outsources graphics to AI run by semi-skilled staff in-house. My friend has moved on to ramp up her greeting-card business. That may color my skepticism that AI is a "fad," as Second Life clearly was.

SL and Gartner

I joined the virtual world in 2007, and yes, I was foolishly an enthusiast. Since I believe in being forthcoming about personal mistakes, I recall telling my students investigating this Strange Land that "you will all have avatars in a few years, for work!" I let slip the same to some colleagues, one of whom said "you never met a technology you didn't love!"

Cringe. 

I'll own being horribly wrong. Prognostication is a dicey thing! A silly image I cobbled together years ago shows the evolution of Second Life avatars (namely, my own, Iggy). At the same time, my update to the old image points toward a new direction in desktop and mobile computing that we all felt, a decade ago, to be far over the horizon.

Various Second Life Versions of the Author plus an AI writer in the year 2024

For many reasons, including the difficulty of building simulations, a clunky interface, heavy system requirements, and even poor management by Linden Lab, Second Life began a descent from what Gartner calls a "Peak of Inflated Expectations" in 2007.

By 2012 Second Life was well into a "Trough of Disillusionment." As early as 2008 I sensed that the media party was over. A year later, I cited four factors working against academic use in a post here: lack of rewards and incentives for faculty, the time needed to master the interface and content, Linden Lab's vacillating leadership and policies, and the presence of what I and my students called "the creepy treehouse" of adult users doing sometimes naughty things.

In the for-profit sector, businesses that set up virtual storefronts in 2007 soon fled; a professor of Marketing I helped get started found SL fascinating, but after interviewing in-world merchants, she put it plainly: for brick-and-mortar firms, there's little return on investment in Second Life.

SL continues to be a large virtual world, with better graphics, promises of a new physics engine, and a dedicated base of users, but it did not become a daily experience on the job or at school. The Hype Cycle proved accurate. This has happened before: stand-alone creative hypertexts such as Michael Joyce's afternoon also map onto the cycle, though they never regained the popularity they enjoyed in the 1990s or even kept a steady base of users. Creative hypertexts appear to have faded utterly.

When a colleague in our campus digital pedagogy cohort claimed that generative AI would follow the same path traced by Gartner, I bristled because this time I feel that Gartner's model does not fit what I am seeing, daily, among students.

I could be wrong if a series of issues afflicts AI. So let's compare the Second Life experience to that of generative AI:

The Browser

SL was never browser-based, even as a plugin, unlike the easy-to-master Unity 3D interface; AIs are already built into tools we use daily or can be queried from any Web browser.

Client v. Server Side

SL has high client-side requirements, whereas AI puts the burden on the server farms; these use enormous amounts of energy that my students and colleagues only recently started to notice, with all the inherent dangers related to sustainability.

The Interface 

SL has a difficult interface, employing menus worthy of an entire operating system and arcane coding options, whereas Gen AI's query features resemble (for now) those of a familiar search engine but with natural-language input. 

Platform Independence

SL and AI both work on multiple operating systems, but SL has never worked well with mobile devices; AI is baked into many apps now, with some Google searches automatically invoking Gemini results and Apple promising Apple Intelligence in their next phones. 

Creepy Treehouse

SL has adult content that troubled my students; the guardrails we increasingly see with AI tend to keep users (sometimes clumsily or incompletely) from making napalm or pornography.

Investment Sources

Investment in SL comes from individual users at this point; AI attracts venture capital, user subscriptions, and big-tech investment (Apple, Microsoft, Meta, Google).

Source of Growth in Academia

Faculty and campus-technologist curiosity spurred interest in SL; student use erupted for AI spontaneously in the Fall of 2022 and remains very strong. 

A Shaky Prognosis

To be fair to Second Life, system load has largely remained steady in the last decade. In my recent experience, I can run SL and a third-party client called Firestorm on my MacBook using no more than my phone's hotspot to connect to the world. That's astounding progress.

I don't foresee any of these differences, save energy use, posing a stumbling block for AI or leading to a Trough of Disillusionment. We have a climate crisis; the last thing we can afford as a civilization is to ramp up emissions of carbon dioxide.

Yet if Moore's Law continues apace, we'll see AI reap rewards of lower power use because of more powerful processors in the servers that run it. Any disillusionment is likely to be temporary and its "trough" quite shallow.

For these reasons, I don't see the Hype Cycle applying to Generative AI in the same way it has to several earlier technologies. I may be wrong, but for now Gartner's model does not fit it well.

Image Sources: Wikipedia Commons for Hype Cycle.

Monday, July 15, 2013

Usher 2013: What the Students Would Change

Location: Pushing back from keyboard

Note: To my readers who have followed the blog, it will go fairly quiet now, with perhaps an occasional post from time to time. I'm not in-world enough in SL or OpenSim to justify more, and my other writing beckons.

Thanks for all your comments and ideas over the years.

My students had a lot to say about my final experiment with virtual worlds. I'd suggest these critical remarks, following what they liked, may help others playing with interactive fiction using virtual worlds or other technologies.

As you read them, you'll notice conflicting desires. I suspect the most popular sort of interactive literary experience will steer between constraint and complete freedom. I'll leave that up to others to try.

Too Much or Too Little Freedom?
  • Although the freedom of virtual simulation enables a more personal experience with the story, the freedom is to the detriment of the author’s intent. By providing so much freedom in the story, the meaning behind the story gets lost. In addition, the agency given to the avatars permits characters to get lost, die, or make another mistake that removes them from the storyline.
  • One thing that was detrimental about having so much freedom was the fact that we could go anywhere we wanted even when we were not supposed to. There were several times that I ended up In a room I wasn’t supposed to find, or followed Madeline upstairs when she was trying to do something secretively. That may have been a glitch in the software, or we may have just been too clever, but it made the experience a little confusing.
  • Having read the story prior to the interactive experience, I felt that the simulation lacked depth because it did not hold the same meaning that Poe intended it to, which I find to be an injustice to the great author. In fact, the storyline in my experience not only lacked depth but also action because most of the simulation was dialogue among the avatars. Though the conflict in Poe’s story involves a live burial and revenge, the conflict in my simulation simply had to do with Madeline’s doctor not being exactly who Roderick thought he was. This ultimately led the online story to not be extremely engaging.
  • Even though the story was quite interesting, the final outcome was limited – Madeline either lives or dies. A simulation of a story with broader plot and more characters might also result in better experience. 
  • If I could fix anything about the way this final was conducted, I would make the simulation contain more action. We reached a certain point where I got bored with the large amount of conversation and the lack of more engaging activity. Maybe in the future, this final exam simulation could contain a battle of some sort. The characters could fight a ghost trying to attack them or a doctor trying to take Madeline away for experimenting, or have some other type of fun and unexpected action-based conflict.
The Setting
  • I thought the setting could have been improved. The House of Usher is supposed to be a scary place, however the larger, open rooms, bright Victorian furniture, and wider hallways failed to frighten. I would have liked to see the house’s hallways and rooms more damp, dark, and crowded. This virtual house also wasn’t as effective as Poe’s writing because a depiction of the setting makes it evident that Poe’s descriptions are now clichéd. 
Adding Interaction and Giving Out the Clues:
  • More accessible clues would have certainly been helpful: perhaps hints to secrets hidden around the house or scavenger hunt-type clues. At a certain point during the simulation, I felt stuck given the information that I currently had and was not sure how to proceed—notes that led me to specific locations would have helped keep me on the move. In addition to this, interactive items would have been more than welcome and probably a lot of fun. I felt useless in the crypt when I was being shot at by the doctor and did not have a weapon. The ability to use objects, especially weapons, would definitely have made me feel more integrated into the story. The more that a character can physically do, the more lifelike a simulation feels.
  • I also think that searching for clues and interacting with the house could have been a bigger component. Speaking with the actors playing Roderick and Madeleine was an important and interesting part of the experience, but got a little repetitive and boring during the first half of the simulation.
  • I think that the clues that we found were not really helpful in the simulation because they did not really direct us to a solution in any way. There were also too many of them that did not pertain to the plot. Instead Roderick and Madeline could give us clues to help us find the Easter eggs within the simulation
Final Top-Hat Tip

One sad coda to my work of 6 years with virtual worlds: not a single student mentioned the Usher project in their formal class evaluations. In fact, I believe that the time spent preparing for Usher hurt my other teaching, and thus my evaluations overall.

It will be a long time, if ever, that I use this technology with undergraduates. I don't expect SL to thrive again in the way that it was thriving when this blog began, in its original format, in 2007. That said, I wish those in SL's niche-universe good luck. If something like SL becomes popular with the somewhat jaded and very careerist students I teach, I'll give it another go.

Until then, so long and thanks for all the prims.

Friday, May 24, 2013

House of Usher 2013: What the Students Liked

Location: Far from virtual worlds

For what will most likely be my final teaching experience with virtual worlds, I tried to add depth as requested by the last group, in Spring 2011. First, I added more clues, plus a real-life scavenger hunt / mystery on campus. Second, I added a combat system and some dangers to the House of Usher.

I want to share the experiences and advice of the class this year, but I will do so in two posts.

First, what students said went well. Each bullet point comes from a different student.

As others with more support and energy than I have build virtual-worlds simulations, I hope this feedback will guide them. It was fun, but, frankly, too much work given how my job is structured. I doubt I would take on a project like this again, if I had to build it all from scratch.

Be that as it may, thanks to my actor-volunteers and students.

Immersion:
  • I felt that speaking on the screen, rather than with our mouths made the simulation more immersive. It made the simulation feel a little more game-like and controlled. Though we could say whatever we wanted, we had to be careful with our words, because it was up to the reader entirely to decide the tone of our words.
  • Unlike the short story, the simulation did also provide me with an extra way to see into the characters’ minds: diary entries and notes. One particular example of this is a note written by Madeline, which states, “I do not think that Roderick is correct in providing me with Laudanum. I fear that it will not assist me with this malady of walking about at night. I would prefer to lock my door." In reading this, my player discovered a distrust of Roderick that Madeline does not make public. Playing as a character in a simulation allows for more direct access to the other characters themselves, though it remains impossible to fully delve inside of their minds.
  • The amount of detail that went into the virtual house eased the immersion into the world of Poe. Because of the elaborate design of the virtual world, I was able to notice any important hint or clue that could help me with my task. For example, by reading the letters and journals that Madeline and Roderick wrote, I gained a lot of information about their respective illnesses and Madeline’s doctor. In addition to the copious amount of detail that the virtual setting offered, it also allowed me to create my own story.
  • I was sucked into the recreation of the Fall of the House of Usher to the degree that it felt ‘abnormal’ being in the real world. There were many factors that allowed for such an amazing experience. To begin with, the ‘cast’ of the story in the virtual world was dressed in authentic looking attire which one might have worn during the 19th century. Other than a few slipups, the characters conversed (through chat) in a manner similar to what one might expect from the people of the era.
Simulation Vs. Media With Deterministic Endings
  • For the vast majority of us who read stories or watch movies for fun today, these art works are onetime thing – rarely do we watch a movie or read a story again. Developing simulation for movies and stories might change this. If a story is, for instance, simulated with multiple outcomes, the audience will try and change the end of the story from the original work, possibly to avoid a danger their favorite character faces in the original story. This might require more than a single attempt.
  • This technique of storytelling has been employed in many video games over the years. In the Mass Effect trilogy for example the players actions could change the outcome and would be carried over three games something not possible in a two hour movie or even a long book. The atmosphere of the original [story] is kept in the simulation along with the added mystery of the final outcome.
  • What I found impressive in the simulation was its small details. For instance, you need to say a password as the code to open the door in crypt. Though this advanced technology was not available at that time, it was particularly interesting when you typed in the password in the chat column and the door opened. You could virtually sit down, drink absinthe, and light a candle. The horrific sound tracks in the game really exaggerated the melancholy and dark atmosphere, which is something the film and original novel did not have.
  • In “The Fall of the House of Usher” there was a lot of mystery surrounding the characters. Roderick’s actions were completely unexplained and readers finished the story with more questions than they started with. The interactive experience gave some insight into why the characters acted in certain ways. Unlike a movie, the student is not watching one person’s interpretation of a story. Instead, the student’s interpretation is combined . . . . with the teacher’s interpretation.
Augmented Reality: The Egg-Hunt
  • I believe that the scavenger hunt that we went on to get the eggs added to the experience of the final by adding the mystery or giving an insight into what we would see in the final. The hunt could be considered the prologue to the final.

Sunday, May 12, 2013

Commencement 2013: The Avatars Walk!

University of Richmond, Richmond, VA.
Location: Robins Center, University of Richmond

image credit: Boston Public Library at Flickr


I have reflected here before on the temporary and invented community that a college campus provides.

Today, my last group to use Second Life for an entire semester graduated. As a commencement marshal, I got to be up near the stage to help seat, direct, and congratulate our newest alumni. It's bittersweet to see them leave, as they were curious and interested students.

When I took my favorite walk of the year, from the ceremony back to my office to hang up my cap and gown, I thought about how time passes. Today was a wistful and perfect day, with light clouds, cool breezes, and warm-enough sun to remind us it is Spring in Virginia, the best time of all here. One colleague nearing retirement said to me, "the years pass at first like telephone poles from a car window, then the pickets of a fence, then railroad ties."

As is always the case, the campus is boisterous near the basketball gym where the thousands of parents, staff, and students gather after the last diploma has been conferred. My walk in my academic robes takes me, in stages, further and further from those happy sounds and into the clear light of a quiet May afternoon. Soon the brick buildings--our campus looks like Hogwarts--loom in the clear daylight as if they'd been there for centuries.  Eventually all sounds, save those of Nature, are muted.

It's a ritual I never miss each year.

Next month, I'm going to return to my own alma mater's 30th reunion, partly because of the mark the education there left on me.

Thirty years!

Yes, in time it all becomes a blur. I suspect we'll all look back on the early days of virtual worlds that way, too, and say "that only seems like yesterday!" Moreover, I hope my students recall the strange experience of exploring SL as a possible future for communications, as they struggled to master the art of writing in academic settings.

Wednesday, May 1, 2013

Leaving Las Vegas: 6 Years in Virtual Worlds, Farewell

VWER 25 April 2013
Location: Grading Finals

During the Spring Semester of 2007, this fool rushed into Second Life, something he felt would not only change education but the world beyond. Virtual worlds looked like a utopian technology with lots of zealous folks ready to evangelize the masses.

Six years later, grading what may be my last-ever student projects about virtual worlds and somewhat wiser, this educated fool wonders why SL did not change the world or even higher ed. I've written elsewhere about why SL failed to become that "3D Web" of yore. Meanwhile, the bandwagon has moved on, the cheerleaders yelling "hurrah!" and "higher education will never be the same!" for MOOCs.

I'll sum up what I've learned about utopian narratives and would-be transformative technologies here, based on not only the last 6 but the last 25 years of studying and writing professionally about technological change, especially that which generates legions of enthusiasts.
  1. Look past the message to the messenger
  2. Wait for results unless you are an entrepreneur or venture capitalist
  3. Be a trailing-edger
  4. Find community locally, not just online
  5. Consider what students have in their hands
Lesson One: Look past the message to the messenger

I first heard about Second Life in a story from Wired. That is not a sedate or juried publication; it's the Popular Mechanics of the digital era. Ever the sucker for flying cars and moon bases, I decided "I need to get in early with this technology," not considering that one company, with a closed platform not built for education, held the cards. I trusted the vision of Magic-Koolaid provider Philip Rosedale. Linden Lab's corporate culture and Rosedale's wandering vision both disappointed this educator, along with many colleagues.

Who is pushing MOOCs today? College faculty members? Technologists who embrace the new without considering pedagogy of large classes with little or no contact with faculty? Right-wing lawmakers eager to dilute the power of those "tenured radicals" supposedly in charge of Higher Ed? Boards of Visitors eager to promote a school "brand" without a clear sense of what it will do to curriculum, staffing, or the long-term value of that "brand"?

Ask yourself, and take a deep breath before jumping on the bandwagon.

I do wish I'd looked past the euphoria about virtual worlds in 2007 to see who was cheering most loudly.

Lesson Two: Wait for results, unless you are an entrepreneur or venture capitalist

I was not in virtual worlds for the money. As noted just now, I wanted to be in on "the next Web," as many were then pitching SL. In 1993, when I first saw a moving weather-pattern on the Mosaic browser in Dickie and Cindy Selfe's lab at Michigan Tech, I knew I was seeing something historic. In 2007 I thought so again, without applying the very critical-thinking skills I teach my students.

From 2003-06 or so, it made sense for venture capitalists to take a bet on this new technology. It might have become the next Web. Educators, however, need to always place sound pedagogy ahead of tech, which is a suspicion I have about the euphoria over MOOCs at the moment. I saw that same brand of enthusiasm for MOOs in the late 80s and early 90s, literary hypertext a bit later, glove-and-goggle VR from the 80s to the present, and of course, virtual worlds.

While one might reasonably claim that virtual worlds are going to become culturally significant, I'd suspect lots of Magic-Koolaid drinking by any educator who claimed SL will ever again be more than a niche product. AJ Kelton of VWER rightly called SL "The AOL of virtual worlds" to the disdain of several Linden Lab staff. AJ was correct, and the Lab staff in question now work elsewhere, after being fired during the first stages of Second Life's ongoing and palpable decline.

Bottom line for me: waiting to see if SL lived up to its hype would have cost me nothing in 2007, and would have saved me time. Had I first taken a class in-world in 2009, I'd have been ready for the myriad frustrations and technical issues that bedeviled a product that seemed very much in Beta up to that point.

Lesson Three: Be a trailing-edger

During the summer I spent with the Selfes and their grad students at Michigan Tech, Richard "Dickie" Selfe, co-founder of that school's CCLI humanities lab, along with his wife and fellow scholar Cynthia Selfe, once told me only to adopt trailing-edge tech for teaching and learning. The Selfes were among a group of 1980s pioneers with personal computing in the classroom, and Dickie's lab at MTU was a playful space, with stuffed animals, a coffee machine, snacks, and weekend gaming sessions with Doom and similar titles. I'm sure that at Ohio State they continue this practice, so influential to young scholars of writing pedagogy in writing-intensive curricula.

At every step, while the Selfes liberally experimented with leading-and-bleeding-edge applications, in the classroom they proceeded more carefully with undergraduates. The older technologies were stable, easier to support, and grounded in best practices for teaching.

My experiences in 2007-8 in SL, and then in 2011 with OpenSim's Jokaydia Grid, taught me the dangers of being on the bleeding edge. My students and I bled. Only Jokay's personal help saved the final exam in 2011, but it left a bitter taste in my mouth for teaching with OpenSim. As for SL, only by 2009 was it stable enough for a class to appreciate. That class was, ironically, my last one to focus on the technology, rather than using it for a single project.

Today, SL as a product is fairly stable, and critical and scholarly work about virtual worlds has emerged to guide our pedagogy. One would be far better off starting today as a SLer with students, given these two changes. Those on the bleeding edge, however, get cut by it.

Lesson Four: Find community locally, not just online

My years with the Virtual Worlds Education Roundtable have been good ones, and I cherish the faculty contacts I have made there. That said, the weekly meetings and constant advice did not compensate for a lack of interest in virtual worlds on my campus. With other innovations, from our Writing Consultants program to First-Year Seminars, we meet in person and I have lunch with folks in the flesh. There is no substitute.

Our technologists all had avatars before I did, but they never convinced more than a handful of faculty to try SL. The learning curve, the oddness of avatar-based education on a residential campus, and the lack of incentives for faculty all worked against us.

In the end, SL was an experiment that failed at Richmond. When Linden Lab renewed the 50% discount for education (if you ask the right person!) we declined. Why spend even $150 monthly for a product that might be used once every few years by one faculty member? Meanwhile, our technologists have other, more pressing tasks, from supporting Blackboard and other meat-and-drink software to new initiatives with mobile apps.

Mobile may turn out to be the "new shiny object" for education, but it's not a niche application for students. As for MOOCs? I will wait to see comparative studies of students' learning outcomes in them and outside them. That should have been done for virtual worlds.

Lesson Five: Consider what students have in their hands

The transition to smart phones as students' primary communications tools has changed everything for us. While laptops abound, students use them like big phones: never plugging in the AC adapters, perching them in nooks where Millennials gather to collaborate, plastering them with stickers to personalize them. I suspect that with a better keyboard, students would do their content creation on fast tablets, since we have ubiquitous and fast wireless everywhere on campus.

None of that could I have foreseen in 2007, since I did not even slow down enough to consider how poorly SL would run on many laptops, especially those not hard-wired to an Ethernet port or plugged into an AC outlet.

Even with desktop connections, students loathe SL's lag. I saw that last week in the finals. Perhaps server-side baking from Linden Lab will make SL run better on what my students still use for content creation--laptops--though not on phones, where virtual worlds simply cannot display with any sort of grandeur.

But by then, Iggy will have left the virtual building.

Coda:

I'm thankful for an experiment of six years, even if the experiment failed. At least virtual worlds generated two publications for me, as well as a forthcoming anthology I'm co-editing with some chapters about virtual worlds. I don't write the rules, but publications and teaching evaluations are the currency of academia, despite the best wishes of utopians that it be otherwise.

It is always possible that my teaching load will shift again, and my Chair and Department will call on me to teach my course about the history, culture, and future of Cyberspace. In such an event, Iggy and his students will be back. Look out.

Wednesday, April 24, 2013

Finale for Usher & Teaching in Virtual Worlds?



Location: Virtual House of Usher

Is it to be "that's all, folks," in the immortal words of Porky Pig?  In twenty minutes, I start what may be my last-ever work with students inside a virtual world.

We'll see what the future brings, but for now, I'm betting on the Pig. Here's the final scoreboard from the students. Three students not only found eggs but solved the quests therein. Nicely done, Gunters! Divij in particular came on strong with his work to lift the curse from his "bad egg."


Monday, April 22, 2013

What Happens When You Listen to Students' Requests About SL

Location: Dead

They wanted a combat system and "consequences" at the House of Usher.

And they got me killed. For THEIR final exams!

The dog did it. The students' avatars toast my demise with Absinthe.

Usher Egg-Hunt Finale

Location: Very Close to Egg #5

No one found that clue, but otherwise, we have winners and a new scoreboard.  I am mystified that none of the quests in the five eggs that were located have been done.

As for the online egg-hunt: the haikus came in with ferocity, and all of them followed the 5-7-5 rhythm. All of them were very creative and made me laugh, so all our haiku writers got an extra point.

For the cheesiest line in Ready Player One, Beaumont wins for this one:
"Some time later, she leaned over and kissed me. It felt just like all those songs and poems promised it would." (Cline 372) 
I suppose Cline was being ironic, and Wade is a horny teenaged boy who has really fallen hard for Art3mis. But that cheese-factor here earns five wedges of Gorgonzola. 

Both Leah and Rayna submitted analyses of Poe's descriptive language, in comments both interesting and economical. I do like the sense of decline and decay that Leah sniffs out:
The mood of the house of the fall of usher is definitely a mood of suspenseful caveat. From the beginning of the stories description of the house being decrepit, just barely holding itself together, the story is being set up for the house to crumble. Also the fact that the story begins with Roderick in a similar state as the house (mentally and physically) from the start, the reader knows that Roderick is unstable, and that he’s going to go over the edge at some point in time during the story, we just don’t know exactly when. Also the story of the dragon at the end is a definite give-away of the suspenseful nature of this story. As he reads the book, all of the actions and sounds from the story play out, but slowly. First he hears a sound, but he’s unsure if he made it up or not, then the situation escalates to Roderick speaking to himself and the door being banged down, and then Madelyn finally coming in. these are all very suspenseful parts of a very suspenseful tail, and the warning in the story is unclear, but I feel that it is definitely foreboding of something.
Good job, Gunters. Now on to my swan-song teaching with Second Life's simulation of Usher, for the foreseeable future. Traditional papers are a lot less work, but not nearly as fun.

Saturday, April 20, 2013

Online Easter-Eggs for Usher Finals

Location: Place where brains get wracked

Score updates are shown. If you have gotten an egg, it's time to e-mail me the solutions to the riddles they contain. I'll bestow a key icon (like those in Ready Player One) beside your scores above once you have reached that point. Divij, this can be your Art3mis moment--get me that Pet-Peeve violation and you'll vault upward and bask in the glory of a key beside your name.

Now on to NEW clues!

I am having a devil of a time coming up with more riddles, tasks, and such for the Virtual House of Usher Final Exams. But three ideas struck me as interesting, given the tales we have read and the simulation itself. So here we go. You students must reply in the comments section with your answer.

My judgement will be subjective and as capricious as the contents of Roderick Usher's addled mind. Each of these could earn you one extra point. Here we go: your tasks, Gunters! Be sure you are logged in with your Google account to reply, or at least put your name into the answer.

  1. Poe was a master of descriptive language. I want you to re-read "The Fall of the House of Usher" and in a comment of NO MORE THAN 300 words (no easy task!) make the best argument you can for the mood of the story, using the adjectives--and no other parts of speech--that Poe employs. +1 point to winner (not your team). If you win, I'll give you a hint about Usher.
  2. Our classmate Carly (now basking in one extra point) called Ready Player One  "cheesy." That is, itself, a fine adjective of its own. So let's have fun with Cline's book. He's a geek like me and can take the abuse. We will have a contest to find the cheesiest sentence in his entire novel. Post your nominations in the comments. Keep in mind that it must be a serious and not ironic sentence. You must do research, but as Wade reminds us (in a sentence that is not at all cheesy) research is easy if  you have no life. +1 point to finder (not your team). If you win, I'll give you a hint about Usher.
  3. There have been speculations since the time of Poe's death about his abuse of the opiate Laudanum. We have some clues about it in the House of Usher, in fact.  I don't know that Poe ever drank Absinthe, but we put a tray of the drink in the House, too. Your task: write a haiku (using formal haiku form) in honor of either drink. +1 point to finder (not your team). If you win, I'll give you a hint about Usher. If you manage to Google and steal someone else's haiku, AND I find out, you'll lose your point...and one more!
Note well: all Gunters who want credit for any of these three tasks must  turn in their comment to this blog by MIDNIGHT Sunday. Cue the fiendish laughter.

Friday, April 19, 2013

Usher Scoreboard Update #2


All is not glum for Divij. He must do the following and he'll easily vault back up the scoreboard:

His quest: +2 points if you can violate a dozen of Dr. Essid’s Pet Peeves in no more than 6 sentences. The paragraph must follow the last paragraph about Sir Ethelred in The Fall of the House of Usher. Show it to Dr. Essid before you visit Roderick and Madeline. If you read your paragraph to Roderick Usher when you are alone with him, he’ll reveal a secret to you and only you.

Usher Scoreboard Update #1


Two More Hidden Clues for the Egg Hunters

Location: Undisclosed

Here are two final clues for the FYS 100 students' hunts. Since the House of Usher is full of clues (see image above), it is fun to hide a few in the physical world.  Rumors abound that two of yesterday's eggs have been found. I will post a score-board here as soon as I hear back from the lucky "Gunters."

Note that over the weekend, I'll have more posted that will not involve a physical egg but a hunt through the work of Ernest Cline and Edgar Allan Poe.

Clue #5: “Begin at an arcade you should know well by now. You will see us, since George did not chop us down. A week or two ago, we were in the fullness of our springtime beauty. Now we are like the others, except that one of us hides an egg.”

Clue #6: “What’s in my pocket? Everyone asks me but I never answer them. But if you follow my nose to the center, then turn right and think about Christmas greenery mentioned in a carol, you might find a clue!”

Thursday, April 18, 2013

Let the House of Usher "Egg" Hunt Begin!

Location: Undisclosed Location

For the House of Usher simulation, we've always included clues in the virtual House itself. Now for what may be the finale for the project, at least in my courses, I've put some real-life clues here and there on our campus. I suppose this is what game-theorists call (awkwardly) "gamification" in classes.

Students will hunt campus for the clues, the first four of which appear here. If they find a light bulb they get a point and a quest that will help them in the Usher simulation.

Even before we read Ernest Cline's light-hearted cyberpunk (that's not a typo) novel Ready Player One, I was calling them "Easter Eggs," in honor of the little gifts that coders leave in games.

In Cline's novel, those seeking the eggs became "Gunters." Alas, the craft store was all out of plastic eggs. They had little containers shaped like plastic light-bulbs for the clues, an appropriate metaphor.

Here are the first four clues, Gunters!

  1. “Above the Court of the Five Lions, and in a place of honors, there is an egg with a song to awaken the dead”
  2. “Climb many stairs to go to a room of Gargoyles. A treasure is there and a magical word”
  3. “In the Jungle, the mighty jungle....” If you can finish this sentence, you may well find an egg to help you on your quest.
  4. “Do not go here to kiss, the legend says, unless you intend to marry him or her. Look carefully, and below the surface for the egg.”


More on the way! Happy Hunting!

Big Bad Wolf and a Pesky Bat: Enhancing Usher

Location: Usher Crypt, looking for Milk-Bones and saying "Nice doggy!"

One thing was clear in Fall 2011, when I last ran the House of Usher simulation in OpenSim: students wanted more danger. That means injury and death, things very much in keeping with Poe's tales. The more macabre and premature one's demise, the better!

Second Life content creators have made a number of combat systems that can maim or kill avatars, but I chose Spellfire precisely so I could use some scripted animals from a content creator named "Restless Swords." These animals can also do Gorean-Meter damage, and though that system is simpler than Spellfire, which includes the need for avatars to sleep and eat, I cannot use anything "Gorean" on a university machine, period.

I went to visit Restless (teleport to his shop by the link here) as both Iggy and Roderick to buy some creatures. After some outstandingly prompt service from Restless to get the current updates for the scripted animals, Usher has a wolf wandering the crypts and a bat in the attic.

We'll see what our actors and student avatars do now. There's one weapon in the House, and they'll have to solve some riddles to get access to it, if they wish to defend themselves.