I don't want random techbros coming in, hence why I'm posting on Den. I hope this is ok.

I'm teaching an online composition class this summer. I got two essays from students that cited sources that don't exist. I called them out on it. Here's what happened.

One copped to using Bard, but then sent a second essay that still clearly reeks of GenAI or other horseshit.

The other copped to using a GenAI search engine unwittingly, and has tried to claim they've read things that, by all accounts, they haven't.

Normally, I would have just failed these students for writing hundreds of words on material that doesn't exist. But I really wanted them to go beyond a basic cop and explain their reasons for using this. This is in part because I have administrative duties around GenAI this year in our program. So I wanted to get data for my fellow instructors (i.e. here's what the student did, here's how we can design better assignments that both teach more carefully and also are harder to use GenAI on, etc.). Instead, I've just hit a brick wall with them. They're insisting that it was only a research error, even though by all accounts, these essays shouldn't exist, since the majority is written on things that just literally aren't out there.

Again, they wrote about things that don't exist as if they do. That's GenAI in a nutshell. It's some of the most blatant shit. And these students are still trying to justify their work.

What bugs me most, however, isn't the students. It's the fact that technology like this was thrown out into the ether without any fucking guard rails. These students don't realize the problems with it, so they're fucking themselves. And while maybe they would have found some other way to do this kind of lazy work pre-ChatGPT, the accessibility of these LLMs means that more students will do stupid shit like this and fail, instead of trying to learn.

I'm very doomer about this stuff, not because of some AI takeover, but the total enshittification of everything. The Citations Needed episode on it was very good on the other serious labor implications as well. However, there's also a ton of potential added labor or shittiness in the affected fields. After all, my instructors will have to work more for the same amount of pay OR just not bother policing it. Either outcome is terrible. While I'm going to do my damnedest to try and help my colleagues build assignments that remain rigorous and have guardrails against GenAI production, the fact is, eventually it's coming for all of us. And even if it doesn't take our jobs, it's going to make us all more miserable. Because there aren't structures in place for FALGSC or anything. So we're going to lay people off, pay them less, remove some of the most human pursuits, and for what? A bot that's slightly more convenient and less accurate than Wikipedia?

I'd love for someone to un-doomer me about this stuff, but it's just very depressing. I needed to vent among friends. Thanks for listening folks.

I'm still a bloomer at heart, but god damn is it hard to keep up in the face of material conditions.

  • ChestRockwell [comrade/them, any] · 10 months ago

    Ironically, neurodivergence is probably one of the few "legitimate" uses of this stuff that I'm ok with in the classroom. The situation you described is one I'm actually sympathetic to, and if a student actually had this as their process, I would be far more open to it. Because what you describe there is an actual writing process. Yes, it's not my process, but producing a bunch of slop then hunting through it for the useful material, editing out the hallucinations, etc. -- that's a real writing process! It's similar to a "shitty first draft" on some levels.

    I have some philosophical concerns about this still -- namely, in getting that initial output, you might start contorting yourself to "fit" the machine. Like, prompt design doesn't actually direct the machine -- the machine is directing you in many real ways. I'm very anxious about this in terms of agency, etc. -- e.g. an English language learner just washing their own voice and thinking away in the slurry of slop produced by GPT.

    But that's more a philosophical than pedagogical concern. The fact is, if neurodivergent students wanted to propose that kind of process, I'd be open to it as long as they were open with me about it. Some people write the whole essay in one night, some people produce it over many iterative steps. There's no one right way to write, but you have to do that kind of thinking/engagement that you're describing.

    P.S. my non-GPT way to get that "foundation" on the page is just copy all the relevant quotes/evidence/whatever down into a document, then start doing some commentary/explanation. Then once you have even a token bit of commentary, you can activate the dialectical process of writer/reader to produce more out of it.

    • red_stapler [he/him] · 10 months ago


      in getting that initial output, you might start contorting yourself to "fit" the machine. Like, prompt design doesn't actually direct the machine

      Perhaps; in my case, I know roughly what output I want, so I work with various outputs until I get to something I can take over from.

      This is all academic since I’m middle aged now and I just want a quick description of my D&D character. chomsky-yes-honey