One argument I've heard is that we shouldn't be encouraging teachers to think they have a nose for this stuff, because it will end in tragedy, with teachers far more likely to "discover" GPT plagiarism in work by poor kids and black kids, whether it's there or not. Thoughts?
I think that’s a reasonable concern, especially insofar as “eyeballing” vs your expectations of a student is going to bring in any biases you may unconsciously have of a student. (I don’t know whether this is worse in high school, where we know students well enough to think we can form a judgment of them individually, or most college classes, where we can’t.)

A policy that I think would minimize this danger is:

1) “Here’s the tool I’m using to confirm if this is GPT-generated, for assignments where I want you to do that.”

2) If they turn in something that doesn’t match, they have to revise.

This maybe encourages revision from an LLM-produced base - but then maybe that’s also what writing largely looks like, in the future, IDK.

I do also think the other thing is that basically anything you do is inequitable in an absolute sense, so you have to compare any policy or approach to the most likely alternative, at least when you’re thinking of the options in front of a single teacher or school system. Middle class families will basically optimize around gaming whatever incentives system you set up, and have (more or less by definition) fewer figurative and literal budget constraints in doing so, while working class families have bigger fish to fry. You see this all the time with, e.g., accommodations for special learning needs.