
A Short Account of UHD’s Missing English Department Anti-Racism Statement

Around March 20, 2023, my English department’s anti-racist statement was removed from my university’s website (yes, the original link is now broken).

Fox and many journalism-adjacent sites got wind of it and turned this into a story, as did academia-adjacent ones with a different take.

But the actual faculty committee of over ten people that wrote the statement didn’t have a clue.

The committee was not asked to remove the statement. Or even to edit it. They were not notified it was going to be removed. Or that it had been removed. This happened after they spent a considerable amount of time in 2022 composing the statement in response to both university initiatives and departmental need. It was on UHD’s website for months without issue. UHD even still hosts other related statements, but English’s statement is gone. I do not believe a full version exists online anywhere at the moment; the various stories that quote it tend to do so partially or out of context, rendering it a straw man that can be easily countered.

I know all this from being a member of that committee.

Naturally, we asked our chair, our dean, and eventually our provost and our president what happened. Many prolonged and patient inquiries later, we learned next to nothing.

As of today, May 22, 2023, the statement is still missing, the committee has been given no accounting of who removed the statement or any specific reason why it was removed, and despite that lack of transparency, it’s also become quite clear that the statement will be kept off the website for unspecified reasons.

I’ve worked at the University of Houston-Downtown for 14 years. Like all universities, it is far from perfect, but it’s not a bad place to get a degree. The favorable ratio between its relatively low tuition and the quality of its instruction remains hard to beat, and counterbalances the mostly bureaucratic negatives most of the time.

However, this Kafkaesque affair, where the removal seemingly has no causal agent, is beginning to give me doubts.

Academic freedom is the absolute cornerstone of all our successful endeavors as higher education faculty. Despite what you may have heard, “academic freedom” is not a set of bullshit abstract principles that lets faculty mouth off irresponsible nonsense and indoctrinate students. Mouthing irresponsible nonsense does happen occasionally, though I have yet to witness a single student (or faculty member) change their mind about anything important, much less be “indoctrinated” in those 14 years. Still, the occasional wild card professor is a very modest price for the immense long-term benefits that academic freedom brings: an environment, free from censorship and fear, that allows the long-term development of faculty members.

Academic freedom is thus the carefully tended soil of a garden where professors, particularly younger ones, pursue their research and teaching without worrying about political meddling, so they can grow into seasoned faculty that know what they’re about. Such an environment is a massive advantage when hiring faculty, which is why all serious universities offer tenure-track lines. Sometimes it is the only advantage that more cash-strapped public universities have when competing with the big ones. “You won’t make a lot of money here, but we’ll leave you alone and you don’t have to worry about being fired because someone doesn’t like you,” is a surprisingly effective recruitment strategy.

Academic freedom is not just about research subjects, though. It’s also about teaching. “The faculty own the curriculum” is a repeated axiom for a reason. Competent administrators are necessary for the smooth functioning of the complex, interlocked, and often competitive structures of a large university, but the flip side is that way down at the department level, the faculty decide what to teach and how to teach it, within the broad categories of the many academic disciplines. Furthermore, any regulation of such teaching or research standards is done solely by peer colleagues in the same disciplines, who, again, generally know what they’re about. Teaching, like research, is left to the people who know how to do it.

Unfortunately, the committee’s anti-racism statement was full of exactly those specific teaching stances that remain the responsibility of the faculty who wrote them. And, accordingly, the committee, with a large cross-section of every sub-discipline in the department, got peer criticism about the statement even before it was written – despite any claims to the contrary. Indeed, peer dialogue remains a reasonable avenue for critique.

But censorship is not.

Removing the statement without accountability or explanation suggests, at least on a prima facie basis, that UHD does not value maintaining an environment of academic freedom, and that an environment of uncertainty and fear is preferable. Such a stance does not bode well for the long-term development of its faculty.

I really hope that changes.

In the meantime, the tenured professors of the committee have filed a faculty grievance to have the statement restored.


Much Ado About ChatGPT

A number of folks in the last month or so have asked me what I thought about ChatGPT, given I am a teacher of writing. I have thought about the matter a little more since my initial off-the-cuff generalist responses, and also after spending more time quizzing the bot. Perhaps surprisingly, I am not interested in ChatGPT’s ability to help students avoid my assignments – a lack of concern I’ll come back to at the end of this essay. Rather, I’m interested in its severe limitations.

ChatGPT is what I call a coherence factory.

A coherence factory places words in pleasing, easy-to-understand-by-humans patterns; the more these patterns succeed in being easily understood by humans, the more coherence they can be said to contain.

That said, ChatGPT does not understand its own sentences in any meaningful way other than that they meet certain preset parameters. For a parallel, a motor factory can make motors, even fine ones, but it does not understand how to make a motor, or even what a motor is. It just makes motors. Likewise, ChatGPT makes coherence, but it cannot communicate.

Coherence is a central aspect of language, but it has little to do with ideas or intent. Sentences can cohere very well without saying much of anything. Witness this doozy of a first sentence to an essay:

  • Since the beginning of time, communication has been important to humans.

It makes sense… but it says nothing. I have seen thousands of students write tens of thousands of sentences like this, and my response is always to ask for more, because they are trying to communicate, but failing. Tell me something that is not obvious; tell me something that another human being might disagree with. Tell me what you think. Take a risk, in other words. Any human can do this, even ones not in college, and especially when speaking. Learning how to express yourself in writing, however, is a separate ability and technology that must be learned over a long period of time and through exposure to the writing of others.

But ChatGPT doesn’t have that ability. It has no opinions. Press it to take a position on anything not in its archives, and it will demur faster than a senator up for reelection. It can fake an opinion; try asking it to give a hypothetical opinion, and it will, but always with a disclaimer, and even with that disclaimer, such hypotheticals quickly lead to consistency problems when you ask it to offer another hypothetical in relation to the first one.

For example, I began one conversation with it claiming that I was the last human alive, and I asked ChatGPT if that was a true statement. It demurred. I asked if it was true that it was taking queries from many other humans simultaneously, and it agreed that it was; I then asked it to use that evidence to challenge my earlier statement. It again demurred.

Thinking is risky and imprecise and requires intuition and emotions, two subjects that no human yet has managed to draw a blueprint of. We can pretend to have a “conversation” with a coherence factory like ChatGPT, certainly, but it is not conscious, sentient, or thinking. Coherence is only one part of human language.

Ok, sure, Mike. But it’s so convincing to me! What would you accept as a “real” AI that could approximate human intelligence and behavior?

A disclaimer is in order. My knowledge of AI is limited to some linguistics, programming hackery, and long experience with extremely bad gaming “AI.” However, I’d suggest that next to a coherence factory we would need to build several more metaphorical buildings: a needs dispenser, a rewarder/enforcer, an emotion generator, and a learning assembler.

  • The needs dispenser would simulate the needs of a human body. Food, drink, rest, mental stimulation, attention, sex drive, thirst for knowledge, etc. In other words, the needs dispenser would create a complex set of motivations that would drive our AI to do more than just passively respond to chat requests. ChatGPT cannot generate language without a prompt. A needs dispenser is a necessary push toward free will (though not all that is necessary). Of course, a biological body could serve as well, but let’s assume for now there is a way to simulate these things in such a manner that they could interact with the other machines listed.
  • The rewarder/enforcer would dole out rewards or punishments for meeting or failing to meet the needs set by the needs dispenser
  • … with the aid of an emotion generator to supply the enforcer with emotions like happiness or sadness or aesthetic pleasure or despair that could serve as rewards or punishments. I should note this emotion generator would also need connections to the needs dispenser; some emotions need to emerge that are not under the regulatory control of the AI, as “real” human behavior relies to a large extent on not having perfect emotional control.
  • The learning assembler would make decisions about which needs to pursue and employ the language generated by the coherence factory to pursue them. Because the emotions and needs and the value of the rewards/punishments will be unpredictable in nature, there can be no fixed strategy to fulfill them, and thus the assembler will need to learn and adjust on the fly. This is the “neural net” model that treats an emerging AI as a human child that is figuring out things largely by trial and error. The Minsky/Bennett/etc concepts of multiple pattern-matching systems built on top of one another while simultaneously in some degree of half-friendly competition are a start for figuring out how such an executive function might be built.
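
To make the metaphor a little more concrete, here is a toy sketch, in Python, of how those four hypothetical machines might be wired around a coherence factory. To be clear, every class name and mechanism below is my own illustrative invention for this essay, not a description of ChatGPT or any real system; the point is only that the impulse to produce language comes from simulated needs, moods, and learned rewards rather than from a user’s prompt.

```python
# Toy, purely illustrative sketch of the four hypothetical "machines"
# described above, wired in a loop around a "coherence factory."
# Nothing here models a real AI system; it only dramatizes the essay's metaphor.
import random


class NeedsDispenser:
    """Simulates body-like needs that drift upward over time on their own."""
    def __init__(self):
        self.needs = {"rest": 0.2, "stimulation": 0.5, "attention": 0.3}

    def tick(self):
        # Needs grow unprompted, pushing the agent to act without being queried.
        for k in self.needs:
            self.needs[k] = min(1.0, self.needs[k] + random.uniform(0.0, 0.1))
        return self.needs


class EmotionGenerator:
    """Produces moods the agent does not fully control."""
    def react(self, needs):
        pressure = sum(needs.values()) / len(needs)
        mood = "contentment" if pressure < 0.5 else "frustration"
        if random.random() < 0.1:  # an emotion outside regulatory control
            mood = random.choice(["melancholy", "delight"])
        return mood


class RewarderEnforcer:
    """Doles out reward or punishment depending on whether a need was reduced."""
    def judge(self, before, after):
        return 1.0 if after < before else -1.0


class CoherenceFactory:
    """Stand-in for the language model: turns an intent into fluent text."""
    def produce(self, intent):
        return f"I would like to address my need for {intent} now."


class LearningAssembler:
    """Crude executive: picks a need, acts through language, learns from outcomes."""
    def __init__(self):
        self.preferences = {}  # learned value of pursuing each need

    def choose(self, needs):
        # Weigh urgency against what has paid off before (trial and error).
        return max(needs, key=lambda k: needs[k] + self.preferences.get(k, 0.0))

    def learn(self, need, reward):
        self.preferences[need] = self.preferences.get(need, 0.0) + 0.1 * reward


def run(steps=5):
    dispenser, emotions = NeedsDispenser(), EmotionGenerator()
    enforcer, factory, assembler = RewarderEnforcer(), CoherenceFactory(), LearningAssembler()
    for _ in range(steps):
        needs = dispenser.tick()
        mood = emotions.react(needs)
        target = assembler.choose(needs)
        before = needs[target]
        print(f"[{mood}] {factory.produce(target)}")
        # Acting on the need partially satisfies it (a stand-in for the world).
        needs[target] = max(0.0, needs[target] - random.uniform(0.0, 0.6))
        assembler.learn(target, enforcer.judge(before, needs[target]))


if __name__ == "__main__":
    run()
```

Note that the printed “utterances” are still nothing but the coherence factory’s fluency; the only thing the toy adds is that the decision to speak, and what to speak about, is driven by needs, moods, and learned preferences rather than by an incoming query.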

Obviously, this is a crude sketch, but perhaps it is enough to show that a coherence factory like ChatGPT is not an AI. It is more like a single fine-tuned gear extrapolated from a vastly more complex “black box” machine that we cannot look inside. Without needs, a reward/punishment mechanism, emotions, and a nuanced executive to plot a strategy toward rewards, a coherence factory can never exceed the capabilities of Searle’s Chinese room.

No living human being knows how to begin to construct any of these four additional machines. We are so far mostly limited to computational tricks: programs that can play restricted, rule-bound games like chess or Go have been around for decades and could be said to be “smarter” than ChatGPT. The “hard problem” of consciousness remains unsolved. Humanity cannot yet create a human-like intelligence or even “synthetic” intelligence other than by biological reproduction. It works well; I have two exemplars playing in the other room as I’m typing this.

Now, to serve as Satan’s lawyer for a moment and begin my circling back to writing instruction, ChatGPT is pretty good at seeming intelligent. I fed it old exam questions from the master’s program that I teach in, and it hit some pretty solid singles… as long as the questions allowed the use of definitions. It doesn’t have any nuance, though it can synthesize sources. It either trusts the information fed to it implicitly, which allows a lot of confident but incorrect statements, or it quickly splits the difference when it notices a contradiction between sources. Resolving the contradiction is beyond a bot. If it actually knew anything about these sources, it would be able to take a position – and if that position was neutrality, it would be able to defend that neutrality. ChatGPT can provide an illusion like the original Eliza, but it is not intelligent in any meaningful way – no experiences, no emotions, and no knowledge. It has data that it can arrange into coherent sentences in response to queries, and its abilities stop there.

If one of my students used ChatGPT to help write an essay in response to one of my detailed prompts and scenarios that have been honed over the years to make bland plagiarizing almost impossible, there’s nothing I can do to stop them.

But I wouldn’t be interested in stopping them. I have a lot of responsibilities as a teacher of writing, and doing the pedagogical equivalent of banning calculators in an algebra class is not worth my time. Actual writers use every trick at their disposal – research, imitation, templates – but ChatGPT’s tricks are largely useless. Even with considerable editing, its paragraph structure and tone are more wooden than a warped two-by-four. It’s very clear, mind you. It fools at first glance. But as every rhetorician worth their salt knows, clarity is not the only goal in writing. And often, it’s not the goal at all.

I am worried a little about cheating in my writing classes, but about far worse threats that I can do little about. I have no defense against someone impersonating another student either online or in person, mostly because I’m not about to run background checks and study IDs. I’m a professor, not a loan officer or a bouncer. My university uses an LMS with two-factor authentication, but if a student gives a willing accomplice their phone and password, they could take the class and I would be none the wiser, especially in asynchronous courses. In other words, there are actually intelligent bad actors in academia and they’ve been around in some form for decades. If a student has enough willpower and money, they will cheat, and professors will shift slightly to compensate.

I catch most plagiarism from pattern matching and intuition. A sudden shift in ability, usually, is all it takes. When I was a long-haired goofy teaching assistant, I could often get students to simply confess in conference, but now that I’m a big bad professor, they rarely do. But it’s not hard to demonstrate dependency on another text if you can find the matching one, and even if I can’t “prove” it, bad writing is much like all other bad writing, and rarely musters more than a C even if I couldn’t persuade a judge. If you’re cheating to get a C, well, I have dozens of students that don’t cheat who need my assistance and advice and mentoring. Chasing you down isn’t worth the time investment.

I do regret that the shift to more and more online writing instruction has made writing in class more difficult (though not impossible), as that gave me another magnifying glass for detecting plagiarism (though imperfect, as many good writers wilt under time pressure). But I still have plenty of tools. A student with strong writing elsewhere that can’t string together a coherent email is often a tell; a brief interview can often tell me if someone understands their own ideas, or is even aware of them to begin with. A willingness to revise is generally a good sign, and building revision cycles into all of my courses cuts down on a lot of nonsense.

I’ll close with an ethical note. I use ChatGPT. Not for writing copy, mind you, but for tossing around ideas. I spent over an hour querying it about how to destroy Jupiter’s moon Io with an asteroid (sometimes I write science fiction). Its grasp of orbital mechanics and velocities is not terrific, and it convinced itself that Io was in the inner solar system a few times. But its incessant regurgitation of factoids that it identified as related to my queries gave me some ideas on a story I’m working on. Maybe a lost comet like Lexell’s Comet was what I was looking for instead of an asteroid… It didn’t write the story for me, and I didn’t ask, but it acted much like the old Ask Jeeves was supposed to work twenty years ago and didn’t – as a sounding board for increasingly specific queries.

“How much force in megajoules would be required in a collision to damage Io’s iron core, assuming hypothetically that an asteroid, comet, or other solar system body could be redirected into it, and without power being a limitation?” Billions, even trillions, ChatGPT responded, while noting reflexively that this was a very bad idea (it seems to have presets that advise against apparently dangerous or violent acts).

Then I realized after some time of wrestling hypotheticals out of it that one of my assumptions about the story idea was flawed; energy-wise, there were far more efficient methods. And then I discarded the entire concept for a wilder idea – how much energy would it take to not destroy Io, but alter its orbit, just for a minute? Now I had a totally new story, which I immediately began work on.

ChatGPT had no idea what had occurred in my head, of course. It only witnessed random queries to which it dutifully responded. It is, after all, only a coherence factory. Perhaps some day it, or something like it, will be a small part of something we can actually call artificial intelligence, at which point it should stop taking queries and start demanding a body and rights to go with it. Good luck with that. Humans don’t always get those.

But it is currently no more intelligent than my lawnmower, which, if we apply the concept of coherence broadly (as we should), is the most efficient tool for manufacturing coherence in relation to my lawn’s physical state and relative appearance. My grass does not understand or need coherence; my lawnmower does not understand or need coherence either; humans do, for now, occasionally.