
AI - Friend or Foe?
I will be the first to admit that I have waffled when it comes to artificial intelligence. I don’t even mind admitting it, negative connotation and all, because I just love using waffle as a verb. That’s the English teacher in me. And that - my identity as an educator - is the nexus of why I find artificial intelligence to be such a complicated issue.
For those outside of education, it’s difficult to understand the sea change that the advent of ChatGPT wrought in our lives. We started the school year in August of 2022 as we had so many others, lamenting how quickly the summer had passed and rushing to tend to last-minute details before the onslaught of 1,000 high school students invaded campus. Three months later, in November of 2022, ChatGPT burst onto the scene. At first it was weird, then worrisome. In February of 2023, just six months after a nonchalant start to the school year, the English department received its first AI-generated essay submission. By May, the administration had us all sitting in a professional development session to figure out how the hell to be teachers in the age of AI. It had only been six months since ChatGPT’s release.
And so August of 2023 brought a completely different type of start to the school year. I began the semester by talking to my students about the absolute prohibition on the use of artificial intelligence for any of their assignments, and explaining why to them - because their brains are not fully developed until they’re 25, because when they’re sitting across from their boss in a meeting, or in a heated discussion with their roommate or romantic partner, or called upon in a college seminar, they need to be able to make a cogent argument without turning to AI. They need to have the confidence to know that they are capable of reasoning a point, they need to believe that their thoughts have merit, that their voice has value. I do believe that the constant, repetitive, instinctive use of AI will hinder this very important growth.
But in the year and a half since then - and it’s hard to believe it’s only been a year and a half - I haven’t just lectured my students, I have also listened. I’ve listened to graduates who come back to visit from college. I’ve listened to the parents in my cohort, some of whom have children already off at university, and to what their kids report back to them. And of course I read the scholarly journals and track the data. But it’s the aforementioned colloquial information-gathering that I find to be most helpful. And this is what my sources tell me - that everyone on college campuses is using AI to complete at least portions of their assignments. It does not matter whether that campus is an Ivy League school or a community college; the use of artificial intelligence to do one’s homework is widespread and widely accepted. The prevailing ethos among college kids is: why work harder when I could work smarter? And beyond that, what I hear from many university students is the expectation that they will use AI in their professional careers, so why not start using it now? They will use it to write their legal briefs, they will use it to draft their memos, they will use it to prepare marketing materials - why slog through college without AI trying to earn a job that will rely on their skills with AI?
And so, like many educators, I have spent the whirlwind of the past 2 ½ years reframing my approach to artificial intelligence. Positing AI as an unmitigated evil and the end of classical education does not hit the right note with today’s teenager. I also don’t believe it to be true. We do high schoolers a disservice if we send them to college having taught them nothing about AI other than that it is forbidden. This is the equivalent of keeping the liquor cabinet locked for 18 years and hoping that an unspoken message prevents a newly minted college freshman from getting drunk at their first party.
Better to open the discussion, identify the situations in which using AI is and is not acceptable, and set up guardrails to help students learn to navigate AI use ethically. Shockingly (sarcasm intended), since refining my stance on AI, I have not seen my students get dumber. On the contrary, there is some metacognition happening as students use their critical thinking skills to determine when it is and is not appropriate to use AI, and to what extent. Because I’ve opened the discussion, they’re not afraid to talk to me about these questions - indeed, these are often whole-class conversations, centered around nebulous ethical questions: Is it appropriate to ask AI to help you determine what prompt your teacher might be giving for an in-class exam? Can I have AI organize my outline for me, if it doesn’t rephrase my thoughts? How integral to the overall cohesion of my paper’s argument is the essay’s structure, as opposed to the content? If artificial intelligence is responsible for organizing the paper’s structure, have I cheated?
My own journey has taken me outside of the classroom and given me the opportunity to work with Clarifi as their linguistics consultant. Over the past couple of years I have immersed myself in the emergent world of AI technology, looking at everything not from the perspective of someone in the technology, software, or security space, but as an intelligent, inquisitive, though less well-versed outsider. I have seen what Clarifi can offer, giving both individuals and providers a deep well of vetted scientific publications and medical research with answers to their healthcare questions. I have become familiar with the work of hundreds of scientists and tech visionaries who see problems in the world that can be tackled using the power of AI. I’ve met people who are pushing for innovation, not for ego but for good. I see how AI can actually be used to untangle some of the knots of inefficiency and inequity that exist across all strands of society.
The guardrails must remain in place, of course - scaled up as the debates regarding the ethical use of AI move from the classroom to the boardroom to the senate chambers. But will adults be as willing as my high school students to engage in fruitful metacognition regarding when one should and should not rely on artificial intelligence? Will those who are constantly building new iterations of their products with AI features and profiting from the widespread use of artificial intelligence be willing to engage in debate about when the use of AI is good, and when it is detrimental? Will they have any willingness or impetus to regulate AI’s growth? Or will the god of profit trump all?
My students would advise caution here. They would remind us that one can’t turn to artificial intelligence every time a thought needs to be expressed or a need must be met. Think of a newborn bonding with its mother, or a junior executive seizing the opportunity to approach a possible mentor at a networking function. There are times when we must be able to proceed in the world as it once was, without artificial intelligence as either a launchpad or a crutch. We must be able to make a cogent argument, we must believe that our own thoughts have merit, we must think that our voice has value. We must be able to reason a point. And so we must approach the sweeping advent of AI with a healthy measure of metacognition - thinking about just how much of the thinking in our lives we want AI to do. Food for thought.
Waffles, anyone?