What to make of ChatGPT?
That's the question many educators are wrangling with. What do we do with this new tool that will remove the heavy lifting necessary to writing one's thoughts?
Well, I do know that in my district our students do not have access to ChatGPT. This alone speaks volumes. This AI writing tool is worrying to those of us tasked with teaching students to become better communicators when using the written form.
Some Background: Ramblings & Ray Kurzweil
Be forewarned: This will be a musing, rambling mess on ChatGPT. My mind -somewhat- amok. Call it a backlash against the many formulas ChatGPT is likely to boil the written word down to. Call it the effect of years of interactions with digital gamification forces like binge TV-watching, online shopping, and every-minute breaking news. Call it old age. Call it whatever you like in your own, AI-unassisted, words.
That's right. I have not used ChatGPT to write any part of this blog. Yes, I suspect we will see more notices in our future that some, all, or no part of something printed involved a writing AI.
I am not some Luddite or ill-informed individual. I am keenly aware that AI-assisted technologies have been in play a long time and will not slow down or go away. ChatGPT could be banned in schools across the world and the technology would still be sought and used by students. If anything, I fully expect companies like Apple and Microsoft to incorporate some form of AI more fully into their word processing programs, and I am certain schools will soon adopt writing AI in some manner, including my own district. In many ways, we have already.
I've read several of Ray Kurzweil's future-projecting books, The Singularity is Near, The Age of Spiritual Machines, and Fantastic Voyage. This was in the early 2000s, and I marveled then at what he projected would arrive on our present-day doorsteps. If I am recalling correctly, I believe his timeline for the arrival of the AI Age was roughly correct. He noted that the period of time from the 2020s through the 2030s would be one of incredible acceleration in the use of AI and the growth of medical breakthroughs. It certainly held my fancy. How incredible it would be to live in a time of nanobots, gene editing, glasses projecting 3-D overlays of the world before you, and the possibility that we might live... forever.
I also wondered about the ramifications of it all. What would an AI-driven existence do to us? As much as we would gain from it, I contemplated, what might we lose? Our independence? Our curiosity? Our humanity? Our (insert your own idea that amounts to a loss of something that makes us human). In the end, it bothered me, not because Mr. Kurzweil had clear enthusiasm for future machines and the roles they'd play in our lives, but because he rarely did more than touch on their dangers. There's plenty to debate as to what makes us human. Does becoming machine-reliant negate our humanity, or would it make us more of who we are? Or would we become something new?
Given that AI assistance is already in play to differing degrees, I suspect we have some ideas as to those answers.
Mental Exercises: Spellcheck & Predictive Email
One exercise I try from time to time, when I attempt to project outcomes for myself, is to take something similar from an earlier time and track how it has impacted me up to now. At this moment, I am thinking of spellcheck. When this assistive program -godsend to some and bane to others- arrived, teachers debated its place. Some explained: Spellcheck helps those who struggle with spelling focus on their writing and keeps misspelled words from detracting from it. Others proclaimed: It undermines the process of learning how to spell. Today, this debate is an afterthought in education. Almost no one worries whether students relied on Spellcheck for their work. Spellcheck is part of the fabric of our day. We only really pay attention to Spellcheck these days when our students continually mistype words even with Spellcheck available to them. Using this rudimentary example, it seems that any reservations I might have on ChatGPT are not warranted. And yet...
...and yet, I do know that our school system has renewed its efforts to teach our youngest learners how to spell without the use of a spellchecker, and this is because there remains real value in learning the rules of spelling as it is the other side to learning to decode words. Learning to spell well helps with phonics, phonemic awareness, affix study, and so forth.
A more recent example I considered is predictive writing in email. Think of how email now offers us suggestions on how to complete our thoughts. It was neat when it first arrived, and it does work as a shortcut to typing. However, more often than not -and I'm sure I'm in the minority here- it perturbs this writer, because this fledgling AI so frequently offers pablum of one kind or another rather than what I intend to communicate. Honestly, I don't recall much fretting over the arrival of this tool, but I feel as if there should have been. On its simplest level, predictive writing does a satisfactory job. What concerns me is that we may all soon be starting and ending our thoughts the exact same way as we continually defer to what the AI assumes we're working to write. (More on this subtlety later.)
Though I've had reservations about these advancements, I know these assistive technologies are not going anywhere. We can hold ourselves away from these tools only so long. We are weak. I am weak. (Though I will note for the record that I had avoided owning a cell phone for my entire life until this past year. I watched early on as adults struggled to not touch their phones and knew I didn't want that for myself. Why I caved in and adopted a phone is for another article, but Apple can thank Dunkin Donuts and computer firewalls at work for my change of stance.)
Nothing But A Tool: Chisels & Hammers, Jackhammers, and TNT
ChatGPT is often noted as "nothing more than a tool" by those who wish to downplay the impact it will have. Allow me to almost agree. ChatGPT is a tool. I wouldn't, however, refer to it as "nothing more than a tool". The thing about tools is that they can be compared along a spectrum. Some tools are simple in their use and impact; others may be simple but have far greater impact.
I'll be literal about tools, to begin.
Picture a hammer and a chisel working to break apart a concrete walkway. It can be done with labor, resolve, and time. Now, picture the jackhammer performing the same task. Hella lot quicker, no? Both are tools, but each has a completely different impact, literally and figuratively. ChatGPT is the jackhammer of writing tools when compared to writing assistive programs like Spellcheck and predictive email salutations. Calling it a "tool" belies how this program will come to change lives.
I recently read an article in my local newspaper in which multiple educators from elementary through college were asked about ChatGPT and what they made of it. It was, surprisingly to me, largely viewed as a positive event, something that needed to be looked at a little more before being brought into classrooms. At least one educator noted that every time we introduce some new and powerful technology we all claim something is finished and done, only to discover it is not. I believe the comparison was how calculators would ruin students' math abilities and did not.
Okay, let's talk about calculators. And, because I am all too familiar with them, I'll toss in a slightly more advanced calculator, the iPad.
When the calculator first arrived, it brought with it serious discussions as to what effect it would have on students and their mathematical abilities. As a kid, for that's what I was when calculators first became inexpensive enough to bring into classrooms, I know that not all teachers allowed them. Many teachers believed that letting a machine do the adding, subtracting, multiplying and dividing would ruin a student's independent abilities with mathematics. Some suggested it might even remove the need to study basic math. While the calculator did not destroy the field of mathematics or ruin the ability to learn it, it has changed it.
I share a mere and lowly anecdotal note as explanation. I spoke with a few of my math colleagues this week and asked them to explain how they felt about calculators in their classrooms. Our students are sixth graders, ages 10-12, and still very much learning basic mathematical principles. Unequivocally, I was told that calculators are accepted -to a point. And that point is when they witness their students using calculators to complete basic tasks such as multiplying 3 times 2. And they do see this. Students become over-reliant on the technology, far more quickly than we want to believe is possible. As bad as the calculator can be as a tool if overused, it is a chisel and hammer; not a jackhammer. The iPad is most certainly a jackhammer.
Assistive AI: Learned Helplessness
We sometimes use the phrase "learned helplessness" in education. Because we are so quick to assist students who do not respond readily to tasks, they -very quickly and unsurprisingly- recognize that they need only sit still, avoid beginning their work, and some adult will rush over to help them. This even when they're utterly capable of doing something independently. More assistive tech "tools" means less self-reliance; more turning to a machine to do the heavy lifting for us. ChatGPT appears well on its way to doing massive lifting. Forget the chisel to jackhammer comparison. It's more like a chisel to TNT.
It is entirely possible to shoot back a response as follows: OK, so what if students turn to assistive tech for answers? That's what it is designed to do and what it does extremely well. I do not dispute this. But I like to believe I have at least some observational abilities.
One item I have witnessed all too often with students and adults is how unlikely they are to turn to assistive technologies even when they're curious. I would have a tough time tallying on one sheet the number of times I've heard a student (or adult) ask some question of a group -a question with good merit in its posing, a question most others agree is a good question, a question easily answered by today's standards- and yet no one bothers to figure it out. No one turns to their iPad or iPhone for an answer.
How is this possible? Why hasn't having all the answers at our fingertips, all the time, made us more curious or more likely to hunt for answers? Are we overwhelmed? (Yes, I believe so.) Do we have a type of learned helplessness? (Yes.) Has our true sense of curiosity become blunted? (Yeppers.) It's maddening for this guy, especially when I am among adults.
Here's another anecdote to explain this happening. I work at times to teach students to mull things over. Mulling means considering the possible answers to something one doesn't have a legitimate answer for. An example I use as practice is "Why are manhole covers round?" Long ago, this got students excited. They came up with all kinds of ideas, ideas that ranged from well off-base to nearly correct. Today? Most of my students shrug and wait for me to push them to use the starting thinking stems of "Could it be because...," or, "Maybe it's because...." I've taken to rubbing my chin as if I'm in deep thought to trigger them to respond, and even then it isn't a guarantee they will initiate a response. Eventually, over the course of a school day, one or two kids will ask if they can look up the answer on their iPad.
Our district has one-to-one iPads, meaning each student is given an iPad in elementary school and will have one until graduation. We were early adopters in this. At first the iPad was a novelty; some jumped readily into using it, others not so much. Covid arrived and it became a "life saver" of sorts. Everything we did with students relied on an iPad/laptop, Google Classroom, and Zoom. When Covid protocols were dialed down and classrooms returned to normal practices, the technology remained, much more deeply rooted in our world. Now, it seems none of my colleagues can back away from this "tool", though all of them, I suspect, understand that it has added to the decline in student focus and critical thinking we have been witnessing for some time. Let me be clear: students are no less intelligent than when I first arrived to teaching 28 years ago. They are less curious, less likely to enjoy reading for pleasure, and more likely to find sustained, critical thinking difficult.
There is too much to pick from, too many ways to entertain ourselves, and most hurdles to life have been removed. I am not hitting on any new notions here, of course. I am only noting them because they require continual noting. Life has become too easy. We are spoiled. No one enjoys Negative Nelly, but I struggle to see how an AI program as potentially powerful as one that will write for you will make better writers of those who need to learn to write. Most people dislike writing to begin with because it can be such a trying exercise. It takes a lot of brainpower to compose stories, reviews, nonfiction pieces, and essays. Having AI do the bulk of this for us seems a little like -no, make that a lot like- the doting parent who builds most of a planetary model for their child so said-child can bring it to school to get that A.
We know so, so much about how easily humans can be nudged and influenced, for good or bad. And that was before mass data collection and aggregation came into existence thanks to credit bureaus like Equifax and arguable monopolies like Google and Amazon. I recall watching an episode of 60 Minutes where it was revealed that Amazon, at the time only a few years into being, was collecting about a terabyte of information on each of its customers. A terabyte! Not for nothing, but I wasn't even sure my entire life could provide a terabyte of information, let alone a few years of shopping on burgeoning Amazon. Still, that Bezos-affirmed statement was incredible. ChatGPT is another potential gargantuan aggregator of information that will boil us down to patterns of thinking and, in turn, press future writers into those same patterns. Or, in this case, patterns of non-thinking.
My small rebellion to our love-fest with heavy technological tools is that I have returned to pencil and paper methods of instruction for the last two years. I believe it has bettered student learning. Pencil and paper are single-purpose items and do not come with chimes, pop-ups, color palettes, or any of the distractions electronic devices hold. They also have no assistive AI presence for a student to lean on. And, I also believe there is an advantage to the feel of a pencil as one reads, draws, and manually writes. The physical process of moving a pencil across a page and working to use one's own wording is powerful in its ability to reinforce understanding. Not so sure on this? Consider this: when phones removed their physical buttons, they were replaced with haptics because both users and designers recognized that we need the physical feedback.
Paper-and-pencil instruction only? Hm, maybe I do have some Luddite leanings after all. It's a joke, of course. I consider myself more of a skeptic. Most skeptics require proof. Proof and time. Jumping into ChatGPT is a bad idea to someone like me.
AI Defenders: Far or Short Sightedness?
I've seen some early defenders of ChatGPT: individuals writing articles explaining how ChatGPT can readily be brought into classrooms to help improve student writing. The most recent article suggested that students can have the AI write articles on topics being studied in class and then go about editing the article for errors. Right now, ChatGPT isn't a perfect "writer" and the suggested class activity would be applicable, but is this really a reason to embrace ChatGPT in classrooms?
It is a critical thinking activity to track the thread of an article and see if it follows proper conventions, has good organization, and properly builds a set of ideas, but that's only one aspect of writing, and in this instance it is not your own writing but that of an AI that has examined millions of previously written samples. Writing for yourself matters.
I sometimes wonder about those who jump into defending game-changing technologies. Are they doing so because they truly believe in them, or are they looking to be an early defender so they can preen, "See? I told you we'd be using this. I was way ahead of everyone with it." Sometimes, these stances simply gain acknowledgment in the public discourse because they make for the other side of the debate. I think that's worrying. I am willing to recognize that many are also earnest and see ChatGPT as another item that should be brought into classrooms so it can be better understood and potentially harnessed. But, I'm not sure enough people recognize the power, the negative power, AI like this may eventually hold.
I recall when I first saw parent- and student-generated articles during Covid that brashly claimed distance learning was the best thing to ever happen for their kids. Many explained that their children were more comfortable at home and that having everything online made it easier to complete work. Any honest educator will tell you that what we taught in those times was below normal expectations. We had lowered the bar. Most early post-Covid evidence points to just how important being in a physical location together to learn from an experienced adult really is.
I've read the articles where people argue that each time some new and disruptive technology arrives it means freeing us up from something we consider a chore and allowing us to explore new things, e.g. free time to stay fit, self-educate, or pursue some passion. Options which few of us choose. Instead, we interact with some digital device while consuming more calories than we should. Humans literally carry more weight than we ever have in history because of technological advancements: more calorie-dense foods, more eating options, greater quantities, and more opportunities to avoid exercising or other physical chores.
I believe there is merit to physical and mental chores and depriving ourselves of too many of these harms more than helps. I also have a hard time not picturing the opening scene to WALL-E where humans are these amorphous blobs chugging sodas as they ride a hover-seat and watch a computer screen. But hey, that's me. Interestingly, that movie rolled out in 2008 and looks eerily accurate the rare times I go into a movie theater and see those large, reclining seats with buckets of popcorn on them. The only thing missing is the hovering part.
ChatGPT is going to erode more than we are likely willing to admit. It is going to make our brains dense. Anytime there is a major advancement that replaces the human ability to accomplish something there are effects, and not always good ones. And. And, I'll push this stance further, the bigger the tool the bigger the effects. ChatGPT is TNT.
Ethical Considerations, Valuations, and the Elusiveness of Truth
Other concerns I have delve into the ethics of AI-written work: namely, what our valuation of AI-produced writing is likely to become, and how it creates yet another barrier to the truth.
There are already concerns about the ethical use of AI to write something, as the writing may come whole cloth from original sources. AI has been caught borrowing a little too much of its writing from others, i.e., AI plagiarizes. How long before AI is borrowing its writing from other AI-produced work, and what will that look like? Does everything grind down to sheer uniformity? I can't help but think of tic-tac-toe, which is fun until you play someone at your level and every game can only end in a draw. There are no surprises then. Does AI writing become that? I digress and will move to another idea rattling inside my brain pan.
I contend that AI written work should be less valuable, even as I recognize the complexity in defending this statement.
AI does not spend undue time working to knit ideas together, conduct research, create something new in written form, or find a voice. If it does any of these things, it's because code directed it to do so, and it likely got its voice, style, and information from someone else's writing. Yes, there is an argument to be made that human-generated writing is no less the result of some code we're given and based on the writing of others. Code: When writing, we are expected to follow writing conventions, syntax, and writing structures. Voice & Style: We are influenced by the reading we do (i.e., the writing of others) as well as the beliefs, values, and thinking of our time. Is this that different from a set of coded instructions on a chip?
Maybe it's better to judge the value of writing based on what are soon-to-be our limitations as writers once AI takes over. Any human writing that can approximate some near-future AI's writing will be deemed amazing. Or, maybe the value is to be found in voice. Voice in writing, when obtained, is an accomplishment as it is an individual expression. Will AI be able to find its own voice? Will it just be, as noted above, a collection of stolen voices? Or, will AI find its own, convincing voice? If it does form its own voice, will we even be able to understand it?
Here's another way to try to separate the value of AI writing from human writing. Try this thought experiment. You are ready to purchase a significant piece of art in a time when human and AI art are commonly bought and sold. You meet with the art dealer and are told you have the chance to choose between two pieces of accepted art valued at exactly the same price. One work is AI generated while the other is created by human hands. Which do you believe most people would covet more? Which object do you sink your money into?
I'd like to believe the art done by hand would hold more value. Yeah, yeah, I cheated some in my thought experiment noting that the AI "generated" the art while a human "created" its art. There's bias in that setup, but there's also a kernel of truth in how it feels at this point in time. AI generates; humans create. (*See below for my rudimentary rationale for choosing these words.) Human writing should be more valuable as it likely represents hours of work, original thinking, and the human quality of inspiration. For me, it is the difference between an original signature on a document and a copy of that signature. One is more valuable, more original.
* I used generate for computer art because ultimately I believe you can trace back the specific commands that directed a computer to make something. As of this point in time, we do not seem to be able to do the same with the human brain. We've learned a lot about the brain and we recognize we've a lot more to figure out. I'm not sure anyone can definitively explain where ideas come from. That seems very much like creation, for lack of a better word. A slight differentiation that I believe matters.
I warned you this would be a rambling venture and it has been. There is only a rabbit hole to dive into when I am given the unfettered ability to write whatever I'd like to write. Here's more of that rabbit warren of thinking. It could be that the single reason to value human produced writing over AI writing in the near future is because any decently written thing that can approximate the deemed perfection of AI writing will necessarily demand our respect. A machine can draw a perfect circle; a man doing the same in freehand would be a marvel.
More off-track thoughts on where this may be leading.
Think of the evolution (or is it devolution?) of saying Happy Birthday to someone. Long ago, we would wish someone a happy birthday by attending a function and/or making a phone call. Then Hallmark came along and put its stamp on things, making it a near requirement to include a card (with a pre-written sentiment) as a means of wishing someone joy on their special day. This evolved into the initially uneasy approach some took of sending -not physical cards- but virtual cards via email to those they cared about. Physical and virtual cards still remain in place, and even phone calls and parties to some extent, but it has also become completely acceptable to text someone "Happy Birthday" in lieu of any of the above. I'm unsure if you're following where I'm going with this, and so I'll attempt to be clear.
Assistive technologies are making too many things too easy. Shooting off a text takes next to no thought. It also has, in this thinker's opinion, the effect of devaluing one another's sentiments. Some things still matter. The true effort you put into something as an individual, without the assistance of a pre-written card or some app, matters. Your physical presence matters and speaks loudly as to how much you value others. ChatGPT is yet another means of devaluing individual efforts. We'll be sending each other AI-written communications bearing our sentiments into which we've put no more effort than saying, "Alexa, write my mom a heartfelt thanks for being my mom this Mother's Day."
Truth. As we have discovered through the proliferation of social media, the truth is now harder to know. This is hard to fathom, as we have access to more information than ever before, more eyes on one another, more cameras, more people reporting what they've witnessed, better sciences -and yet the truth seems less available to us, or, maybe more accurately, easier to ignore in favor of some other "truth". People with bad intentions have as big a mic as ever to spread their inaccuracies, and I believe ChatGPT will do the same to written works. Who will know what the truth is anymore when something is given to them in writing? (I am aware that there are people working on this very concern -people working to have AI-written works flagged as AI written, GPTZero et al. Good luck with that.)
If I have to make a projection as to where all this will go, it is down a path that seems...bizarre. I think that education will not only adopt AI writing programs like ChatGPT but embrace them as another accepted means of writing. The new teaching will not be how to write from scratch but how to use AI writing devices to "write". Students will be "writing" using ChatGPT to express themselves from beginning to end. They'll tell the AI to make the story more personal, or put it in first person, or make some section funnier, and to end with a clear lesson. The AI will do the heavy lifting, and our time spent on from-scratch writing will decrease because students will find it terribly challenging and forever ask, "Why can't we just use the AI?" It's a balancing act -the common phrase teachers love to use to express the real difficulty in something- but this is one where most of the weight will be stacked on one side: the side opposite us.
Things do fade in education, even as slow as it is to let things go. Cursive writing is gone, print handwriting is beginning to fade, silent reading is almost optional now with audio versions of everything, etc.
Ok, maybe, possibly, I'm running too far into the negative with this, but I do know that once someone or something takes over a duty for us, it becomes too easy to rely on it over doing it for ourselves. No one bothers to remember phone numbers or directions anymore. I think of my wife, the luckiest woman in the world to many :). She is the repository for a lot of information I do not bother myself with: birthdays, upcoming events, doctors' names, where the cleaning products are, clothes washing, etc. I rely on her to provide this information to me. Without her, I would have a harder time navigating life. Could I figure these items out? Of course. But, I don't. It's lazy, and in many ways unkind to those in my immediate life, that I don't seem willing to learn these important things.
Maybe this is the item before so many teachers and students: Will we make the true effort to learn to write both with AI assistance and without it? Common sense and Vegas will tell you all you need to know about which outcome is far, far more likely to happen.
If you made it through all of this, you are either a glutton for punishment or an AI program gobbling up a poor human's sad attempt to understand the inevitable.