Following the trail left in my last newsletter, when an AI creates, who is the creator: you (who prompted) or the AI?
I’ve been reflecting a lot lately on the ethical implications of our creations, and artificial intelligence really highlights this need. Mainly because we can only guess at what AI will be used for in the future; most of its repercussions are still unknown to most of us.
A good example of the ethical debate around AI is in education. Teachers are having a hard time dealing with AI in the classroom. Copying and pasting from Google is no longer the problem, because now students are using ChatGPT to answer the questions for them.
Well, students trying to cheat on a test is not exactly news (let whoever has never cheated on a test throw the first stone). But Professor Boris Steipe from the University of Toronto raised some interesting questions:
“What if a professor suspected a student had used ChatGPT but couldn’t prove it? A plagiarism allegation is no small thing: you can’t risk ruining a student’s academic career on a hunch.”
If you are not familiar with ChatGPT by now, it is an AI that can give you seemingly credible answers in eloquent language (credible enough to even pass an MBA exam). But AI’s “correctness” is a whole other topic, so let’s focus here just on its capacity to give answers “like a person”.
This resemblance to human prose can be tricky to spot (maybe that’s why students are using it), and schools and universities are creating methods to detect its use or even ban it completely.
On the other hand, ChatGPT reached one million users in just five days, making it the fastest-growing app in history.
(If you are into infographics, check this one.)
The real question here should not be what schools and universities can do to ban AI from their campuses, but whether that is even possible. As a teacher myself (and a former student, for that matter), I know students will ALWAYS find ingenious new ways to use it.
Like Prof. Steipe said, “as professors, we shouldn’t be focusing our energy on punishing students who use ChatGPT, but instead reconfiguring our lesson plans to work on critical-thinking skills that can’t be outsourced to an AI. If an algorithm can pass our tests, what value are we providing?”
Dramatic pause.
Chances are that AI is here to stay, and that doesn’t need to be a bad thing; we just need to learn how to use it properly. For example, instead of focusing on the “right” answer, teachers can put effort into developing skills that have so far been largely neglected, such as critical thinking, communication, collaboration, curiosity, and an open, inquiring mind.
Finally, Prof. Steipe concluded that
“it’s up to us as professors to provide an education that remains relevant as technology around us evolves at an alarming rate. If we outsource all our knowledge and thinking to algorithms, that might lead to an unfortunate poverty in our curiosity and creativity. We have to be wary of that.”
(P.S.: If you are one of those people who think this “AI nightmare” is somewhere near its end, here is some bad news for you: Google is already working on its own AI, Bard.)
I believe that ethics is indispensable for us to (re)think about the future and our relationship with emerging technologies, and that, through it, we find ways to make the most of this symbiosis between humans and machines.
What are your expectations about AI?
How do you think it will change the way we create?