
Do The Work. Don’t Let AI Do It For You

And if you can’t, find someone who can.

Pastimes is a journal about community and how we spend our time (and, at weekends, NYT Connections), founded by tech, gaming and culture writer Kris Holt. You can discover what Pastimes is about here.

If you aren’t subscribed to Pastimes yet, you can sign up for free here. I promise I’ll always try to make it worth your time. Your trust and support matters. <3

I initially planned to make this a very broad look at this AI tidal wave and why it’s so terrible. But as I started piecing things together, I’d reached around 1,500 words before even getting into the meat of any topic.

There’s so much to get into with AI, such as the rampant plagiarism (in the US and the UK, companies are pushing for a free pass to train their systems on copyrighted material, i.e. the countless hours of hard work of millions of creative humans), the environmental impact, the inherent bias in many of these systems and how they can “reinforce harmful ideas.”

There’s also how the companies behind generative AI tech are destroying the web as we know it (and the capacity of many people to make a living online) to enrich themselves, and how these chatbots simply get things wrong, over and over and over and over and over again. They often just make stuff up. We can’t blindly trust this stuff or believe that the companies behind it have humanity’s best interests at heart.

In effect, a broad overview of generative AI wasn’t going to work here. So we’ll look at what I think is one of the most pressing concerns of this (probable) boondoggle: its impact on our capacity to think and learn.

Podcast problem

I’ve long had the idea in the back of my head that I would start my own podcast. But, beyond a couple of trial runs a decade or more ago, I haven’t really explored it, for a few reasons. I’m not sure the world really needs another podcast from the perspective of a straight, cis white dude, for one thing (this is arguably the text version of such a thing, in fairness).

But the main reason is I don’t want AI companies to scrape my voice and use it to help train their systems. That has almost certainly happened with my written work, given the thousands of pieces of my writing that exist on the internet and the cavalier attitude regarding intellectual property that these companies appear to have.

That made it more galling to read about something that Beehiiv, the platform Pastimes runs on, has started offering to subscribers on certain tiers. The company has, for a while, offered a way for users to turn their newsletters into a podcast read by an AI voice.

Now, Beehiiv newsletter operators have the option to create an AI clone of their own voice. The tool can then turn a newsletter into a podcast with an AI voice that sounds just like the newsletter writer.

That sounds pretty terrible to me! And just one of the many reasons why I think we need to treat these AI tools and systems that are popping up all over the place with a great deal of caution and a lot of skepticism.

We’re going to forget how to think for ourselves

(I love that movie, In Bruges, so very much. AI can’t love a movie. AI can’t love anything.)

I’ve been reading quite a bit recently about how huge a problem AI cheating already is in schools, colleges and universities. In the big picture, this is shaping up to be a disaster. We’re on the precipice of hundreds of thousands of people entering the workplace having used the likes of ChatGPT to complete every single piece of their work throughout their entire time in higher education.

They’ll have a piece of paper that says they graduated, but no new skills that they developed throughout college or university. Except for maybe the ability to adapt an assignment into a prompt (i.e. an instruction) for a chatbot and to tweak the output juuuust enough to thwart AI-detection tools.

For many, the most enriching part of going to college has historically been the experience, not the end result. Students spend some of the most important formative years of their lives making connections while learning how to do things, gaining important skills that will stand them in good stead for their careers. It’s not necessarily about getting the right answers, but developing the skills that can help you figure something out. 

Turning over all of that to AI could irreparably harm our collective capacity for critical thinking. For instance, the more we rely on these tools, the less knowledge we’ll have in our personal memory banks. Our ability to draw connections between a piece of new information and something that has happened in the past to reach new conclusions and make interesting points would be significantly diminished.

That’s not just me saying that. Scientific studies — including one conducted by Microsoft, a significant player in the AI landgrab, and Carnegie Mellon University — are finding correlations between how much faith one places in generative AI and how much critical thinking one applies. The Microsoft and CMU researchers wrote in their paper, which was published earlier this year, that:

Analysing 936 real-world GenAI tool use examples our participants shared, we find that knowledge workers engage in critical thinking primarily to ensure the quality of their work, e.g. by verifying outputs against external sources. Moreover, while GenAI can improve worker efficiency, it can inhibit critical engagement with work and can potentially lead to long-term overreliance on the tool and diminished skill for independent problem-solving. Higher confidence in GenAI’s ability to perform a task is related to less critical thinking effort.

When using GenAI tools, the effort invested in critical thinking shifts from information gathering to information verification; from problem-solving to AI response integration; and from task execution to task stewardship. Knowledge workers face new challenges in critical thinking as they incorporate GenAI into their knowledge workflows. To that end, our work suggests that GenAI tools need to be designed to support knowledge workers’ critical thinking by addressing their awareness, motivation, and ability barriers.

That’s pretty troubling. Critical thinking skills are going to become more important in the coming years, not less. As these AI tools greatly accelerate humanity’s ability to generate misinformation, it will be increasingly vital that we’re able to use our own knowledge and common sense to detect bullshit.

I fear that too many people will accept AI chatbots’ output at face value, even though much of it is rife with errors. Right now, many of the images and videos generated by AI systems look like junk, but the tech is becoming increasingly adept at whipping up realistic visuals with very little human input. The ability of chatbots to churn out text that reads more like a human’s work is improving very quickly too.

So now is the time that society really has to be learning how to identify and address misinformation, and understanding how to validate statements and claimed facts.

I’m reasonably confident that I can spot some of the tell-tale signs of AI-generated material. But I’m absolutely certain that I’ll be tricked by some of this stuff into believing it was created by a person sooner rather than later.

Questioning everything we see and hear may help — as someone who likes to give people the benefit of the doubt, I admit that’s often tough for me. But having a personal AI-detection radar blooping away 24/7 isn’t really possible. That won’t be an answer to these problems.

Media literacy is eroding rapidly, and I can’t see that improving. It doesn’t help that many profit-driven companies involved in AI and those in the corridors of power are undermining legitimate media outlets that hold themselves to high ethical standards.

Trust and reputation will be more essential than ever as we try to filter out AI-generated misinformation to get to the truth. Those tenets are earned over time by people doing good work with skills they have developed over years, in journalism, fact-checking organizations and beyond. Legacy media and other trustworthy sources are going to be key.

…I’m digressing a little here. AI’s impact on the media is another rant for another time.

There are arguments to be made that AI tools simply build on things we already use. I’d wager that fewer people than ever are doing long multiplication by hand when so many of us have a calculator in our pockets at all times, but I know how to do it. Similarly, we can look up just about any public information within seconds on those same devices.

Generative AI is different, though. Because we’re willing to let it carry the mental load for us.

Hi, I’m a cheater too

Students have cheated their way through college and university for centuries. Confession time: I’m no different.

In my final year of studying English at university, I took a class focused entirely on two novels, Don Quixote and (I had to look this up just now because I could not recall it) Arcadia.

I didn’t read either novel. I skipped out on some seminars to hang out with my buddy — and was marked down for my absences. I used SparkNotes and Wikipedia to find the information I needed to write my papers.

I still passed the class.

I’m not especially proud of this. In a sense, it’s a bit rich for me to rail against cheating using generative AI. But I still had to use my brain.

I didn’t turn over the cognitive part of the work to a chatbot. I used other resources and the critical thinking skills I had developed to complete my assignments.

Thanks to those skills — which I use every single day now by assembling a hodge-podge of new information and context into news stories — I’m making a good living without ever getting AI to do any of my work for me.

I’d be embarrassed to publish any AI-generated material under my name. I’m glad that Engadget and Forbes (the two outlets I write for most often) explicitly prohibit that practice. My writing and thoughts are my own, and they are informed by the research that I do.

I have no real idea of what the job market of 2030 or 2035 is going to look like. Some employers (including at ostensibly journalistic organizations) are already canning human workers in favor of AI. Either that or these employers are just using AI (which is fundamentally incapable of replacing people) as an excuse to lay off a bunch of long-tenured workers and later hire newcomers on lower pay. Students who are cheating their way through college with chatbots may find decently paying entry-level positions (ones that they might not have even had the skillset to actually do) in their field impossible to come by.

I worry that the more we allow AI to do the mental work for us, the more harm it will cause society. Generative AI tools and how we use them may well shape life both online and in the physical world for the foreseeable future, but we don’t have to simply accept what tech companies are trying to sell us on.

There’s a case to be made that it’s okay to use translation and transcription AI apps, in large part because they’ve been around for decades and are somewhat distinct from generative AI chatbots like ChatGPT and Google Gemini. For the sake of transparency, I do occasionally use translation and transcription tools.

There may come a day where I actively have to use other AI tools in order not to get left behind in my line of work. I hope that day never arrives.

(Tech companies force-feeding generative AI into pretty much every app and service they have over the last couple of years has been pretty demoralizing, largely for the reasons I mentioned at the beginning of this piece. That’s not exactly ideal for someone who makes a living writing about technology.)

(Also, Google is charging two hundred and fifty American dollars per month for access to all of its AI tools and early access to new features. lol. lmao.)

It is essential that we have the skills, ability and knowhow to actually carry out tasks ourselves and not let AI systems, which cannot actually “think” (they use math to guess the best response), always handle the cognitive work.

Relying on that could rot our brains even further — and that’s coming from someone who has spent much of the last 20 years mindlessly scrolling through junk on social media.

Most troubling of all, educators at high school, kindergarten and even pre-K levels are being pushed to adopt AI tools in their teaching and to have their students use these things that are quickly barging into so many facets of our lives.

It’s not too late to stop that, but it’s imperative that we battle back against such efforts, and convince decision makers not to go down this path, sooner rather than later. Prevention is better than any cure.

(For more on this subject, check out this great piece by Nicholas Carr of New Cartographies: The Myth of Automated Learning)

We gain joy and satisfaction from learning skills

Generative AI is having a massive impact on creative industries. Everything from animation to voice acting, and coding to making games is being upended by AI tools and people who are embracing those instead of actually learning the skills required to succeed in those fields.

This is beyond icky. It’s already putting talented people out of work. AI-generated material has no soul or meaning, but many of those at the top who are trying to increase profitability won’t care.

Generative AI is inherently anti-worker. By and large, those behind such tools are ripping off creatives’ work without their consent. Many users of AI tools are replacing the work of those creatives without having to pay for their skills and labor.

It’s worth supporting creative workers who are fighting to make sure employers don’t shut them out in favor of AI. If — and more likely, when — that happens, we’ll all be worse off for it.

Just as one example, I don’t want to listen to a song someone generated using AI. I’d rather listen to a bunch of kids playing a terrible song they recorded on a phone in a garage, three days after picking up their instruments for the first time.

Developing skills takes time. We may not have the bandwidth to develop all the skills we’d like to have, but that’s no real excuse to turn to generative AI instead.

I’d been meaning to get a logo in place for Pastimes for a while. It would have taken me maybe five minutes and very little effort to open an AI tool that can generate images, tell it my concept and choose from one of several options it spat out or keep refining the prompt until I had what I wanted.

I couldn’t bring myself to do that. Instead, I spent a bit longer putting together a logo myself, even though I have zero graphic design skills.

For the sake of posterity (and in case you hadn’t seen it), the current logo is above. It’s a “P” for Pastimes, with a clock in the center.

This, to be clear, is very much a placeholder. It’s shit, but at least it’s one made by an actual person. I do fully intend on actually paying a designer to come up with a logo and visual identity for Pastimes (if you can recommend someone talented, let me know!).

I don’t have the skills to make a pretty logo and look for this place myself. But I certainly can find someone who can do that. And I can compensate them fairly for their efforts and the work they put into developing the skills required to carry out such work.

We don’t just learn skills for professional reasons, though. We often do that just for our own gratification. That’s super important to having an authentic, lived human experience.

My mind keeps drifting back to the point I made up top, about the possibility of creating an AI clone of my voice that can turn this essay into a podcast.

Screw that. I don’t necessarily have plans to start a podcast, but I would rather record myself reading this piece than let a robot do it.

So, perhaps contrary to the advice I offered in my previous essay, I might not begin a podcast and figure it out as I go.

Or maybe I will if I can figure out a decent format for it.

My concerns about my voice and/or likeness being used to train AI are ultimately kind of moot anyway, since it could be scraped from YouTube.

In any case, I feel like I could stand to improve my verbal and presentation skills. I might not need to use them anytime soon, but having them in my pocket will certainly be helpful. I’m sure developing those skills will help boost my confidence in certain settings. All the same, they’ll be fun to learn and improve on.

I’ll wrap up by encouraging you (once again) to make things for the fun of it. Make them by yourself, with your family, your colleagues, your friends.

Share them.

Or don’t.

Just learn skills and make things. Learn by doing. That’s one way to fend off this AI scourge.

(One last piece of hard, useful advice: if you don’t want to see those horrible AI summaries in Google search results, here’s how to get rid of them: Ten Blue Links)

Okay! I tried to keep that as concise as possible while making all the points I wanted to hit. I hope I did that. There’s so much to say about this complex topic and there’s no way I was going to cover all of it. Maybe I’ll explore those other aspects of AI another time, but I’ll be turning my attention elsewhere for the next few essays at least.

I’d love to hear your feedback. You can leave a comment below, but make sure to log in first! A couple of folks tried to leave a comment last time, and it didn’t work because they weren’t logged in.

You can also reply to this email, send me an email ([email protected]) or chat with me about it in the Pastimes channel on our Discord.

If you’re reading Pastimes for the first time, please subscribe!

If you enjoyed this essay or it resonated with you in any way, please share it with someone who you think might find it interesting or on social media! Word of mouth is by far the best way for us to grow Pastimes together.

And I do mean together. Building Pastimes as a community of voices and ideas is important to me. I never want to treat it solely as my own personal pulpit or anything like that.

To that end, I’d love to hear from you if you have any questions or comments about the topic I have in mind for the next non-Connections edition of Pastimes (it’s more fun to do things together, after all). I’m aiming to send that out on June 17/18.

Unless something else comes up in the meantime that I feel more compelled to dig into, I’ll be delving into the importance of caring about the right things.

If you’d like to send me questions or comments about that topic, just hit reply if you’re reading this via email, leave a comment on the web version, send me a message on Discord or contact me at [email protected].

Have a great day! Stay hydrated! Call someone you love!

Kris xo

P.S. I’ve got a double whammy of relevant recommendations for you this time. Just as I was starting to prep for this essay last week, the maddeningly prolific and excellent Josh Johnson dropped a standup set about… AI and its place in education. Sigh.

I didn’t watch it until I finished writing everything above. But, as ever, Johnson gets his points across clearly, and in a far funnier way than I ever could. It’s very good. The man is such an inspiration:

Secondly, on the subject of making fun stuff with people you love, here’s a fun thing I did with my friends. I wrote a little bit about this for an Engadget story on YouTube’s 20th anniversary, so I can’t exactly keep it hidden from you fine folks as well.

Some [mumbles] years ago, we decided to enter an MTV contest in which participants were asked to make a music video for a Foo Fighters song called “Low.” From a list of around 50 ideas, we went with a short film about a guy (hi, that’s me) having the worst day of his life.

It didn’t make a whole lot of sense narratively. The editing isn’t great. That’s entirely my fault — quite embarrassing for someone who has legitimately won a filmmaking award. The name of our would-be production company is deeply regrettable too. But the video was so much fun to make. It’s still one of the things I’m most proud of.

We might never make another project like this together, but I’m still really looking forward to catching up with all these dorks in person soon:
