The Lumberjack



Students Serving The Cal Poly Humboldt Campus and Community Since 1929

Tag: artificial intelligence

  • Cal Poly Humboldt embraces ChatGPT

    by Carlina Grillo

    In recent years, ChatGPT and other generative artificial intelligence (AI) models have become mainstream, leading many people to wonder whether this is something to fear or something to embrace. Students and faculty at Cal Poly Humboldt have been testing the waters and, despite mixed reviews, it is apparent that ChatGPT is here to stay.

    “I do think that there is a role for ChatGPT and gen AI models in the classroom. However, I think my recommendation for its use varies according to the level of the student,” said Dr. Sherrene Bogle, associate professor of computer science and software engineering program lead. “As higher educators, we have a responsibility to teach our students how to use the technology effectively.”

    “ChatGPT is a tool applied in academia for diverse research like linguistics, psychology and computer science. It aids in hypothesis generation, language analysis and ethical exploration. Used in education and interdisciplinary studies, it’s showcased in conferences, with further advancements documented in recent academic publications,” according to ChatGPT itself.

    Bogle first heard about ChatGPT last year and, although reserved at first, has since embraced it. Computer science students at Cal Poly Humboldt were offered an elective course this semester focused on AI, CS 480, taught by Bogle. One lab exercise she assigned was to use a selection of AI tools such as ChatGPT, Bard or Claude and find examples of where the programs hallucinate.

     Noah Zerbe, a professor in the department of politics, defined what the term ‘hallucinate’ means for generative AI programs.

     “Sometimes [generative AI] will just make things up for you,” said Zerbe. “Often, one of the problems a lot of people have noted is that it makes up citations. So, it’ll say that a quote comes from this source, or something like that, and that source may not even exist. Or, it can go down a rabbit hole of factually incorrect information.” 

    One mainstream example of ChatGPT ‘hallucinating’ came when Colorado lawyer Zachariah Crabill was fired from Baker Law Group after he filed legal documents containing fake scenarios that ChatGPT had invented.

    Another example of a generative AI model ‘hallucinating’ took place in the Cal Poly Humboldt AI class. According to Bogle, two students were able to convince an AI chatbot that they had lunch with Michael Jackson and Tupac. 

    “They just said a few things, changed the prompt, and the technology apologized for saying that these people were dead and wanted to hear more about their conversation and was basically adding that to its database,” said Bogle.

    As an advocate for programs like ChatGPT in academia, Zerbe encourages students to explore the program, but introduces generative AI models into the classroom with a warning.

    “My approach to it is really centered on teaching students how to use it effectively,” said Zerbe. “[If] it’s providing you feedback on your paper, trust it as much as you would trust feedback that you got from a friend down the hallway.”

    Professors aren’t the only ones exploring this new territory with caution. Maddie Haus, a junior at Cal Poly Humboldt majoring in environmental studies, was suspicious at first. 

    “I was stoked because it helped me so much, but I also found it scary because it seemed too good to be true,” said Haus. “I think it can be useful for achieving academic success. I do think there’s a thin line between doing the work yourself and just having ChatGPT do the work for you. Overall though, I think it’s a useful tool.”

    Mikey Crispin, a Cal Poly Humboldt graduate who now works for the PBLC and the university as a scheduling and support analyst, has used ChatGPT on and off. 

    “I didn’t really use AI for school because it didn’t explode until last November,” Crispin said. “By that time I was only taking the upper level computer science classes and could get all the info from professors.”

    Now that he’s graduated, Crispin uses ChatGPT mainly for troubleshooting while writing software. 

    “GPT doesn’t always know what you’re trying to do, but from a basic coding perspective, if you’re giving it info just to cover some gaps you might have, then it is a great assistant that can help you gain a better understanding of how something might work,” Crispin said. “I have no experience asking GPT for help for anything outside of a technological viewpoint. It did write a really good email for me once in the style of a Cat in the Hat book, so that was cool.”

    All in all, ChatGPT, along with other generative AI programs, seems to be the future of education, but using AI as a crutch is inevitably harmful.

     Bogle illustrates her perspective on generative AI with an analogy.

     “If you think about it, we don’t want our elementary school students and kindergarten students to be using a calculator, because we want them to learn to count, to carry their hundreds, tens, and ones, etc., to be able to do long division,” Bogle said. “But, after they have mastered being able to do their basic arithmetic with pencil and paper, we’re comfortable with them using the calculator. So, it’s similar with gen AI, which is why I said I think students need to master honing the particular skill first, before they become reliant on gen AI. That way, I think we have the best of both worlds.”

  • Artificial Intelligence Generates Real Jokes on Twitter


    Creators say we shouldn’t worry about being replaced yet

    There is a new breed of bot accounts coming to Twitter, but these aren’t put there by Russia or the CIA or whoever else is trying to influence an election. They’re novelty accounts, posting large quantities of tweets that mimic the style of existing users.

    Twitter user @kingdomakrillic runs one of these accounts. He asked to be referenced only by his Twitter account. His parody account, @dril_gpt2, sends out a new tweet in the style of @dril several times a day. @dril is a somewhat mysterious, absurdist comedy account that posts jokes from behind the pseudo-anonymity of a profile image of an incredibly blurry Jack Nicholson. @kingdomakrillic explained his reasoning for choosing @dril to imitate.

    “I wanted to do a GPT-2 bot of someone who was both famous and whose voice on Twitter was near-exclusively comedic,” he says. “If I did, say, a Trump bot, the only humor would come from the novelty of a bot generating Trump-like tweets.”

    These imitation @dril tweets can be shockingly on-brand yet original at times. It’s not uncommon to see replies wondering if the tweets from the account are still created by a bot.

    @kingdomakrillic assures me the tweets are bot-written but hand-selected.

    “Curating the tweets is like DJing. I pace the content out, placing tweets I’m sure are funny next to ones I’m more uncertain about,” @kingdomakrillic says. “Sometimes I screw up. It’s a skill, not 1/10th of the skill that goes into actually writing tweets like dril’s, but it’s still something I need to improve on. There’s no excuse to post duds when you can output infinite text.”

    That infinite text doesn’t come from nowhere. It comes from GPT-2, a language model created by OpenAI, a research group with a focus on machine learning.

    Sherrene Bogle is a computer science professor at HSU with experience using machine learning. Conceptually, teaching an algorithm how to do something is a lot like teaching a person. Bogle uses the example of teaching an algorithm to recognize whether a bird is in the foreground or the background of an image. First the algorithm is given a set of bird pictures that are already labeled as to whether the bird is in the foreground or background, allowing it to figure out the differences. Then it’s given unlabeled bird images, where it looks for those same differences. The difference between a human and a machine doing this task is that the machine doesn’t actually understand what it’s doing. The machine simply recognizes patterns.
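    Bogle’s labeled-then-unlabeled workflow can be sketched in a few lines of Python. This is only a toy illustration: the single numeric “feature” (the fraction of the image the bird fills) and all of the numbers are invented, and real image classifiers learn far richer patterns than one threshold.

```python
# Toy sketch of supervised learning: learn a pattern from labeled
# examples, then apply that pattern to unlabeled ones.
# The "feature" and every number here are invented for illustration.

# Step 1: labeled training data -> (feature, label) pairs.
labeled = [
    (0.45, "foreground"), (0.60, "foreground"), (0.52, "foreground"),
    (0.08, "background"), (0.12, "background"), (0.05, "background"),
]

# Step 2: "training" -- find a threshold that separates the classes.
fg = [f for f, lab in labeled if lab == "foreground"]
bg = [f for f, lab in labeled if lab == "background"]
threshold = (min(fg) + max(bg)) / 2  # midpoint between the two groups

# Step 3: classify unlabeled images using the learned pattern.
def classify(feature):
    return "foreground" if feature >= threshold else "background"

for f in [0.55, 0.09]:
    print(f, classify(f))
```

The machine never “sees” a bird; it only compares a number against a learned cutoff, which is the pattern-recognition point Bogle makes above.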

    Instead of looking for where birds are in pictures, GPT-2 looks for patterns in text. Its job is to predict not just the next word, but the next couple of paragraphs. GPT-2 is so good at this task that it can make paragraphs of human-readable text after being given only a handful of words. The output text can be about anything, but in order to generate text that mimics the style of a Twitter user, programmers need to retrain the model.

    “The potential for harm is less than current human bad actors.”

    Max Woolf

    @kingdomakrillic says he retrained GPT-2 on 9,500 tweets, totaling about 750 kilobytes. This fine-tuning narrows a model originally trained on almost 40 gigabytes of data down to a much simpler task. The simpler the task, the better an AI can imitate it. Imitating tweets is simple, and with GPT-2’s vast capabilities, imitation yields good results.

    There is also @kingdomakrillic’s curation, which gives many of his followers the impression that the AI is better than it really is.

    Max Woolf is a data scientist at BuzzFeed, and the person responsible for making these Twitter bots so easy to create. He built a tool, called GPT-2 Simple, to easily retrain GPT-2 with any new data, such as tweets, and wrote an accompanying tutorial. Some people think AI is a threat to humanity, but Woolf says otherwise.

    “The potential for harm is less than current human bad actors,” Woolf says.

    @kingdomakrillic agrees with this sentiment.

    “Some people get freaked out at the fact that GPT-2 can produce sentences that have humanlike coherence, but are made with no meaning or intent on the bot’s part besides to imitate how humans write,” he says. “Markov chains, Madlibs, autocompletors, exquisite corpses—they’re also capable of creating coherent text with the illusion of intent. They’re just not mysterious black box programs like GPT-2.”
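    The Markov chains mentioned in that quote can be written in a handful of lines, which makes the point concrete: coherent-looking text can come from nothing but counted word pairs. The toy corpus below is invented for illustration, and the program has none of GPT-2’s sophistication — only observed transitions between words.

```python
import random
from collections import defaultdict

# Minimal word-level Markov chain: record which word follows which,
# then generate text by sampling those transitions. It has no
# understanding or intent, only word-pair frequencies.
corpus = (
    "the bot writes tweets . the bot imitates style . "
    "the style imitates humans ."
)

# "Training": count which words follow each word.
transitions = defaultdict(list)
words = corpus.split()
for current, following in zip(words, words[1:]):
    transitions[current].append(following)

def generate(start, length=8):
    out = [start]
    for _ in range(length - 1):
        choices = transitions.get(out[-1])
        if not choices:
            break  # dead end: no observed follower for this word
        out.append(random.choice(choices))
    return " ".join(out)

print(generate("the"))
```

Every adjacent word pair in the output was seen somewhere in the corpus, so the result reads as vaguely sensible even though nothing was “meant” by it — the illusion of intent @kingdomakrillic describes.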

  • Big Brother is still watching you


    Personalized ads, location tracking services and obsessive use of social media. Technology is on track to outgrow human intelligence as it continues to embed and spread itself throughout our increasingly globalized society. In recent years, George Orwell’s “1984,” a novel about a dystopian world of mass surveillance, has become our reality. What feeds Big Brother’s insatiable desire for global brainwashing and espionage is our growing dependence on technology.

    The current world population of 7.6 billion is expected to grow to 9.8 billion by 2050, ensuring a steady supply of future consumers to carry on the dependence on technology. Overpopulation, combined with ever more smartphones, computers and other surveillance devices, means there will be more documentation of our private lives.

    Digital technologies are woven into our classrooms, offices and personal lives. We rely on them for communication, GPS and a myriad of other apps that make our lives easier. But Big Brother is tracing everything we do on these devices. Information is mined, processed and sent to ad agencies to seduce us with products we don’t need. Moreover, consumerism distracts us from the issues happening all around us every single day. A population of latent minds is exactly what Big Brother wants.

    “We are governed, our minds are molded, our tastes formed, our ideas suggested, largely by men we have never heard of,” said propaganda expert Edward Bernays.

    This is happening without our consent and with little regard for us. Civilization is full of sheeple conforming to Big Brother’s values, agenda and desires. The future isn’t looking so bright either. Some futurists predict artificial intelligence will come to dominate the human race. Unless tenacious, drastic and global measures are taken, we will inevitably succumb to its irresistible powers. Until then, Big Brother is still watching you.