AI Is Already Changing Bay Area College Campuses, Courses

“I realized my students have to have something to talk about in interviews with companies, and be able to talk about AI,” said San Jose State professor Ahmed Banafa.

Artificial intelligence is poised to rapidly change everything from how we drive to how we live, work and play. And in the Bay Area, where the nascent industry is largely based, it is already changing how, and what, universities teach.

Professors at schools across the region said they are increasingly using AI to solve classroom tasks and administrative snags while making familiarity with the technology a requirement for their students. And, they said, students across all sorts of disciplines — not just technical fields — are showing much more interest in how AI affects their chosen area of study.

From Stanford to UC Berkeley, San Francisco State to San Jose State, pupils studying law, philosophy, medicine, and other topics are being drawn to understand AI and its effect on their subjects. In response, schools are offering new classes on how to incorporate the technology into everything from accounting to filmmaking to ethics.

San Francisco State computer science professor Dragutin Petkovic has offered a certificate in ethical and explainable AI since 2019, taught alongside courses in philosophy and business. He said the release of ChatGPT last year and the boom in interest in generative programs that can seemingly write text and create images has changed who takes his class — and what’s discussed.

“There is much more interest in the last year,” Petkovic said, adding that the class was about half computer science students during the last semester, with the other half a mix of philosophy students and those from other disciplines.

“This year I have a lawyer, a dissident from Iran, and a former employee of OpenAI,” among others, he said.

Even his software engineering courses, which challenge students to come up with and code different kinds of applications, have incorporated generative software to write and edit code, Petkovic said. He allows his students to use the technology to help create their programs, but requires them to author papers on the experience and its comparative benefits versus traditional methods.

One of Petkovic’s students, John Kongtcheu, used an AI tool called GitHub Copilot to help with coding his final project, having used AI programs in the past to work on software projects. “I use it like a buddy,” Kongtcheu said. He said it was particularly useful to complete time-intensive tasks more quickly and for editing code he has already written.

Another of Petkovic’s students, Brandon Watanabe, said he’ll sometimes drop a chunk of unfamiliar code into an AI engine and ask what it is for, and get a response written in fluent English instead of having to pull it apart himself or scroll online message boards.

When the AI certificate was first offered four years ago, “the problem was quite new,” said San Francisco State professor Carlos Montemayor, who teaches the philosophy section and just published a book, “The Prospect of a Humanitarian Artificial Intelligence,” that he is also using in his course.

“The majority of students I had four years ago were a little bit more siloed,” Montemayor said. Now, he teaches philosophy students curious about AI — as well as computer science students curious about the philosophy of AI, among others.

San Francisco State also offered courses this academic year that it did not offer the year before. Among those were classes on using AI, blockchain technology and automation in accounting, and another called Filmmaking in the Age of AI.

Stanford also has added courses, including one on medical applications of generative AI programs such as ChatGPT and image generator DALL-E, where students pitch and brainstorm applications for the technology in the field to peers and faculty.

New and different course offerings and increased interest in them aren’t the only changes being wrought by the technology at the university level.

UC Berkeley professor Narges Norouzi started encouraging her computer science students this fall to take advantage of OpenAI’s GPT-4, albeit in a limited way, to solve knotty coding challenges. The technology’s AI-generated coding hints can help students push through tough problems and get faster results.

The effort is still in the testing phase, mainly in her introduction to coding class, Norouzi said, and does not give students access to the full-on chatbot. Instead, a program she fashioned using the technology allows students to click a button that then prompts GPT-4 for a hint based on the material they are working on and, increasingly, what the program knows about the class and student learning patterns.
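The article doesn’t describe how Norouzi’s tool is built, but the flow it outlines — a button click that bundles the student’s current work and class context into a request for a nudge rather than an answer — can be sketched roughly as follows. This is a minimal illustration in Python; every function and field name here is hypothetical, not from the actual tool.

```python
# Hypothetical sketch of a "hint button" backend: assemble a chat request that
# combines course context with the student's in-progress work and instructs the
# model to give a hint, not a solution. Names are illustrative only.

def build_hint_messages(problem: str, student_code: str, course_context: str) -> list[dict]:
    """Assemble chat messages that ask for a hint rather than a full answer."""
    system = (
        "You are a course assistant. Reply with one short hint that helps the "
        "student make progress. Never give the complete solution."
    )
    user = (
        f"Course context: {course_context}\n"
        f"Problem: {problem}\n"
        f"Student's code so far:\n{student_code}"
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

msgs = build_hint_messages(
    problem="Sum the even numbers in a list.",
    student_code="total = 0\nfor n in nums:\n    total += n",
    course_context="Week 3: loops and conditionals.",
)
# In a real tool, these messages would then be sent to the model, e.g. with
# the openai package: client.chat.completions.create(model="gpt-4", messages=msgs)
```

Keeping the guardrail (“hint, not solution”) in the system message, rather than trusting the student-facing UI, is one plausible way to match the limited access the article describes.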

Students have been critiquing the coding hints, which Norouzi said will be incorporated into future versions of the tool through a process called reinforcement learning from human feedback. Over time, it will be able to increasingly understand how students learn and generate better hints.
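The collection side of that feedback loop is simple to picture: each hint is logged alongside the student’s rating of it, producing the preference data that reinforcement learning from human feedback consumes. A toy sketch, with entirely hypothetical structures:

```python
# Hypothetical sketch of logging student ratings of hints. Pairs of rated
# hints become the preference data used in RLHF-style training; nothing here
# is from the actual Berkeley tool.
from dataclasses import dataclass

@dataclass
class HintFeedback:
    prompt: str   # what the tool asked the model
    hint: str     # what the model returned
    rating: int   # e.g. 1 = helpful, 0 = not helpful

feedback_log: list[HintFeedback] = []

def record_feedback(prompt: str, hint: str, rating: int) -> None:
    """Append one rated hint to the running log."""
    feedback_log.append(HintFeedback(prompt, hint, rating))

record_feedback("Problem: reverse a list", "Think about iterating backward.", 1)
record_feedback("Problem: reverse a list", "Use reversed().", 0)
# Contrasting ratings on hints for the same prompt indicate which style of
# hint students prefer, which is exactly what preference-tuning needs.
```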

Another program still in the testing phase that Norouzi hopes to incorporate into classes next year will feed GPT-4 with information from course texts and an online student Q&A portal, called Edison. The goal is for the program to learn from that material and be able to speedily answer many of the thousands of questions students pose to Edison every semester.

One data course has roughly 1,300 students, Norouzi said, and teaching assistants can be heavily burdened with answering every question that pops up on Edison.

“We’ve been working on how to give the (model) the right context,” Norouzi said. That means training it to know whether to reference a course text or a homework description or the syllabus to see whether a question has already been asked and answered.
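That routing step — deciding whether a question belongs to the textbook, a homework description, or the syllabus — can be illustrated with a toy keyword-overlap matcher. The real system presumably uses the model itself or embeddings for this; the code below is only a sketch of the idea, with made-up names and sample text.

```python
# Hypothetical sketch of routing a student question to the most relevant
# course source before prompting the model. A real system would likely use
# embeddings or the model itself; keyword overlap just illustrates the idea.
import re

def pick_context(question: str, sources: dict[str, str]) -> str:
    """Return the name of the source sharing the most words with the question."""
    q_words = set(re.findall(r"\w+", question.lower()))

    def overlap(text: str) -> int:
        return len(q_words & set(re.findall(r"\w+", text.lower())))

    return max(sources, key=lambda name: overlap(sources[name]))

sources = {
    "syllabus": "grading policy late days exam dates office hours",
    "homework": "implement a linked list reverse function submit to the autograder",
    "textbook": "recursion base case call stack time and space complexity",
}
best = pick_context("When are the exam dates and what is the late policy?", sources)
# The chosen source's text would then be placed in the model's context window
# so the answer is grounded in course material rather than the model's guesswork.
```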

She said a related tool that she plans to make available to students next semester can perform similar context queries of course materials, but with an eye toward shortening the long waits that can crop up when students sign up for office hours for one-on-one time with professors.

“The queues get as long as four hours’ wait time,” Norouzi said. So she has created a chatbot based on GPT-4 versed in the course material that students can converse with, potentially solving their problem without having to wait in line.

San Jose State professor Ahmed Banafa has made familiarity with artificial intelligence a requirement for passing his courses.

“The minute ChatGPT came out ... I realized immediately this is different,” said Banafa, who teaches classes in computer networking, operating systems, cybersecurity and other topics. He said that since ChatGPT’s release, he has required undergraduate senior projects and the work of the master’s degree students he supervises to have an AI component, be it a chatbot to interface with an app that a student built or some other kind of integration with OpenAI’s software.

“I realized my students have to have something to talk about in interviews with companies, and be able to talk about AI,” Banafa said.

Campuses, particularly public universities, also have a broader role to play in parsing what it means that AI is likely here to stay, beyond just incorporating the technology into a range of classes.

“There is a larger debate that is just starting,” said Montemayor, the philosophy professor. While some believe AI should be incorporated into all aspects of life, others are concerned about its ability to concentrate power and wealth in the hands of a few executives and large companies and its potential effect on the environment and the workforce of the future, he said.

“Most of the people that I’ve seen talk at Stanford (about AI) work at Google or are very close to Silicon Valley,” and tend to express the dominant voice of the AI industry, Montemayor said.

“The voices of public institutions really need to be considered as a way of balancing” those two sides, he said.

(c)2023 the San Francisco Chronicle. Distributed by Tribune Content Agency, LLC.