Commentary: Getting Around the Real Challenges of Artificial Intelligence

“Just as with any Internet search, one must validate the sources behind an AI response,” writes senior IT analyst Ben Palacio. “Would you really believe just anything on the Internet? Should AI be treated any differently? (By the way, the answer to both is no!)”

With AI being the hot-ticket item in government technology, it has become apparent that many people lack a clear understanding of what the technology actually involves. There are a few things to keep in mind when discussing AI: context, data models, training, accuracy and hallucinations.

Let me start by setting the stage: I have been working in the AI/machine learning realm since late 2018, on a project called “Ask Placer.” It was an innovative project for its time, with integrations for Google Assistant, Alexa and a website, all drawing on the same cloud-based data storage. The underlying architecture of such systems has not changed significantly since then.

AI still relies heavily on context to produce accurate results, just as it did in that 2018 project. Most AI products run in session-like states: when the topic changes, the session must be reset, which refreshes the context stream.
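
To make that concrete, here is a minimal sketch in Python of how a session-style assistant might keep a rolling context and drop it when the topic changes. The class and method names are hypothetical illustrations, not any particular vendor's API.

```python
# Minimal sketch (hypothetical, not any specific vendor's API) of a
# session-style assistant: it accumulates a rolling context during a
# session and clears it when the topic changes, which is what
# "refreshing the context stream" looks like in practice.

class AssistantSession:
    def __init__(self, topic: str):
        self.topic = topic
        self.history: list[str] = []   # rolling context for this session

    def ask(self, user_message: str) -> str:
        self.history.append(user_message)
        # A real model would see the whole history, not just the latest
        # message, so earlier turns shape the answer.
        prompt = "\n".join(self.history)
        return f"[answer built from {len(self.history)} turns, {len(prompt)} characters of context]"

    def change_topic(self, new_topic: str) -> None:
        # Switching topics resets the accumulated context; the next
        # question starts from a fresh session state.
        self.topic = new_topic
        self.history.clear()


session = AssistantSession(topic="building permits")
print(session.ask("What permits do I need for a backyard shed?"))
print(session.ask("How long does approval usually take?"))  # benefits from the prior turn
session.change_topic("property taxes")                       # context reset
```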

The big problem I see coming in AI is that agencies cannot own the training models, and there is no clear path to standardizing them across platforms. The reason this matters is the startup time a good AI implementation requires. For example, with a low user base of about 100 users per day, it takes roughly six months of live interactions before the models are trained to an acceptable level of accuracy. The issue, if you have not already spotted it, is what happens when an agency switches solutions midstream, which is a common practice. Each time a solution is replaced or a new one is added, it takes about another six months before the new product becomes reliably accurate. There are load-based strategies that can shorten spin-up, but these are usually limited to high-end, custom projects that include load testing. In smaller solutions with more targeted AI streams, or in products that ship with pre-defined, pre-trained models, this may not be a significant problem.
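
The arithmetic behind that spin-up estimate is simple; the short sketch below just multiplies the figures from the paragraph above, and the actual number of interactions needed for acceptable accuracy will vary by product.

```python
# Back-of-envelope view of the spin-up problem described above.
# The inputs (100 users/day, roughly six months) come from the article;
# the result is simply their product, not a benchmark.

users_per_day = 100
spin_up_days = 180            # roughly six months of live traffic

interactions_before_accuracy = users_per_day * spin_up_days
print(f"~{interactions_before_accuracy:,} interactions before accuracy is acceptable")
# ~18,000 interactions, and that counter effectively restarts every time
# the agency switches to a new solution, unless the new product ships
# with pre-trained models.
```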

With bigger solutions, another major issue is data preparation. A chatbot on an agency website, for example, needs the content of that site in order to provide responses, and that content must be collected and prepared before the AI can even attempt to process requests. This also points to the need to verify and maintain the accuracy of the data an agency posts in various places on the web.
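
As a rough illustration, the Python sketch below (using a made-up site and page content) shows what that preparation step looks like: collecting page text, cleaning it, and keeping it keyed by source URL so every answer can be traced back to the page it came from. A real pipeline would crawl the live site and do far more cleaning than this.

```python
# Minimal sketch of the data-preparation step for a website chatbot.
# The URLs and page content are placeholders; in a real pipeline the
# pages would be crawled from the agency's site.

import re

raw_pages = {
    "https://example.gov/permits": "<h1>Permits</h1><p>Apply online or in person...</p>",
    "https://example.gov/taxes":   "<h1>Taxes</h1><p>Property tax bills are mailed in October...</p>",
}

def clean(html: str) -> str:
    """Strip markup and collapse whitespace -- crude, but it shows the step."""
    text = re.sub(r"<[^>]+>", " ", html)
    return re.sub(r"\s+", " ", text).strip()

# The chatbot's knowledge base: cleaned text keyed by its source URL, so
# each response can be traced back to (and verified against) its page.
knowledge_base = {url: clean(html) for url, html in raw_pages.items()}

for url, text in knowledge_base.items():
    print(url, "->", text[:60])
```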

As for accuracy and hallucinations: What is an AI hallucination? Simply put, a hallucination is when an AI product presents incorrect or fabricated information in response to a user’s request. In a recent review of Microsoft’s new Copilot (try it out: https://copilot.microsoft.com/), I noticed something other AI solutions do not do: it provides what Microsoft calls “Learn more” links at the end of the response stream, pointing to the data sources used in the response. One can only assume this might help reduce the possibility of hallucinations, or at least make them easier to catch, since the reader can check the source directly.
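
The idea behind those source links can be sketched in a few lines of Python: keep track of which pages an answer was drawn from and return them with the response, so the user can verify the claim instead of taking it on faith. This is an illustration of the concept only, not how Microsoft Copilot is actually built; the keyword match stands in for real retrieval.

```python
# Hedged sketch of answering with source links attached.
# Naive keyword matching stands in for real retrieval; the point is that
# the matched URLs travel with the answer so the user can check it.

from dataclasses import dataclass, field

@dataclass
class Answer:
    text: str
    sources: list[str] = field(default_factory=list)   # URLs the answer was grounded in

def answer_with_sources(question: str, knowledge_base: dict[str, str]) -> Answer:
    matches = [url for url, text in knowledge_base.items()
               if any(word in text.lower() for word in question.lower().split())]
    text = "Based on the pages below: ..." if matches else "I could not find a source for that."
    return Answer(text=text, sources=matches)

kb = {"https://example.gov/permits": "apply for a building permit online or in person"}
result = answer_with_sources("How do I apply for a permit?", kb)
print(result.text)
for url in result.sources:
    print("Learn more:", url)
```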

One thing is for sure, though: Just as with any Internet search, one must validate the sources behind an AI response. Would you really believe just anything on the Internet? Should AI be treated any differently? (By the way, the answer to both is no!)
Benjamin Palacio is a Senior IT Analyst on the ESSG-Enterprise Solutions Team in the Placer County Information Technology Department and is a CSAC-credentialed IT Executive. The views expressed here are his own. He may be reached at ben.palacio@gmail.com.