
Commentary: Reducing the Risk of AI Isn’t an Impossible Mission

The further we travel into the future, the more one question will keep arising: Is AI dangerous?

The simple answer is no: we are far away from any AI that can cause the kind of turmoil seen in the recent movie Mission: Impossible — Dead Reckoning Part One. The AI antagonist in the movie, called the Entity, is a far stretch from anything the world has today at any level. The largest-scale models and AI engines require warehouses full of supercomputers to get even close to something like what the movie portrays as fitting into a small part of one submarine.

The AI technology and LLM modeling we are using in products today can be viewed as "Theory of Mind" AI, which, according to a Forbes article from 2019, is only the third level of a seven-tier AI structure the article describes. I believe we are at this level, which Forbes defines as being able to decipher the "needs, emotions, beliefs, and thought processes" of the individual interacting with the AI product. Forbes calls this "artificial emotional awareness"; others might call it emotional intelligence.

Much to my surprise, the Placer County chatbot, which recently underwent an AI enhancement, can now decipher people's emotions. The AI-driven bot is able to determine an individual's intent and apply a more compassionate response. For example, it might respond, "I am sorry about your situation, here is some information about where you can find support for homelessness." The first part of that response, "I am sorry about your situation," tells me the bot was able to decipher that the person was asking about homelessness, a stressful situation when it happens, and it provided a more human-like response.
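
To make the idea concrete, here is a minimal, purely hypothetical sketch in Python of how such a response could be assembled: a keyword-based intent check plus an empathetic lead-in for sensitive topics. The intents, keywords and templates are assumptions for illustration only; the actual Placer County implementation is not described here and almost certainly works differently.

# Hypothetical sketch: detect a sensitive topic and prepend an empathetic phrase.
# This is NOT the Placer County implementation; it only illustrates the idea.

EMPATHY_PREFIX = "I am sorry about your situation. "

# Assumed intent keywords and canned resource responses (illustrative only).
INTENTS = {
    "homelessness": (
        ["homeless", "shelter", "eviction"],
        "Here is some information about where you can find support for homelessness.",
    ),
    "utility_help": (
        ["utility bill", "power shutoff"],
        "Here is information on utility assistance programs.",
    ),
}

# Topics that warrant a more compassionate, human-like lead-in.
SENSITIVE_INTENTS = {"homelessness", "utility_help"}

def respond(message: str) -> str:
    text = message.lower()
    for intent, (keywords, answer) in INTENTS.items():
        if any(keyword in text for keyword in keywords):
            prefix = EMPATHY_PREFIX if intent in SENSITIVE_INTENTS else ""
            return prefix + answer
    return "I can help with county services. Could you tell me more about what you need?"

print(respond("I just became homeless and need help"))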

I want to back up a step to the idea of a recent enhancement adding AI. In the case of the Placer County chatbot, this was not a concern, since it deals with 100 percent public data. However, there could be more concern when dealing with internal systems and integrations that carry specific regulations and security requirements, such as PII, HIPAA or CJIS data classifications.

There are a lot of discussions around procuring AI products and how to implement best practices, standards and policy. What's really needed, however, is an evaluation path for products that have no AI component at the time of purchase. The concern is a non-AI system running in production that was never evaluated for AI security or policy, and that then undergoes an update that suddenly adds an AI component without evaluation. This may seem harmless, and many companies might not even feel a need to disclose the change. Unfortunately, it not only adds risk to the agency using the product, it also adds risk and liability, perhaps unknowingly, to the vendor that creates it.

The procurement process might need a clause stating that if a non-AI product ever adds a new feature that includes AI, it triggers a re-evaluation of the product for security and risk. This is the only way I can see to mitigate the risk of accidental data breaches.

It also brings to light the need to better classify data within networks and internal systems. Vendors should take greater care and responsibility in addressing these types of updates, and not "slip" AI into an environment without the customer's knowledge. Another problem is that many AI web services have still not been analyzed for security and should be used only with publicly accessible data. Any information vendors disclose to agencies about their use of AI should be considered beneficial and will help support the changes. For example, the solution might already be FedRAMP-certified or on the federal General Services Administration's purchasing list. That certification requires a significant amount of analysis by the federal government and indicates the solution has been determined to be secure and low-risk.
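
To illustrate both ideas, the re-evaluation trigger and classification-based gating, here is a rough Python sketch. The classification labels, the keyword scan of release notes and the rule that only public data may reach an unreviewed AI web service are all assumptions made for this example, not an established standard or any agency's actual process.

# Illustrative sketch only: flag vendor updates that appear to introduce AI so
# they trigger a security re-evaluation, and gate AI services on data classification.

from dataclasses import dataclass

@dataclass
class SystemRecord:
    name: str
    data_classification: str   # highest classification of data the system handles (assumed labels)
    ai_evaluated: bool         # has the AI component passed a security and risk review?
    fedramp_authorized: bool   # e.g., already vetted through a federal authorization

AI_TERMS = ("artificial intelligence", "machine learning", "llm", "chatbot", "ai-powered")

def update_adds_ai(release_notes: str) -> bool:
    # Naive keyword scan of vendor release notes; a real process would rely on
    # vendor disclosure, not string matching.
    notes = release_notes.lower()
    return any(term in notes for term in AI_TERMS)

def may_use_ai_service(system: SystemRecord) -> bool:
    # Assumed rule: an unreviewed AI web service may only touch public data.
    if system.ai_evaluated or system.fedramp_authorized:
        return True
    return system.data_classification == "PUBLIC"

permits = SystemRecord("permit-portal", "PII", ai_evaluated=False, fedramp_authorized=False)
notes = "Version 4.2 adds an AI-powered assistant for case lookups."

if update_adds_ai(notes) and not permits.ai_evaluated:
    print(permits.name + ": update adds AI; trigger a security and risk re-evaluation")
print(permits.name + ": unreviewed AI service allowed? " + str(may_use_ai_service(permits)))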

After all, AI at this level is here to stay — there is no turning back now. From here, we all must try to collaborate to make it work in the safest manner to enhance products, not forgetting security or compliance, and to mitigate risk. In return, the hope is that this will reduce the probability of breaches and data loss.
Benjamin Palacio is a Senior IT Analyst on the ESSG-Enterprise Solutions Team for the Placer County Information Technology Department and is a CSAC-credentialed IT Executive. He is also an expert in public sector AI, chatbot, and integration techniques. The views expressed here are his own. He may be reached at ben.palacio@gmail.com.