
State Legislator Provides Framework for Future AI Legislation

State Rep. Giovanni Capriglione detailed legislation to be introduced in the upcoming legislative session at the state Capitol on Tuesday.

State Rep. Giovanni Capriglione (R-98) hosted the first AI stakeholder meeting at the Capitol on Tuesday, where he outlined a framework developed by the Innovation and Technology Caucus of the Texas Legislature (IT Caucus) for legislation regarding artificial intelligence to be introduced in the upcoming 89th legislative session.

Capriglione first noted that a report from every state agency detailing all AI tools or software being used internally will soon be due to the IT Caucus to provide a better understanding of how the technology is being used in the state.

The representative emphasized the need for a comprehensive regulatory framework for AI in Texas, highlighting the importance of protecting individual rights and privacy, creating a level playing field for responsible AI actors and addressing risks associated with AI.

Capriglione shared a truncated version of the results of a request for information (RFI) released in January regarding legislative considerations surrounding AI, which received 47 responses.

Respondents broadly agreed that regulatory frameworks should be limited to high-risk applications with the potential to affect individual rights, most commonly identified as automated decision-making systems and foundational models. Respondents also expressed a preference for a single enforcement agency while rejecting any potential licensing mechanism or private right of action. Whether that enforcer would be the attorney general or an AI commission has not been determined.

Responses included varied opinions on existing legislation in other countries and states, although most aligned with the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF) and the Organisation for Economic Co-operation and Development (OECD) AI Principles.

Capriglione then listed unacceptable risks associated with AI to be addressed in the framework that have already been defined, which include the manipulation of human behavior to circumvent free will, exploiting vulnerabilities, social scoring, untargeted facial recognition, emotional recognition, biometric categorization and child sexual abuse material.

He also noted high-risk automated decision-making systems (HR ADMS) and other AI tools used to make consequential decisions, such as financial and lending services, education enrollment, employment opportunities, or criminal justice, as areas requiring additional oversight.

In the proposed framework, deployers of HR ADMS would be required to follow a risk management policy that aligns with the NIST RMF, submit a consumer notice, report any discovered harm and comply with an impact assessment.

HR ADMS developers would be required to provide a statement of intended uses to deployers and report any discovered harm and may submit an impact assessment on behalf of a deployer.

Foundational models and generative AI models would be required to adhere to many of the same policies, in addition to providing a statement of intended uses that details the source of training data.

All AI systems would be required to either request consent from or provide notice to consumers interacting with them, and to disclose prompt transformations when submitted prompts are altered before generation.

A first draft of the proposed legislation is expected to be completed this summer, with a second draft to follow in the fall and the final bill in November.

Chandler Treon is an Austin-based staff writer. He has a bachelor's degree in English, a master's degree in literature and a master's degree in technical communication, all from Texas State University.