AIF Coalition for the Future of AI in Business Shares Policy Recommendations

One of the coalition’s recent reports focuses on some of the state’s AI-related legislation and offers recommendations on Florida’s approach to artificial intelligence regulation.

In one of its recent reports, the AIF Coalition for the Future of AI in Business has outlined several recommendations regarding the state’s approach to artificial intelligence regulation.

The coalition, launched last year by the Associated Industries of Florida, aims to “develop guidelines for accountable and innovative AI policies” to “educate and engage with policymakers to ensure a responsible regulatory structure.”

HOUSE BILL 919
HB 919 defines generative artificial intelligence as “a machine-based system that can, for a given set of human-defined objectives, emulate the structure and characteristics of input data in order to generate derived synthetic content, including images, videos, audio, text and other digital content.”

One of the coalition’s concerns about this definition is that it combines a general AI definition with a specific type of artificial intelligence, limiting the flexibility of future legislation and interoperability with other states’ regulations.

Recommendations include monitoring the work of the National Institute of Standards and Technology for further guidance and considering the following when defining AI:
  • Limit definitions to systems that make decisions and impact the public 
  • Ensure definitions are clear and concise to align with areas of perceived risk and avoid interpretation outside the intended context 
  • Align definitions with others that are approved to provide regulatory certainty for businesses across the country and avoid confusion about consumer rights   

HOUSE BILL 1459
HB 1459 aims to establish an AI-related transparency process for businesses.

The bill states that an entity or person who offers AI-generated content or technology to the public for a commercial purpose must “alert consumers that such content or technology is generated by artificial intelligence” and “allow such content or technology to be recognizable as generated by artificial intelligence to other artificial intelligence.”

The coalition raised a concern that the language might be open to interpretation, especially given the lack of industry-wide or federally mandated standards.

Specific concerns and recommendations regarding HB 1459 include:

High-Risk Versus Low-Risk AI Uses

The report identifies and separates AI systems into high-risk and low-risk categories. Low-risk systems and applications include chatbots or basic data analysis tools, while high-risk systems are often used in health care, autonomous vehicles or critical infrastructure.

By “tailoring regulations to the specific risk profiles of AI applications,” the report states, it “can ensure a balanced environment in Florida that promotes responsible innovation while safeguarding the welfare of its residents.”

The coalition recommends identifying high-risk and low-risk uses of AI when developing a regulatory framework to enhance innovation and safety.

Regulatory Scope

According to the report, the term “available to the Florida public” is used broadly and appears to cover anyone with a website, including companies that do not intend to reach Florida residents, raising constitutional concerns.

As a result, the coalition recommends:
  • Limiting regulation to high-risk uses of AI that directly make consequential decisions about consumers, and to video or images that would materially mislead consumers about actual events 
  • Clarifying the intent of any proposed legislation to reach companies doing business in Florida or engaging with Florida residents 

Standards and Disclosure

The requirement in the proposed legislation that a company “create safety and transparency standards” and that those standards “alert consumers” and “allow such content” to be recognized as AI-generated is unclear, according to the report. The proposed language “offers for use or interaction” could be interpreted to include the mere presence of text or images on a website, the report states.

To address this, the coalition recommends clarifying the terms “use” and “interact” in the legislation, which would help determine whether static web content counts as an “interaction” when consumers visit a website or read and view specific content.

The coalition also urged lawmakers to consider requiring disclosures on webpages where users will interact with high-risk AI systems. It further encouraged lawmakers to incentivize businesses to “do the right thing” by proactively building safeguards and acting when vulnerabilities are identified, and to penalize companies that fail to act when vulnerabilities or risks are identified.
Katya Diaz is an Orlando-based e.Republic staff writer. She has a bachelor’s degree in journalism and a master’s degree in global strategic communications from Florida International University.