
According to a survey by the Pew Research Center, 78% of Democrats and 68% of Republicans said there should be more government regulation of how firms handle customer information.

Colorado lawmakers have advanced legislation that sponsors say would enhance consumer protections against discrimination by artificial intelligence systems, though worries persist regarding its potential effects on small businesses and innovation.

At its core, Senate Bill 205 establishes regulations governing the development and use of artificial intelligence in Colorado and focuses on combatting "algorithmic discrimination."

The bill defines "algorithmic discrimination" to mean any condition in which AI increases the risk of "unlawful differential treatment" that then "disfavors" an individual or group of people on the basis of age, color, disability, ethnicity, genetic information, race, religion, veteran status, English proficiency and other classes protected by state law.

The measure is among several bills at the state Capitol dealing with AI technology, whose explosion over the last few years has caused both excitement among those who view it as offering tangible solutions to society's problems and worry among those who fear it might be deployed for nefarious purposes.

At the state Capitol, another bill seeks to regulate the use of content generated by artificial intelligence, such as “deepfakes,” in communications about election candidates. Outside of the Capitol, a panel of federal and state judges told a group of attorneys, in explicit terms, that artificial intelligence is here to stay and they must educate themselves about how to use it responsibly.  

To supporters, the bill is a critical step in curbing AI's potential excesses. To critics, it could stifle innovation — without actually resolving the issues it seeks to remedy.

Senate Majority Leader Robert Rodriguez, the bill's sponsor, said bias exists within AI systems used in housing, bank loans and job applications. He said he has been collaborating with Connecticut state Sen. James Maroney, who is running a similar measure in his state.

Rodriguez said an amendment tweaked and clarified some of the bill's definitions, while also postponing its effective date to October 2025. He said more changes are likely coming, but for the time being, it offers a "basic model framework" for the state.

The Senate Judiciary Committee approved the bill, 3-2, and sent it to the full chamber for debate.

The bill requires developers to exercise "reasonable care" to prevent discrimination when using "high-risk" artificial intelligence systems, which are defined as systems involved in making "substantial or consequential" decisions. It requires developers to complete risk assessments, implement risk management strategies, and report instances of "algorithmic discrimination" to the Attorney General within 90 days of discovery. 

The bill also seeks to increase consumer transparency by requiring businesses that employ artificial intelligence to disclose the types of systems they use and notify consumers when a high-risk artificial intelligence system will be used to make "consequential" decisions.

Rodriguez noted that despite calls from tech leaders such as Mark Zuckerberg and Elon Musk for federal AI regulation, Congress has not acted, prompting him to introduce the bill at the state level.

"At the base of this bill and policy is accountability, assessments, and disclosures that people need to know when they're interacting with artificial intelligence," he said. "We're in a groundbreaking place on this policy, similar to we were with data privacy, but every year we delay it, the more engrained it becomes and the harder it is to unravel."

'More harm than good'

Eli Wood, the founder of software company Black Flag Design, expressed worries that the bill could inadvertently disadvantage small startups, such as his company, that heavily depend on open-source AI systems.

Such systems serve as publicly available blueprints, enabling developers to access and customize them to craft artificial intelligence solutions. Major corporations such as OpenAI, the creator of ChatGPT, often contribute to these open-source systems. 

Wood said the bill could penalize small businesses for "algorithmic" bias identified in their systems, even if the bias originated from the underlying open-source system rather than from the small business's own development.

Because of this, Wood argued that generative AI models created by major corporations should be the bill's target — not small startups.

"AI is the defining technology of our generation, and I believe it's in the best interest of every Coloradan that we're having this discussion today, but the bill in its current form will do more harm than good," he said. "At first glance, it seems like it's a sensible solution to control impacts of this technology before it negatively impacts society, but I believe it will severely curtail the ability of small organizations like ours and negatively impact democratizing the technology for societal good." 

Logan Cerkovnik, founder of Thumper AI Corporation, said the bill would effectively ban his company's platform and constitute a "de facto" ban on leasing open-source AI models, "while failing to stop algorithmic discrimination due to loopholes."

Cerkovnik noted that Connecticut's governor has threatened not to sign that state's similar bill into law unless protections for startups are incorporated. He advocated for scrapping the bill and introducing a revised version in the next legislative session, after thorough discussions between the sponsor and artificial intelligence experts, because, he said, "the future of AI in Colorado is too important to be banned by poorly drafted regulations."

Michael McReynolds of the Governor's Office of Information Technology acknowledged that "safeguarding consumers is paramount" but agreed that further engagement with stakeholders is needed to avoid unintended adverse effects and to ensure that the bill's provisions can actually be implemented.

McReynolds also pointed to an executive order on artificial intelligence issued by President Joe Biden last year, suggesting that the state bill, if passed ahead of federal action, could end up conflicting with forthcoming federal rules.

"Innovation should be encouraged and not stifled, and any legislative measure should strike a balance between consumers and fostering technological advancement," he said. "The bill is implementing measures that may not be feasible or effective."

'I have the right to know what models are deciding my future'

Several high school students interested in artificial intelligence argued the bill is necessary, even if it isn't perfect.

Benjapon Frankel said artificial intelligence can be found in "most everything," adding that he is concerned about the increasing prevalence of algorithmic discrimination.

While acknowledging concerns about stifling innovation, Frankel argued that preventing discrimination is a more pressing priority.

"These are injustices that cannot stand nor continue to be reinforced by these models," he said. "Even regardless of personal willingness to allow these to stand in the name of innovation of progress, rejecting this bill and bills like it comes at the direct cost of transparency, accountability, and justice."

He added: "The defense of innovation for the sake of innovation fails to mean something when that innovation is concentrated in the hands of the few, held behind opaque curtains, and is a source of systemic abuses."

Cherry Creek High School junior Shourya Hooda also said the bill provides the state with a "strong base off which to build a robust, innovative AI regulation framework." He emphasized the significance of the bill's consumer notification provisions and argued that it ensures that the market is not oversaturated with "useless businesses." 

"This bill ensures that careful developers stay that way and makes reckless development impossible," he said. 

Beth Rudden, CEO of Bast AI, called the bill a "pragmatic and necessary measure" to maintain the integrity of artificial intelligence systems. She argued that the bill is not just about compliance within the industry, but also about holding developers accountable for unethical actions. 

"By supporting this bill, we commit to a path that respects consumer rights, promotes transparency, and fosters trust in the technologies that are shaping our future," she concluded. 
