April 23, 2024

Senate Democrats, Business Leaders, Students Advocate for AI Regulation

Senate Democrats pushed this week to demonstrate broad-based support for evolving legislation to regulate emergent artificial intelligence technology through AI literacy provisions and prohibitions on harmful applications like deep fake pornography.

The Senate’s majority Democrats held a press conference Monday in the Legislative Office Building featuring a handful of tech industry stakeholders, meant to illustrate support for Senate Bill 2 — the legislature’s latest attempt to regulate a form of technology that has become increasingly prevalent in American life.

“One of the things most important to note is that AI decisions impact us all,” Sen. James Maroney, D-Milford, said. “We’re not saying not to do these things; we’re saying we need to look at things, make sure they are safe and then release them to the public.”

The bill, which advanced out of the legislature’s Judiciary Committee Monday, builds on legislation spearheaded by Maroney last year, which regulated the use of AI and algorithms in state government decision-making.

This year’s bill takes a two-pronged approach to AI oversight.

The proposal recognizes the practical applications of the technology and includes provisions meant to support its potential to optimize services like health care delivery and workforce training.

The legislation also seeks to rein in some of the more exploitative byproducts of AI, like deep fake pornography and its potential for manipulating voters during election years.

Senate Majority Leader Bob Duff, D-Norwalk, pointed to the data privacy issues that emerged in the absence of strong Internet regulations in the 1990s.

“Our job here is not to repeat the sins of the past but to put necessary guardrails and parameters around this technology,” Duff said.

However, the bill must pass both chambers of the legislature before the session’s constitutionally mandated adjournment at midnight on May 9, and some policymakers have viewed the regulations with skepticism.

Monday’s press conference showcased backing for the proposal among leaders of Connecticut’s tech industry.

“AI is very much ink spreading in water right now,” Matthew Wallace, CEO of VRSIM, said. “It’s influencing everything we touch from Netflix to bank rates. It needs to be thought about; it should not be the social media of the next 10 years. You want to be thoughtful of how we would regulate and to know that impact.”

Dr. Kevin Carr, CEO of the National Coordination Center, stressed the importance of regulation to oversee advances in technology, even as those advances lead to medical breakthroughs.

“As we leap forward with innovation in health information technology, there is always a need to be able to continue to push the envelope,” Carr said. “There are times when that technology moves too quickly, where innovators leap forward without clinical advisors in place to oversee and test algorithms.”

In the absence of regulation, exploitative images and nonconsensual videos continue to proliferate on the Internet.

Student advocate Naja Bennett of LiveGirl, a Connecticut-based nonprofit dedicated to supporting women leaders, said this AI-generated content comes largely at the expense of women and girls.

“Imagine waking up and seeing your face on a video you didn’t consent to, or the image of a loved one,” Bennett said. “We don’t know how far this can go. However, I hope we can take a stand and ensure that this legislation is passed as a protection against this.”