Today, state Senator James Maroney (D-Milford), Chair of the General Law Committee, led Senate passage of a Senate Democratic Caucus priority bill concerning the application of artificial intelligence.
Senate Bill 2, ‘An Act Concerning Artificial Intelligence’, will work to create regulations for Artificial Intelligence in Connecticut. This bill will focus on:
1. Transparency and accountability;
2. Training Connecticut’s workforce to use artificial intelligence;
3. Criminalization of non-consensual intimate images.
“With this legislation, we are taking a big step forward to protect our communities from the risks of artificial intelligence, promote innovation that reflects our values, and empower every resident with the skills they need to thrive,” said Senator Maroney. “This bill positions us as a national leader in responsible innovation—ensuring that AI works for everyone and that no one is left behind in our rapidly evolving economy.”
Senate Bill 2:
Transparency and Accountability
This legislation will put appropriate safety guardrails in place in areas where AI is being used to make important decisions about people’s lives, such as housing, lending, employment, and government services. An estimated 80-88% of companies use AI to make employment decisions, and 50-70% of large landlords (depending on the survey) use AI to screen tenants.
A key provision of the newly introduced bill focuses on transparency in the use of artificial intelligence, ensuring that consumers are fully informed when interacting with AI systems. Under this legislation, companies will be required to clearly disclose when an individual is engaging with AI rather than a human. If AI is being used to make consequential decisions—such as those affecting employment, credit, housing, healthcare, or other critical services—businesses must explicitly inform individuals of this use. By mandating clear disclosures, the bill aims to uphold consumer rights, promote accountability, and build public trust in the deployment of AI technologies.
Connecticut will build upon legislation passed in 2023 concerning transparency and accountability in AI. Companies will need to show that proper safety measures are in place to protect consumers from the potential hazards of AI.
Artificial intelligence is fast becoming a regular part of daily life, shaping the way Americans work, play and receive essential services. A recent Pew Research Center report highlighted an increasing call for regulation in this area, finding that six-in-ten U.S. adults say they are skeptical of industry efforts around responsible AI and believe that, without regulation, companies will not go far enough to develop and use AI responsibly.
Training Connecticut’s Workforce to use Artificial Intelligence
The intersection of workforce development and artificial intelligence (AI) presents both opportunities and challenges. While AI can improve productivity and lead to innovations, its impact on the workforce has raised concerns about potential negative consequences.
Challenges can include automation, skill gaps and economic inequality. While AI can create new jobs, these roles often require specialized skills, meaning employees may need to reskill, which can be difficult without access to education or training programs. To mitigate these challenges, workforce retraining should be made accessible. This legislation will work to provide training opportunities to Connecticut residents while reaching people where they are.
The bill calls for the creation of the AI Safety Institute, a new resource in the state of Connecticut dedicated to promoting responsible, ethical and trustworthy AI development. The Institute will provide tools, guidance, and best practices to help businesses—particularly small and medium-sized enterprises—design AI systems that are safe, unbiased, and aligned with internationally recognized standards. It will encourage organizations to conduct impact assessments and adopt industry-leading practices from the outset of AI development. By working at the intersection of industry, academic research, and policy, the AI Safety Institute will foster a collaborative approach to staying current and advancing how trustworthy AI is developed and deployed, promoting innovation in the region and beyond.
In tandem, the newly established AI Academy will offer training courses to help individuals and businesses learn how to use AI technologies effectively and responsibly. Together, the AI Safety Institute and AI Academy will serve as vital resources to ensure that AI systems used in Connecticut are transparent, trustworthy, and aligned with the public interest.
Recognizing that artificial intelligence will significantly shape the future of work, the bill directs the Department of Labor to begin monitoring and reporting on the impact of AI on the workforce. This includes tracking job displacement, job creation, and changes in the nature of work driven by AI technologies. By systematically collecting and analyzing this data, the state aims to better understand emerging labor trends and ensure that workforce development strategies are aligned with the evolving demands of the AI era.
An estimated 50% of gateway jobs are at risk of being automated by generative AI. Under this legislation, the state will work to provide residents with opportunities to pursue new careers and build the skills needed to stay relevant in today’s job market.
Hiring algorithms have been shown to discriminate based on age. Some algorithms have assigned higher interest rates for loans based on race, and many government-run algorithms in other states, ranging from the provision of SNAP benefits to decisions about when to investigate reported incidents of child abuse, have been shown to discriminate based on income.
AI systems can store and process vast amounts of online data, which can produce unwanted results. AI raises ethical challenges, including a lack of transparency and decisions that are not neutral. Choices made through AI can be susceptible to inaccuracy, discriminatory outcomes, and embedded bias.
Recently, an AI Caucus was created in Connecticut. The AI Caucus will advocate for policies that ensure transparency, accountability, and ethical standards in the development and deployment of artificial intelligence. By fostering collaboration between lawmakers, industry leaders, and experts, the caucus aims to promote AI systems that are fair, unbiased, and aligned with public interests.
Criminalizing Deepfake Porn
This legislation will prohibit the use of AI to create deepfake pornography of real people, including its use to create revenge porn.
In November of 2023, an undisclosed number of girls at a New Jersey high school learned that one or more students at their school had used an artificial intelligence tool to generate what appeared to be nude images of them. Those images were being shared among classmates. An AI-generated image that imposes one person’s face or body onto another to make it look like someone else is called a deepfake.
Not all deepfake photos are pornographic; any time a face is imposed onto another body, or a person’s likeness is used to attribute spoken words to someone who did not say them, the result is a deepfake (no nudity required).
Deepfakes can use the face, voice or partial image of a real person and meld it with other imagery to make it look or sound like a depiction of that person. Under this proposal, the revenge porn statutes will be updated to include generative AI images, and models will be prohibited from producing child pornography or nonconsensual intimate images.
More Background
On May 17, 2024, Colorado passed the first comprehensive artificial intelligence bill in the United States. Colorado’s bill will impose obligations on developers and deployers of high-risk AI systems in an effort to protect consumers from discriminatory consequential decisions made by such systems. It primarily targets AI systems that make significant decisions impacting individuals’ access to services such as education, employment and healthcare.
This year, Connecticut formed the Artificial Intelligence Caucus with the goal of ensuring that AI serves the public good and does not become a tool for harm. The AI Caucus will work with advocates to facilitate innovation and prevent discrimination in AI.