Client Insight: Artificial Intelligence Insights - The Current Regulatory Landscape
Gunderson Dettmer is pleased to present this insight highlighting the latest legal updates for companies regarding artificial intelligence (“AI”). This update spotlights recent regulatory themes impacting the development and use of AI-powered technology, along with key takeaways for participants in this ecosystem. For more education and insights into the development of generative AI, please refer to our Generative AI Resources portal.
As companies use, develop and deploy AI technologies, they must navigate an increasingly complex regulatory and commercial landscape.
At the federal level, the White House issued Executive Order 14110 on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” on October 30, 2023 (the “Executive Order”), which outlined steps for AI oversight and regulation and enlisted agencies to adopt regulations. These agencies include, for example:
- The White House Office of Management and Budget, which has been tasked with identifying and evaluating areas for potential development of standards and techniques for the authentication, detection and labeling of AI-generated content, as well as the protection of commercially available and personally identifiable information through the introduction of new AI-specific privacy and data protection strategies.
- The Federal Trade Commission, which has been tasked with developing policies and regulations to promote competition in AI and related technologies, and with exercising its rulemaking authority to ensure fair competition in the AI marketplace and protect consumers and workers from harms enabled through the use of AI technology.
- The Federal Communications Commission, which has been tasked with evaluating how AI technology will impact communications networks and consumers, including by targeting innovative use of AI to improve spectrum management and support ongoing efforts to improve the security, resiliency and interoperability of networks, and by using its rulemaking authority to help consumers limit and block unwanted robocalls and robotexts.
- The United States Patent and Trademark Office and United States Copyright Office, which have been tasked with publishing guidance addressing how the use of generative AI technology impacts inventorship claims and eligibility for patent, copyright and trademark protections.
Over the past several months, the United States has seen the most activity at the state level, including 17 bills signed by Governor Newsom in California last month and the enactment of the Colorado AI Act on May 17, 2024, alongside countless other bills introduced at both the federal and state levels.
Key themes of these regulations and guidelines include:
1. Laws Addressing Harms from Algorithmic Discrimination
2. AI Interaction Disclosure Laws
3. Intellectual Property, Rights of Publicity and Privacy Protections for Individuals
4. Watermarking and Transparency Tool Requirements on AI-Generated Content
5. Training Data Restrictions
6. Training Data Transparency
7. Addressing Catastrophic Risk
1. Laws Addressing Harms from Algorithmic Discrimination
- Regulatory Focus: Prohibiting or limiting the potential for algorithmic discrimination in the use of automated decision-making technology, including by imposing oversight, notification and impact and risk assessment duties on developers and deployers of AI systems.
- Algorithmic discrimination refers to the use of automated systems in a manner that contributes to unjustified differential treatment based on protected characteristics such as race, ethnicity, sex or religion.
- These laws apply to both (1) developers of these systems and (2) deployers/users of these systems.
- Recently Enacted Legislation (Not Yet Effective):
- The Consumer Protections for Artificial Intelligence Act was enacted in Colorado on May 17, 2024 and will go into effect on February 1, 2026. The Act:
- Requires both (1) developers and (2) deployers (or users) of a high-risk artificial intelligence system (“high-risk system”)[1] to use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the high-risk system.
- Defines an AI system as “high-risk” if the system “when deployed, makes, or is a substantial factor in making a consequential decision.”
- Defines a “consequential decision” as a decision that has a material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of: (1) education; (2) employment; (3) financial services; (4) essential government services; (5) health care; (6) housing; (7) insurance; or (8) legal services.
- Developer Obligations: There is a rebuttable presumption of compliance for developers of high-risk systems who make certain required information available to deployers, including notifying them of known risks and facilitating risk assessments, and who disclose to the attorney general and users any such risks subsequently discovered.
- Deployer Obligations: There is a rebuttable presumption of compliance for deployers of high-risk systems who follow a number of risk management requirements, including implementing a risk management policy, conducting impact assessments and annual reviews, making public statements, facilitating consumer requests to correct information and appeal consequential decisions, and disclosing to the attorney general any algorithmic discrimination that is subsequently caused.
- Existing Legislation:
- The European Union Artificial Intelligence Act became effective on August 1, 2024, but will roll out in phases, with full applicability anticipated on August 2, 2026 and a deadline for full compliance set for August 2, 2027. The Act:
- Was enacted in the European Union (the “EU”), but will also apply to companies outside of the EU that have a link to the EU market, including, for example, companies with AI systems that produce outputs used within the EU.
- Establishes a risk-based approach to obligations for providers, deployers, distributors and product manufacturers of AI systems: the higher the risk presented by an AI model, the more stringent the applicable requirements.
- Classifies biometric uses of AI and AI used in law enforcement, employment, education and critical infrastructure, for example, as high risk.
- May, depending on the level of risk assigned to a given AI model, require companies to publish transparency notices regarding interactions with AI systems, establish risk management systems, evaluate data governance practices and implement detailed record-keeping and human oversight procedures.
- Imposes fines for non-compliance of up to €35 million or 7% of annual worldwide revenue, whichever is higher.
- Please refer to our forthcoming “Demystifying the EU AI Act” publication series for further detail.
- Colorado Senate Bill (SB) 21-169, Protecting Consumers from Unfair Discrimination in Insurance Practices, was signed into law on July 6, 2021 and regulates algorithmic discrimination in insurance by empowering Colorado’s Insurance Commissioner to work with stakeholders (i.e., insurance companies, insurance agents/producers, consumer representatives and other interested parties) to adopt rules regarding how companies should test and demonstrate that their use of big data does not unfairly discriminate against consumers.
- New York City Local Law 144, Automated Employment Decision Tools, became effective on January 1, 2023, with enforcement beginning on July 5, 2023. This law requires employers in New York City to conduct bias audits on AI tools used in hiring and employment decisions. It aims to ensure that AI systems do not produce discriminatory outcomes and imposes certain notice obligations in connection with the use of automated employment decision tools.
- See our 2023 Employment and Labor Law Update for information about this and other state and city laws regulating the use of AI in the workplace.
- Legislatures Considering Similar Bills: California, Connecticut, Georgia, Hawaii, Illinois, New York, Oklahoma, Rhode Island, Vermont, Virginia, Washington and at the federal level.
- Bills have been proposed in Maryland, New Jersey and New York specifically regulating how employers leverage automated decision-making technology when making hiring, onboarding, compensation and promotion decisions. Bills more narrowly focused on high-risk uses of AI have also been proposed in states like New York (e.g., regulating how to responsibly use generative or surveillance-oriented advanced AI systems in a non-discriminatory manner).
- Key Takeaways:
- Exercise particular caution when leveraging AI in automated decision-making processes, especially for decisions with significant consequences, such as those related to finance, credit or employment.
- Implement human review and bias-detection safeguards when deploying or distributing AI-powered technology to mitigate risk.
- Consult your legal advisors to ensure you navigate applicable regulatory obligations carefully.
2. AI Interaction Disclosure Laws
- Regulatory Focus: Requiring clear disclosure of the use of generative AI technology in certain contexts.
- Recently Enacted Legislation:
- The Artificial Intelligence Policy Act (the “UAIP”) was enacted in Utah on March 13, 2024 and became effective on May 1, 2024. The Act:
- Requires prominent disclosures of use of generative AI (including AI-enabled chat bots) to interact with a person by professionals whose occupations require a license or state certification, including, for example, medical and accounting providers.
- Requires other businesses where a license or state certification is not required to provide clear and conspicuous disclosure of interactions with generative AI tools upon user request.
- Prevents companies from defending against claims under consumer deception standards by arguing that the offending statements were generated by AI tools rather than made by the company leveraging those tools.
- Imposes fines for violations.
- Governor Newsom recently approved legislation in California that will require an announcement to inform call recipients when prerecorded messages use an artificial voice.
- Existing Legislation:
- The California Bot Disclosure Law was enacted in California on September 28, 2018 and became effective on July 1, 2019. The Law:
- Requires clear and conspicuous disclosure of use of “bots” in a manner reasonably designed to inform those communicating or interacting with bots of their artificial identity.
- Defines a “bot” as an “automated online account where all or substantially all of the actions or posts of that account are not the result of a person.”
- Prohibits use of a bot to communicate or interact with another person in California online with the intent to mislead the person about the bot’s artificial identity to incentivize a purchase or influence a vote.
- Expressly provides for enforcement by California’s Attorney General, which may result in per violation fines as well as the award of equitable remedies (like injunctions, for example).
- Key Takeaways: Provide transparent notices to users when they are interacting with AI-powered technology that they may mistakenly think is human, including chat bots and telemarketing technology that impersonates human voice or text messages (for one way such a notice might be surfaced, see the sketch below).
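To make these disclosure obligations concrete, the following is a minimal sketch, in Python, of how a deployer might surface an AI-interaction notice in a chat interface, both up front (as the UAIP requires of regulated occupations) and upon user request (as it permits for other businesses). The disclosure wording, function names and triggering logic below are hypothetical illustrations only, not statutory language or compliance advice.

```python
# Illustrative sketch only: surfacing an AI-interaction disclosure in a chat
# flow. All names and wording below are hypothetical assumptions.

AI_DISCLOSURE = (
    "You are chatting with an automated AI assistant, not a human. "
    "You may ask to speak with a person at any time."
)

def start_chat_session(upfront_notice_required: bool) -> list[str]:
    """Open a chat session, leading with the disclosure where a law (e.g.,
    the UAIP for licensed occupations) requires prominent upfront notice."""
    messages = []
    if upfront_notice_required:
        messages.append(AI_DISCLOSURE)
    messages.append("Hello! How can I help you today?")
    return messages

def handle_user_message(text: str, disclosed: bool) -> tuple[str, bool]:
    """Respond to a user message, disclosing on request if not already done
    (the UAIP requires clear and conspicuous disclosure upon user request
    for businesses outside licensed occupations)."""
    if not disclosed and "are you a bot" in text.lower():
        return AI_DISCLOSURE, True
    return "Thanks for your message. Let me look into that.", disclosed
```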
3. Intellectual Property, Rights of Publicity and Privacy Protections for Individuals
- Regulatory Focus: Curtailing deceptive or objectionable use of synthetic media, including audio and video content, substantially produced through use of generative AI technology.
- Recently Enacted Legislation:
- The Ensuring Likeness, Voice and Image Security Act (the “ELVIS Act”) was enacted in Tennessee on March 21, 2024 and became effective on July 1, 2024. The Act:
- Protects individuals from the use of their persona in connection with “deepfakes,” i.e., synthetic content generated by AI that viewers are led to mistakenly believe is authentic.
- Expands the existing Tennessee Personal Rights Protection Act of 1984 to prohibit unauthorized commercial use of a person’s photograph, voice or likeness.
- Creates a private right of action against anyone who unlawfully publishes, performs, distributes, transmits or otherwise makes available to the public a person’s photograph, voice or likeness.
- In California, Governor Gavin Newsom signed California Assembly Bill (AB) 1836 and California Assembly Bill (AB) 2602 into law on September 17, 2024; both become effective on January 1, 2025. These laws will regulate the use of deceased performers’ likenesses without the consent of their families and impose more stringent requirements for using AI to replicate the likeness of performers who are still alive.
- Legislatures Reviewing Relevant Proposed Bills: California, Tennessee and at the federal level (through House bills introduced to protect individuals’ voice and likeness from misuse of AI technology); the United States Copyright Office has also issued a report regarding the unauthorized use of digital replicas.
- Key Takeaways: Avoid using AI-powered technology to impersonate a specific person’s image, voice or likeness without their permission, particularly for use as part of a commercial offering, and regardless of whether that person is living.
4. Watermarking and Transparency Tool Requirements on AI-Generated Content
- Regulatory Focus: Requiring clear disclosure of synthetic audio or video content substantially produced through the use of generative AI technology, or banning the use of this type of media in certain industries like political advertising, telemarketing or adult content.
- Recently Enacted Legislation (Not Yet Effective):
- The California AI Transparency Act, Senate Bill (SB) 942, was signed into law on September 19, 2024 and will become effective on January 1, 2026.
- This newly enacted legislation will require providers of AI systems to create AI detection tools accessible through such providers’ websites and/or mobile applications.
- These provenance (i.e., “watermarking”) and detection requirements will allow users to enable options that add disclosures to generated content, noting whether the content is AI-generated.
- California Assembly Bill (AB) 3030 was signed into law on September 28, 2024 and will become effective on January 1, 2025.
- This newly enacted legislation will require certain healthcare providers to disclose their use of AI systems and technologies to generate communications to patients pertaining to patient clinical information.
- California Senate Bill (SB) 896 was signed into law on September 29, 2024 and will become effective on January 1, 2025.
- This newly enacted legislation will require state agencies and departments in California to clearly and conspicuously notify users when they are interacting with AI technology (and will require these agencies and departments to conduct a risk analysis of automated decision-making systems before adoption).
- Executive Order:
- In Executive Order 14110, President Biden directed the Secretary of Commerce, in consultation with the heads of other relevant agencies, to develop standards and techniques for identifying and labeling “synthetic content,” including by “watermarking” this content, and to report on the implementation of these standards and techniques.
- “Synthetic content” is defined as “information, such as images, videos, audio clips, and text, that has been significantly modified or generated by algorithms, including by AI.”
- “Watermarking” is defined as “the act of embedding information, which is typically difficult to remove, into outputs created by AI—including into outputs such as photos, videos, audio clips, or text—for the purposes of verifying the authenticity of the output or the identity or characteristics of its provenance, modifications, or conveyance.”
- Platform Implementation:
- Several leading generative AI platform providers have implemented labeling and watermarking features as a best practice.
- All images created through OpenAI’s DALL-E, for example, will now include a metadata tag disclosing that such image was AI generated.
- Similarly, Google Gemini (f/k/a Bard) uses SynthID to embed digitally identifiable watermarks into the pixels of generated images. (A minimal sketch of this kind of content labeling appears at the end of this section.)
- Legislatures Reviewing Relevant Proposed Bills: Massachusetts, New York, Ohio, Tennessee and at the federal level (through House Bills introduced to prevent the use of AI voice or text message impersonation in telemarketing, including with respect to the use of robocalls and similar technology).
- Key Takeaways:
- Provide transparent notices to users when they are interacting with AI-powered technology that they may mistakenly think is human, including chat bots, telemarketing technology that impersonates human voice or text messages, and similar features.
- Clearly identify AI-generated content and information as AI-generated to proactively address any misconception that any information shared was generated by a human.
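As an illustration of the platform labeling practices described above, the following is a minimal sketch, in Python, of attaching and reading a plain-text AI-generated disclosure in PNG metadata using the Pillow library. The metadata keys and values are hypothetical assumptions, and this is not how DALL-E or SynthID actually work: simple metadata tags are easy to strip, which is why the Executive Order’s definition of watermarking emphasizes information that is “typically difficult to remove” and why SynthID embeds its signal directly into image pixels.

```python
# Illustrative sketch only: a naive metadata-based AI-content label.
# The keys and values below are hypothetical, not an industry standard.

from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(src_path: str, dst_path: str) -> None:
    """Re-save an image with a provenance disclosure in its PNG text metadata."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")            # hypothetical key
    metadata.add_text("generator", "example-model-v1")   # hypothetical value
    image.save(dst_path, "PNG", pnginfo=metadata)

def is_labeled_ai_generated(path: str) -> bool:
    """Check whether the (easily removable) disclosure tag is present."""
    with Image.open(path) as img:
        return getattr(img, "text", {}).get("ai_generated") == "true"
```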
5. Training Data Restrictions
- Regulatory Focus: Increasing consumer control over the use of certain types of consumer data to train AI models.
- Legislatures Reviewing Relevant Proposed Bills:
- California Assembly Bill (AB) 2877 would prohibit developers of AI systems and technology from using the personal information of consumers under the age of 16 to train AI models without parental consent.
- At the federal level, Senate Bill 3975 would require providers of AI systems to seek informed and express opt-in consent from users before using their data to train AI models.
- Key Takeaways:
- Be thoughtful about how to accommodate increased user control over whether model developers and providers can leverage user data and inputs for training purposes.
6. Training Data Transparency
- Regulatory Focus: Increasing transparency regarding the type of data leveraged to train AI models.
- Existing Legislation:
- The EU AI Act, as described above, incorporates requirements that AI model providers draw up and make publicly available sufficiently detailed summaries of the content and data used for training models, the provenance of such data and information regarding the providers’ training methodologies and techniques.
- Recently Enacted Legislation (Not Yet Effective):
- California Assembly Bill (AB) 2013 was signed into law on September 28, 2024.
- This newly enacted legislation will require “developers” of generative artificial intelligence made available to California residents to publicly disclose detailed documentation on their websites regarding the datasets used to develop and train such AI systems and technology.
- “Generative artificial intelligence” means artificial intelligence that can generate derived synthetic content, such as text, images, video, and audio, that emulates the structure and characteristics of the artificial intelligence’s training data.
- The definition of covered “developers” extends beyond original developers of AI systems and technology to any person or entity that makes a “substantial modification” (through new versions, releases, retraining or fine-tuning that materially changes functionality or performance) to AI systems or technology released after January 1, 2022, with some limited carve-outs.
- This law goes into effect on January 1, 2026.
- Key Takeaways:
- Track and be prepared to disclose the data sets used to train AI models.
7. Addressing Catastrophic Risk
- Regulatory Focus: Promoting safety-by-design, along with policies and procedures for handling harms that may be caused through the use of sophisticated AI systems.
- Recently Enacted Legislation:
- The EU AI Act, as described above.
- Other Regulatory Updates:
- On September 29, 2024, California Governor Gavin Newsom vetoed California Senate Bill (SB) 1047, The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.
- Had it been signed into law, this Act would have established some of the first regulations on large-scale AI models (i.e., models trained using a quantity of computing power greater than 10^26 integer or floating-point operations, or models with performance similar to that of a state-of-the-art foundation model) by requiring providers of such models to conduct safety assessments and tests, offer whistleblower protections to tech workers and develop safety plans where training spend for an AI model exceeds $100 million, and by imposing liability on providers of highly sophisticated AI systems for harm caused or non-compliance.
- In a statement released in connection with the veto, Governor Newsom noted that while the bill was “well-intentioned”, it “does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data”, and instead “applies stringent standards to even the most basic functions - so long as a large system deploys it.”
- Governor Newsom also announced that the state will partner with industry experts to develop alternative guardrails for powerful AI models.
- Key Takeaways:
- While mitigating the potential risks presented by use of AI technology remains a legislative priority at the state, federal and international level, lawmakers continue to grapple with how to appropriately scope relevant regulations to best address anticipated risks.
[1] High-risk systems explicitly exclude: (1) systems that perform narrow procedural tasks or detect decision-making patterns not intended to replace a human assessment, and (2) the following technologies, unless they are a substantial factor in making a consequential decision: (a) anti-fraud technology that does not use facial recognition; (b) anti-malware, anti-virus, cybersecurity, firewalls and spam/robocall filtering; (c) AI-enabled video games; (d) databases and data storage; (e) internet domain registration, internet web loading, networking, web hosting and web caching; (f) calculators, spell-checking and spreadsheets; and (g) technology that communicates with consumers in natural language for the purpose of providing users with information, making referrals or recommendations and answering questions, subject to an accepted use policy that prohibits generating content that is discriminatory or harmful.
