
Client Alert: Legislating the Future of AI in Employment: NYC's Impending Law on Automated Decision Tools

December 5, 2022

***UPDATE – In a December 2022 update, the New York City Department of Consumer and Worker Protection (“DCWP”) announced that it will not enforce Local Law 144 until April 15, 2023. Further, due to the high volume of public comments, the DCWP is planning a second public hearing to discuss and collect feedback regarding the new law. Local Law 144 still comes into effect on January 1, 2023, but the DCWP’s decision to delay enforcement may give companies additional time and information to understand and comply with the law.***

Companies are increasingly using automation and artificial intelligence (“AI”) to identify and hire qualified candidates more efficiently, accurately, and objectively. In response, regulators and legislators are beginning to enact laws that address AI’s potential for bias and perceived lack of transparency and accountability. The New York City Council enacted Local Law 144 of 2021 (“LL 144”) in December 2021, and similar local, state, and federal efforts are on the horizon. The law, which comes into effect on January 1, 2023, will make it unlawful for an employer or employment agency to use “automated employment decision tools” to evaluate New York City (NYC) candidates and employees unless certain steps are taken, such as conducting a bias audit before using the tool and providing notices to candidates.

In its current form, LL 144 remains vague with respect to several key terms and leaves important questions unanswered. In September 2022, the NYC Department of Consumer and Worker Protection (“DCWP”) proposed additional rules (the “Proposed Rules”) to help employers better understand how to comply with the law and its requirements. A public hearing on the Proposed Rules was held on November 4, 2022, at which participants were able to submit testimony. As of the publication of this article, the Proposed Rules are still not in final form. Nevertheless, the impending January 1, 2023 effective date has not been postponed.

This alert summarizes the scope and requirements of LL 144, recommends steps companies can take now to prepare for the upcoming regulations, and provides an overview of similar laws and guidance in other jurisdictions.

Additionally, Gunderson Dettmer will cover these topics in greater depth during a webinar, scheduled for December 14, 2022. (Register here.)

An Overview of NYC’s Law on Automated Decision-Making in Employment

Who does the law apply to? The law will apply to employers and employment agencies that use automated decision-making tools to screen candidates and employees for employment or promotion within NYC (“Covered Entities”). It is not entirely clear whether the employer’s physical location must be in NYC, and the law does not explicitly address remote workers. Covered Entities that use automated decision-making tools to screen candidates for employment or promotion within NYC must notify each such candidate who resides in the city.

What is an “automated employment decision tool”? The term “automated employment decision tool” (or “AEDT”) is broadly defined as “any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons.”

This could include, for example, tools that automatically screen resumes in order to make employment decisions, such as whom to interview or hire. On the other hand, tools that do not automate, support, substantially assist, or replace discretionary decision-making processes, and that do not materially impact natural persons, are not covered; examples include a junk email filter, firewall, antivirus software, calculator, spreadsheet, database, data set, or other compilation of data. The Proposed Rules would also exclude “analytical tools that translate or transcribe existing text” from the definition of “simplified output.”

What obligations does the law impose? A Covered Entity must:

  1. Conduct a bias audit no more than one year prior to use of the tool. Notably, this audit must be conducted by an independent auditor and must evaluate the tool’s disparate impact on the basis of race, ethnicity, or sex.
  2. Publish a summary of the most recent bias audit results on its website, in a manner that is available to the public.
  3. If the summary published on the Covered Entity’s website does not address the type of data collected for the AEDT, the source of such data, and the employer’s data retention policy, this information must be provided to a candidate or employee who resides in the city, upon written request. If a candidate or employee does not request this information, or it is already contained on the website, no further action is needed.
  4. No less than 10 days before use of the AEDT, provide a notice to an employee or candidate who resides in the city that, among other things: (a) notifies the individual that they may request an alternative process or accommodation to the AEDT; and (b) identifies the job qualifications and characteristics that the AEDT will use in the assessment of the candidate or employee. Note: Although employees and candidates have the right to request an accommodation, that request may be denied.

When does the law become effective? The law is scheduled to take effect on January 1, 2023. However, in order to be compliant on that date, Covered Entities must take certain steps, including completing a bias audit of the AEDT in order to use it on January 1, 2023.

How is the law enforced? The law will be enforced by the NYC Corporation Counsel or other individuals designated by the Corporation Counsel. Additionally, candidates and employees have the right to bring a civil action in any court of competent jurisdiction. A person that violates the law may be liable for a civil penalty of up to $500 for the first violation and each additional violation occurring on the same day as the first violation, and $500 to $1,500 for each subsequent violation, with each day on which an AEDT is used constituting a separate violation.

NYC’s Proposed Rules regarding LL 144 (Published Sept. 23, 2022)

The DCWP has yet to provide a date on which the Proposed Rules will be finalized, but if adopted in their current form, the Proposed Rules would offer important guidance for employers, including more comprehensive answers to the questions above. For example:

  • Who does the law apply to? Under the Proposed Rules, a candidate is someone who resides in NYC and applied for a specific employment position by submitting the necessary information, in the appropriate format, to the Covered Entity. This definition appears to exclude individuals whose resumes are reviewed and rejected by AI tools without the individual ever applying for a job.
  • What is an “automated employment decision tool”? The Proposed Rules clarify the scope of the “substantially assist or replace discretionary decision making” standard, explaining that AEDTs are covered by the law when they rely primarily on simplified outputs (such as scores, tags, rankings, or a candidate’s estimated technical skills) to make employment decisions, or use those outputs to substantially alter a conclusion that was based on other factors, including human decision-making. The Proposed Rules also define “machine learning, statistical modeling, data analytics, or artificial intelligence,” and what it means to “screen” candidates.
  • What obligations does the law impose? Under the Proposed Rules, a “bias audit” is required when an AEDT selects which individuals move forward in the hiring process or classifies individuals into groups. Further, a bias audit must analyze and disclose the rates at which individuals in protected categories (e.g., race, ethnicity, or sex) are either selected to move forward in the hiring process or assigned a classification by the AEDT, and how those rates compare to the selection rates of individuals in the most selected category (the “impact ratio”). The Proposed Rules include several examples of how this data might be organized and analyzed; an illustrative calculation follows this list.

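For illustration only, the sketch below shows one way the selection rates and impact ratios described in the Proposed Rules might be computed from a simple log of AEDT outcomes. The group labels, data, and structure are hypothetical; neither LL 144 nor the Proposed Rules prescribe this (or any) particular implementation.

```python
from collections import Counter

# Hypothetical audit log: (protected category, whether the AEDT selected the
# candidate to move forward). The labels and values are illustrative only.
records = [
    ("Group A", True), ("Group A", True), ("Group A", True), ("Group A", False),
    ("Group B", True), ("Group B", True), ("Group B", False), ("Group B", False),
    ("Group C", True), ("Group C", False), ("Group C", False), ("Group C", False),
]

totals = Counter(category for category, _ in records)
selected = Counter(category for category, moved_forward in records if moved_forward)

# Selection rate: the share of candidates in each category the AEDT moved forward.
selection_rates = {cat: selected[cat] / totals[cat] for cat in totals}

# Impact ratio: each category's selection rate divided by the selection rate
# of the most selected category.
highest_rate = max(selection_rates.values())
impact_ratios = {cat: rate / highest_rate for cat, rate in selection_rates.items()}

for cat in sorted(selection_rates):
    print(f"{cat}: selection rate {selection_rates[cat]:.2f}, "
          f"impact ratio {impact_ratios[cat]:.2f}")
```
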
The Proposed Rules define “independent auditor” as “a person or group that is not involved in using or developing an AEDT that is responsible for conducting a bias audit of such AEDT.” If adopted, this definition might allow for internal audits by an independent, in-house compliance team, as well as outside consultants or contractors. 

The Proposed Rules also include additional guidance regarding “notice” to an employee or candidate who resides in NYC, clarifying that the notice requirement can be satisfied by posting the notice on the jobs or careers section of the employer’s website, including in a job posting, or by mailing or e-mailing the notice to candidates.

The DCWP collected comments on the Proposed Rules, held a November 4, 2022 public hearing, and then closed the record on the Proposed Rules. The timeframe for adopting the Proposed Rules and the impact on the law’s January 1, 2023 effective date remain to be seen.

Practical Steps: Preparing for Regulations Regarding Automated Employment Decision-making

With the compliance deadline for various laws fast approaching, companies should evaluate whether their activities trigger state or federal laws regulating automation or use of AI in the employment context. Companies can take the following steps now to prepare:

  1. Evaluate your tools, including those used by third-party vendors with whom you contract, to determine whether they utilize machine learning, AI, statistical modeling, or data analytics to generate a score, prediction, classification, or recommendation that you rely upon when making employment decisions. Examples of automation could include automated systems that:
    1. Score candidates’ responses to technical questions;
    2. Review or screen candidates’ resumes;
    3. Collect information on candidates’ skills and availability using chat bots; and
    4. Create rubrics based on candidates’ past performance to assist with hiring decisions.
  2. Conduct a bias audit of your AEDT, which, under LL 144, involves an impartial evaluation by an independent auditor. Auditors should ensure that the standards and criteria they use align with LL 144, any additional NYC regulations or guidance (including the Proposed Rules, if/when they are adopted), and recognized best practices for identifying and eliminating bias in AI-based employment tools. As of now, a bias audit should focus on:
    1. What data is being collected;
    2. Why and how the data is being analyzed;
    3. Whether the criteria used to evaluate the candidate or employee are linked to the relevant job requirements and likelihood of success (and are not merely traits exhibited by previously successful employees, but not linked to work performance);
    4. Whether the data collection and evaluation are sufficiently transparent for the employer to review and explain to others (now and on an ongoing basis);
    5. A detailed analysis of the ethnicity, race, and gender-identity of the candidates who are selected to move forward or are classified in some way by the AEDT; and
    6. Whether, based on an analysis of the selection ratios, any AEDT is having an adverse impact on any protected categories of job applicants and employees, especially on the basis of race, ethnicity, sex, or disability (an illustrative screen based on impact ratios follows this list).
  3. Provide candidates and employees who reside in NYC and apply for positions within NYC with notice of your AEDT and how to request alternatives. These steps, some of which are required by NYC’s law, may include:
    1. Advising candidates and employees about the AEDT or type of technology being used and how the applicants will be evaluated. 
    2. Advising candidates and employees of the results of any AEDT audits or assessments.  If the Proposed Rules are adopted, this step may involve posting raw data and ratios reflecting the impact of selection and categorization by the AEDT on any protected categories of job applicants and employees, especially on the basis of race, ethnicity, sex, or disability.
    3. Advising candidates and employees with disabilities of any challenges they may encounter using the AEDT (especially interactions with the AEDT that may result in the individual being “screened-out” from consideration).
    4. Allowing candidates or employees to opt-out of, or request alternative processes or accommodations to, the automated decision-making. 
  4. Implement alternative means for rating performance if an AEDT adversely impacts candidates or employees on the basis of race, ethnicity, sex, or disability.
  5. Train staff on identifying and offering reasonable accommodations or alternatives to the use of AEDTs, and on alternative methods for rating performance.

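As a companion to the impact-ratio sketch above, the snippet below illustrates one way to flag categories for further review based on their impact ratios. Neither LL 144 nor the Proposed Rules set a numerical threshold for adverse impact; the 0.80 cutoff used here reflects the EEOC’s traditional “four-fifths rule” and is included purely as an illustrative benchmark, and the input values are hypothetical.

```python
# Illustrative adverse-impact screen. The 0.80 threshold mirrors the EEOC's
# traditional "four-fifths rule"; LL 144 and the Proposed Rules do not
# prescribe any particular cutoff.
THRESHOLD = 0.80

def categories_to_review(impact_ratios: dict[str, float]) -> list[str]:
    """Return the protected categories whose impact ratio falls below the threshold."""
    return [category for category, ratio in impact_ratios.items() if ratio < THRESHOLD]

# Hypothetical impact ratios (e.g., produced by the earlier sketch).
flagged = categories_to_review({"Group A": 1.00, "Group B": 0.67, "Group C": 0.33})
print("Categories warranting further review:", flagged)
```
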
Other Noteworthy State and Federal Law Developments Relating to Automated Decision-Making in Employment

State Law Developments.

  • On January 1, 2020, Illinois’s Artificial Intelligence Video Interview Act went into effect; it requires employers to take certain steps if they ask applicants to record video interviews and use artificial intelligence to analyze the applicant-submitted videos.
  • In May 2020, Maryland passed a law requiring that companies obtain an applicant’s written consent to use facial recognition technology during pre-employment job interviews. This law went into effect in October 2020.
  • On March 15, 2022, the California Civil Rights Council (formerly known as the Fair Employment and Housing Council) issued draft regulations that would impose requirements on companies that use automated-decision systems to screen out applicants or classes of employees on the basis of a protected characteristic, subject to certain exceptions.
  • On January 1, 2023, the California Privacy Rights Act (“CPRA”) comes into effect, amending the California Consumer Privacy Act (“CCPA”). Under the CPRA, consumers will have the right to opt out of automated decision-making and profiling, meaning any automated processing of personal information to evaluate personal aspects of the consumer. Because the CCPA’s exemption for candidate and employee data will expire on January 1, 2023, the CPRA will apply to California candidates and employees of companies that are subject to the law.

Federal Law Developments. On May 12, 2022, the Equal Employment Opportunity Commission (“EEOC”) and the Department of Justice (“DOJ”) issued guidance warning that the algorithms and methodologies underpinning AI may be biased against job applicants and employees with disabilities. However, the EEOC and DOJ also acknowledge the benefits of AI and recommend steps employers can take to utilize the new technology and avoid violating federal anti-discrimination laws.

The EEOC’s guidance, entitled “The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees,” warns that violations of the Americans with Disabilities Act (“ADA”) may occur when:

  1. All applicants or employees must take the same AI-based test, even though people with certain disabilities will struggle with the testing format;
  2. Algorithms fail to consider legally-required reasonable accommodations when determining whether an applicant can perform the essential functions of a job, thereby “screening them out”;
  3. Algorithms fail to consider legally-required reasonable accommodations when rating an existing employee’s job performance; 
  4. Certain “gamified” tests, which use video games to measure abilities, personality traits, and other qualities of applicants and employees, fail to measure whether an individual with a disability can perform the essential functions of the job; and
  5. Certain AI interview questions that focus on disability elicit information about physical or mental impairments and morph into an unlawful medical examination or otherwise violate the ADA. 

The DOJ’s guidance, entitled “Algorithms, Artificial Intelligence, and Disability Discrimination in Hiring,” addresses these issues, as well. Both the EEOC and DOJ assert that employers may be liable for disability discrimination even when a third-party vendor performs the AI testing for the employer.

White House AI Bill of Rights proposal

On October 4, 2022, the Biden Administration published the White House’s “Blueprint for an AI Bill of Rights.” The Blueprint sets out voluntary guidelines intended to ensure that AI systems do not harm the American public’s rights, opportunities, or access to critical resources. The five guidelines are:

  1. The right to be protected from unsafe or ineffective systems that intentionally or unintentionally harm individuals or communities. According to this guideline, AI systems should be pre-tested for their specific intended uses before any interactions with the public.
  2. The right to be protected from discrimination caused by algorithms. This means, among other things, that AI systems should be used and designed in an equitable way.
  3. The right to be protected from abusive data practices via built-in protections. According to this principle, Americans should have agency over how their data is used.
  4. The right to know when an AI system is being used to evaluate you, and the right to understand how the system works and what criteria it is considering.
  5. The right to opt out of an AI interaction in favor of in-person human assistance, where appropriate. Appropriateness should be based on reasonable expectations.

The Biden Administration’s interest in AI and AI-related regulation is noteworthy, and will be monitored closely. Significant additional guidance and related proposals are expected.

For assistance with evaluating your obligations under the laws and guidance discussed above, including whether your tools may constitute AEDTs, please reach out to Anna Westfelt, Natalie Pierce, or any of your other Gunderson Dettmer attorneys.

Please also join us for Gunderson Dettmer’s webinar on this issue, scheduled for December 14, 2022. (Register here.)