
Client Insight: Legislating the Future of AI in Employment: NYC's Law on Automated Decision Tools & Other Important Developments

July 26, 2023

[Note: This alert updates and replaces our original Client Insight dated December 5, 2022.]

Companies are increasingly using automation and artificial intelligence (“AI”) to identify and hire qualified candidates more efficiently, accurately, and objectively. In response, regulators and legislators are enacting laws and rules specifically addressing AI’s potential for bias and perceived lack of transparency and accountability.

On July 5, 2023, New York City’s Department of Consumer and Worker Protection (“DCWP”) began enforcing Local Law 144 of 2021 (“LL 144”). LL 144 makes it unlawful for an employer or employment agency to use an “automated employment decision tool” (“AEDT”) to evaluate New York City (NYC) job candidates unless certain steps are taken, such as conducting a bias audit of the tool and providing notices to candidates. LL 144 took effect on January 1, 2023; however, DCWP delayed enforcement due to the high volume of public comments it received in response to the proposed rules. On July 1, 2023, in an effort to address ongoing confusion, DCWP published AEDT FAQs that clarify several important issues relating to LL 144 and its enforcement.

This alert updates and replaces our December 5, 2022 Client Insight, summarizes the scope and requirements of LL 144, recommends steps companies should take now to comply with the law, and provides an overview of similar laws and guidance in other jurisdictions.

An Overview of NYC’s Law on Automated Decision-Making in Employment

When and where does LL 144 apply?

LL 144 applies to employers and employment agencies that use an AEDT to assist with hiring or promotion decisions in NYC. If the position in question is located in NYC at least part of the time, or the position is fully remote but primarily associated with an office located in NYC, LL 144 applies. Employers that use an AEDT to substantially help them assess or screen candidates at any point in the hiring or promotion process must notify job candidates who reside in NYC of that fact. LL 144 defines a “Candidate for Employment” as a person who has applied for a specific employment position by submitting the necessary information in the format required by the employer. Individuals whose resumes are reviewed and rejected by AI tools without the individual ever applying for a job are not candidates under LL 144.

What is an “automated employment decision tool”?

The term “automated employment decision tool” (or “AEDT”) is broadly defined as “any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons.” This could include, for example, tools that automatically screen resumes in order to make employment decisions, such as whom to interview or hire, or that predict a candidate’s likelihood of success on the job. On the other hand, tools that do not automate, support, substantially assist, or replace discretionary decision-making processes, and that do not materially impact natural persons, are not covered; examples include junk email filters, firewalls, antivirus software, calculators, spreadsheets, databases, data sets, and other compilations of data.

Under the NYC Rules, “substantially assist or replace discretionary decision making” means the employer: (1) relies solely on simplified output (such as scores, tags, rankings, or a candidate’s estimated technical skills) and no other criteria; (2) uses a simplified output as one of several criteria, but considers the simplified output to be more significant than other factors; or (3) uses a simplified output to overrule conclusions derived from other factors, including human decision-making.

What are the employer’s obligations regarding audits and notice?

When an AEDT plays a substantial role in determining which individuals move forward in the hiring or promotion process, the employer must ensure the AEDT has undergone an independent bias audit no more than one year prior to use of the tool and must publish a summary of the results.

What is a bias audit?

Where an AEDT selects candidates for employment, a bias audit must analyze and disclose the rates at which individuals in protected categories (e.g., race, ethnicity, or sex, and intersectional combinations of these categories) are either selected to move forward in the hiring or promotion process or assigned a classification by the AEDT, and how those rates compare to selection rates of individuals in the most selected category (the “impact ratio”). Where an AEDT scores candidates for promotions, a bias audit must calculate the median score for the full sample of applicants; calculate the scoring rate for individuals in each category; and calculate the impact ratio for each category, including the impact on sex categories, race/ethnicity categories, and intersectional categories. The NYC Rules include several examples of how this data might be organized and analyzed.
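
To make the selection-rate arithmetic concrete, here is a minimal sketch, assuming entirely hypothetical category names and applicant counts, of the calculation the NYC Rules describe: each category’s selection rate is divided by the highest category’s selection rate to produce the impact ratio. It illustrates the math only and is not a substitute for an independent bias audit.

```python
# Illustration only: hypothetical data, not an actual bias audit.
# Selection rate = number selected to move forward / number of applicants.
# Impact ratio = a category's selection rate / the highest selection rate.

applicant_data = {
    # category: (applicants, selected)
    "Category A": (400, 120),
    "Category B": (380, 95),
    "Category C": (60, 15),
}

selection_rates = {
    category: selected / applicants
    for category, (applicants, selected) in applicant_data.items()
}
highest_rate = max(selection_rates.values())

for category, rate in selection_rates.items():
    print(f"{category}: selection rate {rate:.1%}, "
          f"impact ratio {rate / highest_rate:.2f}")
```

For scored AEDTs, the NYC Rules substitute a “scoring rate” (how often individuals in a category score above the sample’s median score) for the selection rate, but the impact ratio is computed the same way.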

What data is used to conduct a bias audit?

Historical data of the AEDT must be used to conduct a bias audit. “Historical data” is the data collected during an employer’s use of an AEDT to assess candidates for employment or employees for promotion. A bias audit can use the historical data of multiple employers or employment agencies that use the same AEDT; however, employers and employment agencies can only rely on such an audit if: (a) they provided historical data from their use of the AEDT to the independent auditor conducting the bias audit; or (b) it is the first time they are using the AEDT.

If an employer has insufficient historical data available to conduct a statistically significant bias audit, it can use historical data from other employers or employment agencies, or test data. In either case, the bias audit’s summary of results must identify and explain the source of the data used. If the bias audit relied on test data, the summary must also explain why historical data was not used and describe how the test data was generated and obtained. (To allow for flexibility and the development of best practices, DCWP has not set requirements for test data.)

When can employers exclude a category of employees?

If a category represents less than 2% of the data used for the bias audit, it can be excluded from the required calculations. However, the calculations must include all other categories.
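
As a simple illustration of this threshold, with hypothetical counts, the exclusion test is a share-of-sample check applied before the impact ratio calculations:

```python
# Illustration only: hypothetical counts.
# Categories under 2% of the audit sample may be excluded from the
# required calculations; all remaining categories must be included.

category_counts = {"Category A": 900, "Category B": 85, "Category C": 15}
total = sum(category_counts.values())

included = {c: n for c, n in category_counts.items() if n / total >= 0.02}
excluded = {c: n for c, n in category_counts.items() if n / total < 0.02}

print("Included:", included)
print("Excluded:", excluded)  # Category C is 1.5% of the sample
```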

Can you rely on vendors to conduct bias audits?

Vendors that create or sell an AEDT are not responsible for bias audits of the tool. Although vendors may subject their tools to bias audits and may assist clients with collecting data for the clients’ bias audits, it is the employer’s obligation to ensure a bias audit takes place before using an AEDT.

Who is an “Independent Auditor”?

The bias audit must be conducted by an independent auditor. An independent auditor exercises objective and impartial judgment in the performance of a bias audit. Auditors are not independent if they: (a) work for the employer or employment agency that will use the AEDT or the vendor that developed or distributes the AEDT; (b) were involved in using, developing, or distributing the AEDT regardless of where they work currently; or (c) have a direct financial interest or a material indirect financial interest in the employer or employment agency that will use the AEDT or the vendor that developed or distributed the AEDT.

How must bias audit results be published?

Employers must publish a summary of the bias audit results on their company website in a clear and conspicuous place. Alternatively, employers may publish the summary on a separate website so long as they provide all candidates with an active hyperlink to the website. The published summary must include: (a) the date the employer began using the AEDT; (b) the date of the most recent bias audit; (c) the source and explanation of the data used to conduct the bias audit; (d) the number of individuals the AEDT assessed that fall within an unknown category; and (e) the number of applicants or candidates, the selection or scoring rates, as applicable, and the impact ratios for all categories. The employer must keep the summary of results posted for at least 6 months after the latest use of the AEDT for an employment decision.

When must notice be provided, and what must it include?

No less than 10 days before using the AEDT, the employer must provide a notice to an employee or candidate who resides in NYC that, among other things: (a) notifies the individual that they may request an alternative process or accommodation to the AEDT; and (b) identifies the job qualifications and characteristics that the AEDT will use in assessing the candidate or employee. If not disclosed on the employer or employment agency’s website, information about the type of data collected for the AEDT, the source of such data, and the employer or employment agency’s data retention policy must be made available upon written request by a candidate or employee, within 30 days of the written request. Although candidates have the right to ask for an alternative process, LL 144 does not require employers to provide one.

How is the law enforced?

The law will be enforced by the NYC Corporation Counsel or other individuals designated by the Corporation Counsel. Additionally, candidates and employees have the right to bring a civil action in a court of competent jurisdiction. A person that violates the law may be liable for a civil penalty of up to $500 for the first violation and each additional violation occurring on the same day as the first, and $500 to $1,500 for each subsequent violation, with each day on which an AEDT is used constituting a separate violation.

Practical Steps: Complying with AEDT and AI Regulations

Employers should promptly evaluate whether their activities trigger LL 144 or other local, state, or federal laws regulating automation or the use of AI in the employment context. Companies can take the following steps:

  1. Evaluate your tools, including those used by third-party vendors with whom you contract, to determine whether they utilize machine learning, AI, statistical modeling, or data analytics to generate a score, prediction, classification, or recommendation that you rely upon when making hiring or promotion decisions. Examples of automation could include automated systems that:

    a. Score candidates' response to technical questions; 

    b. Review or screen candidates' resumes; 

    c. Collect information on candidates' skills and availability using chatbots; and 

    d. Create rubrics based on candidates' past performance to assist with hiring decisions. 
     
  2. Conduct a bias audit of your AEDT, which, under LL 144, involves an impartial evaluation by an independent auditor. Employers and auditors should ensure that the standards and criteria they use align with LL 144 and the requirements discussed above. Employers should also consider:

    a. What data is being collected; 

    b. Why and how the data is being analyzed;

    c. Whether the criteria used to evaluate the candidate or employee are linked to the relevant job requirements and likelihood of success on the job (and are not merely traits exhibited by previously successful employees that have no link to work performance);

    d. Whether the data collection and evaluation are sufficiently transparent for the employer to review and explain to others (now and on an ongoing basis); and

    e. Whether, based on an analysis of the selection ratios, any AEDT is having an adverse impact on any protected categories of job applicants and employees, especially on the basis of race, ethnicity, sex, or disability.
     
  3. Provide candidates and employees who reside in NYC and apply for positions within NYC with notice of your AEDT and how to request alternatives. These steps, some of which are required by LL 144, may include:

    a. Advising candidates and employees about the AEDT or type of technology being used and how the applicants will be evaluated.

    b. Advising candidates and employees of the results of AEDT bias audits, in accordance with the guidance discussed above.

    c. Advising candidates and employees with disabilities of any challenges they may encounter using the AEDT (especially interactions with the AEDT that may result in the individual being “screened out” of consideration).

    d. Allowing candidates or employees to opt-out of, or request alternative processes or accommodations to, the automated decision-making.
     
  4. Implement alternative means for rating performance if your AEDT adversely impacts candidates or employees on the basis of race, ethnicity, sex, or disability.
     
  5. Train staff on identifying and offering reasonable accommodations to the use of AEDTs, and on alternative methods for rating performance.

Contact your Gunderson employment attorneys for assistance with any of these steps.

Other Noteworthy State and Federal Law Developments Relating to Automated Decision-Making in Employment

State Law Developments.

  • On January 1, 2020, Illinois’s Artificial Intelligence Video Interview Act went into effect; it requires employers to take certain steps if they ask applicants to record video interviews and use artificial intelligence to analyze the applicant-submitted videos.
  • In May 2020, Maryland passed a law requiring that companies obtain an applicant’s written consent to use facial recognition technology during pre-employment job interviews. This law went into effect in October 2020.
  • On March 15, 2022, the California Civil Rights Council (formerly known as the Fair Employment and Housing Council) issued draft regulations that would impose requirements on companies that screen out applicants or classes of employees on the basis of a protected characteristic, subject to certain exceptions.
  • On January 1, 2023, the California Privacy Rights Act (“CPRA”) came into effect, amending the California Consumer Privacy Act (“CCPA”). Under the CPRA, consumers have the right to opt out of automated decision-making and profiling, meaning any automated processing of personal information to evaluate personal aspects of the consumer. Because the CCPA’s exemption for candidate and employee data expired on January 1, 2023, the CPRA now applies to California candidates and employees of companies that are subject to the law.

Federal Law Developments. On May 12, 2022, the Equal Employment Opportunity Commission (“EEOC”) and the Department of Justice (“DOJ”) issued guidance warning that the algorithms and methodologies underpinning AI may be biased against job applicants and employees with disabilities. However, the EEOC and DOJ also acknowledge the benefits of AI and recommend steps employers can take to utilize the new technology and avoid violating federal anti-discrimination laws.

The EEOC’s guidance, entitled “The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees,” warns that violations of the Americans with Disabilities Act (“ADA”) may occur when:

  1. All applicants or employees must take the same AI-based test, even though people with certain disabilities may struggle with the testing format;
  2. Algorithms fail to consider legally-required reasonable accommodations when determining whether an applicant can perform the essential functions of a job, thereby “screening them out”;
  3. Algorithms fail to consider legally-required reasonable accommodations when rating an existing employee’s job performance;
  4. Certain “gamified” tests, which use video games to measure abilities, personality traits, and other qualities of applicants and employees, fail to measure whether someone with a disability can perform the essential functions of the job; and
  5. Certain AI interview questions that focus on disability elicit information about physical or mental impairments and may amount to an unlawful medical examination or otherwise violate the ADA.

The DOJ’s guidance, entitled “Algorithms, Artificial Intelligence, and Disability Discrimination in Hiring,” addresses these issues, as well. Both the EEOC and DOJ assert that employers may be liable for disability discrimination even when a third-party vendor performs the AI testing for the employer.

April 2023 Joint Statement by Numerous Federal Agencies

On April 25, 2023, the Federal Trade Commission (FTC), Consumer Financial Protection Bureau (CFPB), DOJ, and EEOC released a joint statement on AI reiterating their intention to use their authority to protect civil rights, fair competition, consumer protection, and equal opportunity against discrimination and bias caused by automated systems and artificial intelligence. The joint statement noted that potential discrimination in automated systems may come from various sources, including: (a) data and datasets (which may contain errors or biases); (b) model opacity and access (many automated systems are “black boxes”); and (c) design and use (which may rest on flawed assumptions).

White House AI Bill of Rights Proposal

On October 4, 2022, the Biden Administration published the White House’s “Blueprint for an AI Bill of Rights.” The Blueprint sets out voluntary guidelines intended to ensure AI systems do not harm the American public’s rights, opportunities, or access to critical resources. The five guidelines are:

  1. The right to be protected from unsafe or ineffective systems that intentionally or unintentionally harm individuals or communities. According to this guideline, AI systems should be pre-tested for their specific intended uses before any interactions with the public.
  2. The right to be protected from discrimination caused by algorithms. This means, among other things, that AI systems should be used and designed in an equitable way.
  3. The right to be protected from abusive data practices via built-in protections. According to this principle, Americans should have agency over how their data is used.
  4. The right to know when you are being evaluated by an AI system, and the right to understand how the system works and what criteria it is considering.
  5. The right to opt out of an AI interaction in favor of in-person human assistance, where appropriate. Appropriateness should be based on reasonable expectations.

The Biden Administration’s interest in AI and AI-related regulation is noteworthy, and we will monitor developments closely. Significant additional guidance and related proposals are expected.

Trends: AI Regulation across the U.S. and Worldwide

Governments throughout the world are taking note of AI, and AI-related regulation is spreading rapidly. According to Stanford University’s 2023 AI Index Report, “growing policy interest in AI can be seen at the state level within the U.S., with 60 AI-related bills [i.e., bills containing mentions of AI] proposed in 2022—a dramatic increase from the 5 bills proposed in 2015.” The proportion of those bills being passed is also rising: “In 2015, 1 bill was passed, representing 16% of the total bills proposed that year; while in 2022, 21 bills were passed, or 35% out of the total that were proposed.” Globally, a review of the legislative records of 127 countries shows that the number of bills containing the term “artificial intelligence” that were passed into law grew from just 1 in 2016 to 37 in 2022. Now more than ever, companies utilizing AI and AI-related tools must be vigilant about their obligations under new and pending laws.

For assistance with evaluating your obligations under the laws and guidance discussed above, including whether your tools may constitute AEDTs, please reach out to Natalie Pierce, Anna Westfelt, or any of your other Gunderson Dettmer attorneys.