October 8, 2024

Massive change to AI regulation in 2022

With all the public outrage from viral news stories on the dangers of artificial intelligence, it’s no secret that governments have been gearing up to set the ground rules on AI for some time now. For those of us involved in these efforts, every other week of 2021 seemed to bring a new official body publishing guidance, standards, or a Request for Information (RFI), each signaling a major transformation just around the corner.

It may be true that 2021 was a huge year for regulating AI, but if these tea leaves can be trusted, 2022 will be truly massive. What will the world of algorithmic governance for 2022 look like? Here are 10 predictions for AI regulation:

1. The U.S. will continue to push for voluntary standards and frameworks, despite clear evidence that they won’t work.

Standards-setting bodies like the National Institute of Standards and Technology (NIST) and the Institute of Electrical and Electronics Engineers (IEEE) have been requesting information, reading comments, and drafting proposals for codes of conduct and voluntary frameworks to mitigate risk and root out discrimination in AI. The drive to self-regulate was inevitable, and voluntary frameworks are certainly a reasonable first step.

But, internationally, similar efforts have fallen conspicuously flat. Last month, the United Nations Educational, Scientific, and Cultural Organization (UNESCO) launched the first global agreement on AI principles, with all 193 member states signing on. The framework, which contains no provisions for enforcement, prohibits practices like social credit scoring and pervasive surveillance, which is why it surprised many that the principles were also adopted by China, whose practices clearly violate the agreement. If there was ever evidence of the limitations of codes of conduct as instruments for defending human rights, this hypocrisy by the CCP is the strongest yet. For 2022, it’s highly likely we’ll see this trend continue, with private organizations throwing their hats into the self-regulation ring.

2. Governments around the world (perhaps excluding the U.S.) will pass national algorithmic governance laws.

The U.S. has lagged significantly behind our international counterparts in tackling AI safety at the federal level, while other nations code their own national values into the work. In September, China released draft rules targeting many applications of AI, ranging from algorithmic transparency to ensuring the spread of pro-CCP online content. Other nations like Singapore have been ahead of the curve since 2019, issuing and updating voluntary frameworks that provide an excellent jumping-off point for adding stronger teeth. Still others, like Japan, have issued reports and national strategies signaling that deliberations are taking place and rules are on the horizon.

With so many differing approaches to the topic, it’s hard to say whether these rules will have a huge impact on U.S. businesses directly, but they will provide strong test cases for or against claims of stifled innovation. For now, significant disruption to U.S. markets remains unlikely under most of these regimes, with one glaring exception.

3. The AI Act will pass in the EU, and most companies in the world will have to comply with it.

With its rich history of multistakeholder collaboration, the EU is poised to “set the standard” of AI regulation for all of us. The same phenomenon came to pass with the General Data Protection Regulation (GDPR), when countries and U.S. states seeking similar protections simply copied most provisions of the EU law into their own jurisdictions. The GDPR (and the AI Act) have serious implications for U.S. companies, given that these rules apply to any technology used by people in the EU, even if the company operates elsewhere. The AI Act also leaves room for further complication, given that some portions of the law will be left to member states for enforcement and clarifying guidance. This “regulatory divergence” problem is already a huge drain on compliance departments, and it is about to get a whole lot worse.

4. AI export controls will tighten.

The Biden Administration dealt a huge blow to the international surveillance market last month by adding the notorious Israeli spyware maker NSO Group to the Commerce Department’s Entity List, restricting the company’s access to U.S. technology and prompting the resignation of its newly named CEO. News of AI-driven oppression over the last few years has garnered huge bipartisan scrutiny, mainly leveled against China for its oppression of Uighur Muslims in Xinjiang. If international tensions escalate, we expect to see more economic retaliation on both sides, as the U.S. moves to keep its research and technology from being used to oppress minorities around the world.

5. Activist litigators will push the limits of the courts to underscore American civil liberties and freedom from discrimination.

Experts have long said, with some merit, that the laws needed to prevent algorithmic discrimination already exist in the U.S. While that’s mainly true, we’ve not yet seen a multitude of legal challenges against AI, due in large part to the lack of access to the data and code that litigators and affected communities would need to bring a case. Last year saw the first-ever regulatory ruling on an AI-based product accused of discrimination in the financial sector: the NY Department of Financial Services (DFS) shockingly concluded its investigation with a plea to federal regulators that the governing laws are badly overdue for an update. The still-undecided probe into alleged AI-driven discrimination by UnitedHealth Group, born of a hospital management algorithm, has seen no significant updates. But we’re likely to see further activity this year: a civil claim brought under antidiscrimination law, a finding that existing law is insufficient to support such a claim, or perhaps no action at all.

Meanwhile, little-known state laws, like Illinois’s 2008 Biometric Information Privacy Act (BIPA), have become a hot zone for activist litigation. Several cases have emerged against intrusive surveillance technologies like those sold by Clearview AI in the U.S., while international regulators have been hard at work suspending and fining the company in their own jurisdictions. Since these cases have by and large been successful, enterprises that combine biometric data with AI do so at their own peril.

6. Members of Congress will try (again) to pass protective federal legislation.

We’re hearing rumblings among colleagues on the Hill that a few bits of legislation are on the near-term horizon. In fact, the 2019 Algorithmic Accountability Act, sponsored by Yvette Clarke, Ron Wyden, and many others, is due to be reintroduced with much more detail in the weeks ahead. If passed, it would require companies to create yearly impact assessments and share them directly with the FTC, creating new transparency around when and how automated systems are used. Almost certainly, we’ll see the FTC play a major role in regulating U.S. algorithms, which we’ll talk about later. 

We also expect to see further bills proposed that take aim at algorithmic discrimination and transparency. If history is any indicator, it’s a reasonable assumption that these protective bills will focus on narrow segments of the population for whom protections have mass appeal (think: children and the disability community). Any proposed legislation has an uphill battle ahead of it in our partisan Congress, but with a Democratic majority through at least 2022, there is certainly hope!

7. Local jurisdictions will not wait on Congress and will pass their own algorithmic oversight laws.

This year saw the nation’s first AI auditing legislation at the local level, when the New York City Council passed Int. 1894 with an overwhelming majority. The law reaffirms residents’ protection against so-called disparate impact in employment, or unintentional discrimination against protected classes like race, gender, and age. It is one of the more watered-down versions of algorithmic discrimination protections we’re likely to see, but it is a resounding confirmation that “disparate impact” will be the legal doctrine governing further such proposals. The legislation, which passively became law last month without a mayoral signature, requires enterprises to seek outside expertise for their algorithmic audits. It’s a big open question whether this provision will appear in future attempts.
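
To make the “disparate impact” doctrine concrete, here is a minimal, hypothetical sketch of the kind of selection-rate comparison an auditor might run, using the commonly cited EEOC “four-fifths rule” as a screening threshold. The group names and counts below are invented for illustration, and the rule is a heuristic screen, not a legal determination.

```python
# Hypothetical sketch: checking an automated hiring tool's outcomes against
# the EEOC's "four-fifths rule," a common screen for disparate impact.
# Group labels and counts are invented for illustration.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were advanced by the tool."""
    return selected / applicants

# Outcomes per demographic group (hypothetical numbers).
outcomes = {
    "group_a": {"applicants": 400, "selected": 120},  # 30% selection rate
    "group_b": {"applicants": 300, "selected": 60},   # 20% selection rate
}

rates = {g: selection_rate(o["selected"], o["applicants"]) for g, o in outcomes.items()}
highest = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest
    flag = "potential disparate impact" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {impact_ratio:.2f} -> {flag}")
```

Whether a ratio like 0.67 actually triggers liability is a legal question rather than a statistical one, and future proposals will wrestle with both how disparate impact gets measured and who must do the measuring.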

For instance, just last week the Attorney General of the District of Columbia proposed his own bill, in collaboration with civil society, to protect residents against the same sorts of issues. The much more comprehensive bill, the Stop Discrimination by Algorithms Act (SDAA), would not require third-party scrutiny. Instead, it would require the submission of AI impact assessments directly to the government for review. The bill applies to algorithms used in “education, employment, housing, and public accommodations including credit, healthcare, and insurance,” again citing “disparate impact” as the driving legal doctrine behind enforcement. Interestingly, the SDAA would also extend the core financial compliance requirement of “adverse action reporting” to industries outside of finance. Multiple states and cities are having similar conversations, and we should expect many of these popular proposals to succeed.
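
Adverse action reporting is worth a quick illustration. In consumer credit, lenders must tell a declined applicant the principal reasons for the decision, and one way teams approach this for model-driven decisions is to map a post hoc explainer’s top negative feature attributions to consumer-readable reason statements. The sketch below is hypothetical: the attribution values, feature names, and reason text are invented, and whether this approach satisfies regulators is exactly the kind of open question raised later in this piece.

```python
# Hypothetical sketch: turning a model's per-feature attributions into the
# "principal reasons" a consumer would see in an adverse action notice.
# Attribution values and reason-code text are invented for illustration.

# Signed contributions to a declined applicant's score (e.g., from SHAP or a
# similar post hoc explainer); negative values pushed the score toward denial.
attributions = {
    "debt_to_income_ratio": -0.42,
    "recent_delinquencies": -0.31,
    "length_of_credit_history": -0.08,
    "annual_income": 0.12,
}

# Mapping from model features to consumer-readable reason statements.
reason_codes = {
    "debt_to_income_ratio": "Debt obligations are too high relative to income",
    "recent_delinquencies": "Recent delinquency on one or more accounts",
    "length_of_credit_history": "Limited length of credit history",
    "annual_income": "Income insufficient for amount of credit requested",
}

# Report the top adverse contributors (notices typically list the principal
# reasons, commonly up to four).
negative = sorted((value, feature) for feature, value in attributions.items() if value < 0)
principal_reasons = [reason_codes[feature] for _, feature in negative[:4]]

print("Principal reasons for adverse action:")
for reason in principal_reasons:
    print(" -", reason)
```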

And now, onto the really interesting stuff.

8. U.S. federal regulators will use their rulemaking powers to update guidance on existing laws for the machine learning age.

The FDA is the furthest along on its journey toward new rules for AI-powered medical devices, with one exception we’ll outline in another prediction below. In the FDA’s case, the RFI dates back to 2019 and has already evolved into an action plan outlining five steps the agency plans to undertake. But similar RFIs in other categories were also released in 2021, with efforts underway at the EEOC, OCC, FDIC, Federal Reserve, CFPB, and NCUA.

This guidance will be critical for enterprises to watch, as it has the potential to clarify a lot of tricky issues around using AI in healthcare, employment, and finance. Some of the issues we expect (and hope!) to see clarified include:

  • Can we use black box models for high-risk categories if they work better?
  • Do post hoc explanations count for adverse action reporting?
  • Aren’t there better ways to infer or collect protected demographic labels of our users?
  • What definitions of fairness are the right ones to measure? (A short sketch after this list shows how two common definitions can disagree.)
  • What does “monitoring,” required by banking regulators per SR 11-7, mean in practice?
  • Do we need to hire third-party auditors for AI model validation?
  • What data sources are OK to use, and which ones are prohibited?
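
On the fairness-definition question above, a small hypothetical example shows why the answer matters: a model can satisfy demographic parity (equal selection rates across groups) while failing an equalized-odds-style comparison of error rates. All counts below are invented for illustration.

```python
# Hypothetical sketch: a model with identical selection rates across two groups
# (demographic parity holds) can still have very different error rates
# (equalized odds fails). All counts are invented for illustration.

groups = {
    # tp: qualified and selected, pos: total qualified,
    # fp: unqualified but selected, neg: total unqualified
    "group_a": {"tp": 40, "pos": 50, "fp": 0,  "neg": 50},
    "group_b": {"tp": 15, "pos": 25, "fp": 25, "neg": 75},
}

for name, g in groups.items():
    selected = g["tp"] + g["fp"]
    total = g["pos"] + g["neg"]
    selection_rate = selected / total   # demographic parity compares this
    tpr = g["tp"] / g["pos"]            # true positive rate
    fpr = g["fp"] / g["neg"]            # false positive rate
    print(f"{name}: selection rate {selection_rate:.0%}, TPR {tpr:.0%}, FPR {fpr:.0%}")

# Both groups are selected at 40%, yet group_b's qualified members are approved
# far less often and its unqualified members far more often. Which definition
# the regulator cares about changes the verdict entirely.
```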

Some of these will take quite a while to fully work through and reach consensus, but significant movement in 2022 is all but guaranteed. Just as guaranteed: the FTC will be a significant player in algorithmic oversight, thanks to its bombshell disclosure.

9. The FTC will set new rules that govern most consumer-facing AI.

Proponents of AI oversight rules have long discussed the FTC as the most appropriate (and empowered) regulator to take up the fight for consumers’ rights. With broad rulemaking powers and a progressive, expert staff, the FTC’s February 2022 agenda item signals change is coming, and fast. This will be one to watch, as it’s unclear whether February’s time slot will simply open a period of public comment, or whether the agency already has draft rules in mind. Our money is on the latter.

10. The White House Office of Science and Technology Policy will publish the nation’s first Algorithmic Bill of Rights.

The Biden Administration’s thoughtful elevation of the White House OSTP to a cabinet-level position will bring us a landmark set of rights to govern AI in practice. With world-renowned experts like Dr. Alondra Nelson at the helm, this will be something to watch closely. The RFI for this initiative is open through January 15th, and it’s likely to be one of the more progressive and comprehensive efforts to date. We expect to see highly protective and expert-informed provisions, like those mandating consumer notification of, and agency over, algorithmic decisions for any and all applications of AI. If done right, this effort could set the very definition of what it means to have “Responsible AI” in the U.S.

With all of this activity, it’s no wonder smart enterprises are taking steps to future-proof their own practices around fair and equitable AI. The cost of complying with this evolving and patchwork regulatory environment will surely be significant, but companies can get ahead of the curve by committing to regular audits with practitioners who fully understand the landscape. Unfortunately, many American companies have been slow to adopt Responsible AI frameworks. But January brings the promise of a new year, which is set to be a whole new ballgame. 

Liz O’Sullivan is a cofounder and VP of responsible AI at Arthur, the AI monitoring company. She also serves as technology director for STOP (The Surveillance Technology Oversight Project).