
Modernizing Enforcement of the Civil Rights Act to Mitigate Algorithmic Harm in Federal Benefits Determinations

03.01.23 | 7 min read | by Alejandro Jimenez Jaramillo

Summary

The Department of Justice should modernize the enforcement of Title VI of the Civil Rights Act to guide effective corrective action for algorithmic systems that produce discriminatory outcomes with regard to federal benefits. To do so, the Department of Justice should clarify the definition of “algorithmic discrimination” in the context of federal benefits, establish systems to identify which federally funded public benefits offices use machine-learning algorithms, and secure the human resources needed to properly address algorithmic discrimination. This crucial action would leverage a demonstrable and growing interest in regulating algorithms, expressed over the past year in policy actions by both the White House and Congress, that has yet to produce a concrete enforcement mechanism for acting on instances of demonstrated algorithmic harm.

Challenge and Opportunity

Algorithmic systems are inescapable in modern life. They have become core elements of everyday activities such as surfing the web, driving to work, and applying for a job. It is virtually impossible to go through life without encountering an algorithmic system multiple times per day.

As machine-learning technologies have become more pervasive, they have also become gatekeepers for crucial resources, like accessing credit, receiving healthcare, securing housing, and obtaining a mortgage. Both local and federal governments have embraced algorithmic decision-making to determine which constituents can access critical services, often with little (if any) transparency for those subject to that decision-making.

When it comes to federal benefits, imperfections in these systems scale significantly. For example, the deployment of flawed algorithmic tools led to the wrongful termination of Medicaid for 19% of beneficiaries in Arkansas, the wrongful termination of Social Security income for thousands in New York, the wrongful termination of $78 million worth of Medicaid and Supplemental Nutrition Assistance Program benefits in Indiana, and erroneous unemployment fraud charges for 40,000 people in Michigan. These errors are particularly harmful to low-income Americans, for whom access to credit, housing, job opportunities, and healthcare is especially important.

Over the past year, momentum for regulating algorithmic systems has grown, resulting in several key policy actions. In February 2022, Senators Ron Wyden and Cory Booker and Representative Yvette Clarke introduced the Algorithmic Accountability Act. Endorsed by AI experts, the bill would require deployers of algorithmic systems to conduct and publicly share impact assessments of their systems. In October 2022, the White House released its Blueprint for an AI Bill of Rights. Although not legally enforceable, this robust rights-based framework for algorithmic systems was developed with a broad coalition of support through an intensive, yearlong public consultation process with community members, private-sector representatives, technology workers, and policymakers. Also in October 2022, the AI Training Act was signed into law. That legislation requires the development of a training curriculum covering core concepts in artificial intelligence for a limited set of federal employees, primarily those working in procurement roles. Finally, January 2023 saw the introduction of NIST’s AI Risk Management Framework, which guides organizations and individuals who design, develop, deploy, or use artificial intelligence on how to manage its risks and promote responsible use.

Collectively, these actions demonstrate clear interest in preventing harm caused by algorithmic systems, but none of them provide clear enforcement mechanisms for federal agencies to pursue corrective action in the wake of demonstrated algorithmic harm.

However, Title VI of the Civil Rights Act offers a viable and legally enforceable mechanism to aid anti-discrimination efforts in the algorithmic age. At its core, Title VI bans the use of federal funding to support programs (including state and local governments, educational institutions, and private companies) that discriminate on the basis of race, color, or national origin. Modernizing the enforcement of Title VI, specifically in the context of federal benefits, offers a clear opportunity for developing and refining a modern enforcement approach to civil rights law that can respond appropriately and effectively to algorithmic discrimination.

Plan of Action

Fundamentally, this plan of action aims to:

Clarify a Framework for Algorithmic Bias in Federal Benefits

Recommendation 1. Fund the Department of Justice (DOJ) to develop a new working group focused specifically on civil rights concerns around artificial intelligence.

The DOJ has already requested funding for and justified the existence of this unit in its FY2023 Performance Budget. In that budget, the DOJ requested $4.45 million to support 24 staff.

Clear precedent for this type of cross-sectional working group already exists within the DOJ (e.g., the Indian Working Group and the LGBTQI+ Working Group). Both of these groups contain members of the 11 sections of the Civil Rights Division to ensure a comprehensive strategy for protecting the civil rights of Indigenous peoples and the LGBTQ+ community, respectively. The pervasiveness of algorithmic systems in modern life suggests a similarly broad scope is appropriate for this issue.

Recommendation 2. Direct the working group to develop a framework that defines algorithmic discrimination and appropriate corrective action specifically in the context of public benefits.

A clear framework or rubric for assessing when algorithmic discrimination has occurred is a prerequisite for appropriate corrective action. Despite having a specific technical definition, the term “algorithmic bias” can vary widely in its interpretation depending on the specific context in which an automated decision is being made. Even when algorithmic bias does exist, researchers and legal scholars have made the case that a biased algorithm may be preferable to a biased human decision-maker, given its consistency and the relative ease of changing its behavior. The DOJ should therefore develop a context-specific framework for determining when algorithmic bias produces harmful discriminatory outcomes in federal benefits systems, starting with major federal systems such as Social Security and Medicare/Medicaid.

For example, the Brookings Institution has produced a useful report illustrating what it looks like to define algorithmic bias in a specific context. Cross-referencing a blueprint like this with existing Title VI procedures could yield guidelines for how the DOJ notifies relevant offices of algorithmic discrimination and initiates corrective action.

Identify Federal Benefits Systems That Use Algorithmic Tools

Recommendation 3. Establish a federal register or database in which offices administering federally funded public benefits record their use of machine-learning algorithms.

This system should specifically detail the developers of each algorithmic system and the offices in which it is used. Where possible, it should also include a description of the relevant training data, especially in cases where those data are federal property. Consider partnering with the Office of Federal Contract Compliance Programs to secure this information from current and future government contractors within the federal benefits domain.

In terms of cost, previous budget requests for databases of this kind have ranged from $2 million to $5 million.
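To make the proposal concrete, the sketch below shows what a single register entry might capture, based on the fields described in Recommendation 3. The field names and example values are hypothetical; they do not reflect any existing federal schema.

```python
from dataclasses import dataclass

# A hypothetical register entry covering the fields described in
# Recommendation 3; all names and values are illustrative only.
@dataclass
class AlgorithmRegisterEntry:
    system_name: str                      # the algorithmic system in use
    developer: str                        # vendor or in-house team that built it
    administering_office: str             # federally funded office using the system
    benefit_program: str                  # e.g., "Medicaid", "SNAP"
    training_data_description: str = ""   # included where possible
    training_data_is_federal_property: bool = False

entry = AlgorithmRegisterEntry(
    system_name="Example Eligibility Screening Model",
    developer="Example Contractor, Inc.",
    administering_office="Example State Benefits Office",
    benefit_program="SNAP",
    training_data_description="Historical case records (illustrative)",
    training_data_is_federal_property=True,
)
```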

Recommendation 4. Provide public access to the federal register.

Making the federal register publicly available would provide baseline transparency about federal funding for algorithmic systems. It would also facilitate external investigative efforts to identify possible instances of algorithmic discrimination in public benefits, complementing internal efforts by directing limited federal staff bandwidth toward cases that have already been identified. The public-facing portions of this register should be structured to respect appropriate privacy and trade-secrecy constraints.

Recommendation 5. Link the public-facing register to a public-facing form for submitting claims of algorithmic discrimination in the context of federal benefits.

This step would help channel public feedback on claims of algorithmic discrimination, with a sufficiently high evidentiary threshold to minimize frivolous claims. A well-designed system would require evidence and data to substantiate any claim of algorithmic discrimination, allowing federal employees to prioritize the claims that are submitted.

Equip Agencies with Necessary Resources for Addressing Algorithmic Discrimination

Recommendation 6. Authorize funding for technical staff within the enforcement arms of federal regulatory agencies, including but not limited to the DOJ.

Effective enforcement of anti-discrimination statutes today requires technical fluency in machine-learning techniques. In addition to the DOJ’s Civil Rights Division (see Recommendation 1), consider directing funds to hire or train technical experts within the enforcement arms of other federal agencies with explicit anti-discrimination enforcement authority, including the Federal Trade Commission, the Federal Communications Commission, and the Department of Education.

Recommendation 7. Pass the Stopping Unlawful Negative Machine Impacts through National Evaluation Act.

This act was introduced with bipartisan support in the Senate by Senator Rob Portman at the very end of the 2021–2022 legislative session. The short bill seeks to clarify that civil rights legislation applies to artificial intelligence systems and that decisions made by these systems are subject to claims of discrimination under said legislation, including the Civil Rights Act, the Americans with Disabilities Act, and the Age Discrimination Act of 1975, among others. Passing the bill would be a simple but effective way to signal to federal regulatory agencies (and those they regulate) that artificial intelligence systems must comply with civil rights law, and it would affirm the federal government’s authority to ensure that they do.

Conclusion

On his first day in office, President Biden signed an executive order addressing the entrenched denial of equal opportunity to underserved communities across America. Ensuring that federal benefits are not systematically denied to low-income Americans and Americans of color through algorithmic discrimination is critical both to achieving that order’s goals and to answering the rising chorus of voices calling for meaningful regulation of algorithmic systems. In the context of federal benefits, the authority for such regulation already exists. To ensure that this authority is enforced effectively in the modern era, the federal government needs to clearly define algorithmic discrimination in the context of federal benefits, identify where federal funding supports algorithmic determination of federal benefits, and recruit the talent necessary to verify instances of algorithmic discrimination.

Frequently Asked Questions
What is an algorithm? How is it different from machine learning or artificial intelligence?

An algorithm is a structured set of steps for doing something. In the context of this memo, an algorithm usually means computer code that is written to do something in a structured, repeatable way, such as determining if someone is eligible for Medicare, identifying someone’s face using a facial recognition tool, or matching someone’s demographic profile to a certain kind of advertisement.


Machine-learning techniques are a specific family of algorithms that train a computer to perform different tasks by taking in large amounts of data and looking for patterns. Artificial intelligence generally refers to technical systems that have been trained to perform tasks with minimal human oversight. Machine learning and artificial intelligence are closely related, and the terms are often used interchangeably.
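To make the distinction concrete, here is a minimal sketch of machine learning in practice: instead of being given eligibility rules, the model infers a pattern from labeled examples. It assumes scikit-learn is available, and the data are invented for illustration.

```python
from sklearn.tree import DecisionTreeClassifier

# Toy training data: [annual income in $1,000s, household size] -> eligible (1) or not (0).
# These numbers are invented for illustration, not real eligibility criteria.
X = [[12, 4], [15, 3], [40, 2], [55, 1], [18, 5], [70, 3]]
y = [1, 1, 0, 0, 1, 0]

# The model is never handed an eligibility rule; it learns one from the examples.
model = DecisionTreeClassifier().fit(X, y)

# It can now make determinations for cases it has never seen.
print(model.predict([[20, 4]]))  # e.g., [1] -> predicted eligible
```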

How can we determine if an algorithm is biased?

We can identify algorithmic bias by comparing the expected outputs of an algorithm to its actual outputs. For example, if we find that an algorithm uses race as a decisive factor in determining whether someone is eligible for federal benefits that should be race-neutral, that would be an example of algorithmic bias. In practice, these assessments often take the form of statistical tests run over many outputs of the same algorithmic system.
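As a minimal sketch of what such a statistical test can look like, the example below runs a chi-squared test of independence on benefit determinations broken out by demographic group. The counts and group labels are hypothetical; a low p-value signals a disparity worth investigating, not proof of discrimination.

```python
from scipy.stats import chi2_contingency

# Hypothetical benefit determinations by demographic group.
# Rows: groups; columns: [approved, denied].
outcomes = [
    [820, 180],  # group A
    [610, 390],  # group B
]

chi2, p_value, dof, expected = chi2_contingency(outcomes)
print(f"chi2 = {chi2:.1f}, p = {p_value:.3g}")

# A very small p-value means the gap in approval rates between groups is
# unlikely to be due to chance alone -- a signal of possible bias, which
# must then be interpreted in context (see the next question).
```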

Is algorithmic bias inherently bad?

Although many algorithms are biased, not all biases are equally harmful. This is due to the highly contextual nature of how an algorithm is used. For example, a false positive in a criminal-sentencing algorithm arguably causes more harm than a false positive in a federal benefits determination. Algorithmic bias is not inherently a bad thing and, in some cases, can actually advance equity and inclusion efforts depending on the specific context (consider a hiring algorithm for higher-level management that weights non-male gender or non-white race more heavily for selection).
