In pursuit of lower costs and improved decision-making, federal agencies have begun to adopt artificial intelligence (AI) to assist in government decision-making and public administration. As AI occupies a growing role within the federal government, algorithmic design and evaluation will increasingly become a key site of policy decisions. Yet a 2020 report found that nearly half (47%) of all federal agency uses of AI were procured externally, a third of them from private companies. To ensure that agencies' use of AI tools is lawful, effective, and equitable, the Biden-Harris Administration should establish a Federal Artificial Intelligence Program to govern the procurement of algorithmic technologies. Moreover, the AI Program should be built around rigorous protocols for collecting the race data needed to identify and mitigate discrimination by these technologies.
Researchers who study and conduct algorithmic audits have highlighted the importance of race data for effective anti-discrimination interventions, the challenge of category misalignment between data sources, and the need for policy intervention to ensure accessible and high-quality data for audit purposes. However, inconsistencies in the collection and reporting of race data significantly limit the extent to which the government can identify and address racial discrimination in technical systems. Moreover, given significant flexibility in how their products are presented during the procurement process, technology companies can manipulate race categories in order to obscure discriminatory practices.
To ensure that the AI Program can evaluate any inequities at the point of procurement, the Office of Science and Technology Policy (OSTP) National Science and Technology Council Subcommittee on Equitable Data should establish guidelines and best practices for the collection and reporting of race data. In particular, the Subcommittee should produce a report that identifies the minimum level of data private companies should be required to collect and in what format they should report such data during the procurement process. These guidelines will facilitate the enforcement of existing anti-discrimination laws and help the Biden-Harris Administration pursue their stated racial equity agenda. Furthermore, these guidelines can help to establish best practices for algorithm development and evaluation in the private sector. As technology plays an increasingly important role in public life and government administration, it is essential not only that government agencies are able to access race data for the purposes of anti-discrimination enforcement—but also that the race categories within this data are not determined on the basis of how favorable they are to the private companies responsible for their collection.
Challenge and Opportunity
Research shows that government agencies often have little information about key design choices made in the creation and implementation of the algorithmic technologies they procure. Frequently, these choices are not documented, or are documented by contractors but never provided to the government client during the procurement process. Existing regulation establishes specific requirements for the procurement of information technology, such as managing security and privacy risks, but these requirements do not account for the specific risks of AI, such as its propensity to encode structural biases. Under the Federal Acquisition Regulation, agencies can only evaluate vendor proposals based on the criteria specified in the associated solicitation. Therefore, written guidance is needed to ensure that these criteria include sufficient information to assess the fairness of AI systems acquired during procurement.
The Office of Management and Budget (OMB) defines minimum standards for collecting race and ethnicity data in federal reporting. Race and ethnicity are split into two questions, with five minimum categories for race data (American Indian or Alaska Native, Asian, Black or African American, Native Hawaiian or Other Pacific Islander, and White) and one minimum category for ethnicity data (Hispanic or Latino). Despite these standards, guidelines for the use of race categories vary across federal agencies and even across specific programs. For example, the Census Bureau's classification scheme includes a "Some Other Race" option that is not used in other agencies' data collection practices. Moreover, guidelines for collecting and reporting data are not always consistent. For example, the U.S. Department of Education recommends collecting race and ethnicity data separately, without a "Two or more races" category, and allowing respondents to select all race categories that apply. During reporting, however, anyone who identifies as Hispanic or Latino is reported only as Hispanic or Latino, and not as any race. Meanwhile, any respondent who selects multiple race options is reported in the "Two or more races" category rather than in any of the racial groups they identified with.
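The Department of Education reporting rules described above can be sketched as a small function. The category names follow the OMB minimums; the function itself is an illustrative reconstruction of the rules, not an official specification.

```python
def report_category(hispanic_or_latino: bool, races: list[str]) -> str:
    """Collapse a respondent's self-identified answers into the single
    category used in reporting (illustrative sketch, not official code)."""
    # Ethnicity takes precedence: anyone identifying as Hispanic or Latino
    # is reported only in that category, regardless of race selections.
    if hispanic_or_latino:
        return "Hispanic or Latino"
    # Respondents who selected multiple races are collapsed into a single
    # "Two or more races" category, losing the specific groups selected.
    if len(races) > 1:
        return "Two or more races"
    return races[0]

# A respondent identifying as both Asian and White is reported only as
# "Two or more races"; neither group's reported count reflects them.
print(report_category(False, ["Asian", "White"]))            # Two or more races
print(report_category(True, ["Black or African American"]))  # Hispanic or Latino
```

The sketch makes the information loss concrete: the collected data distinguishes respondents that the reported data cannot.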
These inconsistencies are exacerbated in the private sector, where companies are not uniformly bound by the same OMB standards but are instead covered by piecemeal legislation. In the employment context, private companies must collect and report demographic details of their employees in accordance with the OMB minimum standards. In the consumer lending context, by contrast, lenders are generally not permitted to collect data on protected classes such as race and gender. Where protected-class data are collected, they are often treated as privileged information inaccessible to the government. In the case of algorithmic technologies, companies are often able to discriminate by race without explicitly collecting race data, by using a feature or set of features as a proxy for the protected class. For example, Facebook's ad algorithms can be used to target race and ethnicity without access to race data.
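The proxy mechanism described above can be illustrated with a small simulation. All data here is synthetic and invented for illustration: a decision rule that never sees race, only a geography feature correlated with race, still produces starkly disparate outcomes.

```python
import random

random.seed(0)

# Synthetic population in which a hypothetical ZIP code correlates
# strongly with race (as residential segregation produces in practice).
population = []
for _ in range(10_000):
    race = random.choice(["A", "B"])
    # Group A lives in zip 1 about 90% of the time, group B in zip 2.
    zipcode = 1 if (race == "A") == (random.random() < 0.9) else 2
    population.append((race, zipcode))

def approve(zipcode: int) -> bool:
    # A "race-blind" decision rule that uses only geography.
    return zipcode == 1

# Approval rates by group, even though race never enters the rule.
rates = {}
for group in ("A", "B"):
    zips = [z for r, z in population if r == group]
    rates[group] = sum(approve(z) for z in zips) / len(zips)

print(rates)  # group A is approved ~90% of the time, group B ~10%
```

Because the proxy does the discriminatory work, an audit that lacks race data cannot detect the disparity; this is precisely why the guidelines below address what data companies must furnish.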
Additionally, by establishing a program to oversee the procurement of artificial intelligence, the federal government can ensure that agencies have access to the necessary technical expertise to evaluate complex algorithmic systems. This expertise is crucial not only during the procurement stage but also—given the adaptable nature of AI—for ongoing oversight of algorithmic technologies used within government.
The Biden-Harris Administration should create a Federal AI Program to set standards for information disclosure and enable evaluation of AI during the procurement process. Following the two-part test outlined in the AI Bill of Rights, the proposed Federal AI Program would oversee any "(1) automated systems that (2) have the potential to meaningfully impact the American public’s rights, opportunities, or access to critical resources or services."
The goals of this program will be to (1) establish and enforce quality standards for AI used in government, (2) enforce rigorous equity standards for AI used in government, (3) establish transparency practices that enable public participation and political accountability, and (4) provide guidelines for AI program development in the private sector.
Recommendation 2. Produce a report to establish what data are needed in order to evaluate the equity of algorithmic technologies during procurement.
To support the AI Program’s operations, the OSTP National Science and Technology Council Subcommittee on Equitable Data should produce a report to establish guidelines for the collection and reporting of race data that balances three goals: (1) high-quality data for enforcing existing anti-discrimination law, (2) consistency in race categories to reduce administrative burdens and curb possible manipulation, and (3) prioritizing the needs of groups most affected by discrimination. The report should include opportunities and recommendations for integrating its findings into policy. To ensure the recommendations and standards are instituted, the President should direct the General Services Administration (GSA) or OMB to issue guidance and request that agencies document how they will ensure new standards are integrated into future procurement vehicles. The report could also suggest opportunities to update or amend the Federal Acquisition Regulation.
The new guidelines should make efforts to ensure the reliability of race data furnished during the procurement process. In particular:
- Self-identification should be used whenever possible to ascertain race. As of 2021, Food and Nutrition Service guidance recommends against the use of visual identification, based on reliability, respect for the dignity of respondents, and feedback from Child and Adult Care Food Program and Summer Food Service Program participants.
- The new guidelines should attempt to reduce missing data. People may be reluctant to share race information for many legitimate reasons, including uncertainty about how personal data will be used, fear of discrimination, and not identifying with predefined race categories. These concerns can severely impact data quality and should be addressed to the extent possible in the OMB guidelines. New York’s state health insurance marketplace saw a 20% increase in response rate for race by making several changes to the way they collect data. These changes included explaining how the data would be used and not allowing respondents to leave the question blank but instead allowing them to select “choose not to answer” or “don’t know.” Similarly, the Census Bureau found that a single combined race and ethnicity question improved data quality and consistency by reducing the rate of “some other race,” missing, and invalid responses as compared with two separate questions (one for race and one for ethnicity).
- The new guidelines should follow best practices established through rigorous research and feedback from a variety of stakeholders. In June 2022, the OMB announced a formal review process to revise Statistical Policy Directive No. 15: Standards for Maintaining, Collecting, and Presenting Federal Data on Race and Ethnicity. While this review process is intended for the revision of federal data requirements, its findings can help inform best practices for collection and reporting requirements for nongovernmental data as well.
Consistency in Data Reporting
Whenever possible and contextually appropriate, the guidelines for data reporting should align with the OMB guidelines for federal data reporting to reduce administrative burdens. However, the report may find that other data is needed that goes beyond the OMB guidelines for the evaluation of privately developed AI.
Prioritizing the Needs of Affected Groups
In their Toolkit for Centering Racial Equity Throughout Data Integration, the Actionable Intelligence for Social Policy group at the University of Pennsylvania identifies best practices for ensuring that data collection serves the groups most affected by discrimination. In particular, this toolkit emphasizes the need for strong privacy protections and stakeholder engagement. In their final report, the Subcommittee on Equitable Data should establish protocols for securing data and for carefully considered role-based access to it.
The final report should also engage community stakeholders in determining which data should be collected and establish a plan for ongoing review that involves relevant stakeholders, prioritizing affected populations and racial equity advocacy groups. The report should evaluate the appropriate level of transparency in the AI procurement process, in particular the trade-offs between desired levels of transparency and privacy.
Under existing procurement law, the government cannot outsource “inherently governmental functions.” Yet key policy decisions are embedded within the design and implementation of algorithmic technology. Consequently, it is important that policymakers have the necessary resources and information throughout the acquisition and use of procured AI tools. A Federal Artificial Intelligence Program would provide expertise and authority within the federal government to assess these decisions during procurement and to monitor the use of AI in government. In particular, this would strengthen the Biden-Harris Administration’s ongoing efforts to advance racial equity. The proposed program can build on both long-standing and ongoing work within the federal government to develop best practices for data collection and reporting. These best practices will not only ensure that the public use of algorithms is governed by strong equity and transparency standards in the public sector but also provide a powerful avenue for shaping the development of AI in the private sector.