Re: Request for Information, Office of Science and Technology Policy; National Priorities for Artificial Intelligence (88 Fed. Reg. 34,194-34,196, May 26, 2023)
To Whom It May Concern,
The U.S. Chamber of Commerce’s Technology Engagement Center (“C_TEC”) appreciates the
opportunity to submit feedback to the Office of Science and Technology Policy (OSTP) in response to its
request for information (“RFI”) on “National Priorities for Artificial Intelligence.” C_TEC also appreciates
OSTP’s acknowledgment and fully agrees that “[t]he pace of AI innovation is accelerating rapidly, which
is creating new applications for AI across society. This presents extraordinary opportunities to improve
the lives of the American people and solve some of the toughest global challenges.”1 AI is not a new
technology; its origins trace back to the 1950s. Since then, the technology has steadily improved,
and recent advances in computing power have accelerated that progress dramatically. We believe AI has
enormous potential to enhance the security, safety, and lives of every American.
In March 2023, the Chamber released the report of its Commission on Artificial Intelligence
Competitiveness, Inclusion, and Innovation, which advocated a gap-filling, risk-based approach that
builds policy on existing laws and regulations to facilitate artificial intelligence research and
deployment. This must be done through the leadership of the United States government on domestic
policy and in collaboration with like-minded nations to develop a global framework. A public-private
partnership to issue standards is critical to both the domestic and international efforts.
The Chamber provides the following feedback on the request for comment on the “National
Priorities for Artificial Intelligence.”
Protecting rights, safety, and national security:
1. What specific measures – such as standards, regulations, investments, and improved trust and safety practices – are needed to ensure that AI systems are designed, developed, and deployed in a manner that protects people’s rights and safety? Which specific entities should develop and implement these measures?
The Chamber has been a strong supporter of the NIST AI RMF. The NIST AI RMF, developed pursuant to a congressional mandate, is a voluntary framework that leveraged extensive stakeholder input during its development and provides businesses and organizations with the flexibility to help identify, mitigate, and address risks across the entire AI lifecycle. In particular, the Chamber looks forward to NIST’s adoption of “profiles,” which “are implementations of the AI RMF functions, categories, and subcategories for a specific setting or application based on the requirements, risk tolerance, and resources.”2 Developing best practices around specific use cases will be beneficial as new AI systems and tools enter the market and their designers, developers, and users seek to develop consumer trust and mitigate potential risks appropriately.
2. How can the principles and practices for identifying and mitigating risks from AI, as outlined in the Blueprint for an AI Bill of Rights and the AI Risk Management Framework, be leveraged most effectively to tackle harms posed by the development and use of specific types of AI systems, such as large language models?
The Chamber strongly believes that further investment in the NIST AI RMF, and further development of “profiles” around generative AI such as large language models, can be leveraged most effectively to tackle issues associated with specific uses. To that end, the Chamber appreciates the Administration’s recent partnership with NIST to establish a Public Working Group on Generative AI, which is tasked, among other workstreams, with developing such a profile. Furthermore, we believe that the NIST AI RMF and other flexible, voluntary approaches allow use-specific best practices to be developed and updated faster than rules and regulations, which may quickly become outdated as the technology progresses.
3. Are there forms of voluntary or mandatory oversight of AI systems that would help mitigate risk? Can inspiration be drawn from analogous or instructive models of risk management in other sectors, such as laws and policies that promote oversight through registration, incentives, certification, or licensing?
Voluntary industry standards and frameworks are essential in helping mitigate risk, which is why the Chamber has strongly supported the NIST AI RMF. This voluntary process allows companies and organizations to continue to adapt and stay ahead of potential issues surrounding the use of AI. However, it may be appropriate for developers of higher-risk AI applications to release, for out-of-package implementation, a certification that explains the standards to which the entity that created the AI holds itself. Such certification would support explainability, promote trust, and minimize harmful impacts.
4. What are the national security benefits associated with AI? What can be done to maximize those benefits?
The Chamber believes that AI and machine learning (ML)-powered cybersecurity tools can be beneficial to U.S. economic and national security. Specifically, these behavior-based, predictive tools can enable the protection of U.S. organizations, including public and private critical infrastructure. A fundamental advantage of AI-driven cybersecurity solutions is that they can detect stealthy threat actors (e.g., nation states and criminal organizations) through the application of behavior-based threat detection methods—thus giving AI users an advantage over our adversaries. Similarly, AI can enable U.S. network and information system defenders to swiftly trace indicators of compromise and disseminate defensive measures.
If cyberattacks can be prevented through the use of AI (e.g., endpoint mitigation), then their costs can be dramatically reduced. Also, as part of a broader Zero Trust initiative, AI-based tools can bolster an organization’s cybersecurity posture and make its overall business environment more resilient.
In addition, generative AI tools can be leveraged to alleviate the national shortage of skilled cybersecurity workers.3 For example, worker access to trusted, well-trained chatbots can assist with or eliminate time-consuming tasks. AI can also equip workers with augmented expertise, thus lessening the effects of the skills gap. Such solutions are years away. Meanwhile, research to make large language models (LLMs) dependable, transparent, and trusted can move us toward our goal of improved cybersecurity.
5. How can AI, including large language models, be used to generate and maintain more secure software and hardware, including software code incorporating best practices in design, coding and post-deployment vulnerabilities?
AI-assisted code generation holds significant promise. LLM-based code generators such as GitHub Copilot are not yet fully mature. Nevertheless, as they become more transparent and trustworthy, they can provide a skill-augmentation boost to programmers (e.g., recommending more secure coding practices and flagging less secure ones). AI techniques can also help maintain the security of software and hardware systems through anomaly detection. AI models can be trained on past system activity to identify anomalies for investigation. Likewise, system security can be evaluated by using AI to simulate or synthetically generate malicious activity or by simulating known attack vectors.
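For illustration only, the following minimal sketch shows the kind of anomaly detection described above, in which a model trained on historical (presumed benign) system activity flags deviations for analyst review. The feature names and file paths are hypothetical placeholders, and any production deployment would be considerably more sophisticated.

```python
# Minimal sketch: flag anomalous system activity using an unsupervised model
# trained on past (presumed benign) activity. The CSV paths and feature names
# below are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Historical activity, one row per observation (e.g., per host per hour),
# with numeric features such as login counts, outbound bytes, failed logins.
history = pd.read_csv("system_activity_history.csv")
features = ["login_count", "bytes_out", "failed_auths"]

# Train on past behavior; the contamination rate is an assumed tuning choice.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(history[features])

# Score new activity; a prediction of -1 marks observations that deviate from
# past behavior and should be queued for investigation.
current = pd.read_csv("system_activity_today.csv")
current["anomaly"] = model.predict(current[features])
alerts = current[current["anomaly"] == -1]
print(f"{len(alerts)} observations flagged for investigation")
```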
6. How can AI rapidly identify cyber vulnerabilities in existing critical infrastructure systems and accelerate addressing them?
AI can improve the cybersecurity of organizations, including entities outside of critical infrastructure, by accelerating the detection of vulnerabilities and orchestrating remediation measures. Many organizations have processes in place to receive, analyze, and respond to vulnerabilities disclosed to them from internal and external sources (e.g., internal testing, security bulletins, and security researchers). Still, organizations may find it challenging to prioritize and manage the new vulnerabilities they contend with.
AI systems can feature autonomous detection and decision making to improve an organization’s reaction times to vulnerabilities. Such systems can reduce risk through continuous vulnerability assessment, risk-based prioritization, and remediation. AI has the potential to help organizations identify and fix potential cybersecurity issues and vulnerabilities before they become significant incidents.
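For illustration only, the sketch below shows one simple form of the risk-based prioritization referenced above, combining a severity score, an estimated likelihood of exploitation, and asset criticality into a single ranking. The field names, weighting scheme, and example records are hypothetical; real vulnerability management programs draw on far richer threat intelligence.

```python
# Minimal sketch: rank newly disclosed vulnerabilities by a composite risk
# score so remediation effort goes to the highest-risk items first.
# Field names, weights, and example values are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Vulnerability:
    cve_id: str
    cvss: float                # severity score, 0-10
    exploit_likelihood: float  # estimated probability of exploitation, 0-1
    asset_criticality: float   # business impact weight of the affected asset, 0-1

def risk_score(v: Vulnerability) -> float:
    # Normalize severity to 0-1 and combine it with likelihood and criticality.
    return (v.cvss / 10.0) * v.exploit_likelihood * v.asset_criticality

backlog = [
    Vulnerability("CVE-0000-0001", cvss=9.8, exploit_likelihood=0.7, asset_criticality=1.0),
    Vulnerability("CVE-0000-0002", cvss=6.5, exploit_likelihood=0.1, asset_criticality=0.4),
]

# Remediate in descending order of composite risk.
for v in sorted(backlog, key=risk_score, reverse=True):
    print(f"{v.cve_id}: risk={risk_score(v):.2f}")
```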
7. What are the national security risks associated with AI? What can be done to mitigate these risks?
The national security risks associated with AI are an important issue as well. The Chamber would welcome engaging experts and policymakers, particularly White House officials, to identify both opportunities and challenges related to U.S. national security.
It is paramount for the U.S. government, including the Departments of Commerce and State, to collaborate with American industry to promote international consensus on AI and cybersecurity governance. Bridging differences between the U.S. and other countries should help ensure that stakeholders’ security concerns are adequately addressed and that AI and cybersecurity requirements do not create trade barriers or limit American firms’ access to foreign markets. Hence, the Chamber urges federal officials to pursue the following objectives as they consider and promote U.S. AI and cybersecurity policies abroad:
- Push for U.S. leadership in international cybersecurity forums. Standards, guidance, and certification schemes relevant to cybersecurity are typically led by the private sector and adopted on a voluntary basis. As we move to heightened global cybersecurity standards and laws, it is key that the U.S. asserts its views and leadership.
- Reduce regulatory fragmentation. A fragmented global AI and cybersecurity environment would create much uncertainty for organizations and splinter the resources that businesses devote to activities—ranging from sound product development, production, and assessments to supply chain risk management.
- Spotlight global alignment with industry-led baselines. The Chamber believes that governments and enterprises in the U.S. and overseas should align their cybersecurity laws and policies with the common language of the National Institute of Standards and Technology Cybersecurity Framework,4 which is rooted in international standards and cutting-edge business practices.
Advancing equity and strengthening civil rights:
9. What are the opportunities for AI to enhance equity and how can these be fostered? For example, what are the potential benefits for AI in enabling broadened prosperity, expanding economic and educational opportunities, increasing access to services, and advancing civil rights?
The Chamber sees AI as an essential technology for expanding economic and educational opportunities. The use of AI can lift up and expand opportunity for all, from delivering better-tailored, more timely healthcare to providing customized education for those with language barriers or learning disabilities.
Additionally, AI can assist workers with certain administrative or routine tasks to potentially reduce errors, and can help identify equity needs and potential solutions.
10. What are the unique considerations for understanding the impacts of AI systems on underserved communities and particular groups, such as minors and people with disabilities? Are there additional considerations and safeguards that are important for preventing barriers to using these systems and protecting the rights and safety of these groups?
One of the key factors in understanding the potential impacts on communities is having the necessary data for evaluation. For this reason, it is essential to look to open government data sets that allow companies and organizations to develop and deploy AI trained on these key demographics to ensure the systems are protecting the rights and safety of all groups.
Support is needed for broader data equity research and updates. Algorithms learn from existing data, and historically, certain populations have been underrepresented and misrepresented in that data. Larger conversations regarding data representation and accuracy, while respecting laws and regulations protecting consumer privacy and rights, are necessary to improve model development and performance.
12. What additional considerations or measures are needed to assure that AI mitigates algorithmic discrimination, advances equal opportunity, and promotes positive outcomes for all, especially when developed and used in specific domains (e.g., in health and human services, in hiring and employment practices, in transportation)?
The federal government can lead in strengthening the development and deployment of AI. The following recommendations should be acted on now:
- First, the federal government should conduct fundamental research in trustworthy AI: The federal government has played a significant role in building the foundation of emerging technologies by conducting fundamental research, and AI is no different. A recent study from the U.S. Chamber Technology Engagement Center and the Deloitte AI Institute5 surveyed business leaders across the United States and found that 70% of respondents support government investment in fundamental AI research. The Chamber believes that enacting the CHIPS and Science Act was a positive step, as the legislation authorizes $9 billion for NIST for research and development and for advancing standards for “industries of the future,” including artificial intelligence.
Furthermore, the Chamber has been a strong advocate for the National Artificial Intelligence Initiative Act, led by then-Chairwoman Eddie Bernice Johnson and Ranking Member Lucas, which established the National AI Initiative Office (NAIIO) to coordinate the federal government’s activities, including AI research, development, demonstration, and education and workforce development.6 The business community strongly advises Congress to fully appropriate funding for these efforts.
- Second, the Chamber encourages continued investment in Science, Technology, Engineering, and Math (STEM) education. The U.S. Chamber last year polled the American public on their perception of artificial intelligence. The findings were clear: the more the public understands the technology, the more comfortable they become with its potential role in society. Education continues to be one of the keys to bolstering AI acceptance and enthusiasm, as a lack of understanding of AI is the leading indicator of push-back against AI adoption.7
The Chamber strongly supported the CHIPS and Science Act, which included many of these critical investments, such as $200 million over five years to the National Science Foundation (NSF) to build up the domestic workforce needed to develop and manufacture chips, and $13 billion to NSF for AI scholarships-for-service. However, this is just a start; we now need the federal government to act on these investments.
- Third, the government should prioritize improving access to government data and models: High-quality data is the lifeblood of developing new AI applications and tools, and poor data quality can heighten risks. Governments at all levels possess a significant amount of data that could be used to improve the training of AI systems to support more accurate outcomes, and to create novel, beneficial applications. When C_TEC asked leading industry experts about the importance of government data, 61% of respondents agreed that access to government data and models is important. For this reason, the Chamber encourages the federal government to make available government data that can assist with training models.
- Fourth, increase widespread access to shared computing resources: In addition to high-quality data, the development of AI applications requires significant computing capacity. However, many small startups and academic institutions lack sufficient computing resources, which in turn prevents many stakeholders from fully accessing AI’s potential. When C_TEC asked stakeholders within the business community about the importance of shared computing capacity, 42% of respondents supported encouraging shared computing resources to develop and train new AI models. Congress took a critical first step by enacting the National AI Research Resource Task Force Act of 2020. The Task Force finished its work earlier this year, and we would now encourage the build-out of the NAIRR.
- Fifth, enable open source tools and frameworks: Ensuring the development of trustworthy AI will require significant collaboration between government, industry, academia, and other relevant stakeholders. One key method to facilitate collaboration is by encouraging the use of open source tools and frameworks to share best practices and approaches to trustworthy AI. An example of how this works in practice is the NIST AI RMF, which is intended to be a consensus-driven, cross-sector, and voluntary framework, akin to NIST’s existing Cybersecurity Framework. Stakeholders can leverage the AI RMF to develop best practices to mitigate risks posed by AI applications. Policymakers should recognize the importance of these approaches and continue supporting their development and implementation.
13. How might existing laws and policies be updated to account for inequitable impacts from AI systems? For example, how might existing laws and policies be updated to account for the use of generative AI to create and disseminate non-consensual, sexualized content?
The U.S. government should comprehensively review existing laws and policies applicable to AI. This requirement was put into statute through the American Competitiveness of a More Productive Emerging Tech Economy Act, or the American COMPETE Act, signed into law in the Consolidated Appropriations Act of 2021.8 A full examination of existing laws will provide policymakers with an understanding of existing gaps and allow for a thorough discussion about policies that can address those gaps.
Bolstering democracy and civic participation:
14. How can AI be used to strengthen civic engagement and improve interactions between people and their government?
AI can provide tremendous benefits to the public in supporting civic engagement, including with federal, state, and local governments. Potential use cases include chatbot features to allow for quicker interactions and using AI to support federal government processes to provide efficiencies on the backend. Therefore, we strongly advocate for the federal government to review ways the technology can be utilized internally to provide better services to the American public.
Promoting economic growth and good jobs:
17. What will the principal benefits of AI be for the people of the United States? How can the United States best capture the benefits of AI across the economy, in domains such as education, health, and transportation? How can AI be harnessed to improve consumer access to and reduce costs associated with products and services? How can AI be used to increase competition and lower barriers to entry across the economy?
AI will transform our ways of life and provide amazing benefits throughout every sector of the economy. There are many efforts on which the United States should focus to capture these benefits, such as the investment and build-out of the National AI Research Resource (NAIRR). The NAIRR provides a shared computing and data infrastructure that will allow researchers across scientific fields and disciplines access to computing resources, high-quality data, educational tools, and user support to address some of the most challenging scientific questions and problems society looks to solve.
18. How can the United States harness AI to improve the productivity and capabilities of American workers, while mitigating harmful impacts on workers?
AI is a tool with tremendous potential to improve the productivity and capabilities of workers while mitigating workplace and employment-related harms, from using technologies like AR/VR to alert employees to dangers on the job to upskilling workers so they can move up the promotional ladder at a faster rate. The utilization of AI holds amazing promise for the American worker.
19. What specific measures – such as sector-specific policies, standards, and regulations –are needed to promote innovation, economic growth, competition, job creation, and a beneficial integration of advanced AI systems into everyday life for all Americans? Which specific entities should develop and implement these measures?
The focus should be on use-specific policies and standards, which are important for the technology’s continued economic growth and innovation. Each use has unique attributes, such as its own lexicon and the laws and regulations specific to that use or practice, which require a more nuanced review of the technology. Many entities and workstreams are vital to providing guidance and assistance, including standard-setting organizations such as IEEE, ISO, and other sector-specific standards bodies, which provide key insight into particular uses of the technology and their nuances.
Regarding federal efforts, we continue to highlight the importance of the NIST AI RMF and how it allows for the review of AI systems throughout their entire lifecycle. Finally, we believe that developing “profiles” under the NIST AI RMF will provide the entire AI ecosystem with voluntary best practices and standards to help advance trust in AI systems.
22. What new job opportunities will AI create? What measures should be taken to strengthen the AI workforce, to ensure that Americans from all backgrounds and regions have opportunities to pursue careers in AI, and otherwise to prepare American workers for jobs augmented or affected by AI?
The U.S. Chamber last year brought together an independent Commission on Artificial Intelligence Competitiveness, Inclusion, and Innovation, which published its recommendations9 in March of this year, including the following policy recommendations to prepare the future workforce:
- Use an Evidence-Based Approach: Policymakers must leverage new data sources and advanced analytics to understand the evolving impact of AI and machine learning on the American workforce and the public.
- Educate the Future Workforce: The United States must increase education around AI in both the K-12 and higher education systems by encouraging policymakers to reform the standard curriculum to better prepare students for developing AI and machine learning systems.
- Train and Reskill: The public and private sectors must invest in training and reskilling the future workforce. These investments should be targeted toward programs that help ease transitions and improve incentives for businesses to invest in retraining. Policymakers should also leverage community colleges and vocational schools to train workers to perform jobs alongside AI-enabled systems.
- Attract High-Skilled Talent: In areas where a worker shortage cannot be addressed through education, training, and reskilling, Congress must act to increase the AI talent pool through targeted refinements to the H-1B visa process to encourage high-skilled immigration to the United States.
23. How can the United States ensure adequate competition in the marketplace for advanced AI systems?
As previously indicated, the Chamber brought together a diverse set of stakeholders last year to form an independent commission to provide policy recommendations. The following are suggestions they put forward to help bolster U.S. global competitiveness.
- Shore Up Global Partnerships: U.S. officials should collaborate with key partners and allies to develop more sensible global governance frameworks that advance our common democratic goals and values.
- Advance Intellectual Property Protections: Building on the foundation of the current system, policymakers must clarify intellectual property law requirements to ensure adequate protection of AI-enabled intellectual property. Before any change, policymakers must involve relevant stakeholders to consider potential unintended effects.
- Provide Necessary Resources: Policymakers should provide additional resources to the U.S. Patent and Trademark Office to support the acquisition of technical expertise, training, and other resources to speed the review of AI- and machine learning-related public patent applications.
- Protect Ingenuity: Policymakers should also explore opportunities to grant provisional approvals for submissions under review where appropriate to mitigate the effects of lengthy delays.
Innovating in public services:
24. How can the Federal Government effectively and responsibly leverage AI to improve Federal services and missions? What are the highest priority and most cost-effective ways to do so?
Through AI and data analytics, the Federal Government can profoundly improve its services to the American people. One of the critical efforts it must undertake is digitization. The U.S. Chamber last year put forward a report titled “Government Digitization: Transforming Government to Better Serve Americans.”10 The report highlights how the American public and the government can both prosper through digitization. The study showed that investment in digitization within the federal government could deliver savings and gains such as:
- $117 billion – the cost to Americans of using paper forms
- 10.5 billion hours – the amount of time spent by the public on paper forms
- 8 months – the average time it took for the IRS to process paper-based tax returns
- $1 trillion – the amount of money that could be generated worldwide annually with widespread government digitization
25. How can Federal agencies use shared pools of resources, expertise, and lessons learned to better leverage AI in government?
The Chamber highlights that the government can better leverage resources, expertise, and lessons learned through:
- Investment in the current federal workforce to help up-skill and train for the future needs of the federal government.
- Reducing barriers to entry for utilizing AI in the public sector to drive efficiencies for low-risk uses in government agencies.
- More public/private partnerships to allow the U.S. government to benefit from private sector gains and efficiencies.
- Working toward interoperability of data within the federal government. Massive gains in efficiency and productivity are achievable through data analytics and the interoperability of such data throughout the national government ecosystem.
- Removing silos throughout the government to allow CIOs and CTOs to discuss best practices and lessons learned on transforming government to meet the needs of the 21st-century constituent.
26. How can the Federal Government work with the private sector to ensure that procured AI systems include protections to safeguard people’s rights and safety?
The private sector and business community stand ready to discuss how AI systems can be procured in a way that provides necessary safeguards around AI deployment and use. However, it is important to reiterate that the utilization of the technology differs dramatically for each specific use, and each specific use carries a different level of risk. For this reason, we would highlight the importance of taking a risk-based approach, in a way that does not inhibit the private sector from continuing to innovate for public-sector use.
27. What unique opportunities and risks would be presented by integrating recent advances in generative AI into Federal Government services and operations?
Large language and foundation models are not an entirely new technology. However, as indicated, recent advances have significantly changed the current dialogue around their utilization. Integrating generative AI into the federal government could bring about a massive increase in productivity and provide a great resource to the public servants who serve within the U.S. Government. Also, the technology has been found to have the “greatest impact on novice and low-skilled workers.”11 This could be an invaluable resource for the federal government as it looks to upskill the future government workforce.
28. What can state, Tribal, local, and territorial governments do to effectively and responsibly leverage AI to improve their public services, and what can the Federal Government do to support this work?
State, Tribal, and local governments have critical roles within their communities; they are the first responders for their citizens and have an obligation to provide critical services in times of their constituents’ need. AI can and should be utilized at all levels of government to help meet these requirements. For example, AI is being developed to assist with efforts to “predict, prepare and respond”12 to natural disasters. The use of this technology can save countless lives and be an important tool for State, Tribal, and local government officials.
Conclusion:
In conclusion, C_TEC believes that AI has enormous potential to transform entire industries, providing innovative consumer benefits and enhancing personal safety, security, and privacy for all Americans. While there may be risks associated with these technologies, it is important to evaluate the level of risk posed by a technology and its intended application to determine an appropriate course of action for mitigating those risks. Working alongside industry stakeholders to determine appropriate applications and considerations for various use cases is an important way to determine potential risk. We firmly believe that AI has a critical role in the advancement of our society. C_TEC looks forward to collaborating with OSTP on this important matter and encourages OSTP to constructively engage stakeholders on all matters that address artificial intelligence technology.
Sincerely,
Michael Richards
Director, Policy
Chamber Technology Engagement Center
U.S. Chamber of Commerce
1 OSTP-Request-for-Information-National-Priorities-for-Artificial-Intelligence.pdf (whitehouse.gov)
2 https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf
3 House Homeland Security Subcommittee on Cybersecurity and Infrastructure Protection hearing on “Growing the National Cybersecurity Talent Pipeline,” June 22, 2023. https://homeland.house.gov/subcommittee-hearing-on-growing-the-national-cybersecurity-talent-pipeline
4 https://www.nist.gov/cyberframework
5 https://www.uschamber.com/technology/investing-trustworthy-ai
8 https://www.congress.gov/bill/116th-congress/house-bill/133/text
9 https://www.uschamber.com/assets/documents/CTEC_AICommission2023_Report_v6.pdf
11 Generative AI at Work | NBER
12 https://today.tamu.edu/2023/06/13/leveraging-big-data-and-ai-for-disaster-resilience-and-recovery/