Vice President and Chief Privacy and Trust Officer, IBM
Commissioner, U.S. Chamber Commission on AI Competitiveness, Inclusion, and Innovation
Published
May 17, 2023
Generative artificial intelligence (AI) tools built on large language models, such as ChatGPT, are sparking new conversations about how AI can transform—or threaten—our workforce, do our homework, and even replace online therapy. Like all rapidly advancing technologies, AI carries both wide-ranging benefits and risks. AI tools can change the ways we travel, the ways we make important decisions, and the ways we buy, sell, trade, and manufacture goods and services. But when they are not designed and deployed responsibly, they can introduce unintended bias and negative impacts, not to mention privacy risks.
We are at a key moment in time when the ethical application and regulation of AI must be explored together with industry leaders, ethicists, policymakers, and civil society alike so we can realize the true potential of AI and ensure ethical standards are built into current and future generations of this technology. And one sector where the proper use of this technology can have a massive positive impact is our government.
Creating Efficiency in Government: AI’s Role in Driving Productivity
Think about it—many government employees who are developing policy, engaging with members of the public, or working with large sets of data are encountering logistical processes and systems that are outdated and in desperate need of modernization. These busy government officials should be able to dedicate their time and focus to their core responsibility: delivering for the American people. Digital transformations like AI can help them automate some of the bureaucratic processes and administrative tasks that are slowing them down. This can increase efficiency and personalize service delivery, making our next visit to the DMV and our Tax Day filing much smoother.
Congress, for example, has begun experimenting with new AI tools in the policymaking process: the House of Representatives established an AI working group and obtained licenses for ChatGPT Plus to test the tool and share its results in creating and summarizing content. Ever since government officials began adopting new technologies to expedite such processes decades ago, they have had to keep pace with rapidly evolving ones, from AI and hybrid cloud to quantum computing. This means wrestling with the difficult task of modernizing legacy systems while keeping strong data and privacy protections in place.
In 2020, COVID-19 forced our government agencies to rethink their approach to service delivery in a chaotic and ever-changing pandemic. The IRS, for example, faced massive backlogs in processing paper tax returns and established a pilot program to test an automated system. Its digitization program proved to be a success: the Modernized e-File (MeF) system processed 140,000 paper tax returns, accepting 76% of paper tax returns without human intervention while effectively spotting anomalies and flagging potential fraud.
This is a great example of a government agency leveraging technology to confront an enormous challenge. But neither this effort nor digital transformation in general can happen safely or responsibly unless we have strong data privacy protections in place. Any time an entity—from a small business to a large corporation or government agency—develops and deploys AI and automated systems that use personal data, it must integrate robust access controls and protocols to protect customer privacy and confidentiality.
Championing the Responsible and Ethical Deployment of AI: The Role of the U.S. Chamber’s AI Commission
I was pleased to participate in the U.S. Chamber of Commerce's bipartisan Artificial Intelligence Commission on Competitiveness, Inclusion, and Innovation, a body of members with experience in government, industry, and civil society, formed to study these issues. The Commission recently released a report that outlines how the government can harness the power of AI while mitigating its risks. After five field hearings across the country and overseas, where we engaged with over 87 expert witnesses, the AI Commission arrived at several key takeaways, including:
- Over the next two decades, nearly every business and government agency will use AI—and this will have a profound impact on our society, the economy, and national security.
- Our policy leaders must develop thoughtful rules and regulations for the development of responsible AI and its ethical deployment.
- If we fail to regulate AI, we could see adverse impacts on the economy, our individual rights, and the innovation of beneficial technologies.
- The U.S.—through its technological strengths, robust system of individual rights, advanced legal system, and decisive alliances with other democracies—is uniquely positioned to lead this effort.
- The U.S. needs to act swiftly and intentionally to ensure economic growth, support a competitive workforce, maintain its edge in the global economy, and provide for our future national security needs.
- Future administrations and Congress must prioritize policies that promote responsible and ethical AI.
We have the power to build and strengthen AI and generative AI systems that drive efficient functions to support both government workers and members of the public alike. Digital transformation is bringing us closer to an exciting new age, so let’s leverage AI and automation—with a human touch—to get there.
The views expressed in this blog are those of the author alone.
About the authors
Christina Montgomery
Christina Montgomery is Vice President and Chief Privacy and Trust Officer at IBM, and Commissioner on the U.S. Chamber Commission on AI Competitiveness, Inclusion, and Innovation.