Christina Montgomery is chief privacy officer and vice president at IBM. In this role, Montgomery oversees IBM’s privacy program and directs all aspects of the company’s privacy policies. She also chairs IBM’s AI Ethics Board, a multi-disciplinary team responsible for the governance and decision-making process for AI ethics. In April 2022, Christina was appointed to serve on the U.S. Department of Commerce National AI Advisory Committee, a group of 27 top AI experts from across disciplines that will advise President Biden and the National AI Initiative Office on a range of issues related to AI.
We sat down with Montgomery to discuss how AI benefits us every day in ways we don’t often realize, her experience in developing ethical AI tools, and how AI can make communities healthier and more equitable. We also asked her why AI gets such a bad rap in pop culture—and how that might be about to change.
Q: Can you give me an example of AI being used in everyday life in a way the casual user might not notice?
A: Artificial intelligence is all around us, from the speech recognition software in our smartphones to the automated stock trading systems within our global economy. IBM has a long history of leading in sophisticated AI systems and was ranked #1 in AI lifecycle software market share earlier this year. So, you likely use IBM’s AI technology without even realizing it. Another critical example of AI in everyday life is fraud detection. If you’ve ever been contacted by your credit card company indicating it blocked a suspicious transaction, that transaction was likely identified through AI. Financial institutions and tech companies can use machine learning to spot suspicious transactions and logins, deploying dedicated teams to train models that detect potentially fraudulent activity and alert customers.
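Montgomery doesn’t describe any specific production system here, but the general pattern is easy to sketch. The short Python example below is a hypothetical illustration only: it trains an unsupervised anomaly detector (scikit-learn’s IsolationForest) on made-up transaction features and flags outliers for review. The feature names, data, and contamination setting are assumptions, not a description of IBM’s or any bank’s models.

```python
# Hypothetical sketch: flagging anomalous card transactions with an
# unsupervised model. All data and features here are simulated.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated transaction history: [amount ($), hour of day, distance from home (km)]
normal = np.column_stack([
    rng.gamma(2.0, 30.0, 5000),    # typical purchase amounts
    rng.normal(14, 4, 5000) % 24,  # mostly daytime activity
    rng.exponential(5.0, 5000),    # mostly close to home
])

# Fit an anomaly detector on the customer's "normal" history
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Score two new transactions: one routine, one unusual
new = np.array([
    [45.0, 13.0, 2.0],      # small purchase, 1 p.m., nearby
    [2400.0, 3.0, 900.0],   # large purchase, 3 a.m., far from home
])
for tx, label in zip(new, model.predict(new)):
    print(tx, "-> flag for review" if label == -1 else "-> approve")
```

A real deployment would combine many more signals and a human review step, but the flag-and-review loop behind that blocked-transaction phone call is the same basic idea.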
Q: What are a couple of the biggest potential applications of AI that you’re most excited about?
A: Companies across the world are committing to better sustainability practices because they recognize the importance of sustainable and socially responsible business operations. However, companies need to be held accountable and demonstrate their progress in a clear way to make a real difference.
At IBM, we’re excited to use advanced AI systems to accelerate sustainability by measuring, managing, and optimizing our clients’ ESG (environmental, social and governance) efforts. Earlier this year, our company acquired a startup called Envizi to do just that: its AI software lets companies analyze emissions data across their business operations and use those insights to build more sustainable operations and supply chains.
Now, clients can use the IBM Envizi Sustainability Performance Management system as a one-stop shop to build a solid data foundation, streamline reporting, engage their stakeholders, and accelerate decarbonization.
Q: You chair IBM’s AI Ethics Board. Can you tell me a little more about that, and about AI ethics generally?
A: The IBM AI Ethics Board is a multi-disciplinary body responsible for leading the governance and decision-making process for AI ethics policies and practices. We aim to support a culture of ethical, responsible, and trustworthy AI throughout the company and guide the ethical development and deployment of AI systems.
IBM is committed to putting its principles of ethical AI into practice by advocating for policies that reflect our values. Through our Policy Lab, IBM called for “precision regulation,” a risk-based approach that emphasizes creating policy that addresses legitimate concerns while promoting innovation, protecting consumer rights, and ensuring accountability. IBM is constantly working to provide the tools and practices needed to identify and minimize instances of bias in AI systems. That commitment is also one of the reasons I believe we must strengthen AI literacy and transparency throughout society and require mechanisms for consumer insight and feedback.
Q: What makes an AI tool “ethical”?
A: Because AI is embedded in our everyday life, it has to be trustworthy – that’s why IBM believes in helping organizations adopt and deploy AI systems in a responsible manner. We build systems based on trust by embedding ethical principles into AI applications and processes, and we believe companies advancing AI and other powerful technologies must lay a foundation that ensures new innovations put people first and extend their benefits across society. IBM is guided by three overarching principles: 1) the purpose of AI is to augment human intelligence, 2) data and insights belong to their creator, and 3) technology must be transparent and explainable. We believe that by grounding AI systems in the pillars of explainability, fairness, robustness, transparency, and privacy, we can move closer to building AI tools that are truly “ethical.”
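To make one of those pillars concrete, here is a minimal, hypothetical sketch of a fairness check on model decisions: it computes the statistical parity difference and disparate impact ratio between two groups using made-up loan decisions. The data, group names, and the 0.8 rule of thumb are illustrative assumptions; IBM’s open-source AI Fairness 360 toolkit implements this metric and many related ones.

```python
# Hypothetical fairness check on simulated model decisions.
import pandas as pd

# Simulated decisions: 1 = loan approved, 0 = denied
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,    1,   0,   1,   1,   0,   0,   0],
})

# Approval rate per group
rates = decisions.groupby("group")["approved"].mean()

# Statistical parity difference: 0.0 means both groups see the same approval rate
parity_difference = rates["B"] - rates["A"]

# Disparate impact ratio: a common rule of thumb flags values below 0.8
disparate_impact = rates["B"] / rates["A"]

print(rates)
print(f"statistical parity difference: {parity_difference:+.2f}")
print(f"disparate impact ratio: {disparate_impact:.2f}")
```

A check like this doesn’t make a system ethical on its own, but it gives teams a measurable signal they can monitor and act on across the other pillars as well.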
Q: I read on IBM’s website that “the purpose of AI is to augment human intelligence.” How does AI do that exactly?
A: IBM believes that the purpose of the AI and cognitive systems we develop and apply is to augment – not replace – human intelligence. We strongly believe in building technology to enhance and extend our human capability and potential. AI should make us all more productive and better at our jobs, and its benefits should be enjoyed by as many as possible – not just the elite few. To that end, IBM is committed to investing in initiatives that will help the global workforce gain skills needed to use AI technologies to make their work better, more productive and more rewarding.
Q: You’re a member of the U.S. Chamber’s AI Commission, which was created to research and recommend artificial intelligence policies. Tell me a little about your work there.
A: I’m proud to be on the Chamber’s AI Commission with other AI experts from the public and private sectors and academia, playing a part in advancing U.S. leadership in the use and regulation of AI technology. This year, the Commission hosted field hearings across the country and abroad to gather insight from stakeholders, meet with top researchers, and see and discuss AI issues firsthand. We’re now at the stage where the Commissioners are working together to develop policy recommendations for the U.S. government, aimed at ensuring our nation continues to lead in AI innovation while fostering trust and fairness in deploying such complex and rapidly advancing technology.
Q: Why do you think it is that many of the depictions of AI in books, movies, and TV shows are negative and based on conflict? And do you think creative types could do a better job of capturing how AI and humans could collaborate?
A: I think it’s understandable to be wary of emerging and rapidly advancing technologies like AI, especially if those technologies seem highly technical and out of reach in everyday life. That’s why I constantly work to show people that AI is all around us, and that it has the potential to do enormous good if its systems are developed and deployed ethically and responsibly. Advancing AI systems will change the way we travel, the way we inform important decisions, and the way we buy, sell, trade, and manufacture goods and services. So, it would be nice to one day see creators depict AI’s positive potential – because it’s a technology that can make the world smarter, healthier, and more prosperous if done right, and it will most definitely transform our world in dramatic ways.
Q: Is there anything you would like to add?
A: The AI and privacy fields offer great opportunities, particularly for those just entering the workforce or making a career pivot. These are growing fields, with job opportunities that didn’t exist 10 or 15 years ago. And you don’t necessarily have to be a high-tech engineer to get into them – I’m a lawyer who studied English in college, so I come at this job with a different perspective. Many of the issues we see with AI aren’t purely technical; they’re socio-technical in nature, and they should be discussed and debated by diverse teams with a wide array of backgrounds – from data scientists and engineers to HR professionals, lawyers, and even English majors. So, if you’re interested, get involved and share your experiences and ideas – this burgeoning field needs them.