5 Ways OCIO Supports Responsible Artificial Intelligence


Image: A person at a keyboard taps a holographic interface representing artificial intelligence. (OCIO)

Artificial intelligence is finding its way into more aspects of everyday life, including the way we work. This administration wants to make sure government is leading the way with President Biden’s October 2023 executive order calling for the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” It sets a whole-of-government strategy to manage the risks and harness the benefits of AI, including protecting Americans’ privacy, supporting workers, and ensuring responsible and effective government use.

Under the executive order, the Labor Department is developing principles and best practices for employers and AI developers, as well as a report on agencies' ability to support workers displaced by AI, among other deliverables. Our Office of the Chief Information Officer is also responding to the executive order by coordinating the development and use of AI in our agency’s programs and operations.

How is OCIO responding?

We recognize AI has the power to both revolutionize the workplace and pose potential challenges. Our goal is to make sure AI in government technology helps – rather than harms – America’s workers and creates efficiency and value for our department staff who serve the public. As chief AI officer at the department, I am leading this work in collaboration with our federal agency partners.

Here are five ways the department’s AI strategies align with the executive order:

1. Transparency

  • EO standard: Requiring developers of the most powerful AI systems to share their safety test results and other critical information with the U.S. government.
  • OCIO action: Publishing the department’s AI use case inventory. We want to be transparent with the public about how we are deploying emerging technology.

2. Trust

  • EO standard: Developing standards, tools and tests to help ensure that AI systems are safe, secure, and trustworthy.
  • OCIO action: Partnering with a presidential fellow to create an AI Center of Excellence. We have a review process to make sure our AI products are responsible and ethical and that they reduce bias. Each of our AI solutions is built to address only the issue it was created to solve.

3. Cybersecurity

  • EO standard: Establishing an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software.
  • OCIO action: Investing to enhance cyber strength and fortify our digital infrastructure. We deploy AI and machine learning to predict, detect and prioritize cyber risks to our data and respond accordingly.

4. Privacy

  • EO standard: Developing guidelines for federal agencies to evaluate the effectiveness of privacy-preserving techniques.
  • OCIO action: Forming an AI advisory board. We adhere to the AI Guide for Government for responsible AI frameworks that prevent infringement on privacy or other human rights.

5. Workforce 

  • EO standard: Rapidly hiring more AI professionals.
  • OCIO action: Taking part in a governmentwide hiring surge for AI experts. We joined a similar hiring effort in 2020 for customer experience designers. For information on opportunities, check out available IT jobs at OCIO.

Where do we go in 2024?

OCIO will continue to work with our agency partners to develop, implement and maintain trustworthy AI technology to enhance productivity and better serve the public.

Louis Charlier is the chief AI officer and the deputy chief information officer at the U.S. Department of Labor. Follow OCIO on LinkedIn.

 
