The Future of AI in Government: Opportunities and Responsibilities

Artificial intelligence is making its mark across various sectors, and its potential in government innovation is a hot topic, as discussed at the recent Government IT Breakfast Forum. Hosted by Howard University and REI Systems, the event delved into how AI is currently used in agencies. Jeff Myers, REI’s senior director, who moderated the conversation, pointed out that while agencies recognize AI’s potential, less than 5% are harnessing its power. This gap indicates a vast opportunity for AI-driven innovation in government services.

Commerce Department Chief Information Officer André Mendes shared insights on AI’s long-standing role in operations and production.

“We understand all of the excitement around generative AI that has taken a hold of the marketplace over the last year, but the reality is that AI efforts have been around for a long time in a variety of areas, but mostly sort of in the classic AI arenas associated with operations and production,” he said.

He outlined four AI uses within the Commerce Department:

  • The U.S. Patent and Trademark Office uses machine learning to help examiners process large data volumes, improving efficiency.
  • The National Oceanic and Atmospheric Administration leverages AI for accurate weather forecasting and marine life studies.
  • The International Trade Administration uses AI for quick, multi-language document processing.
  • The Census Bureau is exploring AI to improve the accuracy of the 2030 census, focusing on better survey methods and data reliability.

Nathan Manzotti, director of data analytics & AI Centers of Excellence within the Technology Transformation Services at the General Services Administration, spoke about his work in spreading AI knowledge and assisting agencies in AI adoption.

Some of the projects he has worked on include developing an AI system for meat classification at the Agriculture Department and establishing a centralized data system for the Surface Transportation Board.

Avital Percher, the National Science Foundation’s Assistant to the Chief Data Officer for Analytics and Strategy, highlighted the importance of using AI responsibly and safely. NSF is particularly interested in decision intelligence tools to improve its grant-making processes, aiming to support staff with AI.

NSF is working to establish a basic framework for safe AI use, covering how the technology should be used, who should use it, and why, Percher said.

The focus is on making sure that everyone, even those without AI expertise, understands the risks of AI use and the safety measures around it, and on teaching people throughout the organization how to use AI tools safely and correctly.

REI Chief Technology Officer Andrew Zeswitz emphasized the staying power of AI, saying, “AI is here, it’s here to stay.” He focused on AI’s role in improving customer experiences and efficient government spending.

Zeswitz explained that he focuses on using customer feedback to discover the most effective ways AI can assist people. Much of what the government provides to citizens takes the form of services and grants, and he suggested closely examining those processes to identify the biggest pain points and determine where AI can be most beneficial.

“I look at that from the perspective of facilitating the user, but also from the perspective of efficient spending of government funding,” he said. “There’s the use of responsible AI.”

Ultimately it comes back to trust, he said: using the models over time, building that trust, and modifying and tweaking them to ensure corrections are made.

“It’s the empowerment of thought leaders, the empowerment of the workforce, increasing accessibility, not replacing workers, but really potentially making more of an equitable workplace so that people can do jobs with meaning and have value,” he said.

AI and Academia 

Dr. Harry Keeling, a professor in Howard University’s Department of Electrical Engineering and Computer Science, said universities like his are introducing new AI courses and research opportunities because of the growing interest in AI among computer science students and those in similar fields.

He stressed the importance of teaching AI positively and ethically.

“I think there’s a line that I like to think about when I try to answer this ethical question, and that is, are we using these tools as decision supports or are we making them decision makers?” he said. “I think when we cross that line, I think we start to muddy the water a bit in terms of the ethics of this use.”

Keeling mentioned that he regularly asks his students about ethical, good, and bad ways to use the technology. Interestingly, he has found that instead of using generative AI to cheat on exams, students are using it to improve their language skills.

“These new tools that are built around generative AI technology are introducing new ways of expressing knowledge,” he said.

Preventing Bias in AI

Addressing concerns that women and other marginalized groups are underrepresented in AI and that AI training data is biased toward white males, the speakers discussed how to ensure AI tools are more inclusive and considerate of these groups’ experiences.

Manzotti referred to a recent executive order that instructs agencies such as NIST and DHS to create guidelines addressing AI bias and toxicity. The order’s measures include producing synthetic data to make datasets more balanced and conducting risk assessments to identify and evaluate biases in AI models. He also mentioned the creation of the Responsible AI Officers Council last year as a sign of the government’s commitment to tackling AI-related challenges, and he pointed to the Government Accountability Office’s AI Accountability Framework as a good starting point for managing AI use.

However, even with this progress, there’s still much to do in setting up and following responsible AI practices, Manzotti said.
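For a concrete sense of what a lightweight bias risk assessment can look like, here is a minimal sketch in Python using pandas. The column names, the toy data, and the four-fifths threshold mentioned in the comments are illustrative assumptions for this post, not requirements drawn from the executive order, the GAO framework, or any agency guideline.

```python
# Minimal sketch of a dataset/model bias check. Column names ("group",
# "outcome") and the ~0.8 threshold (the common "four-fifths" heuristic)
# are illustrative assumptions, not mandated by any guideline.
import pandas as pd


def representation_rates(df: pd.DataFrame, group_col: str = "group") -> pd.Series:
    """Share of records contributed by each demographic group."""
    return df[group_col].value_counts(normalize=True)


def disparity_ratio(df: pd.DataFrame,
                    group_col: str = "group",
                    outcome_col: str = "outcome") -> float:
    """Ratio of the lowest to the highest favorable-outcome rate across groups.

    Values near 1.0 indicate similar outcome rates; values below roughly 0.8
    are a common flag for further review.
    """
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()


if __name__ == "__main__":
    # Toy data standing in for a model's decisions on a labeled evaluation set.
    data = pd.DataFrame({
        "group":   ["a", "a", "a", "b", "b", "b", "b", "c", "c"],
        "outcome": [1,   1,   0,   1,   0,   0,   0,   1,   1],
    })
    print(representation_rates(data))
    print(f"Disparity ratio: {disparity_ratio(data):.2f}")
```

A real assessment would go well beyond a single summary ratio, but even a small check like this makes imbalances visible before a model trained on the data reaches production.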

Conclusion

To wrap up the event, Myers recapped three key ways AI could be useful in agencies:

  • Finding the Needle in the Haystack: Assist in tasks where humans have limitations, such as searching through vast amounts of data to find specific, crucial pieces of information.
  • Consistency in Quality, Especially in Mundane Tasks: Offer consistent quality in repetitive and mundane tasks.
  • Relieving Humans of Undesirable or Inefficient Tasks: Free people from tasks they may dislike, are not good at, or perform inefficiently.

Mendes said the potential of AI is virtually limitless. He raised the question of when humans will move beyond simply using AI as a tool and start enhancing their own capabilities with it. That could include creating direct connections between our brains and technology, such as wet interfaces and neurological links, that combine AI with environmental sensors to boost our abilities.

Mendes believes trying to limit AI technology is unwise. He thinks it’s important to have safety guardrails for AI, but recognizes these might not always work.

“And also that the guardrails have limitations because a lot of rogue individuals and rogue regimes will decide that the guardrails apply to them but not to me and not apply them with the same earnest and wholesome behavior that one would hope that we would do,” he said.

To close the event, Keeling issued a challenge to the audience.

“I want to leave you all with a challenge, and that is to continue to educate yourself personally and those which you can influence in this area of AI,” he said. “It is moving very quickly. On a day-to-day basis, things and revolutionary changes are being made, and the more we stay abreast, the better.”

 

View Webinar Recording
