During a House Oversight Committee hearing on Thursday, Rep. Wesley Bell (D-MO) spoke about the Pentagon removing a page dedicated to Jackie Robinson during a DOGE purge of race-related content using AI.
Transcript
00:00 I now recognize Mr. Bell from Missouri.
00:03 Thank you, and thank you to our witnesses for being here today to speak on this important issue.
00:10 Americans are concerned about AI and want protections from the risks it poses to their privacy, security, rights, and freedoms.
00:20 Key AI guardrails that provided these protections at the federal level were repealed.
00:25 Now House Republicans are pushing legislation that bans states from enforcing laws or regulations to protect Americans from AI for a full decade.
00:37 Mr. Schneier, why is it important that the federal government adopt AI responsibly and with necessary oversight?
00:45 Because doing it badly is going to harm people.
00:48 These are technologies that are going to, in a sense, replace human activities.
00:56 Whether they're going to be assistants to humans or they're going to replace humans for low-level decisions, they're going to do the things that humans did.
01:05 And we know that when humans make bad decisions in government, people get hurt.
01:11 So AIs are going to do the same thing.
01:13 The problem is they can do it faster.
01:15 They can do it more efficiently.
01:17 So they could more efficiently hurt people.
01:20 They could more efficiently help people, and that's why you need to do it responsibly.
01:24 When DOGE pressured Secretary Hegseth and the Department of Defense to quickly purge any mention of race across its webpages, the agency used AI.
01:35 The result was that the Pentagon removed a tribute to Jackie Robinson, an American sports legend, civil rights icon, and World War II Army veteran who served his country honorably, simply because the page acknowledged his race.
01:55 We're talking about American history that was flagged and removed in the blink of an eye.
02:02 Mr. Schneier, without such safeguards, what risks do we face of purges, distortions, or discrimination in federal uses of AI?
02:13 Mr. Schneier: So again, it's less the AI and more the people who used it.
02:17 And the weirder one is that we removed the page about the Enola Gay bomber, which was an aircraft and not actually gay.
02:24 It just had the name.
02:27 So when these technologies are used without human oversight, I mean, as humans, we can recognize that's ridiculous.
02:33 But an AI doesn't have that context.
02:36 An AI is going to do, like any computer, exactly what it's told.
02:40 And if it's given bad instructions, if it's not monitored, if there isn't auditing, if there isn't integrity in the system, you're going to get bad outcomes.
02:49 So, I mean, the AI did it, but blame the humans who asked the AI to do it.
02:54 Removing access to information is just the first step towards deeper consequences.
03:01 DOGE is creating a lasting impact on our expert federal workforce alone.
03:08 We have seen a purging of the nonpartisan federal workforce, even though the Government Accountability Office has for years been sounding the alarm
03:18 that a lack of skilled workers will lead to more waste, fraud, and abuse in government.
03:23 Understanding how AI works, and the possible biases in its programming, requires a very specific skill set, one which we're losing every day.
03:35 Mr. Schneier, again, why is it important to retain skilled workers with institutional knowledge who can carefully oversee the integration and use of AI across our federal agencies?
03:47 Because the AI is not ready to do that yet.
03:49 You know, without the people, the AI is going to run without context.
03:53 And you're going to get mistakes like the ones we just talked about.
03:57 And, you know, these are mistakes that are in some ways theoretical.
04:01 They're about information.
04:03 But they also harm people.
04:04 We're talking about using AI to reduce fraud.
04:07 Now, you're going to have problems with false negatives.
04:10 There's fraud you're not going to catch.
04:11 You're also going to have problems with false positives.
04:13 There's fraud you're going to catch that isn't real.
04:15 A really good example comes from the UK.
04:17 The Horizon software was used to audit rural post offices, and it flagged hundreds of instances of fraud that weren't real, costing people their jobs.
04:27 There were suicides.
04:28 This is the problem of getting it wrong.
04:31 And humans can make these same mistakes.
04:33 We just make them slower.
04:34 AI makes the pace of this much worse.
04:37 In the name of efficiency, the government has recently slashed funding for Job Corps, a program that offers free career technical training for young people aged 16 to 24, instead of investing in our future workforce.
04:52 They've eliminated a program that could help fill the shortage of skilled workers in emerging tech careers.
04:59 When it comes to the federal government, it's not just a question of efficiency, but also one of equity, fairness, and safety.
05:06 I believe in efficiency, and I believe that technology can do enormous good for the American people.
05:12 But this is a country that has a history that AI cannot erase.
05:16 I look forward to ensuring that when the federal government employs AI, it actually serves the needs of those who have too often been left behind.
05:23 The gentleman's time's expired.
05:25 I yield back. Thank you.
05:26 I now recognize that.
