At a House Oversight Committee hearing on Thursday, Rep. Robert Garcia (D-CA) called out HHS Sec. Robert F. Kennedy Jr.'s possible use of AI in a "MAHA" report.
Transcript
00:00 recognize Mr. Garcia for five minutes. Thank you, Madam Chairwoman. Thank you to our witnesses for
00:05 being here today. I want to just start by noting that we know that certainly AI can be an important
00:10 tool for the federal government, for our federal workforce. We always want to make sure that
00:14 we're centering those that are working in our government and uplifting the work that our federal
00:19 workers do every single day. But we know that AI can help government become more
00:24 efficient and intuitive in many cases. I think we should be honest about that. It can also help
00:27 constituents get services faster. I've seen this happen at the local level. I've seen it happening,
00:33 of course, at the state level and here in Congress as well. We also know that
00:37 many agencies just need to move more quickly and be more efficient. And AI can be a tool when you
00:43 respect the federal workforce and work with the federal workforce on how we make these implementations
00:49 happen. AI can also be used, of course, on issues around red tape. I've seen this
00:54 happening in cities across the United States. And we can analyze data better and faster.
00:59 We can also work to empower our federal workforce in ways that they can use AI to help them do their
01:03 jobs. And I think that's an important piece of this. It's also clear, though, that AI can be
01:07 incredibly dangerous, incredibly disruptive, and, certainly without guardrails, can cause real harm
01:14 to the American public and to the work that we're all trying to do. Now, deployment, of course,
01:19 is going to take investments and work. But I do have some serious concerns about how AI is used,
01:24 particularly in this administration. And I want to talk about one of them, which I think
01:27 is actually really, really important. So this, of course, is Robert Kennedy Jr., someone who I
01:34 consider to be an extreme anti-vaxxer. I believe him to be a conspiracy theorist, and he certainly has
01:39 no business being the Secretary of Health and Human Services. Now, take one step back. We just went
01:44 through a pandemic where 1.3 million Americans lost their lives, where businesses were shut
01:49 down across the country, where health care was brought to the forefront of the public
01:54 consciousness. And we know how important getting real medical and vaccine testing information is
02:00 to the American public. And right now, we're dealing with measles outbreaks in Texas and
02:05 other places. Information needs to be peer-reviewed, fact-checked, and presented in a way that is
02:11 responsible. But here, of course, we know that there was recently a Make America Healthy Again report
02:15 that was put out by RFK Jr. and HHS. And I'm going to include just this quote from the article:
02:23 There were dozens of errors, including broken links, wrong issue numbers, and missing or incorrect
02:28 authors. Some studies were misstated to back up the report's conclusions, or, more damningly,
02:34 didn't exist at all. At least seven of the cited sources were entirely fictitious. What we do know
02:41 from many folks that have read this report is that it appears AI played a role in developing a report
02:47 about the public health of Americans. This is something that we should be incredibly concerned
02:53 about. And let's be clear, RFK Jr. already has a long history of dangerous, unscientific beliefs.
02:58 He said 5G and Wi-Fi cause brain damage. He said that he doesn't believe that HIV
03:04 causes AIDS. He said that water can make you transgender. These are very concerning positions.
03:09 And the fact that he puts out this report, that HHS, which has long been respected as an important
03:14 agency, puts out a report that is full of AI errors and AI-generated content, should concern every
03:21 member of Congress on both sides of the aisle. This is incredibly devastating when, on top of that,
03:27 you look at all the cuts that he's making to NIH and other health agencies across the United States.
03:33 And so this is, for me, very concerning, and it should be for all of us. Professor Schneier,
03:38 is it fair to say that these kinds of errors in important government documents and health reports,
03:44 errors that could be due to AI, are something we should be concerned about?
03:49 They are concerning, but put the blame where it belongs. I mean, maybe the AI is not suitable for the task,
03:54 or maybe the human who used the AI didn't check the work. And that would have been true if the human had
04:02 tasked an intern. So it's really a matter of process: whatever gives you
04:09 your first draft, do you look at it and make sure it's correct? We've seen the same problem with
04:14 attorneys submitting briefs to courts that cite fake precedents. Yes, the AI made the
04:20 mistake, but it's the human who puts their name to it, who says this is correct. They're the ones
04:27 responsible. I teach this stuff. I tell my students to use AI, but you're responsible for
04:31 plagiarism. You're responsible for the accuracy. I agree with you completely. And, sir, I also spent 10
04:35 years in the classroom teaching as well. It's really weird now. So, same. And I think that
04:42 the fact that we're now seeing so much AI-generated content produced in our universities is a different
04:48 topic. But let me just conclude with this: your point is exactly right.
04:53 AI can be a tool, but there has to be a human element. The workforce has to be a part of it, and it
04:58 has to be a responsible user of AI, so that we don't end up with medical reports that are false.
05:04 And with that, I yield back. Thank you. Thank you. I will now recognize