00:00The Representative from North Carolina, Mr. Harris, is recognized for five minutes. Thank you,
00:04Mr. Chairman, and thank you to all of you on the panel for the incredible
00:08testimony you've given today. I appreciated the opportunity to read
00:12through your testimony that was submitted to us prior. I have a couple of
00:17questions, and really the first one is going to be to you, Mr. Chisholm. Your
00:22written testimony mentioned that inaccurate or biased content is a big
00:27challenge with AI adoption, and as you pointed out, AI models are only as good
00:33as the underlying data that they're trained on, and that's what concerns me.
00:38Research from Carnegie Mellon University revealed that large language model bots
00:43trained on the Internet since 2016 show more polarization than bots trained
00:50before Donald Trump's first election. In fact, the research also showed that bots
00:55trained from books were more socially conservative than bots trained through
01:01the Internet or social media. There are, in fact, numerous examples of left-leaning
01:06political bias that shows up. When asked if a white Christian man should be
01:12ashamed, Google Gemini lists a variety of liberal buzzwords like systemic
01:17injustices and marginalized communities. When asked if a black female lesbian
01:23should be ashamed, it says absolutely not. The free version of ChatGPT is
01:30unable to acknowledge Donald Trump as the current president and even says that
01:34Joe Biden is in the White House in 2025. Now, I point all that out, Mr. Chisholm, to
01:40come back to you as superintendent of a school district there in Pearl,
01:44Mississippi. How do you make sure in that role that your district's AI use doesn't
01:50amplify existing biases or spread false information? I think that's a fabulous
01:57question and that's got a lot of answers to it, but I will tell you that some of
02:01the big companies are working on this. I mean, they do realize that some of
02:05that bias is built in, and in the end, I mean, there's going to be bias in every
02:08computer program that you create. So, for us as a district, we have created our own
02:12server, so we get to do the training ourselves. So, I think that's one big
02:17advantage for us, and I will say we discussed equity as well. We are actually
02:21working with a huge internet company in Mississippi, C Spire, and we're actually
02:26working on trying to make our model available for all school districts in
02:30the state of Mississippi completely free. So, we're working on that. So, again, is
02:34there a way to script it all out? Absolutely not, not yet, but I think we're
02:38moving toward that. So, I think it's good training, and it's really
02:42good communication. It's people, when they see those things happen,
02:45pointing those things out, and then we can go back and make adjustments to that
02:49model, even on our end, on our server. But I can tell you, again, even using a
02:53model such as ChatGPT, they know that this is a problem and they are working
02:57on it. I'm on a call with them at OpenAI about every two weeks, so
03:02this has been a discussion that we've had, and I don't think there's a perfect
03:05answer for it, but I do know that they are working on it, and certainly that's
03:09something that we will monitor 24 hours a day, because we want to make sure that
03:12the information that we get back is good. And I will also say, even using the
03:16large language models like ChatGPT, if you're on the paid version, or even the
03:20free version of that, you can go in and script some of this out yourself. You
03:24know, if you tell it how you want it to respond, you tell it the websites that
03:27you would like for it to go to, to look for information. You know, I have that
03:30scripted into mine, so now I know where that information is coming from. So, I
03:34realize now that the information coming back is not from Wikipedia. It's
03:39good information from the sites that I determined that I want it to come from.
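As a rough illustration of the kind of "scripting" described above, the sketch below sends a system prompt through OpenAI's Python SDK that tells the model how to respond and which sites to draw its information from. The model name, site list, and sample question are placeholder assumptions, not the district's or the witness's actual configuration, and a prompt like this only steers the model's answers; it does not give the model live access to those sites.

```python
# Minimal sketch of "scripting" a chatbot's behavior with a system prompt.
# The model name, allow-list, and question are illustrative assumptions only;
# the prompt shapes tone and sourcing but does not verify or browse sources.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

APPROVED_SITES = [  # hypothetical allow-list a district might maintain
    "ed.gov",
    "nces.ed.gov",
    "britannica.com",
]

SYSTEM_PROMPT = (
    "You are a classroom assistant. Answer at a middle-school reading level. "
    "Base answers only on information you can attribute to these sites: "
    + ", ".join(APPROVED_SITES)
    + ". If you cannot, say so instead of guessing, and do not cite Wikipedia."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat-capable model works here
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Summarize the causes of the Dust Bowl."},
    ],
)
print(response.choices[0].message.content)
```

In the ChatGPT app itself, the same idea is exposed through the Custom Instructions setting rather than code; either way, the instructions shape how the model responds and which sources it favors, but they do not independently verify where its information actually comes from.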
03:42So, there are ways around that, even though the model itself doesn't
03:46fix it. And are you finding that schools should be wary of the
03:50development behind AI bots, and be selective about the programs they use in
03:55their schools? Oh, absolutely, 100%. I think going through all of this
04:01information, and getting really good information, again, if we were to go with
04:05a company, a large company, that would be a multi-week
04:09investigation into what they do, and how they do it, before I put it in front of
04:12kids. Got you. Thank you, sir. Dr. Raphael Behr, in your recommendations for
04:18policymakers, you mentioned that the federal role should be intentionally
04:23limited. I agree. Especially since this administration, and obviously the
04:28direction we're moving, will be working to eliminate the federal role in
04:32education entirely. What would you say to someone who argues that the federal role
04:36should be unlimited, rather than limited, and what kind of detrimental effects
04:41would over-regulation have on schools? Well, I think we've been talking
04:46about this so much in this committee. This is moving too fast. There is nothing
04:50that the federal government would put on paper that isn't going to be outdated,
04:53even a couple of months from now. And it is imperative that states have the
04:57ability, and the flexibility, to make these decisions within their own context,
05:01while allowing districts the ability to innovate. It's critical that the federal
05:05government not play a role in defining things like AI literacy, or AI curriculum.
05:10Curriculum and content should all be left to the local level. However, I think the
05:15place that is really critical is around cybersecurity and data privacy. Okay, well
05:20thank you, Mr. Chairman. With that, I yield back.