At a House Judiciary Committee hearing before the Congressional recess, Rep. Andy Biggs (R-AZ) probed the witness panel about how quickly artificial intelligence is evolving.

Transcript
00:00I'm going to recognize myself for my last, my five minutes here, but I would suggest that this is, as we started off, I said this would be the first of its kind, and I meant that we're going to have to keep pushing this.
00:14So I think a week, let's see, maybe a week ago, Elon Musk announced Grok 4, and he talked about artificial intelligence, and Grok 4 is going to have the intelligence, it already has beyond a PhD, engineering, science, genius, et cetera.
00:38Artificial superintelligence, and we've touched on it lightly here today, using different terms, but I guess my question is, at what point do we no longer see computational decision-making with a human-first mover,
01:05and you have an algorithmically iterative process that essentially, and we're there to some extent now, but we're not totally there because human interaction still is the first mover.
01:24But at some point, it won't be a human that's the first mover anymore, it'll be the algorithm itself.
01:30And how long before we get there, how do we get there and prevent the crime and provide the deterrence that is necessary?
01:43And so for the hypothetical I'll give you, how long before adjudicating whether there's probable cause or not for a search warrant or arrest warrant is merely algorithmically sustained,
01:55as opposed to having a human make that determination.
02:00So, with that bizarre question, but acknowledging that we are actually moving so rapidly that we probably thought by 2050 you'd be getting to artificial superintelligence,
02:12but it looks like maybe before 2030 you're going to be in artificial superintelligence.
02:16Dr. Baum.
02:18And then we'll go down to the whole panel.
02:20Thank you, Mr. Chairman.
02:23So, certainly a thought-provoking exercise.
02:27I'm glad we're still at the point where it's an exercise and not reality, but I recognize that we may not be far from there.
02:35Before we get to artificial general intelligence or before we get to certainly artificial superintelligence,
and those are still theoretical, not a foregone conclusion, there is this very powerful and very rapidly advancing agentic AI,
and where we start to see some of what you described already taking place, to an admittedly lesser extent than we would if it were true AGI or ASI,
03:07but we still have algorithms that are making decisions on behalf of humans that are able to do so at speed and often at a scale that certainly makes it difficult for the human agent to observe and to be a part of.
03:27And so it really depends on how much trust and how much authority we provide those agents, and how those models are fine-tuned to limit that.
03:41And so I do believe, as you said, Mr. Chairman, this is the first of hopefully many, having some of those industries,
03:49some of those private sector companies describe that so that this subcommittee and Congress can ensure that they are able to predict and know
04:02and set up those guardrails, if necessary, to limit what you're concerned about.
04:09And since we only have a minute left, each of you get 20 seconds.
04:16Very interesting question.
04:17I think we see AI agents making simple decisions today.
04:21I think for more complex decisions, as the models can do more complex reasoning in the next few years,
04:26it really should come down to how important is the decision and then what transparency and explainability we can get from the model
04:33and how much human oversight is necessary.
04:35I think how risky each individual decision is should determine where we see the rollout of these different autonomous decisions.
04:43I'm going to build directly on that.
04:44The role of AI is a choice.
04:46And laws passed in Texas, the National Security Memorandum that governs national security uses of AI,
04:51say that certain things are off limits for artificial intelligence.
04:54And you mentioned probable cause.
04:55That strikes me as a core foundational tenet of due process.
04:58That should probably be truly a human activity.
05:00And that is a policy choice that we make of who will be, what will be human, what will be AI, and where will humans be in the loop.
05:09I've never seen anything move as fast as this has moved in my lifetime.
05:13And while I don't have a great answer for how long, it's certainly coming.
05:17And if I could leave this committee with anything today, it's that we need to move just as quickly,
05:22building the tools and working with this body to provide the right laws.
05:29As an old school prosecutor, I'm happy with judges still making decisions around probable cause.
05:34But I do think we really need to ensure that we also are using the tools defensively to meet this moment.
05:41Great. Thank you all for being here.
05:42My time has expired.
05:43There's so much more to cover with this.
05:45And like I say, it's just the first.
05:48And hopefully we'll get back together soon and continue this.
05:53Please feel free to contact myself or the ranking member.
05:58I'm assuming that's okay.
06:00She says that's okay because we want to have a dialogue and see where there's holes, where there's gaps.
06:05Let us know.
06:06We want to do stuff that's preventative without being constraining, if that's possible.
06:14So thank you.
06:15With that, we are adjourned.