00:00I'm trying to get my head around this, and it kind of portrays Anthropic as a good guy in this,
00:04right?
00:04Like fighting for democracy. And if the U.S. government wants something, you know,
00:09can they still get it if Anthropic doesn't give it to them?
00:12It must be more nuanced than good guy versus, you know, someone that wants technology they could use to
00:18bad effect.
00:19And I think you could argue: does a commercial entity have the right to determine how its products
00:27are used by the military?
00:29And the thing that made this whole situation kind of complicated is that what Anthropic was asking for,
00:35we want red lines around autonomous weapons and domestic surveillance.
00:40Those things are already like basically illegal, but they just weren't in the contract language.
00:45But the fact that the U.S. government didn't want to put that in the contract language
00:49meant that there was some flexibility down the line, and then Anthropic was worried about that.
00:53Now, what could happen next?
00:55Of course, someone else could just fill that gap.
00:58Another tech company. OpenAI, Elon Musk's xAI certainly has a lot of expertise, and Google DeepMind.
01:06You know, those are all possible contenders.
01:08I think the real question is, has Anthropic now drawn a line in the sand?
01:14You know, they've made this a public issue, which in a way is good.
01:17It could have just been behind closed doors, contract negotiations,
01:21but now this could maybe become a public debate, which is a good thing.
01:25Yes. So it may not force, but could it inspire other companies to do the same?
01:29It could.
01:30The question is whether there is pressure, enough pressure on them to also have that same kind of standoff with
01:37the Pentagon.
01:38And so for them, that will be a business decision.
01:40I mean, it could be a question of integrity on the part of the executives.
01:44At the end of the day, some of these are publicly traded companies like Google.
01:48They have a fiduciary duty to their shareholders.
01:50Do they want to give up hundreds of millions of dollars in potential contracts just like Anthropic did?
01:55So it's really an open question how they'll do it.
01:58I mean, we do sometimes, you know, look at AI in terms of job displacement and all of
02:02that.
02:03But I guess fundamentally, it's different from any other kind of technology, the industrial revolution, everything like that,
02:09because this is private companies.
02:11Yeah.
02:11Right.
02:11So does that really change how we should look at it?
02:16Yes and no.
02:17I mean, I think the fact that some of these companies are public means that they have shareholders who are
02:23part of that conversation.
02:24They are stakeholders and they can put pressure on those companies.
02:28Just because a company is a private company doesn't mean they can't be part of that debate.
02:33Well, because they have users, they have ChatGPT, used by almost a billion people.
02:36So they've got this whole other stakeholder set who needs to be part of that debate.
02:40But who's the right...
02:41And again, this is a difficult question.
02:42Like, who's the right entity or government or person to try and put some guardrails on this new technology?
02:48Or should we even be looking at putting guardrails?
02:51That's a great question.
02:52It's got to be a combination.
02:53I wish it was a simple answer, but it's always the whole ecosystem answer.
02:57So there are civil society groups, campaigners, activists.
03:00There's government regulators, obviously, but they're just a bit slow in kind of moving quickly through legislation.
03:07And the companies themselves, Google DeepMind, for many years had red lines in place in their dealings with the government
03:14about military use.
03:15So this is not a new issue.
03:16And oftentimes it does come from the companies.
03:19It's just a question of whether that gets watered down.
03:21And sometimes that does happen.
03:23Should we really be seeing this as a race of who gets there first, or whose models are best,
03:30between the U.S. and China?
03:32And then within the U.S., OpenAI versus Anthropic versus DeepMind?
03:36Yes, it is a race.
03:37It absolutely is a race.
03:38And that's why also earlier this week we saw this change in Anthropic's language around how it was pursuing its
03:46own safety principles.
03:47It was giving itself a little bit of extra flexibility, allowing itself to move more quickly, and
03:53it talked about how the environment had become more competitive.
03:57So it absolutely is.
03:59And I think on the one hand, you talked about Anthropic being the good guys.
04:01But on the other, they are potentially compromising a little bit on those very strict safety principles to move ahead
04:07quickly and keep up.
04:08Yeah.
04:09Are they breaking too many things by going too quick?
04:11I'm not talking about Anthropic, but everyone actually.
04:13And how far behind is China at this point?
04:16So I would say things are getting broken on the fly.
04:20That's how these companies work.
04:21This is the Silicon Valley mantra.
04:23You iterate, you A-B test, you put it out in the public, and then you see things as they
04:28go wrong.
04:28Then you fix them and you patch.
04:30So, yes, I think things are going to break, as you say.
04:35And people will talk about edge cases, for example, some of the mental health issues that people have experienced, the
04:40court cases that have resulted from that.
04:42As far as where China is, they are absolutely catching up.
04:45I mean, because of the open source movement in China, models there, like Alibaba's Qwen, Kimi,
04:52and DeepSeek, obviously, are moving fast.
04:54We just had Chinese New Year, so there were a bunch of new announcements out of China.
04:58And, yes, we are seeing things move much more quickly there.
05:02Parmy, thank you so much, as always, for joining us.
05:04Bloomberg Opinion columnist there,
05:05Parmy Olson, also author of Supremacy, a great book that everyone should read.