Tom Sulston is deputy chair at Digital Rights Watch, a charity that promotes and defends digital rights. He has also worked at Thoughtworks Australia, a consultancy that advises companies on AI technologies. He says the plan's balance between attracting investment and protecting Australians isn't quite there.

Transcript
00:00Australians consistently show that we're pretty sceptical of AI companies. 83% of us
00:08would welcome stronger regulation before we go too deeply into AI. And the government
00:13had this great opportunity to get into that and to regulate big tech, but they flubbed
00:17it. Their AI plan has sold us and our data out to the AI companies. And we urgently need
00:23more guardrails in the development of AI rather than opening the locks up to let the world's
00:28most rapacious tech companies invade our private data to grow their profits.
00:33What about this proposed AI Safety Institute as part of this plan? Does that go very far
00:40in addressing your concerns? The AI Safety Institute's a good thing. I think
00:45we can back that quite confidently. But it's not an answer in itself. Firstly, it doesn't
00:50exist yet. But it also has no regulatory powers. It's an advisory body. And it's very likely
00:56to be focusing on long term worries about AI, rather than the immediate problems that we
01:02know we have. And wrapped up in that is that the idea of safety in AI is not as useful as
01:08other concepts like respecting our human rights and acting in a way that's aligned with our
01:14community expectations.
01:14So you argue this is not enough to keep Australians safe. What is the danger? What are you so concerned
01:21about? What could happen under this government plan?
01:24So part of the plan is that the government has said that they're not going to be legislating
01:31until they see serious harm taking place. But it's not good enough to wait for the iceberg
01:37to hit the ship before we start steering. We're already seeing concrete harms caused by AI.
01:42We know that there's race and gender bias in AI systems used in healthcare. We know that
01:48LLMs are being used to generate harmful content.
01:51And just explain to viewers what LLMs are?
01:54Sorry, large language models. So what you might see as a chatbot, things that are very conversational
02:00have been having very harmful conversations with people, even to the extent of inducing them
02:05to commit suicide. We also have problems with non-consensual deepfake and nudify image and
02:11video generators, and with mis- and disinformation being spread around the
02:16world, created by AI and then propagated through social media. So there are plenty of problems
02:21to tackle. And we can't wait for the government to get round to seeing these problems and then
02:29responding to them. They're happening now. We need to start regulating now.
02:32I interviewed another expert on this subject in the last couple of months or so. And he said,
02:37Australians are the most concerned of any nation in the world about the onset of AI. And he kind of
02:46was suggesting that there is too much concern here. What's... How do you feel about that view?
02:54I don't think we have too much concern. I'm quite glad to see that Australians are reasonably
02:58sceptical of the supposed benefits that are being pushed in front of us that we will enjoy because
03:05of AI. We know that four-fifths of us want to see stronger regulation before we're confident using
03:10those systems and letting those systems loose on the data that is about us and belongs to us.
03:15So I think I'm pretty proud of Australians for being sceptical about the benefits here.
03:19And we really need to see those being more concrete before we're able to start talking about how willing
03:25we are to make concessions to big tech to let them harvest our data.
03:30And Australia is the second largest destination for investment in data centres. So these are the
03:36physical engine rooms for AI that churn through power and water. How does it appear that's going
03:44to grow under this plan? And how do you feel about that?
03:47So the plan hopes for an almost tenfold increase in data centre construction. Obviously,
03:56we do need data centres in Australia. We're a pretty technological society. We do a lot of things on
04:01the internet, and it's good to have that power accessible to us. But you're right. They use a
04:06lot of electricity. They can use a lot of water. And in a dry country where we have very high electricity
04:13prices, there are going to be some tough conversations that we're going to have about
04:17how much water and how much electricity are these data centres allowed to use? What does that mean
04:22for the prices that we as consumers are paying? And what does that mean for things like our
05:27commitments under the Paris Agreement, when we might have to ramp up non-renewables in order
04:31to power data centres?
04:33Yeah. And so just taking a look at this process then, how do you feel about how it was looking
04:38about a year ago? And how this has ended up now? So, a year ago, we were talking quite a lot about
04:46mandatory guardrails for AI companies. And some of those were pretty straightforward, things like risk
04:52management systems, testing AI systems to make sure that they do what they should, and they give the
04:57output that's expected, third-party transparency, and having complaints handling processes. But the
05:04government has put those to one side with the national AI plan, even though they're really
05:09basic, like they're very straightforward things that you would expect any technology company to be
05:14doing. And so when the government denies us these very straightforward protections, it's kind of
05:19neglecting its duty to protect us from the excesses of big tech companies playing fast and loose with
05:24our safety. And so, yeah, how do you feel about that process that's happened over the last year? Are you
05:30curious as to what has happened? Very curious. I'd love to be inside some of the rooms where clearly
05:37the discussions were going on. But I think it's fairly plain to see that the Australian government
05:43is really keen to see some benefits from AI. And when you dig into the report, it's in three sections,
05:50opportunities, sharing benefits, and safety. And it's obvious that that is the priority order in which
05:56the government sees it. So the language in the report is about mitigating harms rather than preventing
06:01harms, for example, or promoting responsible practice rather than enforcing it. So it's
06:07largely a consent manufacturing exercise for the AI industry, in the hope that there will be some
06:14productivity gains, which I think we should also be a little bit cynical about. A lot of AI companies are not
06:19profitable at this point in time, and some of them are not even profitable on the margin at inference level, like when you
06:25ask them questions, it's still very expensive, to the point where they're losing money. So we have to be
06:30very careful that we're not falling prey to a lot of the capital that's washing around the world at the
06:35moment following AI, and opening ourselves up to risks when the AI bubble bursts.