- il y a 22 heures
Is AI Making Us Safer, or Hackers More Dangerous
Catégorie
🤖
TechnologieTranscription
00:00Thanks for being here, thanks for listening to our panel discussion, we're going to be talking about AI, is it
00:04making us safer or is it making our lives more dangerous?
00:08This is probably the big topic in cyber security at the moment, so we're going to answer that question once
00:13and for all, no sitting on the fence.
00:15I'll introduce the panel first, then we'll have a discussion. We might have some time at the end, we'll leave
00:20about five minutes possibly for some questions depending on how we go.
00:23So, if you have any ideas, raise your hand when I invite those questions. Let's try and come up with
00:28some really difficult ones to really stump the panel.
00:30So, we have Alexander Bornyakov, who is Ukrainian Deputy Minister of Digital Transformation. Alex was appointed to position in October
00:392019.
00:40He is behind such projects as the DIA City, U-Residency and AI Development.
00:45Wendy Nather is the Director of Strategic Engagement at Cisco. Wendy has been doing this for, when did you start?
00:53Oh God, about 40 years ago?
00:55Right, there we are then. And before that, I've got here that you were Research Director of the Retail ISAC
01:01and Research Director of the Information Security Practice at 451 Research.
01:06And last but not least, we obviously have Guy Philippe Goldstein at the end there,
01:10who's a lecturer at the School of Economic Warfare, advisor to PwC and to VC Fund Expo on Capital and
01:17Cybersecurity,
01:18lecturer at the School of Economic Warfare on Cyber Defense, and also an essayist and novelist.
01:25So, first of all, I wanted to just lay out our stall a little bit here, because I've been covering
01:30cybersecurity now for almost six years.
01:32And whenever I talked to a cybersecurity company, even back then, they were telling me that AI is baked into
01:39our product.
01:40We're already using AI for ages. So, what's changed in the last two years?
01:45Well, what's changed in the last two years, you're right. A lot of us have been working on AI and
01:51cybersecurity for several years.
01:53We've been working with models that basically tell customers, look, when we see this, this is what we believe it
02:00is in our experience,
02:01but you tell us. Tell us if we got it wrong.
02:04But what's different now is that the public in general is more aware of AI than it was before.
02:11And as you could say almost that AI is slowly becoming democratized.
02:16People are becoming aware of what AI can do and how it can serve them, even if they're not in
02:24cybersecurity or in tech.
02:27So, really, is it the chat GPT effect?
02:31Before that, the companies were using AI, but it's just that everyone knows about it now.
02:35It's a bit beyond that, to be fair, in the sense that we have a bit like free situation going
02:41on.
02:41As Wendy would say, we already use AI, but now we have new layers of AI, especially with generative AI,
02:48which is very easy to use because all of a sudden you can use the most efficient programming language,
02:54which is natural language, and that democratizes a lot of the use.
02:58But, infunditly, there are issues within that layer.
03:04May that be all new types of injection.
03:07We had SQL injection in the past.
03:09Now we have prompt injection, indirect prompt injection.
03:12We have issues around hallucination.
03:14There's lots of stuff, actually, we don't fully understand,
03:17even in the statical complexity of those models.
03:20Even researchers don't understand why, with so many trillions of parameters,
03:25we still don't have overfitting and we have the ability of generalization.
03:28So, that's one thing.
03:30The second thing, evidently, and this was touched upon by Wendy,
03:34we have a fantastic development of actually having these tools and previous tools of AI
03:40within the cybersecurity field.
03:43We have the great guys, the Microsoft guys would come up with Microsoft Security Co-Pilot.
03:48We have Palo Alto Networks, which is bringing co-pilots to each of its different modules,
03:54you know, the Cortex, the Prisma Cloud, the whatnot.
03:57So, everybody is moving ahead, including lots of new cybersecurity startups.
04:01We're talking about that, actually, with Wendy.
04:03And the third issue is that we may have also a new wave of risks
04:09because the bad guys may be also using this new stuff
04:14and there are lots of things to be told about, but that's also the risk so far.
04:18And I can say even how, because what we see now,
04:22that the AI has been using for social hacking.
04:24So, it's basically a script that pretend that you're a friend of yours
04:29and they just scam your data.
04:31They just get your logins, secret questions, probably,
04:35and they're using a Jornative AI to talk to you,
04:37like pretending you're a friend or pretending you're a relative.
04:39and that might sound like a quite funny example.
04:48The AI was used by one guy that I know,
04:51that he's set up to talk to random women to send them nudes.
04:57So, this is how it works.
04:59And it talks to you like it wants to really know you
05:02and really wants, like he knows something about you,
05:05he reflects, and then it becomes your friend
05:08and then you share information with him.
05:10And then it's information used to hack you.
05:12So, that's going to be a big issue, like you said.
05:16It's one, and actually it goes back also a bit
05:18on those new emerging properties of those LLMs.
05:22Miral Kosinski at Stanford has already demonstrated last year
05:26that LLM has these inherent abilities
05:30to find out what are your personality traits.
05:34Are you open-minded?
05:35Are you conscious?
05:36You have a high level of consciousness and whatnot.
05:38So, it could potentially play on that.
05:40It can also develop a sort of theory of mind
05:43of, you know, who's the guy behind the screen.
05:45Plus, we have also seen some studies that shows
05:48that actually, done the proper way,
05:51LLMs can be 80% more persuasive
05:56than human beings in debates.
05:59So, that goes back to what you say, Alex.
06:01There's lots of risk around that element of social engineering
06:04which always existed in cyber but which now could be amplified by those new means.
06:08But I think the danger is what we are reading into AI,
06:13into what LLMs are producing.
06:15I mean, basically, they're just bringing out statistical results.
06:21They are spitting out the most probable,
06:24the most prevalent language that they are seeing.
06:27but we imbue that with a meaning that isn't actually there.
06:32And as you think about how we talk about AIs,
06:35how they, we say that they think,
06:38we see that they, say that they hallucinate,
06:40they're not doing any such thing.
06:42They are taking inputs
06:43and they are finding patterns
06:46the way they're programmed to
06:47and they're spitting things out again.
06:49but it's how we are helping them fool us
06:53that is the real danger.
06:56So, we won't be wrong.
06:57And actually, there was a case
06:58already back in 2022
06:59of a Google engineer
07:00who all of a sudden thought
07:02that those stuff were sentient
07:03though you're absolutely right.
07:05It's just, you know, a statistical machine
07:07and that's it.
07:08So, maybe we're giving these machines
07:10a bit too much power,
07:11a bit too much respect.
07:12But it does work though, doesn't it?
07:15If you look at the way that
07:16companies and people are attacked,
07:18it's normally always,
07:19we talked about it in a discussion
07:21the other day, didn't we, Guy-Philippe?
07:22It's normally always through the human.
07:25It's the human element, they call it,
07:26as the weakest link is often described.
07:28Because if you can persuade someone
07:30to hand over vital information,
07:32then you can get in.
07:33And is it true that LLMs are helping criminals?
07:37Because I sometimes see
07:39different statistics on this.
07:40Cyber security companies want you to think
07:42there is a wave of attacks
07:44because that's how they sell their products.
07:45But do you think the evidence is there
07:47that we are seeing that?
07:48I don't, I don't,
07:50well, that's a very small part
07:52of what is happening
07:54when we say AI.
07:56For example, one big thing
07:58that AI has enabled is scaling.
08:03If you can build agents
08:05to talk to each other
08:06and to automate functions
08:07that you had to do yourself,
08:09a friend of mine, David Maynor,
08:11who's a very well-known hacker,
08:13has built himself agents
08:16to work together to automate
08:18a lot of the work
08:19that he used to have people do for him
08:21when he was researching something in particular.
08:24And he said to me,
08:26what he believes that AI will do for him
08:29is extend his useful professional life
08:32maybe another 20 years
08:34because he won't have to do
08:36some of the things that he can now automate.
08:38Are the bad guys doing this now?
08:41Of course they are.
08:41So it's up to us as defenders
08:43to do the same thing
08:45to keep up with it.
08:47But it's always going to be an arms race.
08:51Yeah, I think that we're going to see
08:54more and more of cases
08:55of stealing digital identity
08:57and AI pretending to be someone.
09:00And that's definitely
09:03already been used by criminals.
09:05I saw some baddest of the software
09:08that could pretend
09:09that it's now using real estate.
09:13So it's like calling you
09:14as your real estate agent
09:16that you know
09:17and he's talking to you
09:19like he's this engine.
09:20And he said,
09:21remember you bought an apartment from me.
09:23This is another house opening
09:26so you can participate.
09:27Are you interested?
09:28And then after the conversation happens,
09:31it goes with all the data
09:34about possible deal.
09:37And it completely emulates
09:40the voice of the person that you know.
09:43And those guys told us,
09:45like we can reprogram it
09:48so it could be your mother
09:49asking for your money.
09:52That's so easy for us.
09:53Just give us a sample of the voice
09:55and she's going to get you into your head.
09:59Yeah.
10:00There are lots of YouTubes of me out there.
10:03And within 15 minutes,
10:05no, there are,
10:06you know,
10:06and places like this.
10:08And another hacker friend of mine
10:10within 15 minutes
10:12had put together
10:13a deep fake of my voice
10:14saying things that,
10:16let's just say I would never say them.
10:18But it sounded just like me
10:20except for the end part
10:22where it had me laughing.
10:23And that was not my laugh.
10:25So, you know,
10:26would my kids be fooled?
10:28Probably not.
10:29But this is something
10:30that we have to teach
10:31the next generations.
10:33So, just to concur
10:35on what had been said
10:36in the initial question.
10:38And so, you hear from Wendy.
10:40And we hear now on the market
10:42some additional elements
10:44as per,
10:45do we see already
10:46that big risk
10:48of a wave of bad guys
10:50or not?
10:51And so,
10:52recent comments
10:53from the RSA conferences
10:55from Chris Krebs
10:56who was a former head
10:57of CISA DHS
11:00in the United States
11:01from other guys
11:02such as director
11:02of IBM Security
11:03and other folks.
11:04It doesn't seem
11:05at this stage
11:06that in terms of having
11:09very advanced ransomware
11:12or cyber attack,
11:15we're not seeing
11:16or maybe we don't see
11:18or we don't know
11:19but we're not seeing
11:20actually a great use
11:22of new AI tools.
11:24And actually,
11:24there was a very recent
11:25Splunk study
11:27that would ask CISOs
11:28and basically,
11:3045% were saying
11:31more risk,
11:32we're seeing.
11:3343% were saying
11:35no,
11:35it's going to help
11:36the good guys.
11:37And 12% were saying
11:39we don't know.
11:40And you know,
11:40basically,
11:41when you have about
11:41half and half,
11:42it means that
11:43we don't have information.
11:44So,
11:45that's the stage.
11:46Right?
11:47That being said,
11:48indeed,
11:49it seems that
11:49the low-hanging fruits
11:51in terms of criminality
11:52is just what Alex
11:53and Wendy
11:54has talked about.
11:55May that be
11:56deepfake,
11:57may that be
11:57voice cloning
11:58with lots of new
11:59sophisticated approach.
12:01For example,
12:01for the last two months
12:02in India,
12:04at the Bombay Stock Exchange
12:05and the National Stock Exchange,
12:06there's been lots of
12:07deepfakes of the directors
12:10of those stock exchanges
12:11so that criminals
12:12could play
12:13on the stock price
12:15or this
12:16or that
12:18asset
12:18because all of a sudden
12:20there will be
12:20deepfake of the director
12:21to increase the price
12:23or decrease the price
12:23and you can make money
12:24out of it.
12:24and this is indeed
12:25the type of risk
12:26that we may be
12:27at this stage
12:28seeing.
12:29Some of this
12:29is really scary
12:30and it kind of
12:32seems quite insurmountable
12:33how you can defend
12:34from some of this
12:35but how can we use AI
12:36to defend against
12:37some of these threats?
12:38Well,
12:39again,
12:40whatever the bad guys
12:42are automating,
12:43we can also automate
12:44and whatever
12:45they're scaling at,
12:46we can also scale
12:47and I'm sure Alex
12:48has some good stories
12:49about how they've used
12:50some of the same techniques
12:52for defense.
12:53Yeah,
12:55well,
12:55there are obviously
12:57solutions to that.
12:58One of the things
13:00that we can use
13:01as an example,
13:02we as a country
13:02at war,
13:03we're facing
13:04a lot of
13:07disinformation attempts
13:08from Russia
13:08and they're using AI
13:10to create
13:11narratives
13:12and complement
13:14those narratives
13:14with thousands
13:15of comments
13:15which are very
13:16sophisticated.
13:17It's like a real
13:18person talking to you
13:19and if there's
13:21some motion
13:21of narrative,
13:23they support
13:23this narrative
13:24with very relevant
13:25comments
13:25and what we did
13:27in turn,
13:28we created a system
13:29based also
13:30on artificial intelligence
13:31called,
13:32one of them
13:32is called
13:33Mantis Analytics.
13:34it's part
13:35of Ukrainian
13:36Armored Force
13:37software
13:38that we use
13:39to identify
13:41automatically
13:41the source
13:42of propaganda
13:43or the false
13:43message
13:44so we can
13:44100% sure
13:46say that
13:46this narrative
13:47came from
13:48a certain
13:50source
13:50and this is
13:51how it's
13:52spreaded.
13:52It's not being
13:53done by human
13:54anymore.
13:54It's also AI
13:55so it's kind of
13:56funny,
13:57maybe it's kind of
13:57scary but
13:58when I say this
13:59it's kind of funny.
14:00So AI
14:00creating a false
14:02narrative
14:02and AI
14:03from other side
14:04creating
14:06a coping mechanism
14:07of how
14:08we can fight it.
14:10This is just
14:10one example
14:11but there are
14:12much more of them.
14:13And what I really
14:14want to stress
14:15from the example
14:15of Alex
14:17which is the
14:18element of
14:19authentication.
14:20Where is the source?
14:21Who was the first
14:22one to report that?
14:24Basically
14:24when you do
14:25journalism
14:25this is your
14:26first task.
14:27Alright?
14:29We forget about that
14:30but in a way
14:31AI should
14:32help us go back
14:33to the right
14:34methods
14:34the right
14:35approaches
14:36in journalism
14:36or
14:37in private
14:39messages
14:39in what we
14:40see in other
14:41issues
14:42which is
14:43text manipulation
14:44business email
14:45compromise
14:46which now is
14:47amplified
14:47of course
14:48with deep
14:49fakes
14:50but at the end
14:50it's the same
14:50thing.
14:51This guy
14:52who sent me
14:52this message
14:53maybe it'd be
14:54video
14:54maybe it'd be
14:55audio
14:55maybe it'd be
14:56just text
14:57is this
14:58person
14:59properly
14:59authenticated
15:00we have
15:02systems for
15:02that
15:02what Wendy
15:03said
15:04we just
15:04need to
15:05expand
15:05the
15:06systems
15:06truth of
15:07the matter
15:07is that
15:08a lot of
15:09the trust
15:10that we
15:11implicitly
15:12believed in
15:13we get a call
15:13from someone
15:14we hear the
15:15voice
15:15it sounds
15:16like Wendy
15:17it sounds
15:17like Alex
15:18so I'm
15:18fine with
15:19that
15:19this
15:20implicit
15:20trust
15:21is over
15:23we will
15:24need to
15:24have
15:24especially
15:25for
15:25business
15:26communication
15:26or political
15:27communication
15:28we need
15:28to have
15:29multi-factor
15:29authentication
15:30or all
15:31those
15:31elements
15:32of authentication
15:33but we've
15:34been saying
15:34that in
15:35cyber security
15:35for 15
15:37years
15:37and it
15:38isn't working
15:39what makes
15:40you think
15:40it's suddenly
15:41going to
15:41change
15:41with this
15:42scale
15:42of how
15:43AI is
15:43growing
15:43it's going
15:44to be
15:44a huge
15:45problem
15:45for the
15:46world
15:46so
15:47yeah
15:48I totally
15:49agree with
15:50you
15:50it was
15:50ignored
15:51for quite
15:52some
15:52time
15:52but
15:53I don't
15:54think
15:54it's going
15:54to be
15:54ignored
15:55in
15:56next
15:57few
15:57years
15:57and
15:58there's
15:59countries
15:59like
16:00what
16:01we
16:01recently
16:01were
16:01in
16:01Saudi
16:02Arabia
16:02they
16:03have
16:04app
16:04to
16:05digital
16:05identification
16:06of the
16:06people
16:06they have
16:0737 million
16:08population
16:08and 20
16:09million
16:09of people
16:10using
16:10this app
16:11with
16:12two or
16:12three
16:12factor
16:13identification
16:13so
16:14Saudi
16:14Arabia
16:15basically
16:15ready
16:16to put
16:17any
16:17digital
16:18signature
16:18remark
16:19that
16:19I'm
16:19the
16:19person
16:20that
16:20I'm
16:20talking
16:20or
16:21this
16:21video
16:21that
16:23you
16:23see
16:23me
16:24this
16:24is
16:24digitally
16:25signed
16:25so
16:26and
16:27in
16:27Ukraine
16:27we have
16:28Dia
16:28which is
16:29also
16:29like
16:2920 million
16:30users
16:30it's
16:31basically
16:31serve as a
16:32digital
16:32identificator
16:33so we're
16:34also ready
16:34to implement
16:35those
16:35kind of
16:35things
16:36but
16:38you like
16:38it or
16:39not
16:39I think
16:40that's
16:40the future
16:40and
16:41maybe
16:41not
16:42next
16:42month
16:43maybe
16:43not
16:43next year
16:44but
16:44we're
16:44going to be
16:45there
16:45because
16:46after
16:47some
16:47terrifying
16:48examples
16:48of
16:48deep
16:49fakes
16:50which
16:51turn
16:51into
16:51some
16:52death
16:54or
16:54critical
16:55financial
16:56problems
16:56we're going
16:57to turn
16:57to that
16:58the key
16:59thing
16:59indeed
16:59is
17:00governance
17:01and we
17:02have an
17:02issue
17:02with AI
17:03we have
17:03results
17:04that so
17:04far
17:05only
17:05from the
17:07UK
17:07this month
17:0847%
17:09of
17:10organizations
17:10that are
17:11using
17:11AI
17:12are putting
17:13in place
17:13AI
17:14security
17:14policies
17:15okay
17:15that's
17:15crazy
17:16but
17:16as
17:17Alex
17:17said
17:18if
17:19it
17:19becomes
17:19so
17:20important
17:21then
17:22AI
17:22corporate
17:23will
17:23move
17:23or
17:24will
17:24move
17:25to
17:25national
17:26level
17:26as in
17:27Saudi
17:27Arabia
17:27as in
17:28Ukraine
17:28I mean
17:29you go
17:29back to
17:30the obvious
17:30example
17:31of car
17:32security
17:32and the
17:33goddamn
17:33seat belt
17:34all right
17:35it was
17:35invented by
17:36I think
17:37General Motors
17:37back in
17:38the 1950s
17:39in France
17:40we had to
17:40wait
17:401979
17:41to have
17:43it in
17:43place
17:43I don't
17:43know about
17:44the UK
17:45Joe
17:46neither
17:46but anyway
17:47the good
17:48thing is
17:48that if
17:49corporate
17:49does not
17:50move
17:50compared to
17:51the risk
17:51then national
17:53state
17:54regulation
17:55should
17:55perhaps say
17:56as in
17:56Ukraine
17:57well not
17:57exactly
17:58but as
17:58in Saudi
17:59Arabia
17:59and perhaps
17:59the things
18:00that Ukraine
18:01has probably
18:01put in place
18:02at some
18:02point
18:02for some
18:03type of
18:03message
18:03you need
18:04to use
18:05this type
18:05of
18:05authentication
18:06and
18:07may I
18:08just
18:08compliment
18:08to that
18:09we
18:09actually
18:09as
18:11a
18:11society
18:12we
18:12already
18:12went
18:12to
18:12this
18:13process
18:13one
18:13time
18:14remember
18:15every
18:15site
18:16in
18:16web
18:16was
18:17HTTP
18:17in
18:19the
18:19beginning
18:19but
18:20then
18:20after
18:21millions of
18:21sites
18:21created
18:22we
18:22turned
18:23into
18:23HTTPS
18:24because
18:24we
18:25wanted
18:25to
18:25make
18:26sure
18:26that
18:27this
18:27site
18:27is
18:27not
18:28fake
18:28and
18:28there
18:28was
18:29a
18:29mechanism
18:29for
18:29that
18:30and
18:30I'm
18:31sure
18:31Cisco
18:31and
18:32hardware
18:32software
18:33checking
18:34this
18:34and
18:34you
18:35filter
18:35those
18:35sites
18:36because
18:36you
18:36want
18:36to
18:36make
18:36sure
18:37that
18:37you
18:38go
18:38into
18:38the
18:38site
18:38and
18:39you
18:39see
18:39this
18:39green
18:40mark
18:40it's
18:41with
18:41SSL
18:41certificate
18:42and
18:42it's
18:42proved
18:43that
18:43the
18:43site
18:43is
18:43not
18:44fake
18:44something
18:45is
18:45going
18:45to
18:45happen
18:45with
18:45the
18:46human
18:46I
18:47think
18:47yeah
18:47I mean
18:48just
18:48recently
18:49just
18:50today
18:50I was
18:51reading
18:51reports
18:52that
18:52a
18:53search
18:53on
18:54Google
18:54about
18:54how
18:55to
18:55keep
18:55cheese
18:55sticking
18:56to
18:56pizza
18:57was
18:57being
18:57answered
18:58by
18:58AI
18:59by
18:59saying
18:59you
19:00put
19:00some
19:00glue
19:01in
19:02between
19:02the
19:03cheese
19:03and
19:03the
19:03pizza
19:04to
19:04get
19:04the
19:04cheese
19:04to
19:05stick
19:05and
19:05that
19:05came
19:06from
19:06a
19:07poster
19:08from
19:09Reddit
19:09from
19:10years
19:10ago
19:11who
19:11was
19:11just
19:12making
19:12something
19:12up
19:13and
19:13he
19:13said
19:13yeah
19:13you
19:13just
19:14use
19:14some
19:14non
19:14toxic
19:15glue
19:15but
19:16of
19:16course
19:16AI
19:16doesn't
19:17understand
19:17that
19:18that's
19:18a
19:18ridiculous
19:18idea
19:19and
19:19it
19:19just
19:20repeated
19:20it
19:20so
19:21every
19:21time
19:22we
19:22see
19:22things
19:23like
19:23that
19:23I
19:24believe
19:24that
19:25we
19:25as
19:26citizens
19:26are
19:26going
19:27to
19:27learn
19:27how
19:27to
19:28test
19:28the
19:28reality
19:29of
19:29everything
19:30that
19:30we're
19:30seeing
19:30and
19:31maybe
19:31in
19:32some
19:32way
19:32we're
19:33all
19:33going
19:34to
19:34have
19:34to
19:34think
19:34like
19:35journalists
19:35we're
19:36going
19:36to
19:36have
19:36to
19:36do
19:37fact
19:37checking
19:38just
19:39as
19:39journalists
19:40do
19:40today
19:40that'll
19:41be
19:41something
19:41a
19:42skill
19:42that
19:42everyone
19:43has
19:43to
19:43learn
19:43and
19:44as
19:44Alex
19:44said
19:45we
19:46will
19:46be
19:46helped
19:47on
19:47that
19:47skill
19:47by
19:48additional
19:49AI
19:49systems
19:49that
19:49will
19:50be
19:50to
19:50trace
19:50back
19:51who
19:51was
19:51the
19:51first
19:52guy
19:52to
19:52say
19:52that
19:52and
19:53what
19:53is
19:53his
19:54credibility
19:54for
19:54example
19:54yeah
19:55data
19:55provenance
19:56is a
19:56thing
19:57that
19:57people
19:57have
19:57been
19:58working
19:58on
19:58and
19:58in
19:58fact
19:59that's
20:00a
20:00very
20:00difficult
20:01cyber
20:01problem
20:02that
20:03has
20:03been
20:04on
20:04the
20:04list
20:05at
20:05least
20:05in
20:05the
20:05US
20:06of
20:06hard
20:06cyber
20:07problems
20:07ever
20:07since
20:082006
20:08so
20:10but
20:10you're
20:11talking
20:11very
20:11optimistically
20:12about
20:13society
20:14going
20:15through
20:15a
20:15revolution
20:15in
20:16order
20:16to
20:16make
20:16us
20:16safe
20:17from
20:17the
20:17dangers
20:18so
20:18let's
20:19go
20:19back
20:19to
20:19the
20:19question
20:20for
20:20the
20:20people
20:20here
20:21is
20:21AI
20:21making
20:22us
20:22safe
20:23or
20:23is
20:23it
20:23more
20:23dangerous
20:23currently
20:24by the
20:25sounds
20:25of it
20:26it's
20:27more
20:27dangerous
20:28I mean
20:28putting
20:28glue
20:29on a
20:29pizza
20:29is
20:30not
20:30good
20:30but
20:31am I
20:32right
20:32in
20:32my
20:32assumption
20:32of
20:32what
20:33you're
20:33saying
20:33no
20:34again
20:34currently
20:36oh
20:36hey
20:36currently
20:37we don't
20:37know
20:37for
20:37sure
20:38but
20:39at
20:39this
20:40stage
20:40we
20:41haven't
20:41seen
20:42and
20:43correct me
20:43Wendy
20:44and Alex
20:44but
20:44we haven't
20:45seen
20:45a big
20:46wave
20:48of
20:48bad guys
20:49using
20:49AI
20:49except
20:50for
20:50deep
20:50fakes
20:51the
20:51good
20:51thing
20:52is
20:52that
20:53all
20:53the
20:54industry
20:54the
20:54cyber
20:55industry
20:55guys
20:55if I
20:56may
20:56put
20:56it
20:56this
20:56way
20:57have
20:57already
20:57invested
20:58it's
20:59a race
20:59it's
20:59always
21:00a race
21:00between
21:00the
21:00good
21:01and
21:01the
21:01bad
21:01at
21:02this
21:02stage
21:02because
21:03AI
21:03is
21:04such
21:04a
21:04great
21:05thing
21:05you
21:05put
21:05AI
21:06here
21:06and
21:06all
21:07the
21:07people
21:07come
21:08that
21:09helps
21:09to
21:09sell
21:10and
21:11the
21:11cyber
21:12security
21:12guys
21:12Microsoft
21:13Google
21:14Palo Alto
21:14Networks
21:15and then
21:16a whole
21:16slew of
21:17new
21:17cyber
21:17security
21:17companies
21:18thanks to
21:19VCs
21:19are coming
21:20up
21:20right
21:20now
21:21so
21:23usually
21:24it sells
21:24fear
21:25at this
21:26stage
21:26if we
21:27have to
21:27be
21:27honest
21:27we
21:28haven't
21:28seen
21:28a big
21:29bad
21:29wave
21:30so
21:30I'll
21:30give
21:31you
21:31an
21:31example
21:31of
21:32how
21:32AI
21:33and
21:34automation
21:34could
21:35be
21:35making
21:36us
21:36safer
21:36in
21:37the
21:37future
21:37you
21:37know
21:38the
21:38old
21:38saying
21:38that
21:41the
21:42bad
21:42guys
21:43the
21:43attackers
21:44only
21:44have
21:44to
21:45be
21:45right
21:45once
21:45when
21:46they're
21:46attacking
21:46a
21:47target
21:47and
21:48the
21:48defenders
21:48have
21:49to
21:49be
21:49right
21:49every
21:50time
21:50I
21:50hate
21:50that
21:51saying
21:51by
21:51the
21:51way
21:53it's
21:53true
21:54in
21:54one
21:54way
21:54but
21:54if
21:55you
21:55turn
21:55it
21:55around
21:56and
21:56you
21:56look
21:56at
21:57an
21:57attack
21:57and
21:58if
21:58you're
21:58talking
21:58about
21:59sharing
21:59threat
21:59intelligence
22:00then
22:01all
22:01it
22:01takes
22:02is
22:02for
22:02the
22:06then
22:06if
22:07you're
22:07sharing
22:07that
22:08information
22:08among
22:09all
22:09other
22:09defenders
22:10then
22:10it's
22:11the
22:11attacker
22:11who
22:13has to
22:14be
22:14right
22:14every
22:15time
22:15and
22:15as
22:15soon
22:15as
22:16he
22:16messes
22:16up
22:16he's
22:17caught
22:17this
22:18is
22:18something
22:18that
22:18AI
22:19can
22:19help
22:19us
22:19with
22:20and
22:20it
22:20can
22:20be
22:21even
22:21augmented
22:21by
22:22deception
22:22which
22:23is
22:23hard
22:23to
22:23do
22:23at
22:24scale
22:24but
22:25all
22:25of
22:25a
22:25sudden
22:26may
22:26be
22:26possible
22:26so
22:27you
22:27could
22:27get
22:27the
22:28information
22:28before
22:28even
22:29bad
22:29things
22:29happens
22:29and
22:30then
22:30exchange
22:31in
22:31the
22:31sharing
22:31networks
22:31right
22:32again
22:33everything
22:33that
22:34works
22:34for
22:35the
22:35bad
22:35guys
22:35can
22:36also
22:36work
22:36for
22:36us
22:37but
22:37we
22:38have
22:38a lot
22:38more
22:39constraints
22:39because
22:40we
22:40are
22:40trying
22:40to
22:40use
22:41AI
22:41ethically
22:42which
22:42the
22:43bad
22:43guys
22:43are
22:43not
22:43constrained
22:44by
22:45for
22:45what
22:46it's
22:46worth
22:46I
22:46would
22:46agree
22:47with
22:47you
22:47as
22:48a
22:49journalist
22:50my
22:51antennas
22:51always
22:52alert
22:52to
22:52any
22:53big
22:53cyber
22:54attack
22:54that
22:54may
22:54have
22:54been
22:55AI
22:55enabled
22:55and
22:56we
22:56haven't
22:56really
22:57that
22:57we
22:58know
22:58of
22:58seen
22:59that
22:59kind
22:59of
22:59threat
23:00yet
23:00obviously
23:01social
23:02engineering
23:02yes
23:03LLMs
23:04yes
23:05they are
23:05improving
23:06things
23:07massively
23:07but I
23:08suppose
23:08it's also
23:09hard to
23:09detect
23:09isn't
23:10it
23:10well
23:10again
23:11doesn't
23:12mean
23:12it's
23:12not
23:12going
23:12to
23:13happen
23:13this
23:14new
23:14layer
23:14is
23:15very
23:15new
23:16and
23:17unfortunately
23:17when we
23:18look at
23:18the history
23:19of new
23:19technologies
23:20being
23:20introduced
23:21ever
23:21from
23:22your
23:22commercial
23:23development
23:23or
23:24even
23:24when you
23:24have
23:25a
23:25crisis
23:25then
23:26typically
23:27we would
23:27expect
23:2812
23:2924
23:30months
23:31later
23:31a big
23:32crime
23:32wave
23:32great
23:33example
23:33during
23:34COVID
23:34all of a
23:35sudden
23:35there was
23:36this
23:36forced
23:36transition
23:37into
23:37remote
23:38working
23:38and
23:39visual
23:40conferences
23:40that people
23:41were not
23:41prepared
23:41about
23:42and it
23:42came
23:42too
23:42quickly
23:43and
23:44then
23:44we had
23:44this
23:45big
23:45wave
23:46profiting
23:46of that
23:46so
23:47in
23:47theory
23:48I
23:48myself
23:49I
23:49would
23:49expect
23:50a
23:50big
23:51wave
23:51to
23:51come
23:51we
23:52just
23:53haven't
23:53seen
23:53it
23:53yet
23:54that's
23:54all
23:54I
23:55can
23:55say
23:55and
23:56moving
23:56slightly
23:56away
23:56from
23:57cyber
23:57security
23:57but
23:57staying
23:58in
23:58AI
23:58and
23:59cyber
23:59Alex
23:59we've
23:59got
24:00you
24:00here
24:00how
24:01is
24:01AI
24:01changing
24:01things
24:02on the
24:02battlefield
24:03well
24:04we consider
24:05AI
24:06could be a game
24:07changer
24:07and actually
24:07it's already
24:08we already
24:09started this
24:10process
24:10we see
24:11we see
24:12we see more
24:12and more
24:12solutions
24:13that
24:13completely
24:14autonomous
24:14I'm
24:16talking
24:16more
24:16about
24:16computer
24:16vision
24:17and
24:18you
24:19might
24:19heard
24:19of
24:19massive
24:20use
24:21of
24:21drones
24:21in
24:22Ukraine
24:23Russian
24:23war
24:23and
24:24just
24:25this
24:25year
24:25only as
24:26a
24:27government
24:27we procured
24:28about
24:28a million
24:29of drones
24:29this year
24:30and
24:32a very
24:33small portion
24:33of them
24:34but they
24:34are already
24:35on
24:35battlefield
24:35that are
24:36able to
24:37completely
24:37autonomously
24:38fly
24:38without
24:39operator
24:40so
24:40usually
24:41how it
24:41looks like
24:42it's
24:43rise up
24:43in the
24:44air
24:44then
24:44operators
24:45see
24:46target
24:46on
24:46control
24:47clicks
24:47on
24:48confirm
24:48and
24:49if
24:49the
24:49target
24:49is
24:50moving
24:50the
24:51drone
24:51is
24:51following
24:51this
24:52target
24:52until
24:52it
24:53hit
24:53it
24:53they
24:54also
24:55AI
24:55has been
24:55using
24:55to
24:56identify
24:56targets
24:57scan
24:58images
24:58satellite
24:59images
25:00for
25:00masked
25:01equipment
25:03human
25:04forces
25:04armored
25:05vehicles
25:05and
25:06also
25:08for
25:08even
25:09on
25:10situation
25:11awareness
25:11system
25:12so
25:12the
25:12software
25:13that
25:13have
25:14all
25:15the
25:15data
25:15on
25:15battlefield
25:15and
25:16suggest
25:16what
25:16you
25:16do
25:16next
25:17so
25:18we
25:18kind
25:19of
25:19slowly
25:19moving
25:19to
25:20machines
25:22fight
25:23instead
25:23of
25:23humans
25:24and
25:25even
25:25making
25:25decisions
25:26instead
25:26of
25:26humans
25:27we
25:28mentioned
25:28earlier
25:28with
25:29cyber
25:29there's
25:29this
25:30AI
25:30versus
25:31AI
25:31situation
25:32are you
25:32seeing
25:32that
25:33when
25:34you're
25:34in
25:34conflict
25:34with
25:35Russia's
25:35versions
25:36of these
25:36technological
25:37developments
25:37well
25:38not
25:38yet
25:38but
25:38I'm
25:39sure
25:39we're
25:40going
25:40to
25:40see
25:40soon
25:40footages
25:41of
25:43robots
25:44because
25:44we
25:45recently
25:45launched
25:46an
25:46initiative
25:46like
25:46six
25:47months
25:47ago
25:47army
25:47of
25:48robots
25:48so
25:48now
25:48we
25:48do
25:49a lot
25:49of
25:49robots
25:49they're
25:50going
25:50to
25:50substitute
25:50people
25:51on
25:51front lines
25:52there was
25:52basically
25:53four types
25:53of robots
25:54kamikaze
25:55robots
25:55mining
25:56demining
25:57robots
25:57sentinel
25:58robots
25:58that
25:59has
25:59AI
25:59and
26:00computer
26:00vision
26:00components
26:01so
26:01they
26:01can
26:01target
26:01and
26:02they
26:02choose
26:02target
26:03if
26:03they see
26:04moving
26:04target
26:04they
26:05identify
26:05it
26:05they
26:06can
26:06even
26:06distinguish
26:06people
26:07one
26:07army
26:08from
26:08another
26:09one
26:09uniform
26:10from
26:10another
26:10uniform
26:10and
26:11evacuation
26:12robots
26:12and
26:13I'm
26:13sure
26:14that
26:14we're
26:14going
26:14to
26:14see
26:14soon
26:15how
26:15let's
26:16say
26:16a robot
26:16hit
26:17by
26:17drone
26:18driven
26:19by AI
26:19so
26:20that's
26:20going to
26:21be
26:21for
26:21sure
26:22it's
26:22Terminator 2
26:23isn't
26:23it
26:23it's
26:24you know
26:24I shouldn't
26:25even joke
26:25about it
26:25because
26:26it is
26:26here
26:26it's
26:27literally
26:27we're
26:28moving
26:28towards
26:28this
26:30rapidly
26:30the
26:31key
26:32everything
26:32evidently
26:33because
26:33you
26:33have
26:34AI
26:34systems
26:34it
26:35records
26:35what
26:36happens
26:36so
26:38one
26:39reason
26:39actually
26:40the US
26:41army
26:41has pushed
26:41forward
26:42into AI
26:42that's not
26:43the main
26:43reason
26:43the main
26:43reason
26:44is
26:44that
26:44you're
26:45going to
26:45reduce
26:45the
26:46staffing
26:46so
26:46that
26:47you
26:47can
26:47redeploy
26:47to other
26:48things
26:48and
26:48you
26:49have
26:49better
26:49sometimes
26:50not
26:51always
26:51sometimes
26:51better
26:52targeting
26:52than
26:52with
26:52human
26:53beings
26:53and
26:54you
26:54can
26:54also
26:55especially
26:55for
26:55drones
26:56resist
26:56the
26:56jamming
26:57force
26:57but
26:58an
26:58additional
26:59potential
26:59element
27:00which
27:00has
27:00been
27:00studied
27:01by
27:01people
27:01like
27:01Paul
27:02Scharr
27:02at
27:02the
27:02Center
27:03for
27:03New
27:03American
27:03Security
27:04with
27:04the
27:05US
27:05army
27:05is
27:06how
27:06actually
27:06because
27:07you
27:07collect
27:08to record
27:08the
27:09modes
27:09of
27:09action
27:10and
27:11in
27:11Western
27:11armies
27:12then
27:12this
27:12can
27:13be
27:13seen
27:13by
27:13actually
27:15judicial
27:16elements
27:17within
27:17Western
27:18armies
27:18that can
27:18check
27:19if the
27:19law
27:19of
27:19armed
27:19conflicts
27:20is
27:20being
27:20properly
27:21respected
27:21you
27:22can
27:22have
27:23actually
27:23the
27:23ability
27:24to
27:24check
27:24better
27:25if
27:25indeed
27:26there
27:27is
27:28proportionality
27:28if
27:29indeed
27:30you
27:30are
27:30going
27:30to
27:30focus
27:31as
27:31much
27:31as
27:31you
27:32can
27:32only
27:32on
27:33the
27:33armed
27:34militants
27:34or
27:34the
27:35armed
27:35forces
27:36and
27:36reduce
27:36collateral
27:37damage
27:38reduce
27:38non-combatants
27:39being
27:41harmed
27:42by
27:42situation
27:43you
27:44have
27:44a
27:44record
27:45of
27:45that
27:45you
27:46can
27:46work
27:46on
27:46that
27:47and
27:47there
27:48is
27:48the
27:48ability
27:48to
27:49properly
27:49put
27:50in
27:50place
27:50the
27:50arm
27:50conflicts
27:51also
27:53I
27:54was
27:54going
27:54to
27:54ask
27:54you
27:54guys
27:54you
27:55talk
27:55about
27:55AI
27:56able
27:56to
27:56recognize
27:57between
27:58uniforms
27:58really
27:59that's
27:59complex
28:00stuff
28:00isn't
28:00it
28:00what
28:01about
28:01moving
28:02away
28:02from
28:02the
28:02battlefield
28:02if
28:03we
28:03can
28:04what
28:04about
28:04facial
28:05recognition
28:05because
28:06do
28:07you
28:07as
28:07a
28:07panel
28:07think
28:08that's
28:08making
28:08us
28:08safer
28:09or
28:10is
28:10it
28:10causing
28:10us
28:10harm
28:12well
28:12we
28:13as
28:13humans
28:13have
28:14never
28:14been
28:15really
28:15that
28:16good
28:16at
28:16writing
28:17software
28:17I'll
28:18just
28:18put
28:18it
28:18that
28:18way
28:19and
28:19if
28:20you
28:20look
28:20very
28:21closely
28:22at
28:23AI
28:23or
28:23anything
28:24if
28:24you
28:24squint
28:24a
28:25little
28:25bit
28:25it's
28:26all
28:26still
28:26software
28:27so
28:28yes
28:28we
28:29have
28:29a lot
28:29of
28:29bias
28:30built
28:30into
28:31our
28:31models
28:32today
28:32we
28:33have
28:33a lot
28:33of
28:33mistakes
28:33we
28:34have
28:34a lot
28:34of
28:34flaws
28:35and
28:35people
28:36do
28:36suffer
28:36as
28:37a
28:37result
28:37of
28:38it
28:38whether
28:38they
28:39are
28:39being
28:39tagged
28:39falsely
28:41as
28:41shoplifters
28:42and
28:42being
28:43harassed
28:44and
28:44embarrassed
28:45when
28:45they
28:45go
28:45into
28:45a
28:46shop
28:46whether
28:47they
28:47simply
28:48can't
28:48get
28:48a
28:49system
28:49to
28:49operate
28:50because
28:50their
28:50skin
28:50is
28:51the
28:51wrong
28:51color
28:52because
28:53the
28:53system
28:54hasn't
28:54been
28:55trained
28:55on
28:55people
28:56of
28:56every
28:56skin
28:57color
28:57there
28:58are
28:59real
28:59life
28:59ramifications
29:00for
29:01people
29:01but
29:02they're
29:02getting
29:02better
29:02all the
29:02time
29:03aren't
29:03they
29:03if
29:04you
29:04look
29:04at
29:05the
29:07stats
29:07on
29:08the
29:08latest
29:08facial
29:10recognition
29:10systems
29:11compared
29:11to
29:11humans
29:12they're
29:13way
29:13better
29:13there's
29:15examples
29:16in
29:16healthcare
29:16that
29:17shows
29:18that
29:18it
29:19could
29:19do
29:19much
29:20better
29:22diagnosis
29:23and
29:24I
29:25think
29:25there's
29:26definitely
29:27a future
29:27in
29:27healthcare
29:28for
29:28vision
29:29I
29:30recently
29:31approached
29:32by a
29:32person
29:33that
29:33told
29:34me
29:34that
29:34there's
29:35studies
29:35that
29:37precision
29:38of
29:38AI
29:39computer
29:39vision
29:40on
29:42cases
29:42with
29:43cancer
29:43are
29:44much
29:44better
29:44than
29:44with
29:45the
29:46human
29:47and
29:47just
29:48to
29:48add
29:48a
29:48little
29:49bit
29:49on
29:49the
29:49evolution
29:50because
29:50Wendy
29:50is
29:51right
29:51and
29:51we
29:51had
29:51studies
29:52back
29:52in
29:522018
29:53from
29:54from
29:58MIT
29:59that
29:59showed
29:59that
30:00indeed
30:00the
30:01error
30:02rate
30:02was
30:02only
30:030.8%
30:04when you
30:04have
30:04light-skinned
30:05male
30:05but
30:06actually
30:0632%
30:07when you
30:07are
30:07dark-skinned
30:08women
30:08so that's
30:10not good
30:10at all
30:11that
30:11being said
30:12hopefully
30:13this
30:13evolution
30:13NIST
30:14the
30:14National
30:15Institute
30:15for
30:15Science
30:15Technology
30:16that
30:17set
30:17the
30:17standards
30:17in
30:18the
30:18United
30:18States
30:18check
30:19that
30:19every
30:19year
30:20latest
30:20results
30:21that I've
30:21seen
30:21in
30:22the
30:22last
30:23year
30:23seems
30:24to have
30:24shown
30:24that
30:25there
30:25was
30:26improvement
30:27the error
30:27rate
30:28the ability
30:28to have
30:29a
30:29maximum
30:300.3%
30:32of false
30:32positive
30:34was achieved
30:35for 45
30:36out of 100
30:37algorithms
30:39if the
30:40image quality
30:40was good
30:41and whatever
30:42the social
30:42demographics
30:43so that's
30:43an improvement
30:44the issue
30:44is that
30:45if the
30:45image
30:46was much
30:46older
30:47of 12
30:48years old
30:49then
30:50the rate
30:50was much
30:52worse
30:53so there's
30:53still
30:54improvements
30:55but at
30:56least
30:56NIST
30:56is there
30:57and we
30:57go back
30:57to
30:57governance
30:58regulation
30:58NIST
30:59is there
30:59to say
31:00this is
31:00okay
31:01and this
31:01is not
31:02okay
31:02and that's
31:03why we
31:03also have
31:04there are
31:04bug bounty
31:05programs
31:06for AI
31:07for bias
31:08and I think
31:09that they've
31:09been going
31:09on for a
31:10few years
31:10now and I
31:11think we
31:11need that
31:12just as
31:12much as
31:13we need
31:13bug
31:14bounties
31:14for other
31:15types of
31:16software
31:16and I
31:17suppose we
31:17need the
31:18AI makers
31:18to be
31:19open to
31:19have their
31:21algorithms
31:21checked
31:22and their
31:22systems
31:23hacked into
31:23as well
31:24shall we
31:25go to
31:26some
31:26questions
31:26are there
31:27any
31:27questions
31:28oh here's
31:29one
31:29at the
31:30front
31:33you've
31:34got a
31:34couple of
31:34minutes
31:35left
31:35yes
31:36hello
31:37thank you
31:37for the
31:38nice
31:38panel
31:38I have
31:39maybe one
31:40angle that
31:41wasn't
31:42discussed
31:42about the
31:43increasing
31:43of risk
31:44of cyber
31:45attacks
31:45so you
31:46mentioned
31:47that humans
31:48are typically
31:48quite bad
31:49at coding
31:49and especially
31:50when you talk
31:51about hacking
31:52there is a
31:53very high
31:54barrier
31:54entry barrier
31:55so to be a
31:56good hacker
31:56you have to
31:57be a great
31:57programmer
31:58and one
31:59risk that I
31:59see is that
32:00LLM and
32:01AI in general
32:02are going to
32:02enable a lot
32:04more people
32:05to access
32:05this level
32:07of software
32:08development
32:08and so then
32:10potentially a lot
32:11more hackers
32:11as well
32:13absolutely
32:13just one
32:14thing
32:14you're right
32:15on the
32:15quality of
32:16coding
32:16that's getting
32:17better
32:17but it's
32:18still not
32:18there yet
32:18plus I
32:19would add
32:20the issue
32:20of shadow
32:21AI
32:22lots of
32:23people
32:23actually
32:24use already
32:25AI
32:26generative
32:27stuff
32:27without
32:28telling
32:28corporate
32:29and figures
32:30are like
32:30all the
32:31people
32:31are doing
32:32that
32:3278%
32:33may be
32:34using it
32:35as shadow
32:35AI
32:36so that's
32:36again
32:37without any
32:37governance
32:37policy
32:38that's an
32:38issue
32:38yeah
32:39the other
32:40thing is
32:41that
32:41as humans
32:42we are
32:43not
32:43really
32:43capable
32:44cognitively
32:45of
32:46writing
32:47software
32:47and then
32:48troubleshooting
32:49it
32:49at the
32:49same
32:50time
32:50there's
32:51a great
32:52interview
32:52with
32:52Damon
32:52Cortese
32:53where he
32:53talked
32:54about
32:54he's
32:55a famous
32:55hacker
32:56and he
32:57talked
32:57about
32:57how he
32:58wrote
32:58some
32:59software
32:59got it
33:00to
33:00function
33:00and then
33:01put his
33:02hacker
33:03hat
33:03back on
33:04and checked
33:04it
33:04and he
33:05found
33:05all sorts
33:05of flaws
33:06that he
33:06had written
33:07into it
33:07so it
33:08is not
33:09possible
33:09for our
33:10brains
33:10to hold
33:11both states
33:11at the
33:12same time
33:12which is
33:12why we
33:13will always
33:14be checking
33:14AI
33:15it's never
33:16going to
33:16replace us
33:17completely
33:17we will
33:18always need
33:18to be
33:19checking
33:19our own
33:19work
33:20should we
33:20have one
33:21more quick
33:21question
33:22if there is
33:22one
33:22there's one
33:23at the
33:23back there
33:24I have
33:25Mike
33:25so I'll go
33:26a lot
33:28a lot
33:28of you
33:28mentioned
33:28that
33:30depersonalization
33:31is an
33:32issue
33:32and using
33:33someone still
33:34in identity
33:34can be
33:35an issue
33:35so I
33:36wonder
33:36a part
33:37of
33:37better
33:38AI
33:38versus
33:39like
33:40evil
33:40AI
33:41what
33:41other
33:42technology
33:42can be
33:43solving
33:44problem
33:46for that
33:46and
33:47it's
33:47not
33:48cool
33:48blockchain
33:49anymore
33:50now
33:50everything
33:51is about
33:51AI
33:51but I
33:52wonder
33:53what
33:53this
33:53group
33:54of
33:54people
33:54think
33:55about
33:55this
33:56yeah
33:57it
33:57never
33:57was
33:58blockchain
33:58sorry
34:00again
34:01authentication
34:02you know
34:02as we
34:03mentioned
34:03biometric
34:04basically
34:05this
34:05is a
34:06certain
34:06process
34:07already
34:07in place
34:08how to
34:08make it
34:09you basically
34:10come
34:10as a
34:11physical
34:11person
34:11somewhere
34:12you
34:12give up
34:14your
34:14biometric
34:15you
34:15give up
34:15your
34:15other
34:17parameters
34:17and then
34:18they've
34:18been stored
34:19and then
34:19you assign
34:20with
34:20some
34:21let's
34:22say
34:22ID
34:22related
34:23to
34:23these
34:23things
34:24and then
34:24you can
34:25use it
34:25as a
34:26token
34:26or
34:26you
34:27can
34:27use
34:27it
34:27as
34:27an
34:27app
34:28and
34:28there's
34:28numerous
34:29ways
34:29and
34:30basically
34:30it also
34:31could be
34:33some
34:33database
34:34of
34:34those
34:35things
34:35using
34:36internationally
34:38I think
34:39that's
34:39kind of
34:39version
34:40of
34:40digitizing
34:41of
34:41biometric
34:42passwords
34:42so
34:44I think
34:45not complex
34:46we're pretty
34:47much out of
34:47time
34:47but if you
34:48have a
34:48point
34:48no
34:48go ahead
34:48okay
34:49well thank
34:49you very
34:50much
34:50panel
34:50thank you
34:53very much
34:54for the
34:54questions
34:54and thanks
34:54for listening
34:55thank you
34:56thank you
Commentaires