The Autonomous Coder: How is Agentic AI Revolutionizing Software Development?

Transcript
00:00Thank you for joining us today for this fantastic panel discussion
00:06on agentic AI.
00:08My name is Stéphane Bout, I am a senior partner at McKinsey,
00:11leading QuantumBlack, AI by McKinsey, in France
00:14and co-leading McKinsey's agentic service line worldwide.
00:20Today, I have the honor of welcoming two premier tech leaders,
00:26Devina Pasta, CEO of Software Business at Siemens Mobility,
00:32and Eiso Kant, CTO and co-founder of Poolside.
00:37Today, we are going to explore the new paradigm of agentic coding
00:43that could revolutionize software development.
00:48Two years ago, McKinsey published a report on the value creation potential of generative AI:
00:553 to 4 trillion dollars,
00:59out of which 20% to 25% was specifically related to software and data engineering,
01:07with in particular three key use cases:
01:11Gen AI supporting application development and maintenance,
01:14Gen AI supporting legacy system modernization,
01:17and also Gen AI supporting data engineering activities.
01:23Devina, since the surge of ChatGPT in 2022,
01:27we have seen the emergence of multiple Gen-AI-powered tools for developers.
01:34From a software editor point of view,
01:38could you share your perspective on that kind of tool
01:41in terms of deployment, adoption, impact,
01:44and, of course, challenges?
01:47Thanks, Stéphane.
01:48Hi, everyone, from my side, too.
01:50So, firstly, maybe a little bit about Siemens.
01:52What do we do?
01:53We build everything from trains to automation and digital solutions
01:57for transportation, energy grids, factories,
02:00and, of course, software is at the core of that.
02:03We are building software that helps, for example,
02:06design, operate, and simulate factories in the digital world
02:10before you actually even build one single element in the real world.
02:14And once it's in the real or physical world,
02:16we're building the digital intelligence layer that goes into it,
02:20which means that, of course,
02:21software is absolutely core to everything we do.
02:24We've invested over $15 billion just in this year on software acquisitions.
02:29So, for us, Gen AI in software is about redefining
02:34how we do industrial-grade software.
02:37Our approach specifically in terms of Gen AI for developers is twofold.
02:42We look at access and enablement.
02:45As the word suggests, access.
02:48Access is how do we make sure that developers across the company
02:52are getting access to the relevant, and I want to stress relevant,
02:56AI tools and models.
02:58That's really one core component.
03:00And, you know, we all know that AI tools and models
03:04are being developed and released, I feel like,
03:07every time we breathe, right, there's a new version.
03:09So what we also do is we have a subgroup
03:12or a set of selected developers that actually just try.
03:16They try, they experiment across different models
03:20to also figure out what is relevant in our world.
03:24And for the other, let's say, GitHub Copilot in agent mode,
03:28Amazon Bedrock that we're using for specific use cases
03:32that need sophisticated agents,
03:34for those, those are being rolled out across the company
03:38to, like, tens of thousands of developers,
03:38while we also have, then, this playground where we can try things.
03:43And then the second is enablement,
03:45because just, you know, throwing a tool over the fence is great,
03:48but I think adoption in terms of how do we drive AI literacy,
03:53how do you best use those tools through,
03:55we have boot camps, we have collaboration workshops
03:58with our partners.
04:00So what we're trying to build is what I call
04:02a gen AI learning culture.
04:04And I would say that's kind of a two-fold strategy.
04:08In terms of impact, since you mentioned impact,
04:11I know everyone talks about productivity,
04:13and, of course, that's absolutely core for us, too.
04:16And, you know, you can, in some areas, we see 30%,
04:19in some areas, 10%,
04:21some areas, everything in between and even higher.
04:24But I think what's core is with the access
04:26and enablement focus that we have,
04:29we're trying to look at how many people
04:31are actually using these tools, not just using,
04:34but also using them in the right manner,
04:36meaning what are the reliability rates?
04:39How much of the code that is generated
04:41is actually accepted?
04:43What are really the ways that a model is being used?
04:47Is it being used as a partner?
04:49Is it being used as a co-creator?
04:50And these are things we're working on to measure,
04:53because I think those are kind of productivity
04:55is a by-product of all of these elements.
04:59And you mentioned challenges.
05:01I mean, there are many challenges on the technical side
05:03in terms of hallucinations, vibe coding, all of that.
05:07But the one thing I want to mention is,
05:09I see it a lot in our environments,
05:12is, you know, the role of the software developer
05:15is changing heavily.
05:17And the reason I say it's a challenge today
05:20is because I don't think everyone has figured out
05:24how we want to deal with this.
05:26It means we need to work to re-skill our software developers
05:30from manual code development
05:32to actually critically analyzing everything
05:36that's being developed with the support of the tool
05:39or with the tool itself.
05:40Or also having kind of strategic oversight
05:43on how do I manage all the agents that I have in the field?
05:48How do I manage the code that is being developed?
05:50So I think this is one piece where I see
05:52that there's a lot more work to be done.
05:54Thanks, Devina.
05:55Eiso, Poolside is building foundation models
06:00specifically for software development.
06:02Could you share your perspective
06:04on what an optimal Gen AI tool
06:07for a software developer should look like?
06:11And maybe also explain
06:12what sets your models
06:15apart from general-purpose models,
06:19and in particular how your RLCEF, reinforcement learning
06:24from code execution feedback, approach
06:26makes them particularly robust
06:29when applied to real-life code bases?
06:32Thank you, Stéphane.
06:34So I think it's worth...
06:35There's a lot of questions in there,
06:36so let's start with breaking them down.
06:39Our view is that the form factor,
06:42the tools that we interact with at AI,
06:45continuously change as the models gain more capabilities
06:48and more reliability.
06:49And what we've seen in the last couple of years
06:52is that we've gone in software development
06:53from a world where we started with code completion,
06:56we then got chat,
06:57we're now starting to see early signs of agentic tasks,
07:01and the end state is autonomy,
07:03the ability for an agent and the model itself
07:06to have very little difference in their capabilities
07:08between ourselves
07:09and what AI itself is capable of.
07:12But we're not there yet.
07:14And so along the way, the form factor changes.
07:17And if we bring that down to where we focus on
07:20from a model perspective and the stack on top,
07:24the most important thing that is happening
07:27in our space is the increase in capabilities and models.
07:30We have a phrase in our company
07:32that we repeat over and over again
07:34that says everything at the end of the day
07:36collapses into the models.
07:37As the models are getting more capable,
07:40they're becoming more agentic in nature.
07:42They're not just conversational back and forth with you,
07:45they're learning how to take actions,
07:46they're learning how to use tools,
07:48and we're not that far away from them
07:50being able to operate
07:50in fully virtual machine environments.
07:53And so if you anthropomorphize this a little bit,
07:56we're closing the gap between AI model capabilities
07:59and what we can do as software developers.
08:01But we are still incredibly relevant in this world,
08:05us as developers,
08:06but our role, to your point, is changing.
08:08We're finding ourselves,
08:09as the models are more capable, more reliable,
08:12increasingly more in the role
08:13of being a high-agency person
08:15determining what task is set off to do
08:18by the agent or by the model,
08:20be there to review,
08:22be there to guide along the way,
08:24be there to scope things.
08:25But as models get more capable,
08:27we go from narrow tasks with narrow scope
08:29to increasingly larger scope
08:31and larger objectives.
08:32And this has a pretty fundamental shift
08:35for how enterprises and organizations operate.
08:38Today, our means of creation is a team,
08:43an organization of 100, 1,000, 10,000,
08:4625,000 software engineers in the case of Siemens.
08:50But what we're starting to add onto this
08:52is agents that are working synchronously with us,
08:55back and forth,
08:57agents that are asynchronously working with us,
08:59we're sending them off on a task,
09:01and they might come back to us
09:0210 minutes or half an hour later.
09:05And we're not far off from a future
09:07where there will be agents
09:08that will be autonomously running.
09:10And we can touch upon that later,
09:11but that are part of our SDLC or other places.
09:14Now, to your second part of your questions,
09:17how are we different in our world
09:19and how do we do things differently?
09:23It's, while we focus on software development and coding,
09:26a big part of our training
09:28looks identical to general purpose models.
09:31For a model to become capable in software development,
09:34it needs to become capable
09:35across all areas of intelligence,
09:37in understanding the world,
09:38in reasoning over it,
09:40in planning for it.
09:41But earlier in the training
09:43than other foundation model companies,
09:45we bias our models
09:46towards software development tasks.
09:48And the way we do that
09:50is through reinforcement learning
09:51from code execution feedback.
09:53What that means is,
09:54for the developers in the room,
09:55we have an environment
09:56with close to a million containerized repositories.
10:00These are all public Git repos.
10:02And in those million containerized repos,
10:05we take our model and the agent
10:07and we set them off
10:08to do tens and hundreds
10:10and soon billions of tasks
10:11and learn from when they're right
10:13and when they're wrong.
10:14Right and wrong is determined
10:16by a host of different signals.
10:17Does the code compile?
10:19Are tests written?
10:20Do they pass?
10:21And a host of other signals along the way.
10:23And the way to think about this
10:25is that where models
10:26have been traditionally trained
10:27on predicting the next token,
10:29primarily on web data,
10:31here models are being trained
10:33in doing long-running multi-step tasks
10:35with bigger and increasingly
10:36more challenging objectives,
10:38where they're learning
10:39when they're slightly more correct
10:40versus slightly more wrong.
10:41And this pushes the software development
10:43capabilities of these models.
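The training signals Eiso lists, such as whether the code compiles, whether tests are written, and whether they pass, can be sketched as a simple scoring function. The sketch below is a hypothetical illustration, with invented function names and weights, and is not Poolside's actual training stack, which runs candidate code inside containerized repos rather than in-process:

```python
# Hypothetical sketch of scoring code from execution signals, in the spirit
# of RL from code execution feedback. Weights and helpers are invented.

def execution_reward(source: str, tests: list) -> float:
    """Score a model-generated Python snippet: parse, run, then test."""
    score = 0.0
    # Signal 1: does the code compile (parse) at all?
    try:
        code = compile(source, "<candidate>", "exec")
    except SyntaxError:
        return score  # no credit without valid syntax
    score += 0.25
    # Signal 2: does it execute without raising?
    namespace: dict = {}
    try:
        exec(code, namespace)
    except Exception:
        return score
    score += 0.25
    # Signal 3: what fraction of the test suite passes?
    passed = sum(1 for t in tests if _passes(t, namespace))
    if tests:
        score += 0.5 * passed / len(tests)
    return score

def _passes(test, namespace) -> bool:
    try:
        test(namespace)
        return True
    except Exception:
        return False

def check_add(ns):
    assert ns["add"](2, 3) == 5

good = "def add(a, b):\n    return a + b\n"
bad = "def add(a, b):\n    return a - b\n"
print(execution_reward(good, [check_add]))  # 1.0
print(execution_reward(bad, [check_add]))   # 0.5
```

The graded score, rather than a pass/fail bit, is what lets a model learn when it is "slightly more correct versus slightly more wrong."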
10:45Thank you.
10:46Leveraging Gen AI to assist coders
10:49is clearly powerful.
10:51However, coding typically represents
10:55only 30 to 50% of the total workload
10:58in the software delivery lifecycle.
11:01And to address that challenge,
11:03McKinsey has developed an approach,
11:05Idea to Impact,
11:06with the ambition to leverage Gen AI
11:09along the full software delivery lifecycle,
11:11from ideation to deployment.
11:14Devina,
11:16from your perspective,
11:18what do you see
11:19in terms of broader usage of Gen AI
11:22along the software delivery lifecycle?
11:24Where do you see the biggest opportunities?
11:26What challenges still need to be addressed?
11:31So, yes,
11:33we actually do use Gen AI.
11:34I would say from everything,
11:36from actually idea generation
11:38to software deployment,
11:39development,
11:40and operations and support.
11:42For example,
11:43in quality assurance,
11:44that's a big space for us
11:45where we are using Gen AI
11:47to have test scripts
11:49generated completely automatically,
11:52whether in human-readable language
11:54or in JSON files.
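As a concrete illustration of the kind of artifact Devina describes, here is what a machine-generated test script serialized to JSON might look like, with a minimal runner. Every field name and value is invented for illustration and is not Siemens' actual format:

```python
import json

# Hypothetical shape of an auto-generated test script, serialized to JSON.
test_script = {
    "id": "qa-0042",
    "description": "Door interlock must prevent departure while open",
    "steps": [
        {"action": "set_signal", "target": "door_state", "value": "open"},
        {"action": "request", "target": "departure", "expect": "denied"},
        {"action": "set_signal", "target": "door_state", "value": "closed"},
        {"action": "request", "target": "departure", "expect": "granted"},
    ],
}

def run_script(script: dict) -> bool:
    """Minimal interpreter: tracks signals and checks each expectation."""
    signals = {}
    for step in script["steps"]:
        if step["action"] == "set_signal":
            signals[step["target"]] = step["value"]
        elif step["action"] == "request":
            granted = signals.get("door_state") == "closed"
            outcome = "granted" if granted else "denied"
            if outcome != step["expect"]:
                return False
    return True

serialized = json.dumps(test_script, indent=2)  # human-readable on disk
print(run_script(json.loads(serialized)))       # True
```

The same structure round-trips between the human-readable and JSON forms mentioned above.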
11:56We also do this,
11:58for example,
11:58in support areas
11:59in terms of having local knowledge bots.
12:02And what this does
12:03is it actually even changes
12:05the way our engineers
12:08deal with support and operations
12:10in terms of,
12:11it's a lot more
12:14intelligently contextualized information
12:15and automation
12:17that they're dealing with.
12:18In terms of the challenges,
12:20what we see
12:21is that the more we include this
12:24deeper in all parts of the value chain
12:26from a software perspective,
12:28actually security does start playing
12:30a very key role.
12:31And the reason for that
12:32is not only that,
12:34but also because we do operate
12:36in safety-relevant
12:37and safety-critical industries.
12:39You do want that train
12:40to run when it's supposed to run
12:41and not crash into the one
12:42in the front.
12:43Or, you know,
12:44you do want that machine
12:45that's building the beautiful Ferrari
12:46to not break it down
12:48the next second
12:49because of something
12:50within the debugging
12:52that caused an effect.
12:53So, I think from that perspective,
12:55security,
12:56and that's where we're working strongly
12:57to say
12:58what are really
12:59the validation steps,
13:00automated
13:01code checking,
13:02everything that we can put in
13:04in terms of guardrails
13:05to support
13:06our engineers
13:07to actually use this
13:08across the lifecycle.
13:09This is something
13:10that we've been putting
13:11a lot of work in heavily
13:13to make sure
13:14that we are making
13:14the most of it
13:15while managing it
13:17in a secure manner
13:17in different parts
13:18of the value chain.
13:20Thank you.
13:21Eiso,
13:22what is Poolside's vision
13:24when it comes
13:25to expanding
13:25the usage
13:26of Gen AI tools
13:28beyond coding
13:29to cover the whole
13:31software delivery lifecycle,
13:33in particular,
13:34what do you plan
13:35in your product
13:36roadmap?
13:37So,
13:38if we look at
13:39the software development
13:40lifecycle,
13:41it has really been
13:42built for us.
13:43For us as developers
13:44to collaborate closely
13:46with each other,
13:46for enterprises
13:48to have safety checks
13:49so that the things
13:50that end up shipping
13:51to customers
13:51are as,
13:53you know,
13:54bug-free,
13:55secure,
13:56and testable as possible
13:57within the realm
13:58of, of course,
13:58what's possible
13:59being all software developers
14:00here.
14:01But our view is
14:02that over time,
14:04everything is collapsing
14:05into the models.
14:06And so your CI that breaks
14:09will be fixed by AI.
14:11Your code review
14:12will be predominantly AI.
14:15A lot of your code generation
14:16is a mix of human generation
14:19with AI generation
14:20with human review
14:21and others.
14:22So when we talk about
14:23an agent
14:23and building an agent,
14:25we're not talking
14:25about building an agent
14:26that is only capable
14:27in writing code.
14:29We're talking about
14:29an agent
14:30that is increasingly
14:31approximating
14:32your capabilities
14:34and your peers' capabilities.
14:35And a part of that
14:36is making sure
14:36that that agent
14:37is able to be
14:39called and embedded anywhere.
14:41So in the very short term,
14:43you know,
14:43talking this year,
14:44what you're going to see
14:45is increasingly more
14:47deployments
14:47of headless agents.
14:49They will live
14:49within your CI,
14:51they will live
14:51behind your code review process,
14:53they're going to live
14:55even on your systems
14:56where they're going
14:57to be monitoring logs
14:58and analyzing them
14:59and spotting things
15:00and surfacing it
15:01to you as a developer.
15:03But if we fast forward
15:04this world
15:05where we have
15:06an incredibly large
15:07number of agents,
15:08both running headlessly
15:10in environments
15:11that are not triggered
15:12by developers,
15:13agents that are collaborating
15:15with you
15:15that you're triggering
15:16from your editor
15:17or you're triggering
15:17from your CLI tool
15:18or your web,
15:20and then agents
15:20that are being sent off
15:21by even sometimes
15:22non-software engineers
15:24to do tasks,
15:25we find ourselves
15:26in enterprise environments
15:27where you will have
15:28tens of thousands
15:29of agents.
15:30In the case of Siemens,
15:31maybe even more
15:32when you have
15:3225,000 developers.
15:34And this means
15:36that we are missing
15:37a core fundamental fabric
15:39for organizations.
15:41How do we manage
15:41these agents?
15:42How do we orchestrate them?
15:44How do we give them
15:45specific access
15:46to tools and data sets?
15:48And so this whole world
15:49that is going to play out
15:50is no longer
15:51just a world
15:51of a real-time
15:53synchronous communication
15:54between us
15:55and an AI via chat.
15:56It's going to become
15:57how do we manage
15:58tens of thousands
15:59of agents
16:00in our organizations
16:01and provide them
16:02with similar access control
16:04and audits
16:05and logging
16:06that we frankly do
16:07on our human side.
16:08And this is where
16:08the SDLC matters.
16:10The SDLC is built
16:12so that all of us
16:12can collaborate securely
16:14and deliver.
16:15Agents will become
16:16an actor in that SDLC,
16:18both collapsing it
16:19partially into the model,
16:20but also becoming
16:21even more important.
16:22So those checks
16:23and balances
16:23will matter even more
16:24when you have
16:25tens of thousands
16:25of agents running.
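The missing "fabric" described here, role-scoped access to tools plus audit logging for every agent action, can be sketched in a few lines. All role names, tool names, and the registry shape below are invented for illustration, not any vendor's actual product:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative governance fabric: each agent gets a role with an explicit
# tool allowlist, and every attempted action is logged for audit.
ROLES = {
    "ci-fixer": {"read_repo", "run_tests", "open_pr"},
    "log-watcher": {"read_logs", "file_ticket"},
}

@dataclass
class Agent:
    name: str
    role: str
    audit_log: list = field(default_factory=list)

    def act(self, tool: str) -> bool:
        """Check the tool against the role's allowlist and log the attempt."""
        allowed = tool in ROLES[self.role]
        self.audit_log.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "agent": self.name,
            "tool": tool,
            "allowed": allowed,
        })
        return allowed

fixer = Agent("agent-0099", "ci-fixer")
print(fixer.act("run_tests"))  # True: inside the ci-fixer allowlist
print(fixer.act("read_logs"))  # False: denied, but still audited
print(len(fixer.audit_log))    # 2: every attempt leaves a trace
```

The same pattern of role-based access control and audit trails mirrors what organizations already apply, as Eiso notes, "on our human side."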
16:27Absolutely.
16:29Agentic AI
16:30is pushing boundaries
16:32even further,
16:34allowing to automate
16:35complex business
16:36and tech workflows.
16:39Yesterday,
16:39we shared the example
16:41of a digital factory
16:43composed of more than
16:44a hundred agents
16:46organized into squads
16:47and supervised
16:49by human beings
16:49to modernize
16:50a complex legacy
16:51application
16:52of several millions
16:54of lines of code.
16:56Devina,
16:57have you started
16:58to experiment with
16:59agentic AI
17:00and what are
17:01the, I would say,
17:03early learnings
17:04of those experiments?
17:06Before I jump
17:07to agentic AI,
17:08I'm just going to
17:09take a step back
17:10for a minute
17:11and talk about foundation.
17:13Because when we talk
17:14about industrial-grade AI,
17:16I think the one thing
17:17that's super solid
17:18is we really need
17:19to work on
17:20industrial foundation models
17:21and that's something
17:22that we are building
17:23with our partners currently.
17:25So industrial foundation models
17:27are those that you feed in
17:28with thousands
17:29and thousands
17:30of 3D design,
17:312D simulation,
17:32time series data,
17:34sensor data,
17:35automation logic,
17:36you name it.
17:36So I call it,
17:37it speaks the language
17:38of the industry.
17:39That's what it does.
17:39And I think this is
17:40super critical
17:41when you talk about
17:42deploying AI agents
17:43in some of the industries
17:44and areas that we work in
17:46because the AI agent
17:48needs to understand
17:49the domain
17:50that it's working in
17:51and the surrounding
17:52of the environment
17:53that it's working in.
17:54So really contextualizing it,
17:55but really contextualizing it
17:57for industry.
17:59And this is something,
18:00of course,
18:00that's the base
18:01of a lot of the
18:02agentic AI products
18:04and I would say
18:04some experimentation,
18:06but some also products
18:07that we are already
18:08rolling out.
18:09So for example,
18:10we have something
18:11called Industrial Co-Pilot
18:12that we built
18:13with NVIDIA
18:14where what this does
18:16is actually
18:17you can be in a shop floor
18:18or in a control center,
18:19but the goal
18:20of this AI agent
18:21is to support you
18:22and you can be
18:23many different personas.
18:24You could be
18:25on the shop floor
18:26a blue-collar worker
18:27who is just operating
18:29a machine,
18:30but the machine
18:31needs to understand
18:32G-code, M-code,
18:33all of these
18:33different code languages.
18:35But actually,
18:36you can just talk
18:37to the machine,
18:37tell it what it needs
18:38to do and it adapts
18:40that into the world
18:41of the machine.
18:42Also, in terms
18:43of debugging a machine
18:44really locally
18:45from the shop floor,
18:47imagine normally
18:47you need experts.
18:49You need experts
18:49who understand the machine
18:50who can come
18:51and debug it,
18:52but actually,
18:53what we're doing
18:54with the Industrial Co-Pilot
18:56is actually
18:58democratizing AI,
19:00or let me say
19:01industrial AI,
19:02for all of these
19:03different personas.
19:04So you don't need
19:05to write a line of code
19:06to debug that machine.
19:07You don't need to know
19:07how to write a line of code
19:09to do a lot of the tasks
19:10that before you
19:11might have needed experts for.
19:14And this is, of course,
19:15also changing
19:16the workforce environment
19:17and the opportunities
19:19that we have.
19:20You mentioned learning.
19:22So yes,
19:23I mean,
19:23a lot of learnings.
19:24We were just discussing
19:25this offstage,
19:26but the questions
19:27we get asked
19:28is,
19:30why is the agent
19:31giving me that response?
19:32What is it thinking?
19:33I want to understand
19:34how did it come
19:34to that decision?
19:36When should I,
19:37as a human,
19:37intervene?
19:38Should I intervene at all?
19:39Is the AI smarter?
19:41So I think this interaction
19:42between humans and AI,
19:45that's the part
19:46that I think
19:47we're learning,
19:48I would say, right?
19:49And I think
19:51this is where
19:51what we are seeing,
19:52of course,
19:53as AI agents
19:54will grow more and more.
19:56You spoke about 10,000.
19:57I mean,
19:57there's more coming up.
19:59They become employees.
20:01So how do you manage
20:02AI agents?
20:03How do you onboard them,
20:05train them,
20:06evaluate them?
20:07Are they doing a good job?
20:09How do you de-board them
20:11when you don't need them,
20:12right?
20:12And here's where
20:12also you can scale.
20:14You can go from 10,000
20:15AI agents
20:16to 50,000
20:16and back to 10,000
20:17when you need it.
20:18So I think
20:19this is all the learnings
20:20that we are
20:22currently developing
20:23as we're working
20:24through the AI agentic
20:26rollout across.
20:27Thank you.
20:29Eiso,
20:30on your side,
20:30you have already
20:31shared a perspective
20:32on the
20:33Poolside product
20:34roadmap.
20:34How do you approach
20:35specifically
20:36agentic AI?
20:39So I think
20:40it really actually
20:41starts where
20:42Davina ended,
20:43right?
20:43We're moving
20:44into a world
20:45where
20:46we are starting
20:47to see agents
20:48as long-lived
20:50entities.
20:51But there's
20:52something that is
20:52very obvious
20:54once you hear it,
20:54and I think
20:55it's worth
20:55calling out,
20:57is agents
20:57are an elastic
20:58workforce.
20:59And so
21:00what we're going
21:01to see
21:01in the coming
21:01decade
21:02is discussions
21:04around we want
21:05to accelerate
21:05this six-month
21:06project to two
21:07months.
21:08Let's decide
21:08to scale up
21:09our agents
21:09by 5x.
21:11Let's partner
21:12them with the
21:12most highly
21:12capable,
21:13high-agency
21:14software engineers
21:15and managers
21:15that we have,
21:16but now let's
21:17decide for this
21:18project we use
21:192,000 agents
21:19for two months
21:20instead of 500
21:21for four months.
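Under the idealized assumption that agent work parallelizes perfectly, the tradeoff Eiso sketches is straight arithmetic. This is only a sketch; real projects carry coordination overhead, which may be why the example quadruples the agent pool to halve the timeline rather than merely doubling it:

```python
# Idealized capacity math behind the elastic-workforce framing: if total
# work is measured in agent-months and parallelizes perfectly, calendar
# time is work divided by headcount. Coordination overhead is ignored here.
def calendar_months(agent_months: float, agents: int) -> float:
    return agent_months / agents

project = 500 * 4  # a 2,000-agent-month project: 500 agents, four months
print(calendar_months(project, 500))   # 4.0 months at baseline
print(calendar_months(project, 2000))  # 1.0 month with 4x the agents
```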
21:23And I think
21:24this is going
21:24to become
21:24increasingly
21:25more important.
21:26We are starting
21:27to divorce
21:28headcount
21:29from the output
21:30of work
21:30that we can
21:31create as a
21:31business.
21:32And that has
21:33an incredible
21:33effect on the
21:34economy of
21:35software.
21:36When no longer
21:37our timelines
21:37are bounded
21:38by the size
21:39of our
21:39organizations,
21:41but they're
21:41bounded by the
21:42choice that we
21:42make of how
21:43much capital
21:43we want to
21:44invest in,
21:45in scalable
21:46agents,
21:46in intelligence
21:47served via
21:48compute,
21:49it changes
21:49how we can
21:50make decisions,
21:51it changes
21:51how fast we
21:52can move
21:53in our industry,
21:53that even
21:54changes
21:54competition
21:54between
21:55businesses.
21:56Because now
21:57the game
21:57theory optimal
21:58thing towards
21:59your competition
21:59is to determine
22:01the right
22:01amount of
22:02agents paired
22:03with the right
22:04amount of
22:04developers to
22:05move as
22:05quickly as
22:06you possibly
22:06can.
22:07And the
22:08reason I
22:08mention this
22:09to your
22:09question of
22:09our approach
22:10to agentic
22:11AI is really
22:13across several
22:13prongs.
22:14The first
22:15prong is
22:15continue to
22:16invest in
22:17increasing the
22:18capabilities of
22:18the models.
22:19Until models
22:21are at 99.9%
22:22reliability and
22:23capability, we
22:25are going to
22:25find that
22:26agentic AI has
22:27its limitations.
22:29If I have a
22:30model that is
22:30only reliable
22:3170% of the
22:32time and it
22:34does a 15- or
22:3520-step process,
22:36that compounds
22:37into garbage,
22:39right?
22:39And we already
22:40see this.
22:40Where agents
22:41fail today is
22:42they go off to
22:43a good start, but
22:44they don't manage
22:45to course-correct
22:45and get
22:46to the right
22:46place.
22:47So I think
22:47it's worth
22:48thinking that
22:49agents today
22:49have an upper
22:50bound of
22:51complexity of
22:51tasks that
22:52they can do.
22:52This is highly
22:53domain specific
22:54to your point.
22:55In certain
22:56areas they're
22:56far more
22:57capable, far
22:57more relevant
22:58than in
22:58others.
23:00But that is
23:01going to change
23:01in the coming
23:02years.
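The 70% example can be made concrete: per-step reliability compounds multiplicatively across a task chain, assuming for simplicity that each step succeeds independently:

```python
# Back-of-the-envelope check of the reliability argument: if each of n
# steps succeeds independently with probability p, the whole chain
# succeeds with probability p ** n.
def chain_success(per_step: float, steps: int) -> float:
    return per_step ** steps

print(round(chain_success(0.70, 15), 4))   # 0.0047: compounds into garbage
print(round(chain_success(0.999, 15), 4))  # 0.9851: why 99.9% matters
```

A 70%-reliable model running a 15-step process succeeds end to end well under 1% of the time, which is exactly the compounding failure described above.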
23:02So the first
23:03point is keep
23:04investing in
23:04making the
23:05foundation models
23:05more capable.
23:07The second
23:07is as the
23:08models are
23:09getting more
23:09capable, help
23:11enterprises and
23:12build for
23:12enterprises the
23:13infrastructure for
23:14managing the
23:15agents, for
23:16audit logging, for
23:17tracing, for
23:18role-based
23:18access control.
23:19But this isn't
23:20just from a
23:20compliance
23:21perspective to
23:22your point.
23:23It is also
23:24that you are
23:25creating an
23:26asset in your
23:26company of
23:27every thought and
23:28every action that
23:29was taken and
23:30every decision
23:30that was made to
23:32create something, in
23:33a historical
23:34database.
23:35So when you go
23:36back to the code
23:36four years
23:37from now and
23:38you try to
23:38understand why
23:39did we do
23:39this or why
23:40did we make
23:41the decision for
23:41this technology,
23:43you're not just
23:43going to be asking
23:44the engineers who
23:45worked on it,
23:46you're also going
23:46to be looking
23:46back into what
23:47the agents thought
23:48and decided at
23:49that moment in
23:49time.
23:50That is incredibly
23:51powerful.
23:52As organizations,
23:53the speed at which
23:54we move is because
23:56of the amount of
23:56collaboration and
23:57information we have
23:58access to from each
23:58other.
23:59With agents, that
24:00becomes essentially a
24:01solved problem.
24:02So I'm quite excited
24:03about that part and
24:04we're investing in that
24:04infrastructure and
24:05investing in the
24:06models.
24:07Thank you.
24:08We are reaching the
24:09end of this
24:10fascinating discussion
24:11on agentic coding.
24:13Maybe to conclude,
24:14Devina, could you
24:15share with us what
24:16are the priorities
24:17for the coming
24:18months to further
24:19transform Siemens'
24:20software engineering
24:21capability and maybe
24:22also based on your
24:23experience, some
24:25advice to other
24:26software editors?
24:28I think the
24:29priorities we have
24:30and the advice kind
24:30of match.
24:31So maybe I'll focus
24:33on the advice.
24:34Three things.
24:35The first is don't
24:37only start with the
24:39model, start with
24:39your context.
24:40I've seen a lot of
24:41companies running for
24:42each new model that's
24:44coming out.
24:44Experiment.
24:45There's no harm in
24:46experimenting.
24:46Keep doing that.
24:47We do that across.
24:49We're doing it with
24:49Poolside, we're doing
24:50it with Roo Code, we're
24:51doing it with Cursor
24:52AI, all of that.
24:53So experiment, but not
24:55with 25,000 developers.
24:57So really don't only
24:59focus on the model, also
25:00focus on your context.
25:02The second is do
25:05have enablement,
25:06especially in large
25:07organizations, if you're
25:08from a large
25:08organization, have
25:10enablement as a key
25:11lever in this whole
25:12journey.
25:13And by enablement, it's
25:14everything I mentioned
25:15before in terms of how
25:16do you develop a gen AI
25:18learning culture, but
25:19also what kind of
25:21skills do your
25:23employees need in the
25:24future?
25:24Who is managing who?
25:26Are we managing the AI?
25:27Is the AI managing us?
25:29What are those skills in
25:31terms of strategic
25:32oversight, in terms of
25:33how we manage that
25:34complexity?
25:35And the third thing is
25:37be humble.
25:38And the reason I say
25:40this is because I think
25:42it's a moment in
25:43humankind where our
25:45capability to develop is
25:48outpaced by the technology
25:50capabilities that we see
25:51around us.
25:51And maybe we've been in
25:52such a moment before, but I
25:54see it in this moment.
25:56You know, I often tell my
25:56young kids about what's not
25:58going to exist when they
25:59grow up, and they go on
26:00looking at me like I'm on
26:02some digital and innovation
26:04drugs.
26:05But really, be humble.
26:07That's my third message.
26:08Thank you.
26:09Eiso, same question.
26:10Some advice for the
26:12companies adopting
26:13Gen AI tools to transform
26:15their software delivery
26:16lifecycle?
26:18So the short answer is plus one
26:20to everything Devina just said.
26:21To be very honest, I think
26:23she captured a lot of it.
26:26There is a lot of noise in
26:28our market, and AI is not
26:31yet at the level of
26:32capabilities that we're
26:33preaching here.
26:34It will be,
26:36but please, as an
26:37organization, plan for the
26:40increasing capabilities of
26:42AI to reach human-level
26:44capabilities.
26:45The questions
26:46that Devina is asking you to
26:48think about are the right
26:49ones.
26:50What happens in a world where
26:51AI is at the level of
26:53capabilities of my workforce?
26:55How do we pair that
26:56together?
26:57How do we scale?
26:57How are we ready for that at
26:59that moment in time?
27:00I think it's absolutely the
27:01best advice to take.
27:03And keep experimenting,
27:04stay close to the technology,
27:06build your own internal
27:07evaluations, as you're saying,
27:09experiment, be there with it,
27:11and identify the people in
27:12your organization who are
27:14doing so, because they're your
27:15canaries in the coal mine.
27:17If you look at usage of AI
27:19across your company, find the
27:21top 1% who are using the
27:22tools the most, and sit with
27:24them, ask them what they're
27:26seeing, because they're on the
27:27ground, and they're seeing the
27:28day-to-day change as models
27:30and products on top are getting
27:32more capable, and they'll be
27:33able to start telling you,
27:34hey, wait a second, now we can
27:36do this.
27:37And that means you're
27:38essentially planning over the
27:39next five years for a world
27:40where you can create software
27:42a hundred times faster.
27:43And what does that do for
27:44your business?
27:44What does it unlock if the
27:46cost of software creation is
27:48both elastic in nature and is
27:50actually a hundred times
27:51faster and cheaper?
27:53Devina, thanks a lot.
27:55Thanks for your attention.
27:57Thank you.
27:57Thank you.