Generative AI can accelerate coding, but it can also create dangerous technical debt. In this video, learn when your teams should use GenAI and when they should avoid it entirely.

Transcript
00:00Companies using GitHub Copilot report that developers are 55% more productive.
00:06McKinsey has found that developers can complete tasks up to twice as fast with generative AI assistance.
00:12These aren't inflated claims. They're real, measured gains happening right now.
00:17But here's what most organizations don't realize.
00:20While your developers are shipping code faster than ever, they may be creating a different kind of problem.
00:26One that compounds invisibly and could cripple your systems months or even years from now.
00:38In a recent article for MIT Sloan Management Review,
00:41professors Geoffrey Parker at Dartmouth, Edward Anderson at UT Austin, and Burcu Tan at the University of New Mexico
00:48explain that AI-generated code creates technical debt at an unprecedented scale.
00:54Debt that's particularly dangerous because it's harder to fix and detect than traditional coding problems.
01:00Think of technical debt like financial debt.
01:03The principal is the work needed to modernize systems.
01:06The interest is the ongoing complexity that slows everything down.
01:11AI-generated code comes with a much higher interest rate.
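To make the analogy concrete, here is a toy model (an editorial illustration, not from the article): treat unpaid remediation work as the principal and the drag it puts on each sprint as the interest. Every number below is invented.

```python
# Toy model only: all parameters are made up for illustration.
def effective_velocity(base_velocity: float, principal: float,
                       interest_rate: float) -> float:
    """Output per sprint after paying 'interest' on outstanding debt."""
    return max(0.0, base_velocity * (1 - interest_rate * principal))

def simulate(sprints: int, base_velocity: float = 10.0,
             debt_per_sprint: float = 1.0, interest_rate: float = 0.02) -> None:
    principal = 0.0  # remediation work outstanding
    for sprint in range(1, sprints + 1):
        shipped = effective_velocity(base_velocity, principal, interest_rate)
        principal += debt_per_sprint  # unpaid debt accrues every sprint
        print(f"sprint {sprint:2d}: shipped {shipped:5.2f}, debt {principal:4.1f}")

simulate(sprints=10)
```

Raise debt_per_sprint to stand in for faster, AI-assisted shipping, and the output curve bends down sooner: the "higher interest rate" the video describes.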
01:16The numbers are staggering. The cost of technical debt in the US exceeds $2.4 trillion.
01:22Yet most organizations allocate less than 20% of their tech budget to addressing it.
01:27While developers produce code faster, quality is declining dramatically.
01:32One senior developer explained it perfectly.
01:35AI can't see the big picture.
01:37It generates code that works but doesn't fit the architecture.
01:41I asked Geoffrey Parker, one of the article's co-authors, what warning signs executives should look out for in their own systems.
01:48Some of the signs that you're going to be looking for, they're going to be fragility,
01:54and that's going to show up in essentially unexplained bugs that you then have to dedicate more senior talent
02:04to go and hunt down some of the root causes of.
02:07I think the danger is you sort of push the accelerator, and then you're getting a lot of things done,
02:14and then all of a sudden the complexity starts to go up, and then things start to entangle,
02:19and you get a lot of spaghetti, and then you bog down.
02:23It takes a while, and it depends upon the environment, for those issues to kick in though.
02:30So, you know, when you say what's the red flag, you can, in some settings, proceed pretty happily for a while, until you're not.
02:41Whether AI helps or hurts depends on the environment it's being deployed in.
02:46Developers distinguish between greenfield and brownfield environments.
02:51Greenfield means starting fresh, new products, no legacy code.
02:56Here, AI delivers those productivity gains.
03:00Brownfield environments, on the other hand, are your existing systems, the legacy code that's running your business.
03:06This is where most enterprises operate, and it's also where AI can become dangerous.
03:12When AI-generated code enters legacy systems, it creates tangled dependencies.
03:18Each new piece connects in ways that aren't obvious.
03:21Over time, these multiply until nobody fully understands the system.
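One way to watch for that entanglement (an editorial sketch, not a tool named in the video) is to parse a codebase's imports and flag modules with both high fan-out and high fan-in; the "src" directory and the threshold are assumptions.

```python
# Sketch: build an import graph for a Python codebase and flag modules
# that depend on many others AND are depended on by many others.
import ast
from collections import defaultdict
from pathlib import Path

def import_graph(src_dir: str) -> dict[str, set[str]]:
    """Map each module name to the top-level modules it imports."""
    graph: dict[str, set[str]] = defaultdict(set)
    for path in Path(src_dir).rglob("*.py"):
        module = path.stem
        tree = ast.parse(path.read_text(encoding="utf-8"))
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                graph[module].update(a.name.split(".")[0] for a in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                graph[module].add(node.module.split(".")[0])
    return graph

def entangled(graph: dict[str, set[str]], threshold: int = 5) -> list[str]:
    """Modules with high fan-out and high fan-in: spaghetti suspects."""
    fan_in: dict[str, int] = defaultdict(int)
    for deps in graph.values():
        for dep in deps:
            fan_in[dep] += 1
    return [m for m, deps in graph.items()
            if len(deps) >= threshold and fan_in[m] >= threshold]

print(entangled(import_graph("src")))  # "src" is an assumed source root
```

Trending these counts per release can surface the fragility Parker describes below before the unexplained bugs start.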
03:26But Parker points out that the calculus changes significantly depending on your time horizon and your business context.
03:33In greenfield environments, the action space to deploy AI is a lot wider,
03:38because you don't have all of these complex interdependencies that you see in the legacy environments.
03:44And if you also have a shorter time horizon, so for example, you're in an entrepreneurial venture, and you're trying to achieve product-market fit,
03:54you may be less concerned about getting the exact technical solution, as opposed to getting a solution,
04:00and then understanding how that interacts with your market.
04:03And so in those environments, using GenAI tools to accelerate development makes all the sense in the world.
04:10And so part of it is the complexity, and the other part is the time horizon.
04:15Because if you have a sufficiently short time horizon, then the accumulation of tech debt, which takes time,
04:22may not get so large that it matters that much.
04:25A Fortune 50 developer shared this insight.
04:29With AI, a junior engineer writes as fast as a senior one, but without the cognitive sense of what problems they're creating.
04:36This combination of inexperienced developers using AI in legacy environments creates the most risk.
04:44Parker explains why that experience gap matters so much.
04:49Once you're a more senior developer, you've been through a lot of cycles.
04:53And so you understand particularly, and this will interact with the greenfield, brownfield,
04:59but a lot of systems, you have to interoperate, you have to integrate with existing systems.
05:07And so you'll have a lot of experience as a senior developer for kind of what to look out for.
05:14And you'll say, yeah, okay, I know that things that I should do are to dovetail to an existing system.
05:23And I've seen that pattern before.
05:25And so I'm on the lookout for it.
05:27Whereas a junior developer, you've got the problem statement at hand.
05:31You provide a solution to X.
05:35And then you use the AI or you do it yourself and you provide a solution to X.
05:40But of course, the broader problem statement is provide a solution that does the following functionality that then embeds itself into this broader context.
05:50And that's where the experience really kicks in because now you're dealing with a broader system architecture.
05:57And that's something that as a junior developer, you're just less likely to have seen a bunch of cycles.
06:04And furthermore, you're less likely to have seen the variability.
06:08So a lot of systems get more complex over time because things get added and things get modified.
06:17And they may or may not have been refactored to make them more modular.
06:22They may have a lot of interdependencies and sort of it's a variation of knowing where the bodies are buried, particularly for talent that's been in an organization for a while.
06:33Technical debt has always been dangerous.
06:35Southwest Airlines' 2022 meltdown, the CrowdStrike outage, the Newark Airport system failures.
06:41These cost billions of dollars and disrupted millions of lives.
06:45But these problems accumulated over decades.
06:48With AI, the same level of debt can build in just months.
06:52Google's DORA (DevOps Research and Assessment) team quantified this.
06:55A 25% increase in AI usage improves documentation but decreases delivery stability by 7.2%.
07:02Analysis from GitClear is even more sobering.
07:05Code duplication increased eightfold in just four years.
07:09What looks like progress today becomes tomorrow's crisis.
07:12But the timeline is compressed dramatically.
07:14And there's another dimension to this risk that the researchers didn't even anticipate.
07:18Since the publication of this article, Parker told me about an unexpected response from the security community.
07:24The thing that surprised us was the degree to which the security community got really interested.
07:31And indeed, if you look into the issues of technical debt, the issues of opening up loopholes, if you will, or creating attack surfaces are highly related.
07:44And so I think that was the bigger surprise because we were focused on this issue of kind of integration and complexity and a little less on opening up sort of attack surfaces.
07:56But of course, it made perfect sense.
07:58Does all this mean that companies should avoid using AI coding tools?
08:02Definitely not.
08:03I spoke with Doug English, CTO of Culture Amp, who oversees about 300 engineers.
08:09His organization has embraced AI coding tools, but with a critical distinction that's made all the difference.
08:15So one of the things that we've made a really careful distinction between within our platform is what we would call vibe coding versus AI coding.
08:24And we make a really important distinction between those two.
08:27And we actually do both within Culture Amp, but for very different purposes.
08:30So one of the things that we've said is that the vibe coding should never make it to production.
08:36Only AI coding should make it to production.
08:39And the distinction for us is that AI coding, the engineer is responsible for the code that goes to production and it needs to go through all of our standard scrutiny checks.
08:49So the first requirement is the engineer using it needs to understand and be accepting that the code that's being pushed to production is their code.
08:57And then secondly, it needs to go through the same review processes.
09:00So having a second engineer looking at the code and giving an assessment.
09:06On the vibe coding side, what we've found is that it's been excellent for doing things like prototyping or experimentation.
09:13But having that really key difference: if something's vibe coded and no one's actually looking properly and scrutinizing the code, it's not allowed into production.
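As a hypothetical sketch of that rule (the label names and PR fields are editorial assumptions, not Culture Amp's actual tooling), a pre-merge gate might look like this:

```python
# Hypothetical merge gate; label names and PR fields are assumptions.
def may_merge_to_production(pr: dict) -> tuple[bool, str]:
    labels = set(pr.get("labels", []))
    if "vibe-coded" in labels:
        # Vibe-coded work is for prototypes and experiments only.
        return False, "vibe-coded changes never reach production"
    if "ai-assisted" in labels:
        if not pr.get("author_accepts_ownership", False):
            return False, "the engineer must own this code as their own"
        if len(pr.get("approvals", [])) < 1:
            return False, "a second engineer must review the code"
    return True, "ok to merge"

# An AI-assisted change with ownership acknowledged and one reviewer passes.
pr = {"labels": ["ai-assisted"], "author_accepts_ownership": True,
      "approvals": ["second-engineer"]}
print(may_merge_to_production(pr))  # (True, 'ok to merge')
```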
09:23You need a strategic approach.
09:25First, develop specific guidelines.
09:28Microsoft, Google and Meta have AI ethics policies, but you need operational rules that developers can follow daily.
09:35Define when AI is appropriate and when it's not.
09:38Morgan Stanley offers a model.
09:40They're using AI to reduce technical debt in legacy systems, but with careful human oversight and custom models, not off-the-shelf solutions.
09:48English shared a mental model that's guided Culture Amp's approach.
09:52And here's what he has to say about the results.
09:55Focus on making sure that the right scrutiny is put into the documentation of the areas that you want to make sure are rock solid and it will help.
10:05And I think what we're actually seeing is because we've got this in place, it's actually helping us to drive better consistency across our teams.
10:15Our AI agents are leveraging that constantly and reinforcing it.
10:20And so it's actually helping us to train the engineers on our ways of working.
10:26And what we're seeing is that it's making it easier for an engineer to move from one team to another.
10:31Second, make technical debt a core metric.
10:34Track it just like revenue and build it into workflows.
10:37Economic modeling shows that ignoring it will eventually overwhelm any productivity gains.
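A minimal sketch of one such metric, assuming a Python codebase under a hypothetical "src" directory: the duplicate-line ratio that GitClear's analysis highlights, computed on each merge so the trend, not the absolute number, is what gets watched.

```python
# Sketch: share of non-trivial lines that appear more than once.
import hashlib
from collections import Counter
from pathlib import Path

def duplicate_line_ratio(src_dir: str) -> float:
    counts: Counter[str] = Counter()
    for path in Path(src_dir).rglob("*.py"):
        for line in path.read_text(encoding="utf-8").splitlines():
            stripped = line.strip()
            if len(stripped) > 20:  # ignore blanks and trivial lines
                counts[hashlib.sha1(stripped.encode()).hexdigest()] += 1
    total = sum(counts.values())
    duplicated = sum(c for c in counts.values() if c > 1)
    return duplicated / total if total else 0.0

print(f"duplicate-line ratio: {duplicate_line_ratio('src'):.1%}")
```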
10:43Third, invest in developer judgment, not just prompt engineering workshops, but training in assessing AI output quality.
10:50Senior developers must become AI coaches, helping juniors understand not just if code works, but what problems it might create down the line.
10:59One developer captured this perfectly, explaining that if you train AI properly, it can reduce technical debt rapidly.
11:06The technology isn't the problem. It's deployment without strategy.
11:10Generative AI for coding is powerful and it's here to stay.
11:13But success requires understanding both potential and risk.
11:17The companies that will thrive are those that see AI as a tool requiring skill, not just an easy solution.
11:23They'll use AI to accelerate development while actually improving code quality.
11:28The difference? Strategy, training, and clarity about what AI can and cannot do in your specific environment.
11:36Don't become the next cautionary tale.
11:38For the complete analysis and frameworks, read The Hidden Costs of Coding With Generative AI at the link below.
11:44I'm Karshik Vishwanath for MIT Sloan Management Review.