Why AI coding assistants might not help devs much
November 4, 2024
There’s a gap between expectations, experience, and metrics when it comes to AI code assistants, according to a recent study by engineering intelligence firm Uplevel.
Previous surveys have shown both high expectations and satisfaction with code assistants. Stack Overflow’s 2024 developer survey found 76% of respondents were already using or planned to use AI code assistants. A separate GitHub survey found nearly all developers have at least tried AI, and 73% of those in the US were optimistic it could help them better meet customer requirements. Surveys have also shown high rates of developer satisfaction with AI tools.
Uplevel took a sample of nearly 800 developers using its metrics-tracking platform, of whom around 350 were using GitHub Copilot. It then compared the Copilot users against the rest as a control group on “objective metrics” such as cycle time, pull request (PR) cycle time, bugs detected during review, and extended working hours.
As a developer, I use generative AI extensively. It has let me do things I could have done anyway, if absolutely required, but in a fraction of the time.
Like Simon Willison, I’ve found it transformative, more so than any other development in the 40+ years I’ve been writing code.
Amazon has reported huge time and cost savings for specific use cases (upgrading from older versions of Java), and reports last week claimed that as much as 25% of all new code written at Google is AI generated.
But not every analysis is so bullish. Perhaps it’s the use cases? Or the experience level of the developers using the tools?