PaulHoule 15 hours ago

There's a time delay between when an innovation is first adopted and when it has a real impact. You might spend anywhere from 2 months to 8 years [1] developing an application. I've seen numerous VC-backed or bootstrapped companies that took 1.5 to 2 years to launch, so I wouldn't expect AI to have had much of an effect yet.

I'm also not sure about "better"; I find Copilot is a good wingman for writing things like shell scripts, CMD.EXE scripts, PowerShell scripts and Python scripts that do simple things. Even there it sometimes confuses forward slashes and backslashes, so I often have to do a little debugging. Copilot can help me figure out how to use (to me) obscure features of PostgreSQL in jOOQ. It will also argue with me and take factually wrong positions, such as telling me that there is no zero-argument version of Optional.orElseThrow(), which there is.
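
For the record, the zero-argument overload has existed since Java 10; it throws NoSuchElementException when the Optional is empty. A short demo settles the argument:

    import java.util.Optional;

    public class OrElseThrowDemo {
        public static void main(String[] args) {
            Optional<String> name = Optional.of("value");
            // Zero-argument form (Java 10+): throws NoSuchElementException
            // if the Optional is empty.
            String a = name.orElseThrow();
            // Supplier form (Java 8+), for throwing a custom exception.
            String b = name.orElseThrow(IllegalStateException::new);
            System.out.println(a + " " + b);
        }
    }

My guess is the model is pattern-matching on the older Java 8 signature, which did require a Supplier.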

[1] https://en.wikipedia.org/wiki/Concord_(video_game)

  • ericmcer 15 hours ago

    ChatGPT came out in late 2022, so it has been almost 2 years since the "AI is going to change everything" wave started. Looking at myself and my coworkers, I don't feel there has been an uptick in developer output in that time. We all have Copilot licenses as well; I don't use mine, but I pair with people who do, and it seems cool but not much more than advanced autofill.

    My big concern with all this stuff is that it pushes devs away from the uncomfortable part of coding. When you approach a new area of a codebase, you just have to sit in discomfort, stepping through the code and experimenting with it until your brain gains context. Sure, you can tweak something on the surface to fix a bug, but really gaining a full understanding of it can take days or even weeks.

    Patience is one of a developer's most important tools, whether it's spending 30 uninterrupted minutes to achieve a "flow" state or slowly stepping through a complex piece of code to see why it is failing. I worry AI is going to instill a rapid-reward system that makes future devs' goal just to rush to make it work. They won't have much interest in really gaining mastery over what they work on.

    • rerdavies 8 hours ago

      I haven't tried Copilot yet. I have tried Claude 3.5, which is unexpectedly brilliant, and I use it very regularly. But it's almost entirely useful for writing new code, not fixing old code.

      I take your point with respect to understanding large existing codebases. AIs don't yet seem to deal well with strategic, large-context thinking. But things are changing at such a furious pace that this may improve much sooner than either of us expects. Still, you are right: at the present moment, AIs are not particularly good at that.

      But overall, I am much more optimistic than you are. I think AIs will save us time, so that we can spend more of it on issues of overall structure, making "spending months to understand a codebase" a thing of the very distant past. I have certainly worked on codebases that require months to understand. But I would (perhaps naively) like to think that's a symptom of a disease that codebases shouldn't have, and that modern codebases usually don't have anymore (but sometimes do).

      • QuantumGood 7 hours ago

        Anecdata: Claude 3.5 sometimes nails on the first try something that I can't get GPT-4 (any version) to ever complete without errors. Most recently, an AHK 2.0 script concatenating some strings.

    • dasil003 14 hours ago

      This resonates with me, but I also don't want to presume too much about the value of the old ways of doing things. AI tools are going to drastically lower the bar of entry to programming, so the field will be flooded with dilettantes, but people who can think will still apply their thinking in creative ways. It will be really interesting to see what happens as AI tools mature and a generation of AI-native developers figures out how to maximize them.

      • skydhash 9 hours ago

        Maybe an unpopular opinion, but I believe the bar of entry to programming has always been low. On Windows, some environments are just an .exe away, web dev can be done with Notepad, and there are many introductory books and video tutorials. What is hard for most people is formalism and detailing your idea as a sequence of steps, aka the programming mindset. Writing code was always easy (before AI, we had copy-paste from the internet). But whether you're writing assembly or Python, you still need the programming mindset, and that's just the first step. Then you need to learn software engineering (code hygiene, requirements gathering, task planning, ...).

  • marcosdumay 14 hours ago

    GPT-derived bash scripts seem like a new kind of nightmare I never wanted to imagine.

    • PaulHoule 13 hours ago

      If you are afraid of that, you should see the CMD.EXE scripts that Copilot writes. For that matter, Copilot has a habit of making long bulleted lists, so instead of just telling you to make a configuration change in Settings, it will also tell you how to do it with REGEDIT and the policy editor.

DanHulton 15 hours ago

Simple, really. It's not actually helping people to code better:

https://greaterdanorequalto.com/ai-code-generation-as-an-age...

  • ActionHank 15 hours ago

    The last junior dev I spoke to taught themselves React, spent months building a very polished portfolio, and got hired to work with a new "language" they hadn't heard of called "Angular". This dev was taking the job because it paid well; they otherwise had a disdain for tech.

    If these are the people who are coming into the industry, LLMs will not help them. There is no desire to learn or understand or dig even remotely beneath the surface.

    • sevensor 15 hours ago

      Ever since FORTRAN came on the scene, there have been tools that make disdainful, unskilled programmers more productive. I doubt this will ever change. It’s in the interest of mainstream employers to treat programmers as a fungible resource. There’s a much smaller risk to your business when you hire ten and get one or two good ones in the lot, than if you hire one good one and pay them triple. Things average out, and you’re not beholden to a few people who know their worth. As long as the pay is good enough to keep a couple of competent people around, you can rely on them to cover for the rest. Best of all, you never have to figure out which is which.

      • marcosdumay 13 hours ago

        You mean COBOL? FORTRAN wasn't really about lowest common competence.

        Anyway, about this:

        > It’s in the interest of mainstream employers to treat programmers as a fungible resource.

        It's in their interest to make programmers fungible. It's self-delusional to think they succeeded.

        • sevensor 12 hours ago

          No, I do mean FORTRAN. Massively lowered the bar. Not that COBOL didn’t, but it was in a different domain and slightly later as I understand it. Have you ever worked with properly old FORTRAN? Some of it is quite good, but some of it, well, it’s regrettable it was ever transferred off of paper tape.

    • hjkl0 15 hours ago

      They’ll have to learn Angular for their new job, so “no desire to learn” seems harsh and unfair

    • Jcampuzano2 15 hours ago

      As someone who enjoys tech, I don't really fault or look down on people who get into tech for the pay like some in our field seem to do. In basically every other high-paying field on the planet, people go into it for the money, not for some passion - so what's different about ours?

      I don't really mind whether you have some innate desire to learn something, or whether you're doing it because it's your job and your job pays you. As long as in the end you suck it up and do it.

      • JohnFen 14 hours ago

        I don't know that there's anything different about ours, but that's a shame.

        The difference (broad-brushing here) between someone who is a dev because that's their passion and someone who is a dev for the pay is the quality of their work. As our industry matured and gathered more "in it for the money" types, the overall product quality has been declining.

        I think that's a shame. Perhaps inevitable, but a shame nonetheless.

  • kredd 15 hours ago

    It undeniably makes prototyping incredibly fast. I've been forcing myself to use Cursor for the past month and could create a fairly functional web app, even though front-end dev isn't my skill set. Very sweet for one-off deterministic Python scripts, figuring out UI bugs, filling in boilerplate code, and especially handy with languages/frameworks you're not that familiar with.

    Sure, we’re not at the “hook it up to prod codebase and ask it to make features from scratch” phase, but we didn’t have this 2 years ago either. But writing it off as completely useless? Nah.

    I’m more of a process oriented person, who more or less cares about code quality. And as of now, it kinda sucks for it. However if your main goal is just the result, it’s delivering good stuff.

    • skydhash 9 hours ago

      If your GitHub-fu is good, you can get the same result by copy-pasting. But then you would need to worry about licenses. It's very convenient that Copilot and the like don't disclose the licenses of their training data.

    • DanHulton 11 hours ago

      I mean, I didn't say it was completely useless, just that it's not making code any better. There's definitely use cases for it. Rapid, throw-away prototyping may end up being one of them. (Though I am skeptical about that, given how often the prototype ends up being the finished product, but that's neither here nor there.)

      If your main goal is the _short-term_ result, sure, it's delivering. In the longer term though, I still believe it to be dangerous.

shepherdjerred a day ago

Why would there be a correlation between how fast something is developed and its quality?

Assuming that AI is helping developers to write more code, it could mean:

* there are fewer developers

* developers are working less

* the efficiency gains are resulting in more products being created rather than existing products being improved

* AI isn't widely enough adopted or used to make enough of a difference

* the benefits are too recent to be measured

  • JohnBooty 16 hours ago

        the efficiency gains are resulting in more 
        products being created rather than existing 
        products being improved
    
    This has been perhaps the only constant in the history of this industry. As software tooling and hardware get better, we never really feel or see the gains because companies and individual developers are pressed to do more.

    If a tool makes my job 2x easier, then I'm simply expected to have 2x more output. Not 2x "better" output.

    • shepherdjerred 15 hours ago

      100%. You can also see this with Moore's law. Hardware gains were converted to developer productivity gains.

      Software is easier than ever to write, but now the average AAA video game is 100+ GB and most popular software is browser-based.

  • inetknght a day ago

    > Why would there be a correlation between how fast something is developed and its quality?

    I've found that for all but the smallest and tightest of teams, there definitely is a correlation... an inverse correlation.

    • marcosdumay 13 hours ago

      The highest quality product is finished first, yeah.

      Often the difference is between finishing within an order of magnitude of the predicted time versus not finishing at all. But the highest quality product is almost always finished first.

      There are exceptions and, AFAIK, those are all very small projects. In my experience, high quality takes about a week to pay for itself. So if you have something smaller that you'll throw away after one use, you may want to cut corners.

  • chthonicdaemon a day ago

    I can think of several ways in which faster development could lead to better quality. Let's say you have a fixed time to come up with a solution to a problem. If you are able to come up with one candidate and evaluate it in that time, you have to live with what you get. If you do two candidates, you can select the one that is better.

    Another aspect of quality is "polish". A team that can get a UI in front of QA twice in a development cycle instead of only once will benefit from more fault-finding.

    • al_borland 16 hours ago

      Parkinson's law tells us that work expands so as to fill the time available for its completion.

      I think the more likely outcome would be either it still taking the same time to deliver, with extra fluff in the middle, or the time simply shrinking. One thing is still evaluated and shipped, just slightly faster.

  • mu53 a day ago

    I am leaning towards fewer developers. There is a crunch in the job market, and companies are finding the rate of progress similar to before, so they don't feel inclined to hire more.

    • stackskipton a day ago

      Most companies I know are turning out fewer features; the features are just more tuned towards what they believe will provide value, with fewer "moon shots".

      • soco 21 hours ago

        To me it looks like "usability" never gets to the top of that value list. Yet the seminars and courses and LinkedIn posts season everything with that word.

  • foobarqux a day ago

    Is there any evidence for any of those hypotheses? More apps in app stores? Mass layoffs? How could the "benefits be too recent to be measured" if everyone is arguing that they are already 10x more productive?

    If you accept that premise, the only conclusion left is that it has made developers work much less, but their bosses haven't noticed yet.

    But you would still expect that some people are working for themselves and continue to work the same amount of hours so should be producing 10 apps a year instead of 1. Are there any examples of that?

    • kurthr a day ago

      If "everyone is arguing that they are already 10x" since GPT4 18mo ago, why aren't they 100x with _o1 and 3.5 Sonnet? Maybe it's bs and they've been spending actual hours getting maybe 2x more productive for the last year. None of it has kicked in yet? It might continue to improve? Who knows.

      Actual productivity takes time, and actual products take work. Talk is cheap, show me the code.

    • faangguyindia 17 hours ago

      Yes, job cuts will hit the secondary market first.

      India has had massive layoffs in its tech sector.

      Europe and the US will follow suit once companies lay off the dead weight in outsourcing locations first.

  • black_13 17 hours ago

    And does it matter? The broader economy is what's killing me and everyone else. Healthcare and education are luxuries; I dump tons of money into providing healthcare and education for my kid, and healthcare for myself.

turnsout 14 hours ago

The simple answer is that product managers are not asking for quality. They're asking for cards/stories to be completed. "Quality" has been redefined as "meets requirements" or "passes UAT." Actual quality is just not in anyone's KPIs or OKRs.

That's why it's way easier for indie developers to deliver high-quality software—their incentives are directly aligned with the user.

  • RandomThoughts3 14 hours ago

    Note that "passes UAT" could actually be nearly enough if the UATs are properly written, your merge rules are strict enough, and you do a minimum of complementary testing on release.

    The issue is that the people writing the UATs generally have no incentive to write good ones. They want to ship. They don't want to get code blocked because it doesn't meet requirements. Amusingly, and to get back to the core of the discussion, AI is generally pretty good at helping write good UATs.
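
    For instance, a good UAT ties an acceptance criterion to an executable check. A minimal JUnit sketch (Cart and Item are hypothetical domain classes, not from any real codebase):

        import static org.junit.jupiter.api.Assertions.assertEquals;
        import org.junit.jupiter.api.Test;

        class CheckoutAcceptanceTest {
            // Acceptance criterion: code SAVE10 takes 10% off a 200.00 cart.
            // Cart and Item are hypothetical domain classes.
            @Test
            void save10ReducesCartTotalByTenPercent() {
                Cart cart = new Cart();
                cart.add(new Item("book", 200.00));
                cart.applyDiscountCode("SAVE10");
                assertEquals(180.00, cart.total(), 0.001);
            }
        }

    The requirement is readable straight out of the test name, which is what makes a UAT like this hard to argue with.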

cdf a day ago

AI code assistants are amazing when you start from zero and just need an 80% working prototype. But once you start trying to refine the product from there, that's where the automation gets counterproductive. If you can exactly specify the problem, e.g. "Password input crashes when the password has an apostrophe", AI can probably fix it. But if the bug report comes in as "Password input randomly crashes", I will be very surprised if AI can figure out why and fix it. Where a human wrote the code, he or she may figure out why fairly quickly. Now, if you want a human who didn't write the code to understand the AI-generated code, it may take a lot longer. In fact, in all likelihood, AI-assisted products are likely to be buggier and stay so longer, especially if companies start to think they can fire the senior devs, hire less skilled devs, and fill the gap with AI. At some point, the pendulum will swing back, and companies will be chasing devs again.
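
The apostrophe case is telling: it's the classic quoting bug, and a precise report points straight at it. A minimal JDBC sketch of the before-and-after (the table and column names are hypothetical):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    class PasswordLookup {
        // Buggy version: an apostrophe in pw terminates the SQL string
        // literal early, so the statement fails to parse:
        //   stmt.executeQuery("SELECT id FROM users WHERE pw = '" + pw + "'");
        // The kind of fix an AI can plausibly produce from a precise report:
        // bind the value instead of splicing it into the query.
        static ResultSet find(Connection conn, String pw) throws SQLException {
            PreparedStatement ps =
                conn.prepareStatement("SELECT id FROM users WHERE pw = ?");
            ps.setString(1, pw);
            return ps.executeQuery();
        }
    }

A "randomly crashes" report gives the model nothing like this to pattern-match on, which is the point.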

  • bruce511 a day ago

    It's certainly going to be fun seeing this all play out over the next 10 years.

    In some ways it's like taking over a project written by someone else that's "80% done". You're locked into their design, get to analyze all that code one bug report at a time, get frustrated by obvious mistakes, and get confused (and misled) when they relied on some clever side effect.

    The quality of life in this maintenance mode depends enormously on the quality of the original coder. "Why did you choose this over that?" is a common question I have for earlier devs. The AI answer is the least satisfying: "it seemed like a good probability at the time".

    IME writing the app from scratch to "done" is 10% of the lifecycle of the code. It's the other 90%, spanning over decades, where the quality (or lack thereof) reveals itself.

    Personally I'm finding AI useful as a tool. Would I want to be the human fixing AI bugs? (From human bug reports which are pretty vague?) I'm not so sure about that.

JeffeFawkes a day ago

My theory... Being able to code well or fast doesn't translate one-to-one into a good end-user experience. The strength of your org's ability to determine good features and iterate on them from a product perspective is what matters, and that /can/ potentially happen faster if AI is enabling faster development, but it's not guaranteed.

Even if we had a magic box that results in perfect code coming out every time for a given feature description, that doesn't mean the feature itself is good or well thought out.

  • horacemorace 15 hours ago

    This. AI coding can almost turn an excellent product manager into a one-man shop. We all know precious few exist.

alentred 15 hours ago

Because Code ≠ Product. Code ∈ Product, among many other things.

I would go even further and say that the relationship between the two is weak, but also very peculiar: bad code can ruin a good product, but good code alone says very little, if anything at all, about the quality of a product.

  • dsco 15 hours ago

    Another riff on this is that you won’t get better products, but more products. AI reduces the barrier of entry and you’ll have many products doing niche things (often for free!) instead of platforms capturing a lot of the incremental value.

    It’s like mobile cameras, they reduced barrier of entry to photography, but you didn’t end up getting better photos necessarily. Instead you had more of them.

  • cooljacob204 15 hours ago

    I would argue good code helps a product through allowing faster iteration.

  • rdlecler1 14 hours ago

    This is a great point. LLMs may flood an app with more features without making it better. At the end of the day, you still need to make something people love. AI may help you build that faster if you know what to build, but it's not going to make a bad product great.

poniko a day ago

My experience with AI as a coding partner: yes, it's great at doing boring things like "take this list and give me an enum" or "add a form for this class", etc. But when I do anything remotely advanced it breaks apart. It's especially bad at .NET, where there are 25 years' worth of history to source code from, so it often creates something long out of date. And, jesus, it once tried to rewrite the same code ten times to solve a problem that was not supported by the framework and could not be solved. So yeah... give it some years, I guess. I still use it daily, and I still need to fix AI-generated bugs.

ado__dev a day ago

We're still very much in the early days.

Code AI tools today absolutely crush at creating proof of concept apps. You can test your idea and get market validation in days vs months.

They are getting better at medium/large codebases, but still have a ways to go before they're super useful there and translate to a huge increase in productivity. Currently they're really good for helping with menial tasks (creating docs, unit tests, understanding and onboarding), but not quite there yet when it comes to integrating gen AI code into large codebases. It's only a matter of time, though.

  • vunderba a day ago

    Not discounting your claim, but the fact that you work at a company (Sourcegraph) whose business is literally AI coding / coding assistants calls your objectivity somewhat into question.

    I will agree that they're very useful for boilerplate tasks particularly around deployment (cloudformation, github actions, etc.)

    • ado__dev 6 hours ago

      Def don't take my word at face value. I'm just sharing my experience and knowledge, but the great thing about most of these tools is that many offer usable free tiers or very low-cost plans that don't require any sort of lock in. So try the tools yourself, for your use cases, and see if it's a fit or not.

      Personally, I have seen a ton of change in the last 4-5 months, so if you haven't tried these tools recently, I encourage you to try them today and see what's possible.

  • bamboozled a day ago

    This is such a wildly unsubstantiated claim. Where is the evidence of this happening?

    • ado__dev a day ago

      Which one?

      I run DevRel at Sourcegraph, and our AI coding assistant, Cody, is used by tons of individuals, small businesses, and large enterprises. I get to talk to a ton of customers and see how their adoption of AI is going. It's certainly increasing, and developers are finding a ton of value.

      • bamboozled a day ago

        Finding value in a product doesn't mean development goes from months to days, which is the unsubstantiated claim. Even your customers will talk shit to you sometimes in order to curry favour for a discount.

        • ado__dev a day ago

          Since gpt came out I have built tons of throwaway apps, plenty of specialized apps for side projects, and experimented with tons of ideas that I likely wouldn’t have if I didn’t have access to a tool to build it for me from just asking it to do it and explain what it did. Claude artifacts has been awesome for this. Cody when I actually want to build it out. I recommend trying it before you knock it.

          • DrillShopper 16 hours ago

            Cool, so we're revisiting the '90s with RAD tools, just this time charged with AI.

            Everything old is new again

          • KronisLV a day ago

            > Since gpt came out I have built tons of throwaway apps, plenty of specialized apps for side projects, and experimented with tons of ideas that I likely wouldn’t have if I didn’t have access to a tool to build it for me from just asking it to do it and explain what it did.

            GitHub Copilot, ChatGPT and Phind are all a bit like this for me - they lower the barrier of entry and save me a lot of time on trivial algorithms and boilerplate code, in addition to sometimes helping me find things better than search engines do, especially when given a look at the code that I'm working with.

            It might not be an order of magnitude difference in my case, but things that wouldn't have happened with the higher barrier of entry are now happening, and that's quite the difference in and of itself! I'm cautiously optimistic about LLMs and other forms of "AI". If nothing else, so far we basically have a more versatile form of IntelliSense, even if it's not always going to output correct code.

            I wonder if some day it'll be feasible to feed in the entirety of a larger codebase and reason about it better than people who only know a part of it could.

            • skydhash 14 hours ago

              > If nothing else, so far we basically have a more versatile form of IntelliSense, even if it's not always going to output correct code.

              That's the real issue for me. When I was learning programming, I either had not-so-good IntelliSense (Code::Blocks, IDLE, NetBeans) or none at all (Notepad++, ...). This forced me to either follow the book attentively (hunting down errata) or read the manual and get explanations from forums or friends. When you're a beginner, you need a good source of truth, not something that can be subtly wrong.

          • csomar a day ago

            So you can give us a couple examples of apps you built with AI that are actually useful?

            • ado__dev 14 hours ago

              Many services that power https://videotap.com/ have recently been rewritten with the help of AI.

              Recently I used Cody to rebuild the entire video processing pipeline and made it much more efficient and scalable, and I actually learned a ton about ffmpeg by pair programming with the AI. Now I'm building additional features into this app, mostly w/ just iterative prompting or chat-oriented programming to replace 3rd party services that are still in this pipeline and it's been a blast.

              I've also used AI tools to really brush up on frameworks, like Laravel, that I haven't touched in a while, and it's been a great experience. Also started building a game w/ Godot and found AI super helpful there in walking me step by step. So for me it's been great.

            • bamboozled 16 hours ago

              > Since gpt came out I have built tons of throwaway apps

              Throwaway apps are the easiest to make. No scaling, no bug fixing, no long-term maintenance considerations, no consequences for poor architecture, no need to consider data models; you just write some shit. Bravo.

  • talldayo a day ago

    > They are getting better at medium/large codebases

    If we are strictly speaking about the cutting-edge models like OpenAI's o1, their context is getting smaller, not larger.

    • ado__dev a day ago

      I believe o1's context window will increase over time as well. It's a new model that takes a different approach compared to the others, so testing it with less context seems logical, with the window expanding as its quality is validated.

      Even in a large codebase, you don't need the full context of every single file for every single question; you can usually get away with half a dozen files. The trick is figuring out which ones to provide to the LLM to get the best response.

blibble 16 hours ago

bad developers emitting more code has never led to anything good

unless you're being paid to clean up the mess

it's like outsourcing on steroids

animal_spirits a day ago

All of the products you are using today were likely created before ChatGPT was released. For those products, you are not going to see visible improvements, because many of them suffer from poor implementation due to a lack of adequate knowledge of the code/frameworks. Most software best practices are learned after the software is created and released. The code is probably very spaghetti and hard to maintain. Refactoring is still hard for AI, but writing from scratch is much easier with AI. For the software that is currently out, bugs will be fixed faster, and features might be added sooner.

The real explosion of great software will happen in 3-5 years. AI is huge for the beginning of projects. You know _what_ you want the app to do but you don't know _how_. That's where AI adds huge value. People are now starting new projects with AI help, and they are building foundations of codebases that will be much more maintainable and sustainable as development continues compared to the current suite of software products we interact with today.

  • jsheard a day ago

    > You know _what_ you want the app to do but you don't know _how_. That's where AI adds huge value. People are now starting new projects with AI help, and they are building foundations of codebases that will be much more maintainable and sustainable

    I'm not following. How are these codebases going to be more maintainable and sustainable if the developers are committing code they don't even understand?

    • sdenton4 a day ago

      I think you misinterpret the 'don't know how' part. LLMs are fantastic for boilerplate (which is a big part of getting things started). From there, I might not know the right incantation for (say) handling a button click, but it's not too hard to validate that an LLM-generated handler a) works, b) looks sensible, and c) fits reasonably in the codebase. In fact, most of my use of LLMs for coding is about generating snippets of functionality, which go into a codebase whose shape I'm maintaining... which helps maintainability.

      You can also generate tests more efficiently, meaning you get better test coverage cheaper. This leads to better maintainability as well, as you know more quickly when you've broken things with a change.
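
      As a concrete sketch of the kind of test an LLM will happily crank out from a one-line prompt (slugify here is a hypothetical function under test):

          import static org.junit.jupiter.api.Assertions.assertEquals;

          import org.junit.jupiter.params.ParameterizedTest;
          import org.junit.jupiter.params.provider.CsvSource;

          class SlugifyTest {
              // Hypothetical function under test: lower-cases a title and
              // collapses runs of non-alphanumerics into hyphens.
              static String slugify(String title) {
                  return title.trim().toLowerCase().replaceAll("[^a-z0-9]+", "-");
              }

              @ParameterizedTest
              @CsvSource({
                  "'Hello World', hello-world",
                  "'Refactor: phase 2', refactor-phase-2"
              })
              void slugifiesTitles(String input, String expected) {
                  assertEquals(expected, slugify(input));
              }
          }

      Each additional case is a single CSV line, which is why coverage gets cheap.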

      • animal_spirits a day ago

        Yeah, precisely. In my case, I've been building a Django REST app over the last few years. I started off writing way too much of my own code rather than using plugins I had no idea existed. After finally getting ahold of ChatGPT, I was able to expand my knowledge of Django tenfold and rewrite the app from the ground up using proven libraries and design decisions.

giantg2 a day ago

"If AI is helping people code better, why aren't products getting better?"

Because it's not helping them code better. It might be faster, but the quality in my experience is worse. Then the user is trying to verify or troubleshoot code they didn't write.

The bigger issue is garbage in, garbage out at the requirements level. The business hardly ever documents their business system before turning it into a technical system. How can we create a system to meet requirements that nobody knows and didn't have a chance to really think about while writing the code?

userbinator a day ago

AI only helps those who are below-average to become (barely) average --- and that average is dropping. Also, quantity is not quality.

  • jamalaramala 15 hours ago

    *Nobody is above-average in everything*.

    You can be above-average in your favourite programming language, but suck at the system or library that will be required in your next project.

    AI will help you get up to speed.

furyofantares a day ago

1. They probably are. I'm sure I would notice a 500% increase in the rate that products get better, but I'm also sure I wouldn't notice 10%, which is much more realistic (but imo it's probably still less.)

2. Productivity gains don't go directly to products getting better. Individual developers may choose to realize some gains by spending more time with their family. Of course the company will claw that back, but it takes time. And when it does, some of the gains may be realized as higher profit margins rather than better products, and it will take time for consumers to try to claw that back using their market choices.

3. Companies have lots of moving parts and a speed they're used to going; it will take time to adjust if one part goes a little faster.

4. LLM-assistants help a lot with getting up to speed in a new field or making stuff from scratch, and a lot less for a skilled team who already knows all the product code and surrounding tools. So "products you use regularly" benefit the least.

CM30 13 hours ago

Because the quality of the code is at best very, very loosely related to the quality of the end product. I mean, what are many of the issues you see in poorly designed sites, apps, programs, etc?

Usually a mix of poor design choices and hostile design.

Neither of these correlates directly with the quality of the code, how quickly it was created, how many bugs it has, etc.

If everyone working in tech (or any sort of programming related project in general) was an expert level programmer with decades of experience, neither of these things would be noticeably better. They'd still create software that's miserable to use because of bad design, and we'd still have companies trying to scam the users by making basic functionality hard to use (see cookie notices, unsubscribe processes, etc).

spit2wind 15 hours ago

Programming with AI, so far, tries to specify something precise (algorithms) in a less precise language than the ones we already have.

It's the difference between Euclid and modern notation, with AI programming being like Euclidean notation and current programming languages being the modern notation:

"if a first magnitude and a third are equal multiples of a second and a fourth, and a fifth and a sixth are equal multiples of the second and fourth, then the first magnitude and fifth, being added together, and the third and sixth, being added together, will also be equal multiples of the second and the fourth, respectively."

versus

a(x + y) = ax + ay

If AI programming can find a better way to express the problems we're trying to solve, then yes, it could work. It would become a matter of "how well the compiler works". The current proposals, which use natural language as the notation, are not better than what we have.

  • lmpdev 15 hours ago

    The only problem with this is that 99%+ of issues with software products don't stem from whether a parsimonious language was used and tightly coupled to the compiler.

    The vast majority of issues are missed edge cases between what the user wants and expects and the design and function of the software.

    Higher productivity would in theory allow programmers more opportunities to address more issues

    Programs don’t exist in a vacuum, users and other actors need to interact with them

    Whether or not these LLMs result in increased productivity with the same or better quality is a more pertinent question

Jtsummers a day ago

Some possibilities:

1. The products you use are not developed by people using LLMs.

2. The products you use may be using LLMs in development, but only recently so you'll see a delay before any improvement.

3. The products you use are using it, and maybe it's helping with quality, but not anywhere that users care about or notice.

4. The products you use are using it, and it's not helping with quality, just churning out more code.

DonsDiscountGas a day ago

Writing the same code 10% faster isn't necessarily going to make it better. Also the biggest improvements have been among novices, and the products you regularly use were predominantly written (or at least reviewed) by more experienced people.

dfxm12 15 hours ago

Crucially, products are more than "code". They are UX, they are support, they are maintainability, etc.

Also, in my experience, AI is helping people who don't know how to code to write code they don't understand and can't support. In my experience, the people who are already making products aren't getting much benefits from AI (not yet anyway).

layer8 15 hours ago

AI makes it easier to build a mediocre or just barely working product, so if anything, a decline in quality is to be expected.

csallen a day ago

I signed into Zapier yesterday for the first time in a while. You can seemingly run their entire UI right now via AI. I typed a simple idea into their AI box, and it created a multi-step "zap" for me that was more-or-less what I wanted.

So at least some software is getting better.

  • sexy_seedbox a day ago

    How is Zapier's AI for more difficult tasks? Is it a "Yes" man like almost all other AI services/LLMs?

obirunda a day ago

I think the primary reason is that datasets contain a lot more average/bad code than exceptional code, and to add to that problem, judging between them is partly subjective.

Developers using AI will get mostly average solutions faster; exceptional ones will obviously be rare. And, crucially, if the idea itself is average or bad, there isn't much an elegant coding solution can do for it.

I think this is ultimately the divide between the hype and the reality of how AI will impact products. Just giving a product manager the keys to do all the coding as a no-code "prompt engineer" will more than likely lead to further enshittification of features and unmaintainable code bases. In the current state, understanding algorithms and thinking computationally is a requirement for improving a code base.

The hopes of a "build me a $1 billion app" or "improve my shitty app" prompt capability are too long-horizon and too subjective for the LLM to deliver on; they don't bypass the hardships of product ideation and iteration. It's not magic, it's probability. Averages are the end goal here, not excellence.

If we arrive at a point where LLMs translate general prompts into idealized versions that are more like version 100 of the idea while still capturing the user's intent, then we will see these improvements. Otherwise it's copypasta on steroids, and done mindlessly, it will mostly lead to enshittification rather than improvement.

benreesman a day ago

Because like any tool, it can be used thoughtfully and it can be used carelessly.

There are use cases where LLMs help with coding (and they are growing as the things get better), but even if they could do as well as an experienced engineer at doing a first draft (which, debatable in any setting and falls off sharply once it's not a highly mainstream setting), a first draft is almost never a high-quality artifact.

They can also be used to get a sort of minimum viable diff that represents a liability to the codebase and those who maintain and depend on it, to do this with very little effort and therefore impose the negative externalities on someone else. Anecdotally this seems to be a distressingly common use case. I'm more than a little concerned that software quality is about to take an abrupt turn for the worse in aggregate.

More broadly, if you're anything like most people I know, the products you're using are getting better all the time... at making money for the companies that build them. Consumer Internet profits and/or valuations are at something like an all time high. All that lag and jank and spam and shit? That's not easy code to write or simple infrastructure to operate. That's full-metal-jacket monetization at great effort and expense.

Chris_Newton a day ago

I think of the current generation of AI coding tools as being like a developer with a little bit of experience in almost any tech stack and field of application.

If I’m investigating a new field or trying out a new language or library, relative to my own experience, then it’s quite common for an AI code generator to use idioms or libraries I hadn’t yet come across. That alone sometimes saves me a useful amount of time doing research.

However, it’s almost all breadth and very little depth. The quality of the generated code is rarely better than something a junior-to-mid-level developer might have written. It needs to be reviewed and corrected with similar diligence.

Similarly, the quality of a generated review of existing code or of generated supporting assets like test cases or documentation is often superficial and error-prone. I rarely find it an overall win to use current AI-based tools for these things instead of existing tools that can’t do as much but are consistent and reliable at what they do do.

So I wouldn’t necessarily expect current AI tools to help me code better, only sometimes a bit faster, and that mostly in new areas I’m exploring rather than areas where I’m doing professional work that is going to get shipped in the near future.

jccalhoun a day ago

I have been messing around with a project using python on a raspberry pi. I know a little javascript but I'm no programmer and didn't know any python and hardly anything about linux before this. Chatgpt and Gemini have helped me a lot by writing code that included common libraries that I didn't know existed. Then I can modify it and tweak it to suit my needs.

yatz 13 hours ago

In my experience, AI is helping people code faster, not necessarily better. It does not take long before you find the limitations of AI code-gen running you in circles.

As far as I know, most of us do research with AI to get ideas and weigh pros and cons, but we are still the ones mostly driving the logic, with AI filling in the function-level blocks.

SkyPuncher 15 hours ago

Product development decisions are essentially independent of software development decisions. Product development is challenging in that it's highly contextual, highly political, and essentially unique to every product.

It doesn't matter how effective software development is if you're not doing anything to improve the discovery and planning process.

mindwok a day ago

An improvement in productivity does not necessarily imply improvements in quality. Where those extra engineering hours get spent is determined by management, and management are optimising for profit, which could mean optimising for quality but more likely means optimising for feature development.

nrjames 15 hours ago

I don't find them useful for coding much, but I do find them useful for searching through documentation and making suggestions. With Cursor, you can have it reference the docs for the library/framework you are using and then ask it questions about the docs. While this does not always work well, it can save a lot of time.

gadders 15 hours ago

I think it is helping them generate the same code faster, rather than generate code with different functionality.

mrtomservo a day ago

This is a personal anecdote, and just one data point, but still. I work on websites for customers, and frequently I'll need to (for example) iterate through some spreadsheet data, or convert a big object of this format to some other format. These are tedious tasks that don't take a _huge_ amount of time, but instead of grinding on a particular function for 30 minutes, I have a workable thing I can tweak in five minutes. I'd say this helps me "code faster and better."

Does it make the end product better? Not really: I would have gotten there anyway, with a function written by me or by some LLM. But like everything I've been asked to do in my professional career, it allows me to do more with less. More dumb functions in less time.

amadeuspagel 15 hours ago

At this point, I'd expect to see more new products, especially more web apps. Are there? Would be interesting to look at /show with that in mind. I know I'm creating things I couldn't without AI. I expect more people who aren't web devs to make web apps.

mergisi a day ago

AI tools, like what we develop at AI2sql https://ai2sql.io/, are helping speed up coding, but better code doesn't always mean better products. Product development involves design, user experience, and business strategy, which AI doesn't fully solve. Also, many AI-driven improvements happen behind the scenes, focusing on performance or stability, which users might not notice right away.

Cthulhu_ 15 hours ago

Because the quality / goodness of a product has little to do with the underlying code. Second, is it actually helping people code better? That's a claim that needs some paperwork to back it up, starting first of all with a definition of "good".

zabil 15 hours ago

Because coding assistants depend on the baseline code they were trained on for their suggestions?

I don't believe they offer creative solutions, just a faster way to look things up. So it's still the developers' responsibility to bring their creativity to the process.

TechRemarker 19 hours ago

AI is helping more people code things they might not be able to on their own, or helping them get past a particular issue more quickly, but I don't think that is making people better coders in general, just like Google autocorrect isn't making people better spellers.

analog31 15 hours ago

One could look back at history and ask how long it took for better coding tools to result in better software. This could go all the way back to programming languages, IDEs, frameworks, etc.

HumblyTossed a day ago

I don't think things are going to get "better". I think you'll see some homogeneity, where a lot of code will just converge at "average".

jasfi 15 hours ago

The AI coding tools I see today help to make coders somewhat more efficient. That isn't something that's very visible in the software landscape as a whole.

medion 15 hours ago

Because technology generally makes things faster, not better.

tonyoconnell a day ago

My products are so much better because of AI. I can now build things I didn't even dream of building before. I am amazed that so many humans find it really hard to accept this new reality - computers can write better code than we do.

  • player1234 21 hours ago

    What products? Better how? How have you quantified this? Give us the numbers for these extraordinary claims.

rsynnott 20 hours ago

If pigs can fly, then why aren't there more bacon-scented aviation accidents caused by them getting sucked into jet engines?

Like, there's little reason to think that LLMs are helping people code better.

j7ake a day ago

They don’t code better but they code faster.

I imagine AI will greatly accelerate first-pass prototypes. But getting to the fine details and getting things done well will still take the same amount of time.

AI-guided coding will help code up “good enough” implementations, which is great for research and testing ideas but not for production.

ddgflorida 10 hours ago

Too early and code quality isn't as good as you think.

sitkack a day ago

Without a different incentive structure, nothing will get better than it is now. And if there is a way to produce something functionally equivalent at lower quality and lower cost, that is what will happen.

For the most part, it will be the same quality or lower, at a cheaper cost, delivered faster.

thefz 19 hours ago

Because AI is trained on publicly available code with no indication of its quality, so it's churning out the same low-quality code it finds online.

jaredwiener a day ago

My guess would be that the hurdle in creating better products isn't in code completion.

User research, UX improvements, feature ideation and creation, etc., are all the same as they have always been. Getting the code out faster doesn't help if it's in service of a bad feature.

trumbitta2 14 hours ago

AI is helping people code a lot faster. For "better", you still have to put the work in.

t0bia_s 21 hours ago

There are two categories of technology: those which get the job done faster and those which make the job easier. Most of today's technology belongs to the first category.

vouaobrasil 15 hours ago

Duh. Products are not about making life better. They are about stimulating basal instincts for information and novelty. If a company can do that with a horrible user experience, they will. Although I have personally never used substances or drugs, I think it must be similar to drug addiction: provide just enough new technology as fast as possible to keep the users high and addicted.

  • amarant 15 hours ago

    Nah, drugs are all about the experience. It's only novel the first few times you try a new drug, and sustained use requires the drug to be pleasant!

    In other words, drugs are better than the internet, and we could all learn a thing or two from drugs, to make our products better!

erichmond a day ago

What is the correlation between "writing code" and product management getting better?

jedimastert a day ago

Does anyone know if GitHub has any sort of public telemetry data (or if anyone from GH is around here somewhere)? There was a ChatGPT outage about a month ago, and I'm DEEPLY curious whether there was an overall drop in commit volume.

protocolture a day ago

Dunno, what products do you regularly use?

I know my hobby stuff is getting better. I generally don't make front ends for my projects, but genai has helped me build widgets for other front ends, and CSS/JS front ends for a lot of my nonsense.

apwell23 15 hours ago

Followup question: why aren't AI productivity gains showing up in quarterly reports?

leshokunin a day ago

The way the question is framed invites a justification: "why it could be improving while not improving".

I'm sure there are hypotheses. But it's also likely that things are simply not improving.

ridicter a day ago

I'm a designer-engineer, and I feel like AI has given me superpowers. Design has always been easy for me--it was just the incredible grind of coding that made creating my own thing difficult.

lmm a day ago

Because code quality probably wasn't the bottleneck on product quality, and even if it was then some analogue of Amdahl's law applies.

readyplayernull a day ago

Better products require better features and better quality. Features are defined by managers, quality is controlled by testing. These haven't benefited from AI as much as coding did.

tensility a day ago

It's partially a product of Amdahl's Law. Coding and related textual activities are only a fraction of the work required for product design, implementation, and maintenance.

cnotv 15 hours ago

Because the problem is not in the code itself? :D

AlexCoventry a day ago

I don't think AI code helpers really understand how to make code which is readable and maintainable, at this stage.

lvl155 a day ago

Because it won’t help subpar coders and “100x” types have moved into AI. Meaning, they’re the ones building the tools.

feverzsj 15 hours ago

A toy can only make you happy ... for a while.

outlore 11 hours ago

High quality products result from the accumulation of fixes and polish. They also add new desirable features in response to user feedback.

Whether AI is used to write code is irrelevant to a product getting “better”. AI copilots can be used to bootstrap early stage concepts which might be unpolished, or can be used to add polish by writing bug fixes.

I neither subscribe to the mania around AI, nor do I think it will enshittify products. I believe it is just another tool that we can use.

Yawrehto 12 hours ago

The key is that "if". What if it isn't, yet? What if right now AI is in a phase where it's still learning and doesn't produce good stuff yet?

Besides, products have been getting worse for a while. Enshittification is a potent force, and even if AI were axiomatically helping people code faster and better, enshittification might still lead them to add in annoyances, privacy risks, et cetera.

adamnemecek a day ago

It has been around for what, a year?

  • throwaway2016a a day ago

    This was my first reaction. It hasn't been that long and giant ships can't change course quickly.

    And for much of that year there were a lot of questions around the ownership of AI-generated code, as well as information security. In fact, most of those questions are still not satisfactorily solved. So I'm not so surprised that we haven't seen massive results "yet".

SkyBelow 15 hours ago

Better programming, task to task, does not result in better applications.

For a simple example, consider a program built from 100 tasks of 16 hours each, delivered at a quality of 75%. With AI, those tasks take an average of 12 hours each, so the software can be delivered faster (1,200 hours of work instead of 1,600). But unless someone purposefully invests the saved time into improving the program, you'll end up with the same 75%-quality program, just sooner.

Now what if AI makes the code slightly worse, dropping the quality to 70%, but some of the savings are used to improve quality, bringing it back up to 75%? Same outcome: the product is no better for the end user.

Even if the code is higher quality, how much of that missing 25% of quality is the result of bad code versus bad designs, or a mismatch between what the customer wants and what those designing the project think the customer wants? Even a perfect AI that solves all bugs won't improve that.

In short, programming better can mean many different things, some of which might translate to a better or worse product, but with no consistency.

stevage a day ago

Where did that premise come from?

Copilot mostly helps people code faster, or with less knowledge required. I'd expect output quality to go down, not up.

And it's like anything in capitalism. Companies could choose higher quality, but instead they do whatever gives the highest profit, which is usually adequate quality at low cost.

stephenr 17 hours ago

> If AI is helping people code a lot faster and better

You may as well ask why the advent of StackOverflow didn't massively increase app quality, it's the same target audience.

johnea 10 hours ago

How many times are these naive questions going to resurface?

Of course this all depends on what you mean by "better", or for whom the product is better.

AI, like every technical advance in history, will be deemed "better" if fewer people make more money from it.

This has nothing to do with you, mister insignificant user...

gloyoyo a day ago

Was just thinking this same thing this morning.

ayushl a day ago

Because they're getting fired

rmellow a day ago

Digital distribution and cloud apps lowered the cost to correct mistakes and therefore lowered the barrier to release.

No longer are developers bound by physical media, or forced to make clients troubleshoot, manually download updates from a website, and install them.

... and as others said, LLMs' impact is greater on junior developers, and even there, more on speed than on quality. For experienced developers, the impact is mostly on speed.

I have no data, only sense to make.

whoomp12342 a day ago

ahhh the classic "I confused speed with quality" argument

at_a_remove 15 hours ago

I believe it increases the speed of the coding, but not the quality. It doesn't do your QA for you. It doesn't do your UX for you. And if you're a code monkey assigned to implement a feature you know will drag the product down, well, you'll get that out the door faster. AI won't force the marketing people to make good decisions, or create sane deadlines, and so on.

stego-tech a day ago

It's complex - an essay in and of itself if we're to respond properly, but I'll try to keep things brief here.

First, let's address the tooling side. While the current crop of "code completion tools" built out of or around LLMs are quite capable in their own right, they're not exactly "free thinkers" like we can be. Rather, their output is limited by a combination of training data, the model itself, and - increasingly - the user's ability to put their ideas into a prompt that can generate the desired output consistently. So there's already a huge hurdle just on the tooling side to overcome before we can begin "improving", one tied just as much to the capabilities of the product as the capabilities of the end user. I would argue that this is the most immediate hurdle to cross if we want to see meaningful improvements to code as a whole.

In addition to that immediate hurdle, there's three more issues on the tooling front:

* The existing training data is largely bad, bloated, or insecure code (generally from publicly available social media and repositories), because code security and efficiency are only relatively recent priorities of large development companies or outfits as they seek to dodge lawsuits (security) and increase margins (efficiency)

* LLMs aren't very good at teaching a user how to think better about a problem, only making them better at phrasing their prompt to get closer to a possible solution

* LLMs are stuck in a predictive framework that mandates an answer for the customer, as opposed to a human, who is able to say "I don't know" and go off to learn more about the thing they're stuck on.

Ultimately, the tooling is helping novice or entry-level developers and hobbyists write better code, but only because the models were trained on code from more senior or professional developers that was also shared publicly. Senior developers and above may find utility in writing faster code with LLMs, but aren't nearly as likely to write better code as a result of the tooling, at least from my subjective reasoning.

Now let's switch to the business side of things, which I already touched on above. Businesses haven't been interested in secure or efficient code until very recently, as we began bumping up against the limits of physical hardware in x86-64 land and lawsuits for failures became more of an existential threat. This means a lot of the code in public samples fits the "done is better than good" mantra of modern business practice, rather than improving on prior releases; and even when a business has taken the time to create more secure or efficient code, it likely hasn't shared it, since that's a core part of its competitive advantage or product line. It will take years, maybe a decade, before the LLM training sets have enough "superior" data to outweigh the "inferior" data, during which time the status quo - barring a literal revolution in computing - is likely to remain.

Admittedly all of this is my subjective POV from infrastructure-world, and could be way off base; YMMV, buyer beware, caveat emptor, etc.

breck a day ago

You're using the wrong products!

1. Look for products that don't have (c)opyright. Any product still using that or licenses is going to evolve too slow and go extinct.

2. Look for products built on revolutionary simpler stacks like PPS.

I thought this was going to be an essay and not just a Tweet, so I did record a long winded response, which I think contains a lot of relevant info: https://news.pub/?try=https://www.youtube.com/embed/KhDvFNef...

j45 a day ago

Maybe the capacity to build things that weren't being built before is increasing, thereby lifting the floor on things that would otherwise never get built.

Further, non-coders being able to become equivalent to a junior developer is a huge leap.

What active developers do with AI remains to be seen. It really could 20x the average developer, but it doesn't seem like a huge chunk of developers are using AI in the way that's all the rage, broadly, at the developer level.

Maybe that's why Cursor going "viral" on YouTube seems different: it was known to some, and not others.

rerdavies 8 hours ago

Based on my experience, at the present moment you should be expecting more code (groundbreaking improvements in productivity), but not necessarily significantly better code. Certainly not worse code, though. Current-generation AIs write tactical code absolutely brilliantly (often better than my own), but often make very odd strategic decisions (functionally decomposing code in odd ways, for example).

But the rate of change in this area is breathtaking. I reasonably expect my AI to improve in the coming months, or even weeks, and I find it difficult to keep up with which AIs are best for generating code at any given moment. There may be AIs that are good at reviewing multi-million-line code bases for security flaws, but I am not currently using one.

What I do know: my AI coding partner this year is writing code that is more accurate and more stylish than any AI was producing this time last year. The code being produced is often tactically brilliant -- elegant, concise, only very occasionally using hard-coded constants instead of including the correct headers, and almost completely free of "hallucinations". And I'm regularly using it to generate code in three different languages (C++ for the app server, TypeScript for the web client, Java for the Android client application).

I've frequently found myself adopting coding conventions that my AI has shown me. I particularly like

    namespace fs = std::filesystem;
And the solution it came up with for writing a std::filebuf implementation still leaves me speechless. I've done that a few times over my career, and the solution the AI uses is infinitely superior to anything I've ever written -- not something I've EVER seen, but clearly the horrible, never-documented way the original authors of the iostream libraries MEANT people to do it, which provides substantial advantages over the way I've been doing it. And it's absolutely nowhere to be found in the first 30 pages of Google results, or among the variously broken and obsolete fragments of code on StackExchange.

But my current AI often falls short when it comes to strategic thinking. Functional decomposition is often odd. I often have to refactor code that my AI generates -- sometimes by coaching it through refactoring, and sometimes doing it myself when I move the generated code into production code. But that may change next week. Who knows?

Have I used it for debugging existing code? A couple of times. I'm not currently seeing a huge productivity boost in this area.

Today, I coached it to write me a bash shell script to generate a graph of a key development metric using gnuplot. Bash: as a former Windows programmer, bash still terrifies me. Gnuplot: a documentation set that could be called unforgivable if you were feeling particularly generous. "Change the font of the title, please." "Take input from this program, which produces a column of ISO 8601 dates and an integer value." "Rotate the date labels 90 degrees anti-clockwise." (It mistakenly rotated them clockwise. The only flaw in an otherwise fantastic performance.) Etc. It took me about 20 minutes to do what would have taken me a couple of hours, and I wouldn't have done it at all if I didn't have an AI at my disposal.

Context: Using Claude 3.5, 40 years of very senior Windows development experience, but only about 3 years of Linux development experience.

deterministic a day ago

> If AI is helping people code better

I don't think it is?

30+ years of experience here, and I haven't seen any AI coding examples that make me want to use AI for coding.

It might happen one day but so far nope.

black_13 a day ago

I work for Boeing, and I deal with shitty legacy code and shitty legacy ideas.