bluetomcat 2 days ago

What a mess of an article. A pretentious mishmash of scattered references with some vague abstract claims that could be summarised in one paragraph.

  • flohofwoe 2 days ago

    Sort of fitting though, because C++ coroutines turned out quite the mess (are they actually usable in real world code by now?).

    I think in the end it's just another story of a C++ veteran living through the inevitable Modern C++ trauma and divorce ;)

    (I wonder what he's up to today, ITHare was quite popular in game dev circles in the 2010s for his multiplayer networking blog posts and books)

    • pjmlp 2 days ago

      They have always been usable in the real world, as they were initially based on the async model of doing C++ programming in WinRT, inspired by .NET async/await.

      Hence anyone who has done low-level .NET async/await code with awaitables and magic methods will feel right at home in C++ coroutines.

      Anyone using WinAppSDK with C++ will eventually make use of them.

    • TuxSH 2 days ago

      > C++ coroutines turned out quite the mess (are they actually usable in real world code by now?).

      They are, they are extensively used by software like ScyllaDB which itself is used by stuff like Discord, BlueSky, Comcast, etc.

      C++ coroutines, and "stackless coroutines" in general, are just compiler-generated FSMs. As for allocation, you can override operator new for the promise types, and that operator new gets forwarded the coroutine's function arguments.

      • simonask 2 days ago

        They are compiler-generated FSMs, but I think it's worth noting that the C++ design landed in a way that precluded many people from ever seriously considering using them, especially due to the implicit allocation. The reason you are using C++ in the first place is that you care about details like allocation, so to me this is a gigantic fumble.

        Rust gets it right, but has its own warts, especially if you're coming from async in a GC world. But there's no allocation; Futures are composable value types.

        • captainmuon 2 days ago

          > The reason you are using C++ in the first place is because you care about details like allocation, so to me this is a gigantic fumble.

          I wouldn't say that applies to everybody. I use C++ because it interfaces with the system libraries on every platform, because it has class-based inheritance (like Java and C#, unlike Rust and Zig), and because it compiles to native code without an external runtime. I don't care too much about allocations.

          For me the biggest fumble is that C++ provides the async framework, but no actual async stdlib (file I/O and networking). It took a while for options to become available, and while e.g. Asio works nicely, it is crazily over-engineered in places.

          • pjmlp a day ago

            I like what Rust offers over C++ in terms of safety and community culture, but I don't enjoy being a tool builder for ecosystem gaps; I'd rather spend the time directly using the tools that already exist. Plus I have the Java and .NET ecosystems for safety, as I am really on the automatic resource management side.

            Zig is really Modula-2 in C's clothing. I don't like the kind of handmade culture it has around it, and its way of dealing with use-after-free I have also been able to get in C and C++ for the last thirty years; it is a matter of actually learning the tooling.

            Thus C++ it is, for anything that can't be taken over by a compiled managed language.

            I would like to use D more, but it seems to have lost its opportunity window, although NASA is now using it, so who knows.

        • pjmlp 2 days ago

          The C++ model is that in theory there is an allocation; in practice, depending on how a specific library was written, the compiler may be able to elide the allocation.

          It is the same principle that drives languages like Rust with regard to being safe by default: in theory, stuff like bounds checks causes a performance hit; in practice, compilers are written to elide as much of it as possible.

        • gpderetta 2 days ago

          The required allocation makes them awkward to use for short-lived automatic objects like generators. But for async operations, where you are eventually going to need a long-lived context object anyway, it is a non-issue, especially given the ability to customize allocators.

          I say this as someone who is not a fan of stackless coroutines in general, and the C++ solution in particular.

        • TuxSH 2 days ago

          You can write stuff like this:

            void *operator new(std::size_t sz, Foo &foo, Bar &bar) { return foo.m_Buffer; /* should be std::max_align_t-aligned */ }
          
          and force all coroutines of your Coroutine type to take (Foo &, Bar &) as arguments this way (works with as many overloads as you like).
        • uep 2 days ago

          I think you missed an important point in the parent comment. You can override the allocation for C++ coroutines. You do have control over details like allocation.

          C++ coroutines are so lightweight and customizable (for good and ill) that in 2018 Gor Nishanov did a presentation where he scheduled binary searches around cache prefetching using coroutines. And yes, he modified the allocation behavior, though he said it only resulted in a modest improvement in performance.

gsliepen 2 days ago

Early programming languages had to work with the limited hardware capabilities of the time in order to be efficient. Nowadays, we have so much processing power available that the compiler can optimize the code for you, so the language doesn't have to follow hardware capabilities anymore. So it's only logical that current languages should work within the limitations of the compilers. Perhaps one day those limitations will be gone as well for practical purposes, and it would be interesting to see what programming languages could be made then.

  • flohofwoe 2 days ago

    > Nowadays, we have so much processing power available that the compiler can optimize the code for you, so the language doesn't have to follow hardware capabilities anymore.

    That must be why builds today take just as long as in the 1990s, to produce a program that makes people wait just as long as in the 1990s, despite the hardware being thousands of times faster ;)

    In reality, people just throw more work at the compiler until build times become "unbearable", and optimize their code only until it feels "fast enough". These limits of "unbearable" and "fast enough" are built into humans and don't change in a couple of decades.

    Or as the ancient saying goes: "Software is a gas; it expands to fill its container."

    • adrianN 2 days ago

      At least we can build software systems that are a few orders of magnitude more complex than in the 90s for approximately the same price. The question is whether the extra complexity also offers extra value.

      • flohofwoe 2 days ago

        True, but a lot of that complexity is also just pointless boilerplate / busywork disguised as 'best practices'.

        • Trex_Egg 2 days ago

          I'd be eager to see an example of how a "best practice" makes software unbearable or slow.

          • flohofwoe a day ago

            Some C++ related 'best practices' off the top of my head:

            - put each class into its own header/source file pair (a great way to explode your build times!)

            - generally replace all raw pointers with shared_ptr or unique_ptr

            - general software patterns like model-view-controller, a great way to turn a handful of lines of code into dozens of files with hundreds of lines each

            - use exceptions for error handling (although these days this is widely considered a bad idea, but it wasn't always)

            - always prefer the C++ stdlib over self-rolled solutions

            - etc etc etc...

            It's been a while since I closely followed modern C++ development, so I'm sure there are a couple of new ones, and some which have fallen out of fashion.

            • pjmlp a day ago

              > - put each class into its own header/source file pair (a great way to explode your build times!)

              Only if you fail to use binary libraries in the process.

              Apparently folks like to explode build times with header only libraries nowadays, as if C and C++ were scripting languages.

              > - generally replace all raw pointers with shared_ptr or unique_ptr

              Some folks care about safety.

              I have written C applications with handles, doing two way conversions between pointers and handles, and I am not talking about Windows 16 memory model.

              > - general software patterns like model-view-controller, a great way to turn a handful lines of code into dozens of files with hundreds of lines each

              I am old enough to have used the Yourdon Structured Method in C applications.

              > - use exceptions for error handling (although these days this is widely considered a bad idea, but it wasn't always)

              Forced return code checks with automatic stack unwinding are still exceptions, even if they look different.

              Also what about setjmp()/longjmp() all over the place?

              > - always prefer the C++ stdlib over self-rolled solutions

              Overconfidence that everyone knows better than people paid to write compilers usually turns out badly, unless they are actually top developers.

              There are plenty of modern best practices for C as well; that is how we try to avoid making a mess out of what people think is a portable assembler, and industries rely on MISRA, ISO 26262, and similar standards for that matter.

            • aw1621107 a day ago

              > put each class into its own header/source file pair (a great way to explode your build times!)

              Is that really sufficient to explode build times on its own? Especially if you're just using the more basic C++ features (no template (ab)use in particular).

              • pjmlp a day ago

                Not at all, you can write in the C subset that C++ supports and anti-C++ folks will still complain.

                Meanwhile the C builds done on UNIX workstations (AIX, Solaris, HP-UX) for our applications back in 2000 were taking about an hour per deployment target, hardly blazing fast.

  • j16sdiz 2 days ago

    The problem is: "the platform" is never defined.

    When you decouple the language from the hardware and don't specify an abstraction model (like the Java VM does), "the platform" is just whatever the implementer feels like at that moment.

  • lmm 2 days ago

    Isn't that the tail wagging the dog? If you build the language to fit current compilers then it will be impossible to ever redesign those compilers.

    • rcxdude 2 days ago

      Maybe, but if you don't consider the existing compilers you run the risk of making something that is unimplementable in one of them, or perhaps at all. (C++ has had some issues with this in the past, which I think is why it's explicitly a consideration in the process now.)

    • gsliepen 2 days ago

      Why would that be impossible? Most programming languages are still Turing complete, so you can build whatever you want in them.

      • lmm 2 days ago

        You said this was an efficiency issue, and Church-Turing says nothing about efficiency.

      • gpderetta 2 days ago

        "Beware of the Turing tar-pit in which everything is possible but nothing of interest is easy."

        - Alan Perlis, Epigrams on Programming

  • simonask 2 days ago

    It's not really about "limitations" of the hardware, so much as the fact that things have crystallized a lot since the 90s. There are no longer any mainstream architectures using big-endian integers, for example, and there are zero architectures using anything but two's complement. All mainstream computers are von Neumann machines too (programs are stored in the same memory as data). All bytes are 8 bits wide, and native word sizes are a clean multiple of that.

    Endianness will be with us for a while, but modern languages don't really need to consider the other factors, so they can take significant liberties in their design that match the developer's intuition more precisely.

    • gsliepen 2 days ago

      I was thinking more about higher-order things, like a compiler being able to see that your for-loop is just counting the number of bits set in an integer, and replacing it with a popcount instruction, or being able to replace recursion with tail calls, or doing complex things at compile-time rather than run-time.

      • flohofwoe 2 days ago

        At least the popcount example (along with some other 'bit twiddling hacks'-inspired optimizations) is just a magic pattern-matching trick that happens fairly late in the compilation process (AFAIK at least), and the alternative of simply offering an optional popcount builtin is a completely viable low-tech solution that was already possible in the olden days (and still has the advantage of being entirely predictable instead of depending on magic compiler tricks).

        Basic compile-time constant folding also isn't anything modern; even the most primitive 8-bit assemblers of the 1980s allowed writing macros and expressions that were evaluated at assembly time, and that gets you maybe 80% of the way to the much more impressive constant folding over deep call stacks that modern compilers are capable of (e.g. what's commonly known as 'zero cost abstraction').

  • deterministic a day ago

    Nope. Performance really matters. Even today. And even for web applications! Just remember how you feel using a slow sluggish website vs. a snappy fast one. It's night and day.